Sa[BB]ermetrics: The Hidden Value of Motivation in Backdoor Success Rate - 08/20/13 – RobHasAwebsite.com
With 13 seasons of Big Brother (USA), 1 season of Big Brother (CAN) and one oft-forgotten season of the Glass House mislabeled as a Big Brother in the books, the sample size is just large enough to
do what no one has tried to do before: analyze Big Brother game play and strategy by the numbers.
[Photo caption: Andy with last week's POV.]
Much of the strategy discussion this week in the Big Brother house centered on whether the presumed eviction target, Helen, should be nominated straight away or backdoored. The argument against
option B is that even though nominating her straight away ensures she plays in the veto competition, it doesn't significantly change the probability of winning POV since only 8 people remain in the
house. If we assume that Helen is an average competitor (a reasonable assumption as per our BB15 competition metrics) she holds a 1/6 chance of winning the veto when nominated straightaway. This
probability drops to 1/9 (2/3 chance she is picked to play in the veto * 1/6 that she wins) if she is left off of the block. If we assume that whoever wins the veto will use it (either Helen when on
the block, or another competitor in order to backdoor Helen), this means the chances that Helen remains on the block come eviction night are 83% if frontdoored and 89% if backdoored. This marginal
difference in success rate is probably not large enough to offset the small chance that Helen wins the veto while off the block and uses it on Elissa, causing both the primary and secondary targets
to be safe from eviction. Thus, from this rudimentary statistical calculation, nominating Helen straight up appears to be the better move.
This analysis requires the assumption that the vantage from which you play veto competitions — that is, whether you are HOH, a nominee or a randomly chosen competitor — has no effect. However, it
seems likely that motivation would play a role in how players perform in the competition. Intuitively, it makes sense that nominees will fight harder to win the veto since they need to save
themselves from the block. On the other hand, non-nominees will often throw these competitions in order to avoid making waves and/or being forced to decide between using and not using the veto. If
this postulate is correct, it should be visible in the stats.
The ability of the houseguests playing in veto competitions as either HOH, a nominee or a non-nominee should even out towards average ability over a large enough sample size. Thus, if motivation has
no effect, we would expect that competitors of each type (HOH, nominee, other) would win POV at a similar rate over our 140-veto competition sample. The expected probability of a win in this case
would simply be one divided by the total number of competitors in that particular competition. Represented below are the expected and observed veto competition win rates by type of participant for Big Brother (USA) seasons 4 (the first season in which all veto competitions were played for the Golden Power of Veto) through 14 and Big Brother (CAN) season 1:
│Participant │Expected Win Frequency^1 │Observed Win Frequency│
│HOH │0.177 │0.162 │
│Nominees │0.177 │0.230 │
│Non-Nominees│0.177 │0.138 │
^1 Probabilities are calculated for the 106 six-person, 12 five-person, and 12 four-person veto competitions included in the sample.
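As a quick check, the expected win frequency in the table can be reproduced from the competition counts given in the footnote:

```python
# Expected per-player POV win frequency, averaged over the sample:
# 106 six-person, 12 five-person, and 12 four-person competitions.
comps = {6: 106, 5: 12, 4: 12}
total_comps = sum(comps.values())
expected = sum(count / players for players, count in comps.items()) / total_comps
print(round(expected, 3))  # 0.177
```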
The statistics show that observed win frequencies for nominees and other houseguests deviate significantly from the expected frequency in the direction we would predict if motivation plays a role.
Specifically, nominees win POV at a greater frequency than random chance whereas non-nominees win POV at a frequency lesser than expected. The win probability of the HOH is similar but still lower
than the expected rate. This suggests that the vantage of the player significantly affects their likelihood of winning the veto competition, most likely due to nominees having increased motivation to
win the competitions.
POV Competition during the double eviction.
In turn, this effect of motivation on veto performance must be factored into the calculations of the efficacy of backdooring. The advantage of backdooring (assuming the victim doesn’t see it coming)
is seen in both the chance they aren’t picked to play in the veto competition, and in their decreased motivation to win that competition when they are picked. If we substitute the observed win
frequencies into our calculations from above, when Helen is frontdoored, she will remain on the block 77% of the time. However, if she were backdoored she would win the veto at a lower rate (12%
versus 23% when a nominee) during the 66% of the time she is randomly selected to play. Thus, a backdoor will result in Helen being on the block on eviction night 91% of the time, a much higher
frequency than when she is frontdoored. When the hidden value of motivation is taken into account, the correct move by 3AM would have been to backdoor Helen; the numbers were in their favor.
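Both calculations — the naive 1/6 model from the opening paragraphs and the motivation-adjusted model using the observed win rates from the table — can be reproduced in a few lines:

```python
# Naive model: every veto player wins with probability 1/6.
naive_frontdoor = 1 - 1 / 6                 # Helen on the block loses the veto ~83% of the time
naive_backdoor = 1 - (2 / 3) * (1 / 6)      # picked 2/3 of the time, then wins 1/6: stays ~89%

# Motivation-adjusted model: observed win rates by vantage point.
p_win_as_nominee = 0.230
p_win_as_non_nominee = 0.138
frontdoor = 1 - p_win_as_nominee                  # ~0.77
backdoor = 1 - (2 / 3) * p_win_as_non_nominee     # ~0.91
```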
Quantum mechanic
Class: Q (quantum mechanic)
Difficulty: 9
Attacks: Claw 1d4 teleport
Base level: 7
Base experience: 92
Speed: 12
Base AC: 3
Base MR: 10
Alignment: 0
Frequency (by normal mechanisms): Average
Genocidable: Yes
Weight: 1450
Nutritional value: 20
Size: Medium
Resistances: Poison
Resistances conveyed: None
A quantum mechanic:
• has a head, a couple of arms, and a torso.
• can teleport.
• is poisonous to eat.
• eats corpses and fruits.
• always starts as hostile.
• can be seen through infravision.
Reference monst.c#line1859
A quantum mechanic is a monster with an attack that causes its target to teleport when it hits. The teleport is subject to magic cancellation. It follows the usual rules of self-teleportation, being
disallowed on non-teleport levels, and permitting you to use teleport control.
Eating a quantum mechanic corpse will toggle intrinsic speed. Quantum mechanics are not considered human (or any other playable race for that matter) so you won't suffer the effects of cannibalism.
The quantum mechanic is the only member of the Q quantum mechanic monster class.
Schroedinger's Cat
There is a 5% chance that a quantum mechanic carries a large box^[1]. Upon opening the box, a housecat named "Schroedinger's Cat" is generated and has a 50% chance of being dead (in which case you
see its corpse inside the box) and a 50% chance of being alive (it will be peaceful). The state of the cat is not determined until the box is opened. In fact, before opening, the large box will be
empty in terms of gameplay. There is nothing special about this cat; it is just a physics joke.
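The box mechanic described above can be sketched as a tiny simulation. This is purely illustrative — it is not NetHack's actual code, and the function name is invented for the example:

```python
import random

def open_schroedinger_box(rng):
    """Sketch of the wiki's description: the cat's state is only
    decided at the moment the box is opened."""
    if rng.random() < 0.5:
        return "live, peaceful housecat"
    return "housecat corpse"

rng = random.Random(0)
outcomes = [open_schroedinger_box(rng) for _ in range(10_000)]
alive = outcomes.count("live, peaceful housecat")
# roughly half of the opened boxes contain a live cat
```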
Messages
"Your position suddenly seems very uncertain!"
A quantum mechanic hit you and you attempted to teleport.
"Your velocity suddenly seems very uncertain!"
You ate a quantum mechanic corpse and your intrinsic speed is about to be toggled.
"You seem slower."
Your intrinsic speed was toggled off.
"You seem faster."
Your intrinsic speed was toggled on.
Strategy
Eating a quantum mechanic corpse is a useful way to gain the speed intrinsic. If you already have intrinsic speed, however, you will lose it, so be careful!
Pet
Having a quantum mechanic as a pet may be useful, but it is also dangerous: if its level is high enough, it may teleport shopkeepers out of their shops, which angers them.
Origin
This monster's name is a play on words with "Quantum Mechanics", a branch of physics. The messages that accompany the monster's teleporting attack and the toggling of intrinsic speed are jokes based
on the Heisenberg uncertainty principle, which states that measuring the position of a particle makes its velocity more uncertain, and vice versa. Toggling the speed intrinsic could also be a
reference to the transitions that occur between discrete energy levels in quantum mechanics.
Schrödinger's cat is a famous thought experiment in quantum mechanics involving imagining locking a cat in a box with a mechanism that has a 50% chance of killing the cat, depending upon the final
state of a quantum system, for example whether an unstable nucleus has decayed within a certain time. The orthodox Copenhagen interpretation of quantum mechanics asserts that the cat in the box is in
a superposition of possible outcomes, in half of which the cat is dead, and half of which it is alive. Only when the box is opened and "observed" does the quantum wavefunction collapse, and the fate
of the cat become determined. Schrödinger asserted that this was absurd, and thus so was the Copenhagen interpretation. Physicists are still divided on this matter. Note that this was a thought
experiment; no actual cats were killed, or even half-killed.
Encyclopaedia entry
These creatures are not native to this universe; they seem
to have strangely derived powers, and unknown motives.
References
[Solved] Examine, whether the following number are rational or ... | Filo
Examine whether the following number is rational or irrational:
Let be the rational number
Squaring on both sides, we get
is a rational number
is a rational number
But we know that is an irrational number
So, we arrive at a contradiction
So is an irrational number.
Relationship of Bandpass and Lowpass Signals
This article assumes that you have some concepts of Signals and Systems and basic knowledge of communication systems.
Content of this article is adapted from “Digital Communication by John G. Proakis”
Digital communication varies a lot from the traditional analog communication system. In digital communication, especially when we talk about designing a practical radio system using Software-Defined
Radios, we design everything on the software side and then transmit the signals over the air using SDRs. It is just like creating sound or any audio data in your computer and using the speaker to
transmit in the air. The only difference here is you can actually listen to what you have designed whereas, in the communication system, you can’t listen (unless it is in audible range).
Lowpass & Bandpass: In a modern communication system, one can design everything on a computer and then use an SDR device to broadcast the signal in the RF band. The signals we design on our computers lie in a low-frequency band and are known as lowpass signals, while the signals actually transmitted over the air are known as bandpass (high-frequency) signals.
Why transmit on Bandpass and not on Lowpass? A straightforward question one can ask is: why do we need to transmit signals at bandpass (high) frequencies rather than at lowpass?
Well, there are multiple reasons. One is to avoid crosstalk and interference. Take a simple example: suppose you are sitting in a classroom or any public gathering and everyone starts speaking at the same time; the result is noise in which no single speaker can be understood. Another reason is the availability of high bandwidth: at high frequencies there is a lot of bandwidth available, which means a high data rate. A further reason is the suitable characteristics of the channel: bandpass signals can be carried over many types of communication channels, such as fiber optics, copper cables, coaxial cable, twisted pair, and wireless channels.
Designing signals in Baseband only: Baseband is another term for lowpass signals. Earlier I said that we design signals at low frequencies and then transmit them at high frequencies using an SDR. Why is that? Why don't we design the signals directly at high frequencies?
Answer: Designing electronic circuits at high frequencies is cumbersome and expensive. One would need to build everything (modulators, differentiators, integrators, encoders, decoders, etc.) at high frequencies, which requires a lot of effort and makes the circuits expensive. The easy solution is to design everything at low frequencies and use a frequency upconverter or an SDR to shift the signal to a high frequency.
This is one of the major reasons why we first study the difference of Bandpass and Lowpass signals and how they can be represented in terms of one another.
Some facts about Bandpass and Lowpass signals:
• All bandpass signals are real signals (i.e., whatever you transmit over the air is a real signal).
• Lowpass (equivalent) signals are in general complex; they can be real, depending on how they are designed.
• Bandpass signals lie at high frequencies, while lowpass signals are always designed around [latex]f_c = 0[/latex].
Representation of Baseband & Bandpass signals:
As discussed earlier, lowpass signals are important because they are easy to design. A big question is: is there a mathematical way to represent these signals in terms of one another?
The answer is yes. Any real signal (such as a bandpass signal) has Hermitian symmetry: for a real signal [latex]x(t)[/latex], the magnitude of [latex]X(f)[/latex] is an even function of frequency and the phase is an odd function. So we can write:
[latex]x(t) \xrightarrow{\mathcal{F}} X(f)[/latex]
[latex]X(f) = X^*(-f)[/latex]
[latex]|X(f)| = |X(-f)|[/latex]
[latex]\angle X(-f) = -\angle X(f)[/latex]
where [latex]X^*(f)[/latex] represents the conjugate of [latex]X(f)[/latex]. From this property, if a signal's spectrum has Hermitian symmetry (i.e., the signal is real), we only need the positive side of the spectrum: the negative side can be reconstructed completely from the information stored in the positive side. The positive side of the spectrum can be written as [latex]X_+(f) = X(f)u(f)[/latex] and the negative side as [latex]X_-(f) = X(f)u(-f)[/latex], where [latex]u(f)[/latex] is the unit step function. Since [latex]X(f)[/latex] is the combination of [latex]X_+(f)[/latex] and [latex]X_-(f)[/latex], we can write:
[latex]X(f) = X_+(f) + X_-(f)[/latex]
[latex]X(f) = X_+(f) + X^*_+(-f)[/latex]
Where we have used Hermitian symmetry to represent [latex]X_-(f)[/latex] as [latex]X^*_+(-f)[/latex]: the conjugate accounts for the odd phase, and the [latex]-f[/latex] accounts for the flip of the magnitude. In general, the bandpass signal [latex]x(t)[/latex] has the following time-domain representation:
[latex]x(t) = A(t) \cos (2\pi f_c t + \phi(t))[/latex]
This representation can be used to describe any analog communication scheme by assuming that both [latex]A(t)[/latex] and [latex]\phi(t)[/latex] are slowly varying signals.
1. AM: set [latex]\phi(t) = 0[/latex] and [latex]A(t) = 1+\mu m(t)[/latex], where [latex]m(t)[/latex] is the message signal.
2. PM: set [latex]\phi(t) = k m(t)[/latex] and [latex]A(t) = A[/latex].
3. FM: set [latex]\phi(t) = k \int_{-\infty}^t m(\tau) d\tau[/latex] and [latex]A(t) = A[/latex].
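The three special cases above can all be generated numerically from the general form [latex]A(t)\cos(2\pi f_c t + \phi(t))[/latex]. A small sketch (the carrier and message frequencies and the constants [latex]\mu[/latex] and [latex]k[/latex] below are arbitrary choices for illustration):

```python
import numpy as np

fs = 10_000                       # sample rate (Hz)
t = np.arange(0, 0.1, 1 / fs)     # 0.1 s of time
fc = 1_000                        # carrier frequency (Hz)
m = np.sin(2 * np.pi * 20 * t)    # slowly varying message m(t)

carrier_phase = 2 * np.pi * fc * t
am = (1 + 0.5 * m) * np.cos(carrier_phase)             # A(t) = 1 + mu*m(t), phi(t) = 0
pm = np.cos(carrier_phase + 0.8 * m)                   # phi(t) = k*m(t)
fm = np.cos(carrier_phase + 2 * np.pi * 50 * np.cumsum(m) / fs)  # phi(t) = k * integral of m
```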
Time-domain representation of [latex]x_l(t)[/latex] and [latex]x(t)[/latex]: Before deriving the time-domain representations of [latex]x_l(t)[/latex] and [latex]x(t)[/latex], let us first describe the concept of the Hilbert transform.
The time-domain representation of [latex]X_+(f)[/latex] is:
[latex]\begin{split}
x_+(t) =& \mathcal{F}^{-1} \{X(f)u(f)\} \\
=& x(t) * \frac{1}{2} \left\{ \delta(t) + \frac{j}{\pi t}\right\} \\
=& \frac{1}{2} x(t) + \frac{j}{2} x(t) * \frac{1}{\pi t} \\
=& \frac{1}{2} x(t) + \frac{j}{2} \hat{x}(t)
\end{split}[/latex]
Where [latex]\hat{x}(t)[/latex] denotes the Hilbert transform of [latex]x(t)[/latex]. If the mathematics of the Hilbert transform is unfamiliar, just remember this: the Hilbert transform introduces a phase shift of [latex]-\frac{\pi}{2}[/latex] for [latex]f>0[/latex] and [latex]\frac{\pi}{2}[/latex] for [latex]f<0[/latex] on [latex]x(t)[/latex]. In the frequency domain this means multiplying [latex]X(f)[/latex] by [latex]-j\text{sgn}(f)[/latex], so we can write [latex]\hat{X}(f) = -j\text{sgn}(f)X(f)[/latex], where [latex]\text{sgn}(f)[/latex] is the signum function. Now let us represent [latex]x_l(t)[/latex]. If the spectrum of the lowpass signal is [latex]X_l(f)[/latex], then in terms of [latex]X_+(f)[/latex] we can write:
[latex]\begin{split}
X_l(f) =& 2X_+(f+f_o) \\
=& 2X(f+f_o) u(f+f_o)
\end{split}[/latex]
Where [latex]f_o[/latex] is the center frequency of the bandpass signal, and the factor of 2 compensates for the energy split: when a lowpass signal is shifted up to bandpass, its content appears on both the negative and positive sides of the spectrum, halving the energy on each side. Taking the inverse Fourier transform of [latex]X_l(f)[/latex], we get:
[latex]\begin{split}
x_l(t) =& \mathcal{F}^{-1} \{X_l(f)\} \\
=& \mathcal{F}^{-1} \{2X_+(f+f_o) \} \\
=& 2x_+(t) e^{-j2\pi f_o t} \\
=& (x(t) + j \hat{x}(t) ) e^{-j2 \pi f_o t}
\end{split}[/latex]
Multiplying both sides by [latex]e^{j2\pi f_o t}[/latex]:
[latex]x_l(t) e^{j2\pi f_o t} = x(t) + j \hat{x}(t)[/latex]
Taking the real part of both sides:
[latex]x(t) = \text{Re} \{ x_l(t) e^{j2\pi f_o t}\}[/latex]
and earlier we have:
[latex]x_l(t) = \{ x(t) + j \hat{x}(t) \}e^{-j2\pi f_o t}[/latex]
The above two equations give the complete mathematical relationship between bandpass and baseband signals. In practice, we design the lowpass signal [latex]x_l(t)[/latex], shift it to [latex]f_o[/latex] by multiplying it by [latex]e^{j2\pi f_o t}[/latex], and take the real part; the result is a bandpass signal. To recover the baseband signal from a received bandpass signal, we take its Hilbert transform, multiply it by [latex]j[/latex], add the result to the received signal, and multiply the sum by [latex]e^{-j2\pi f_o t}[/latex] to recover [latex]x_l(t)[/latex]. This is the fundamental working principle of all practical IQ modulators and Software-Defined Radios. If you have any questions or suggestions related to this article, please leave us a comment.
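The relationship can be verified numerically. The sketch below builds the analytic signal [latex]x(t) + j\hat{x}(t)[/latex] by zeroing the negative frequencies (the same idea as the [latex]X_+(f)[/latex] construction), down-converts to obtain [latex]x_l(t)[/latex], then up-converts and takes the real part to recover the original bandpass signal; the test signal is an arbitrary AM waveform chosen for illustration:

```python
import numpy as np

def analytic_signal(x):
    """Return x(t) + j*xhat(t), computed by keeping only positive frequencies.
    Assumes an even-length real input."""
    N = len(x)
    X = np.fft.fft(x)
    h = np.zeros(N)
    h[0] = 1.0           # DC stays
    h[N // 2] = 1.0      # Nyquist bin stays
    h[1:N // 2] = 2.0    # positive frequencies doubled, negative frequencies zeroed
    return np.fft.ifft(X * h)

fs, fo = 8_000, 1_000
t = np.arange(0, 0.05, 1 / fs)
x = (1 + 0.5 * np.cos(2 * np.pi * 40 * t)) * np.cos(2 * np.pi * fo * t)  # real bandpass signal

x_l = analytic_signal(x) * np.exp(-1j * 2 * np.pi * fo * t)   # complex baseband x_l(t)
x_rec = np.real(x_l * np.exp(1j * 2 * np.pi * fo * t))        # Re{x_l(t) e^{j2*pi*fo*t}}
# x_rec matches the original bandpass signal up to numerical precision
```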
The mathematical writings of Évariste Galois | EMS Press
Corrected 2nd printing, September 2013
• Peter M. Neumann
University of Oxford, UK
A subscription is required to access this book.
Although Évariste Galois was only 20 years old when he died, shot in a mysterious early-morning duel in 1832, his ideas, when they were published 14 years later, changed the course of algebra. He
invented what is now called Galois Theory, the modern form of what was classically the Theory of Equations. For that purpose, and in particular to formulate a precise condition for solubility of
equations by radicals, he also invented groups and began investigating their theory. His main writings were published in French in 1846 and there have been a number of French editions culminating in
the great work published by Bourgne & Azra in 1962 containing transcriptions of every page and fragment of the manuscripts that survive. Very few items have been available in English up to now.
The present work contains English translations of almost all the Galois material. They are presented alongside a new transcription of the original French, and are enhanced by three levels of
commentary. An introduction explains the context of Galois' work, the various publications in which it appears, and the vagaries of his manuscripts. Then there is a chapter in which the five
mathematical articles published in his lifetime are reprinted. After that come the Testamentary Letter and the First Memoir (in which Galois expounded the ideas now called Galois Theory), which are
the most famous of the manuscripts. There follow the less well known manuscripts, namely the Second Memoir and the many fragments. A short epilogue devoted to myths and mysteries concludes the text.
The book is written as a contribution to the history of mathematics but with mathematicians as well as historians in mind. It makes available to a wide mathematical and historical readership some of
the most exciting mathematics of the first half of the 19th century, presented in its original form. The primary aim is to establish a text of what Galois wrote. Exegesis would fill another book or
books, and little of that is to be found here.
This work will be a resource for research in the history of mathematics, especially algebra, as well as a sourcebook for those many mathematicians who enliven their student lectures with reliable
historical background.
Direction cosines and direction ratios of a vector
Consider a vector on the x-y-z coordinate system, as shown below. The angles made by this line with the positive directions of the coordinate axes, θ[x], θ[y], and θ[z], are used to find the direction cosines of the line: cos θ[x], cos θ[y], and cos θ[z]. Likewise, the direction cosines of the line in the opposite direction are -cos θ[x], -cos θ[y], and -cos θ[z].
Note that the direction angles θ[x], θ[y], and θ[z] satisfy the condition 0 < θ[x], θ[y], θ[z] < π. Also, the sum θ[x] + θ[y] + θ[z] ≠ 2π in general, because these angles do not lie in the same plane.
For a line along the x-axis, the direction cosines are cos 0, cos π/2, cos π/2 = 1, 0, 0.
For a line along the y-axis, the direction cosines are cos π/2, cos 0, cos π/2 = 0, 1, 0.
For a line along the z-axis, the direction cosines are cos π/2, cos π/2, cos 0 = 0, 0, 1.
Consider a point P (x, y, z) in 3D space such that OP = $\overrightarrow{r}$, and let l, m and n be the direction cosines of $\overrightarrow{r}$. Then we can conclude that: x = l$\vert\overrightarrow{r}\vert$, y = m$\vert\overrightarrow{r}\vert$, and z = n$\vert\overrightarrow{r}\vert$.
The direction cosines satisfy the identity: $l^{2}+m^{2}+n^{2}=1$
We know that l = cosθ$_{x}$, m = cosθ$_{y}$, and n = cosθ$_{z}$
And, cos$^{2}$θ$_{x}$ + cos$^{2}$θ$_{y}$ + cos$^{2}$θ$_{z}$ = 1, since
cos$^{2}$θ$_{x}$ + cos$^{2}$θ$_{y}$ + cos$^{2}$θ$_{z}$ = $(\frac{x}{\vert\overrightarrow{r}\vert})^{2}+(\frac{y}{\vert\overrightarrow{r}\vert})^{2}+(\frac{z}{\vert\overrightarrow{r}\vert})^{2}=\frac{x^{2}+y^{2}+z^{2}}{r^{2}}=\frac{r^{2}}{r^{2}}=1$
Now consider two points on a line segment $\overrightarrow{AB}$, with coordinates A (x$_{1}$, y$_{1}$, z$_{1}$) and B (x$_{2}$, y$_{2}$, z$_{2}$). Then the direction cosines can be written as $(\frac{x_{2}-x_{1}}{\vert\overrightarrow{r}\vert}, \frac{y_{2}-y_{1}}{\vert\overrightarrow{r}\vert}, \frac{z_{2}-z_{1}}{\vert\overrightarrow{r}\vert})$.
In this case, $\vert \overrightarrow{r}\vert=\sqrt{(x_{2}-x_{1})^{2}+(y_{2}-y_{1})^{2}+(z_{2}-z_{1})^{2}}$
Example: A line makes 30°, 60° and 90° with the x, y and z axes respectively. Find their direction cosines.
l = cosθ$_{x}$ = cos 30° = $\frac{\sqrt{3}}{2}$
m = cosθ$_{y}$ = cos 60° = $\frac{1}{2}$
and n = cosθ$_{z}$= cos 90° = 0
The direction cosines are: $(\frac{\sqrt{3}}{2}, \frac{1}{2}, 0)$
Example: Consider a point P $(\sqrt{3},1, 2\sqrt{3} )$ in a 3D space, find the direction cosines of $\overrightarrow{OP}$
Solution: The direction ratios of point P= $(\sqrt{3},1, 2\sqrt{3} )$
Recall that $l=\frac{x}{\overrightarrow{r}}=\frac{x}{\sqrt{x^{2}+y^{2}+z^{2}}}$
In this case, $\overrightarrow{r}=\overrightarrow{OP}$ and (x, y, z) = $(\sqrt{3},1, 2\sqrt{3} )$
$\vert \overrightarrow{OP}\vert =\sqrt{(\sqrt{3}-0)^{2}+(1-0)^{2}+(2\sqrt{3}-0)^{2}}=\sqrt{3+1+12}=4$
The direction cosines of $\overrightarrow{OP}$ are: $(\frac{\sqrt{3}}{4},\frac{1}{4},\frac{2\sqrt{3}}{4})=(\frac{\sqrt{3}}{4},\frac{1}{4},\frac{\sqrt{3}}{2})$
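The worked example above can be checked with a few lines of code (a generic helper; the coordinates are the ones from the second example):

```python
import math

def direction_cosines(p, q=(0.0, 0.0, 0.0)):
    """Direction cosines (l, m, n) of the vector from point q to point p."""
    d = [a - b for a, b in zip(p, q)]
    r = math.sqrt(sum(c * c for c in d))   # magnitude |r|
    return tuple(c / r for c in d)

l, m, n = direction_cosines((math.sqrt(3), 1.0, 2 * math.sqrt(3)))
# l = sqrt(3)/4, m = 1/4, n = sqrt(3)/2, and l^2 + m^2 + n^2 = 1
```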
I would like to make an IF index collect
I am working on creating an IF(ISBLANK({cross sheet reference of column}), " ", Index(Collect({ NAME},{cross sheet Helper column}, helpercolumn@row,{cross sheet status}, "Status, {CrossSheetDate},
I get an error for the CONTAINS part, and the formula works without the date, but I am trying to use the date as a trigger. If anyone has any solutions, I would appreciate it.
Best Answers
• Try changing your CONTAINS to CONTAINS("/", @cell). You're already specifying the range, and using CONTAINS as the criteria for that range. Using @cell within CONTAINS tells the formula to
consider the CrossSheetDate value in every row that meets the previous sets of criteria range/criteria.
Alternatively, if CrossSheetDate is really a date type column with date values in it, you could use {CrossSheetDate}, ISDATE(@cell) instead of the CONTAINS. The ISDATE function just checks to see
if the value in that column is a date value.
Jeff Reisman
Link: Smartsheet Functions Help Pages Link: Smartsheet Formula Error Messages
If my answer helped solve your issue, please mark it as accepted so that other users can find it later. Thanks!
• In addition to @Jeff Reisman's suggestion, you also need a closing parenthesis for the COLLECT function before specifying the row number in the INDEX function.
=INDEX(COLLECT(............................., CONTAINS(..........)), 1)
• @Jeff Reisman I almost missed it because of that curly bracket in the original post. Hahaha
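Putting the two accepted answers together, the corrected formula would look roughly like the sketch below. The cross-sheet reference and column names are placeholders reconstructed from the original post, so adjust them to match your own sheets:

```
=IF(ISBLANK({Cross Sheet Reference}), "",
   INDEX(COLLECT({NAME},
                 {Cross Sheet Helper}, HelperColumn@row,
                 {Cross Sheet Status}, "Status",
                 {CrossSheetDate}, ISDATE(@cell)), 1))
```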
Rostam SAEED | Professor (Full) | Professor | Salahaddin University - Erbil, Erbil | SUH | Department of Mathematics | Research profile
Volterra-Fredholm Integral Equations; Inverse Problem; Numerical solution of PDE
Investigation of Different Kinds of Volterra-Fredholm Integral Equations by Different Spline Functions; iterative methods for solving systems of nonlinear equations; and bubble dynamics
Salahaddin University - Hawler-College of Science
• Professor of Numerical Analysis
• Calculus, Linear Algebra, Integral Equations, Approximation Theory, Numerical Analysis, Functional Analysis
March 2017 Puzzle Periodical - Lucky Seven
News | March 21, 2017
By David G., NSA Mathematician
Each die of a pair of non-identical dice has six faces, but some numbers are missing, others are duplicated, and some faces may have more than six spots.
The dice can roll every number from 2 to 12.
What is the largest possible probability of rolling a 7?
If one die has X different numbers and the other die has Y different numbers, there are at most XY possible totals. Since we need 11 totals from 2 to 12, XY must be at least 11. In particular, at
least one die must have at least four different numbers.
If the first die has six different numbers, then any number on the second die has at most a 1/6 probability of getting the right number on the first die to get a total of 7. Therefore, the
probability of rolling a 7 is at most 1/6.
If the first die has five different numbers, with one appearing twice, the second die must have at least three different numbers (since 5 × 2 = 10 < 11 possible totals). At most one number on the second die totals 7 with the duplicated number on the first die; that number can appear at most four times, giving 2 × 4 = 8 ways to roll a 7. Every other number on the second die contributes at most one way, so at most 8 + 1 + 1 = 10 of the 36 rolls total 7, a probability of 10/36.
If each die has four different numbers, the face counts can be split 3,1,1,1 or 2,2,1,1. Each number on one die can pair with at most one number on the other die to make a 7. The largest possible sum of products of paired face counts, which gives the number of ways to roll a 7, is 3×3 + 1×1 + 1×1 + 1×1 = 12. The maximum probability is thus 12/36.
If the first die has four different numbers and the second has three, there are only 12 different value pairs, so at most two of those value pairs may total 7 if 11 distinct totals are to remain available.
Therefore, the largest possible sum of products is 3×4 + 1×1 = 13, not 3×4 + 1×1 + 1×1 = 14. And 13 is achievable, with one die having faces 1,2,6,6,6,7 and the other 1,1,1,1,3,5 (or 1,2,2,2,6,7 and 1,3,5,5,5,5); this gives a probability of 13/36, the largest possible.
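The construction above is small enough to verify exhaustively; a quick sketch in Python, using the dice given in the solution:

```python
from itertools import product
from fractions import Fraction

# The two dice from the solution above.
die_a = [1, 2, 6, 6, 6, 7]
die_b = [1, 1, 1, 1, 3, 5]

# Enumerate all 36 equally likely rolls.
totals = [a + b for a, b in product(die_a, die_b)]

# Every total from 2 to 12 must be attainable.
assert set(range(2, 13)) <= set(totals)

p_seven = Fraction(totals.count(7), len(totals))
print(p_seven)  # 13/36
```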
What is Form Factor and Peak Factor?
In this article, we will discuss the form factor and the peak factor, two important measures of alternating quantities. The form factor describes the distortion and heating effect of an alternating signal, while the peak factor gives information about the maximum value of the alternating quantity and thus helps in differentiating two waveforms.
What is a Form Factor?
In electrical engineering, the waveforms of two different alternating quantities with the same maximum (peak) value and the same frequency may still look different; that is, they may differ in shape. This difference in the shape of periodic waveforms of equal amplitude and frequency is expressed by the form factor (FF).
For an alternating quantity such as alternating current or voltage, the form factor is defined as the ratio of the RMS (Root Mean Square) value to the average value of the alternating quantity.
`"Form Factor" = ("RMS Value")/("Average Value")`
For alternating current, the form factor is expressed as,
`"Form Factor" = I_(RMS)/I_(av)`
For alternating voltage, the form factor is expressed as,
`"Form Factor" = V_(RMS)/V_(av)`
The form factor formulae and values of different types of alternating quantities are given as follows-
(1). The form factor of a sinusoidal (sine) wave is:
`"Form factor" = (π/2)/sqrt(2) = 1.11`
(2). The form factor of a half-wave rectified sine wave is:
`"Form factor" = π/2 = 1.57`
(3). The form factor of a full-wave rectified sine wave is:
`"Form factor" = (π/2)/sqrt(2) = 1.11`
(4). The form factor of a square wave is:
`"Form factor" = 1`
(5). The form factor of a triangular wave is:
`"Form factor" = 2/sqrt(3) = 1.155`
(6). The form factor of a saw-tooth wave is:
`"Form factor" = 2/sqrt(3) = 1.155`
What is a Peak Factor?
The peak factor gives an idea of the maximum value that the waveform of an alternating quantity (current or voltage) can reach. It also helps in relating an alternating signal to its equivalent DC (RMS) value.
The peak factor of an alternating quantity may be defined as under-
The ratio of the maximum or peak value to the RMS value of an alternating quantity is known as the peak factor of the alternating quantity. It is also known as the crest factor because the peak value
is also called the crest value.
Mathematically, the peak factor or crest factor can be expressed as,
`"Peak factor" = ("Peak value")/("RMS value")`
For an alternating current, the peak factor is given by,
`"Peak factor"=I_m/I_(rms)`
For an alternating voltage, the peak factor is given by,
`"Peak factor" = V_m/V_(rms)`
Where I[m] and V[m] are the maximum or peak values of the current and voltage respectively, and I[rms] and V[rms] are the root mean square (RMS) values of the current and voltage respectively.
The peak factor of a sinusoidal alternating current is:
`"Peak factor" = I_m/(I_m/sqrt(2)) = sqrt(2) = 1.414`
The peak factor of a sinusoidal alternating voltage is:
`"Peak factor" = V_m/(V_m/sqrt(2)) = sqrt(2) = 1.414`
Numerical Example – The RMS value of a sinusoidal voltage is 141.4 V and its average value is 127.4 V. If the maximum value of the given voltage is 200 V, determine the form factor and peak factor of the voltage waveform.
Solution – Given data,
`V_(RMS) = 141.4 "V"`
`V_(av) = 127.4 "V"`
`V_m = 200 "V"`
Therefore, the form factor of the given voltage waveform is:
`"Form factor" = V_(RMS)/V_(av) = 141.4/127.4 = 1.11`
The peak factor of the voltage waveform is:
`"Peak factor" = V_m/V_(RMS) = 200/141.4 = 1.414`
Hence, for the given sinusoidal voltage wave, the form factor is 1.11 and the peak factor is 1.414.
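The example's numbers can be reproduced numerically from samples of one waveform cycle; a minimal sketch (the sampling density is an arbitrary choice, and the average value is taken as the rectified average, per the usual AC convention):

```python
import math

# Sample one full cycle of a 200 V peak sine wave.
N = 100_000
v = [200 * math.sin(2 * math.pi * k / N) for k in range(N)]

v_rms = math.sqrt(sum(x * x for x in v) / N)  # ≈ 200/sqrt(2) ≈ 141.4 V
v_av = sum(abs(x) for x in v) / N             # rectified average ≈ 2*200/π ≈ 127.3 V
v_peak = max(v)                               # ≈ 200 V

print(round(v_rms / v_av, 2))    # form factor: 1.11
print(round(v_peak / v_rms, 3))  # peak factor: 1.414
```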
Thus, in the above sections of this article, we discussed what the form factor and peak factor are. The form factor is related to the RMS value and average value of the alternating quantity, and the
peak factor is related to the peak and RMS values of the alternating quantity.
Monitoring Animal Populations and their Habitats: A Practitioner's Guide
11 Data Analysis in Monitoring
Plant and animal data come in many forms including indices, counts, and occurrences. The diversity of these data types can present special challenges during analysis because they can follow different
distributions and may be more or less appropriate for different statistical approaches. As an additional complication, monitoring data are often taken from sites that are in close proximity or
surveys are repeated in time, and thus special care must be taken regarding assumptions of independence. In this chapter we will discuss some essential features of these data types, data
visualization, different modeling approaches, and paradigms of inference.
This chapter is designed to help you gain bio-statistical literacy and build an effective framework for the analysis of monitoring data. These issues are rather complex, and as such, only key
elements and concepts of effective data analysis are discussed. Many books have been written on the topic of analyzing ecological data. Thus, it would be impossible to effectively cover the full
range of the related topics in a single chapter. The purpose, therefore, of the data analysis segment of this book is to serve as an introduction to some of the classical and contemporary techniques
for analyzing the data collected by monitoring programs. After reading this chapter, if you wish to broaden your understanding of data analysis and learn to apply it with confidence in ecological
research and monitoring, we recommend the following texts that cover many of the classical approaches: Cochran (1977), Underwood (1997), Thompson et al. (1998), Zar (1999), Crawley (2005, 2007),
Scheiner and Gurevitch (2001), Quinn and Keough (2002), Gotelli and Ellison (2004), and Bolker (2008). For a more in-depth and analytical coverage of some of the contemporary approaches we recommend
Williams et al. (2002), MacKenzie et al. (2006), Royle and Dorazio (2008) and Thomson et al. (2009).
The field of data analysis in ecology is a rapidly growing enterprise, as is data management (Chapter 10), and it is difficult to keep abreast of its many developments. Consequently, one of the first
steps in developing a statistically sound approach to data analysis is to consult with a biometrician in the early phases of development of your monitoring plan. One of the most common (and oldest)
lamentations of many statisticians is that people with questions of data analysis seek advice after the data have been collected. This has been likened to a post-mortem examination (R.A. Fisher);
there is only so much a biometrician can do and/or suggest after the data have been collected. Consulting a biometrician is imperative in almost every phase of the monitoring program, but a strong
understanding of analytical approaches from the start will help ensure a more comprehensive and rigorous scheme for data collection and analysis.
Data Visualization I: Getting to Know Your Data
The initial phase of every data analysis should include exploratory data evaluation (Tukey 1977). Once data are collected, they can exhibit a number of different distributions. Plotting your data and
reporting various summary statistics (e.g., mean, median, quantiles, standard error, minimums, and maximums) allows you to identify the general form of the data and possibly identify erroneous
entries or sampling errors. Anscombe (1973) advocates making data examination an iterative process by utilizing several types of graphical displays and summary statistics to reveal unique features
prior to data analysis. The most commonly used displays include normal probability plots, density plots (histograms, dit plots), box plots, scatter plots, bar charts, point and line charts, and
Cleveland dotplots (Cleveland 1985, Elzinga et al. 1998, Gotelli and Ellison 2004, Zuur et al. 2007). Effective graphical displays show the essence of the collected data and should (Tufte 2001):
1. Show the data,
2. Induce the viewer to think about the substance of the data rather than about methodology, graphic design, or the technology of graphic production,
3. Avoid distorting what the data have to say,
4. Present many numbers in a small space,
5. Make large data sets coherent and visually informative,
6. Encourage the eye to compare different pieces of data and possibly different strata,
7. Reveal the data at several levels of detail, from a broad overview to the fine structure,
8. Serve a reasonably clear purpose: description, exploration, or tabulation,
9. Be closely integrated with the numerical descriptions (i.e., summary statistics) of a data set.
In some cases, exploring different graphical displays and comparing the visual patterns of the data will actually guide the selection of the statistical model (Anscombe 1973, Hilborn and Mangel 1997,
Bolker 2008). For example, refer to the four graphs in Figure 11.1. They all display relationships that produce identical outputs if analyzed using an ordinary least squares (OLS) regression analysis
(Table 11.1). Yet, whereas a simple regression model may reasonably well describe the trend in case A, its use in the remaining three cases is not appropriate, at least not without an adequate
examination and transformation of the data. Case B could be best described using a logarithmic rather than a linear model and the relationship in case D is spurious, resulting from connecting a
single point to the rest of the data cluster. Cases C and D also reveal the presence of outliers (i.e., extreme values that may have been missed without a careful examination of the data). In these
cases, the researcher should investigate these outliers to see if their values were true samples or an error in data collection and/or entry. This simple example illustrates the value of a visual
scrutiny of data prior to data analysis.
Figure 11.1. Relationships between the four sets of X-Y pairs (Redrafted from Anscombe 1973).
Table 11.1. Four hypothetical data sets of X-Y variable pairs (Modified from Anscombe 1973).

       A             B             C             D
  X      Y      X      Y      X      Y      X      Y      Analysis output
 10.0   8.04   10.0   9.14   10.0   7.46    8.0   6.58
  8.0   6.95    8.0   8.14    8.0   6.77    8.0   5.76    N = 11
 13.0   7.58   13.0   8.74   13.0  12.74    8.0   7.71    Mean of Xs = 9.0
  9.0   8.81    9.0   8.77    9.0   7.11    8.0   8.84    Mean of Ys = 7.5
 11.0   8.33   11.0   9.26   11.0   7.81    8.0   8.47    Regression line: Y = 3 + 0.5X
 14.0   9.96   14.0   8.10   14.0   8.84    8.0   7.04    Regression SS = 27.50
  6.0   7.24    6.0   6.13    6.0   6.08    8.0   5.25    r = 0.82
  4.0   4.26    4.0   3.10    4.0   5.39   19.0  12.50    R^2 = 0.67
 12.0  10.84   12.0   9.13   12.0   8.15    8.0   5.56
  7.0   4.82    7.0   7.26    7.0   6.42    8.0   7.91
  5.0   5.68    5.0   4.74    5.0   5.73    8.0   6.89
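That the four data sets share one analysis output can be confirmed in a few lines; a sketch using data set A from the table (the other columns give the same results):

```python
import math

# Data set A from Table 11.1 (Anscombe 1973).
xs = [10.0, 8.0, 13.0, 9.0, 11.0, 14.0, 6.0, 4.0, 12.0, 7.0, 5.0]
ys = [8.04, 6.95, 7.58, 8.81, 8.33, 9.96, 7.24, 4.26, 10.84, 4.82, 5.68]

n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
sxx = sum((x - mx) ** 2 for x in xs)
syy = sum((y - my) ** 2 for y in ys)
sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))

slope = sxy / sxx               # OLS slope
intercept = my - slope * mx     # OLS intercept
r = sxy / math.sqrt(sxx * syy)  # correlation coefficient
reg_ss = sxy ** 2 / sxx         # regression sum of squares

print(round(mx, 1), round(my, 1))            # 9.0 7.5
print(round(intercept, 1), round(slope, 2))  # 3.0 0.5
print(round(r, 2), round(reg_ss, 1))         # 0.82 27.5
```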
The fact that, under some circumstances, visual displays alone can provide an adequate assessment of the data underscores the value of visual analysis to an even greater extent. A strictly visual (or
graphical) approach may even be superior to formal data analyses in situations with large quantities of data (e.g., detailed measurements of demographics or vegetation cover) or if data sets are
sparse (e.g. in the case of inadequate sampling or pilot investigations). For example, maps can be effectively used to present a great volume of information. Tufte (2001) argues that maps are
actually the only means to display large quantities of data in a relatively small amount of space and still allow a meaningful interpretation of the information. In addition, maps allow a visual
analysis of data at different levels of temporal and spatial resolution and an assessment of spatial relationships among variables that can help identify potential causes of the detected pattern.
Other data visualization techniques also provide practical information. A simple assessment of the species richness of a community can be accomplished by presenting the total number of species
detected during the survey. This is made more informative by plotting the cumulative number of species detected against an indicator of sampling effort such as the time spent sampling or the number
of samples taken (i.e., detection curve or empirical cumulative distribution functions). These sampling effort curves can give us a quick and preliminary assessment of how well the species richness
of the investigated community has been sampled. A steep slope of the resulting curve would suggest the presence of additional, unknown species whereas a flattening of the curve would indicate that
most species have been accounted for (Magurran 1988, Southwood 1992). Early on in the sampling, these types of sampling curves are recommended because they can provide some rough estimates of the
minimum amount of sampling effort needed.
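A detection curve of this kind is simple to compute from raw survey records; a sketch with hypothetical data (the species codes and sample order are invented for illustration):

```python
# Hypothetical survey records: the species detected in each successive sample.
samples = ["A", "B", "A", "C", "B", "D", "A", "C", "E", "D", "A", "E"]

seen = set()
curve = []
for species in samples:
    seen.add(species)
    curve.append(len(seen))

# Cumulative species count per unit of sampling effort; a flattening
# tail suggests most species in the community have been detected.
print(curve)  # [1, 2, 2, 3, 3, 4, 4, 4, 5, 5, 5, 5]
```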
Constructing species abundance models such as the log normal distribution, log series, MacArthur's broken stick, or geometric series model can provide a visual profile of particular research areas
(Southwood 1992). Indeed, the different species abundance models describe communities with distinct characteristics. For example, mature undisturbed systems characterized by higher species richness
typically display a log normal relationship between the number of species and their respective abundances. On the other hand, early successional sites or environmentally stressed communities (e.g.,
pollution) are characterized by geometric or log series species distribution models (Southwood 1992).
The use of confidence intervals presents another attractive approach to exploratory data analysis. Some even argue that confidence intervals represent a more meaningful and powerful alternative to
statistical hypothesis testing since they give an estimate of the magnitude of an effect under investigation (Steidl et al. 1997, Johnson 1999, Stephens et al. 2006). In other words, determining the
confidence intervals is generally much more informative than simply determining the P-value (Stephens et al. 2006). Confidence intervals are widely applicable and can be placed on estimates of
population density, observed effects of population change in samples taken over time, or treatment effects in perturbation experiments. They are also commonly used in calculations of effect size in
power or meta-analysis (Hedges and Olkin 1985, Gurevitch et al. 2000, Stephens et al. 2006).
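As an illustration, a 95% confidence interval on a mean density estimate takes only a few lines; the plot densities below are hypothetical, and the t critical value is the two-sided 95% value for n − 1 = 7 degrees of freedom:

```python
from statistics import mean, stdev

# Hypothetical density estimates (individuals per ha) from 8 sample plots.
densities = [4.2, 5.1, 3.8, 4.9, 5.5, 4.4, 4.0, 5.2]

n = len(densities)
m = mean(densities)
se = stdev(densities) / n ** 0.5  # standard error of the mean
t_crit = 2.365                    # two-sided 95% t value for 7 df

half_width = t_crit * se
print(f"mean = {m:.2f}, 95% CI = ({m - half_width:.2f}, {m + half_width:.2f})")
```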
Despite its historical popularity and attractive simplicity, however, visual analysis of the data does carry some caveats and potential pitfalls for hypothesis testing. For example, Hilborn and
Mangel (1997) recommended plotting the data in different ways to uncover “plausible relationships.” At first glance, this appears to be a reasonable and innocuous recommendation, but one should be
wary of letting the data uncover plausible relationships. That is, it is entirely possible to create multiple plots between a dependent variable (Y) and multiple explanatory variables (X[1], X[2], X
[3],…,X[N]) and discover an unexpected effect or pattern that is an artifact of that single data set, not of the more important biological process that generated the sample data (i.e., spurious
effects) (Anderson et al. 2001). These types of spurious effects are most likely when dealing with small or limited sample sizes and many explanatory variables. Plausible relationships and hypotheses
should be developed in an a priori fashion with significant input from a monitoring program’s conceptual system, stakeholder input, and well-developed objectives.
Data Visualization II: Getting to Know Your Model
Most statistical models are based on a set of assumptions that are necessary for models to properly fit and describe the data. If assumptions are violated, statistical analyses may produce erroneous
results (Sokal and Rohlf 1994, Krebs 1999). Traditionally, researchers are most concerned with the assumptions associated with parametric tests (e.g., ANOVA and regression analysis) since these are
the most commonly used analyses. The description below may be used as a basic framework for assessing whether or not data conform to parametric assumptions.
Independence of Data Points
The essential condition of most statistical tests is the independence and random selection of data points in space and time. In many ecological settings, however, data points can be counts of
individuals or replicates of treatment units in manipulative studies and one must think about spatial and temporal dependency among sampling units. Dependent data are more alike than would be
expected by random selection alone. Intuitively, if two observations are not independent then there is less information content between them. Krebs (1999) argued that if the assumption of
independence is violated, the chosen probability for Type I error (α) cannot be achieved. ANOVA and linear regression techniques are generally sensitive to this violation (Sokal and Rohlf 1994, Krebs
1999). Autocorrelation plots can be used to visualize the correlation of points across space (e.g., spatial correlograms) (Fortin and Dale 2005) or as a time series (Crawley 2007). Plots should be
developed using the actual response points as well as the residuals of the model to look for patterns of autocorrelation.
Homogeneity of Variances
Parametric models assume that sampled populations have similar variances even if their means are different. This assumption becomes critical in studies comparing different groups of organisms,
treatments, or sampling intervals and it is the responsibility of the researcher to make these considerations abundantly clear in the protocols of such studies. If the sample sizes are equal then
parametric tests are fairly robust to the departure from homoscedasticity (i.e., equal variance of errors across the data) (Day and Quinn 1989, Sokal and Rohlf 1994). In fact, an equal sample size
among different treatments or areas should be ensured whenever possible since most parametric tests are overly sensitive to violations of assumptions in situations with unequal sample sizes (Day and
Quinn 1989). The best approach for detecting a violation in homoscedasticity, however, is plotting the residuals of the analysis against predicted values (i.e., residual analysis) (Crawley 2007).
This plot can reveal the nature and severity of the potential disagreement/discord between variances (Figure 11.2), and is a standard feature in many statistical packages. Indeed, such visual
inspection of the model residuals, in addition to the benefits outlined above, can help determine not only if there is a need for data transformation, but also the type of the distribution. Although
several formal tests exist to determine the heterogeneity of variances (e.g., Bartlett’s test, Levine’s test), these techniques assume a normal data distribution, which reduces their utility in most
ecological studies (Sokal and Rohlf 1994).
Figure 11.2. Three hypothetical residual scatters. In Case A, the variance is proportional to predicted values, which suggests a Poisson distribution. In Case B, the variance increases with the square
of expected values and the data approximate a log-normal distribution. The severe funnel shape in Case C indicates that the variance is proportional to the fourth power of predicted values (with
permission from Sabin and Stafford 1990).
Although parametric statistics are fairly robust to violations of the assumption of normality, highly skewed distributions can significantly affect the results. Unfortunately, non-normality appears
to be the norm in ecology; in other words ecological data only rarely follow a normal distribution (Potvin and Roff 1993, White and Bennetts 1996, Hayek and Buzas 1997, Zar 1999). Moreover, the
normal distribution primarily describes continuous variables whereas count data, often the type of information gathered during monitoring programs, are discrete (Thompson et al. 1998, Krebs 1999).
Thus, it is important to be vigilant for large departures from normality in your data. This can be done with a number of tests if your data meet certain specifications. For instance, if the sample
size is equal among groups and sufficiently large (e.g., n > 20) you can implement tests to assess for normality or to determine the significance of non-normality. The latter is commonly done with
several techniques, including the W-test and the Kolmogorov-Smirnov D-test for larger sample sizes. The applicability of both tests, however, is limited due to the fact that they exhibit a low power
if the sample size is small and excessive sensitivity when the sample size is large. To overcome these complications, visual examinations of the data are also undertaken. Visual examinations are
generally more appropriate than formal tests since they allow one to detect the extent and the type of problem. Keep in mind, however, that when working with linear models the aim is to “normalize”
the data/residuals. Plotting the residuals with normal-probability plots (Figure 11.3), stem-and-leaf diagrams, or histograms can help to understand the nature of the non-normal data (Day and Quinn 1989).
Figure 11.3. Plots of four hypothetical distributions (left column) with their respective normal probability plots (right column). Solid and broken lines show the observed and normal (expected)
distributions, respectively (with permission from Sabin and Stafford 1990).
Possible Remedies if Parametric Assumptions are Violated
Data transformations or non-parametric tests are often recommended as appropriate solutions if the data do not meet parametric assumptions (Sabin and Stafford 1990, Thompson et al. 1998). Those
developing monitoring plans, however, should repeatedly (and loudly) advocate sound experimental design as the only effective prevention of many statistical problems. This will keep unorganized data
collection and questionable data transformations to a minimum. There is no transformation or magical statistical button for data that are improperly collected. Nonetheless, a properly designed
monitoring program is likewise not a panacea; ecosystems are complex. Thus even proper designs can produce data that confound analysis, are messy, and require some remedies if parametric models are
to be used.
Data Transformation
If significant violations of parametric assumptions occur, it is customary to implement an appropriate data transformation to try to resolve the violations. During a transformation, data will be
converted and analyzed at a different scale. In general, researchers should be aware of the need to back-transform the results after analysis to present parameter values on the original data scale or
be clear that their results are being presented on a transformed scale. Examples of common types of transformations that a biometrician may recommend for use are presented in Table 11.2. A wisely
chosen transformation can often improve homogeneity of variances and produce an approximation of a normal distribution.
Table 11.2. The most common types of data transformation in biological studies (Modified from Sabin and Stafford 1990).
Transformation type When appropriate to use
Logarithmic Use with count data and when means positively correlate with variances. A rule of thumb suggests its use when the largest value of the response variable is at least 10 x the
smallest value.
Square-root Use with count data following a Poisson distribution.
Inverse Use when data residuals exhibit a severe funnel shaped pattern, often the case in data sets with many near-zero values.
Arcsine square root Good for proportional or binomial data.
Box-Cox objective If it is difficult to decide on what transformation to use, this procedure finds an optimal model for the data.
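The transformations in Table 11.2 amount to one-line operations on the response variable; a sketch with invented values (log(x + 1) is used here so that zero counts remain defined):

```python
import math

counts = [0, 1, 4, 9, 16, 120]    # hypothetical skewed count data
props = [0.05, 0.20, 0.50, 0.95]  # hypothetical proportions

log_t = [math.log(x + 1) for x in counts]             # logarithmic
sqrt_t = [math.sqrt(x) for x in counts]               # square-root
arcsine_t = [math.asin(math.sqrt(p)) for p in props]  # arcsine square root

print([round(x, 2) for x in sqrt_t])  # [0.0, 1.0, 2.0, 3.0, 4.0, 10.95]
```

Results analyzed on a transformed scale should be back-transformed for reporting, as noted above.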
Nonparametric Alternatives
If the data violate basic parametric assumptions, and transformations fail to remedy the problem, then you may wish to use nonparametric methods (Sokal and Rohlf 1994, Thompson et al. 1998, Conover
1999). Nonparametric techniques have less stringent assumptions about the data, are less sensitive to the presence of outliers, and are often more intuitive and easier to compute (Hollander and Wolfe
1999). Since nonparametric models are less powerful than their parametric counterparts, however, parametric tests are preferred if the assumptions are met or data transformations are successful (Day
and Quinn 1989, Johnson 1995).
Statistical Distribution of the Data
As mentioned above, plant and animal data are often in the form of counts of organisms and this can present special challenges during analyses. The probability that organisms occur in a particular
habitat has a direct bearing on the selection of appropriate sampling protocols and statistical models (Southwood 1992). Monitoring data will most likely approximate random or clumped distributions,
yet this should not be blindly assumed. Fitting the data to the Poisson or negative binomial models are common ways to test if they do (Southwood 1992, Zuur 2009). These models are also particularly
appropriate for describing count data, which are, once again, examples of discrete variables (e.g., quadrat counts, sex ratios, ratios of juveniles to adults). The following subsections briefly
describe how to identify whether or not the data follow either a Poisson or negative binomial distribution model.
Poisson Distribution
Poisson distributions are common among species, where the probability of detecting an individual in any sample is rather low (Southwood 1992). The Poisson model gives a good fit to data if the mean
count (e.g., number of amphibians per sampling quadrat) is in the range of 1–5. As the mean number of individuals in the sample increases, and exceeds 10, however, the random distribution begins to
approach the normal distribution (Krebs 1999, Zar 1999).
During sampling, the key assumption of the Poisson (random) distribution is that the expected number of organisms in a sample is the same and that it equals μ, the population mean (Krebs 1999, Zuur
2009). One intriguing property of the random distribution is that it can be described by its mean, and that the mean equals the variance (s^2). The probability (frequency) of detecting a given number
of individuals, x, in a sample collected from a population with mean = μ is:
P(x) = e^(-μ) μ^x / x!
Whether or not the data follow a random distribution can be tested with a simple Chi-square goodness of fit test or with an index of dispersion (I), which is expected to be 1.0 if the assumption of
randomness is satisfied:
I = s^2 / x̄,
where x̄ and s^2 are the observed sample mean and variance, respectively.
Zuur et al. (2009) provided excellent examples of tests for goodness of fit for Poisson distributions. In practice, the presence of a Poisson distribution in data can also be assessed visually by examining the scatter pattern of residuals during analysis. If we reject the null hypothesis that samples came from a random distribution and s^2 < μ (i.e., s^2/μ < 1.0), then the sampled organisms are distributed uniformly or regularly (underdispersed). If we reject the null hypothesis and s^2 > μ (i.e., s^2/μ > 1.0), then the sampled organisms are clumped (overdispersed).
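The index of dispersion screen described above can be sketched in a few lines; the quadrat counts here are hypothetical:

```python
from statistics import mean, variance

# Hypothetical counts of individuals in 12 sampling quadrats.
counts = [2, 0, 1, 3, 1, 0, 2, 1, 4, 1, 0, 2]

x_bar = mean(counts)
s2 = variance(counts)  # sample variance
I = s2 / x_bar         # index of dispersion; ~1.0 under a Poisson model

if I > 1.0:
    pattern = "clumped (overdispersed)"
elif I < 1.0:
    pattern = "uniform/regular (underdispersed)"
else:
    pattern = "random (Poisson-like)"
print(round(I, 2), pattern)
```

A formal decision would compare (n − 1)·I against a chi-square distribution; the raw index is shown here only as a descriptive screen.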
Negative Binomial Distribution
An alternative approach to the Poisson distribution, and one of the mathematical distributions that describe clumped or aggregated spatial patterns, is the negative binomial (Pascal) distribution
(Anscombe 1973, Krebs 1999, Hilbe 2007, Zuur 2009). White and Bennetts (1996) suggested that this distribution is a better approximation to count data than the Poisson or normal distributions. The
negative binomial distribution is described by the mean and the dispersion parameter k, which expresses the extent of clumping. As a result of aggregation, it always follows that s^2 > μ and the
index of dispersion (I) > 1.0. Several techniques exist to evaluate the goodness-of-fit of data to the negative binomial distribution. As an example, White and Bennetts (1996) give an example of
fitting the negative binomial distribution to point-count data for orange-crowned warblers to compare their relative abundance among forest sites. Zero-inflated Poisson models (ZIP models) are
recommended for analysis of count data with frequent zero values (e.g., rare species studies) or where data transformations are not feasible or appropriate (e.g., Heilbron 1994, Welsh et al. 1996,
Hall and Berenhaut 2002, Zuur 2009). Good descriptions and examples of their use can be found in Krebs (1999), Southwood (1992), Faraway (2006), and Zuur et al. (2009). Since the variety of possible
clumping patterns in nature is practically infinite, it is possible that the Poisson and negative binomial distributions may not always adequately fit the data at hand.
Analysis of Inventory Data – Abundance
Absolute Density or Population Size
National policy on threatened and endangered species is ultimately directed toward efforts to increase or maintain the total number of individuals of the species within their natural geographic range
(Carroll et al. 1996). Total population size and effective population size (i.e., the number of breeding individuals in a population) (Harris and Allendorf 1989) are the two parameters that most
directly indicate the degree of species endangerment and/or effectiveness of conservation policies and practices. Population density is informative for assessing population status and trends because
the parameter is sensitive to changes in natural mortality, exploitation, and habitat quality. In some circumstances, it may be feasible to conduct a census of all individuals of a particular species
in an area to determine the total population size or density parameters. Typically, however, population size and density parameters are estimated using statistical analyses based on only a sample of
population members (Yoccoz et al. 2001, Pollock et al. 2002). Population densities of plants and sessile animals can be estimated from counts taken on plots or data describing the spacing between
individuals (i.e., distance methods) and are relatively straightforward. Population analyses for many animal species must account for animal response to capture or observation, observer biases, and
different detection probabilities among sub-populations (Kery and Schmid 2004). For instance, hiring multiple technicians for field-work and monitoring a species whose behavior or preferred habitat
change seasonally are two factors that would need to be addressed in the analysis. Pilot studies are usually required to collect the data necessary to do this. Furthermore, the more common techniques
used for animal species, such as mark-recapture studies, catch-per-unit effort monitoring programs, and occupancy studies, require repeated visits to sampling units. This, along with the need for
pilot studies, increases the complexity and cost of monitoring to estimate population parameters relative to monitoring of sessile organisms.
For animal species, mark-recapture models are often worth the extra investment in terms of the data generated as they may be used to estimate absolute densities of populations and provide additional
information on such vital statistics as animal movement, geographic distribution, and survivorship (Lebreton et al. 1992, Nichols 1992, Nichols and Kendall 1995, Thomson et al. 2009). Open
mark-recapture models (e.g., Jolly-Seber) assume natural changes in the population size of the species of interest during sampling. In contrast, closed models assume a constant population size.
Program MARK (White et al. 2006) performs sophisticated maximum-likelihood-based mark-recapture analyses and can test and account for many of these assumptions, such as open versus closed populations and heterogeneity in capture probabilities.
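As a concrete illustration, a closed-population abundance estimate can be computed from just two sampling occasions. The sketch below uses the Chapman-modified Lincoln-Petersen estimator with hypothetical capture numbers; a full analysis in Program MARK would add likelihood-based model selection and tests of assumptions.

```python
# Chapman-modified Lincoln-Petersen estimator for a closed population.
def chapman_estimate(n1, n2, m2):
    """n1 animals marked on the first occasion, n2 captured on the
    second occasion, m2 of which carried marks (recaptures)."""
    return (n1 + 1) * (n2 + 1) / (m2 + 1) - 1

# Hypothetical survey: 50 marked, 60 captured later, 12 recaptures.
N_hat = chapman_estimate(n1=50, n2=60, m2=12)   # roughly 238 animals
```

The Chapman modification reduces the small-sample bias of the classic Lincoln-Petersen formula, but it still assumes closure, equal catchability, and no mark loss.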
Relative Abundance Indices
It is sometimes the case that data analyses for biological inventories and monitoring studies can be accomplished based on indices of population density or abundance, rather than population
estimators (Pollock et al. 2002). The difference between estimators and indices is that the former yield absolute values of population density while the latter provide relative measures of density
that can be used to assess population differences in space or time. Caughley (1977) advocated the use of indices after determining that many studies that used estimates of absolute density could have
used density indices without losing information. He suggested that use of indices often results in much more efficient use of time and resources and produces results with higher precision (Caughley
1977, Caughley and Sinclair 1994). Engeman (2003) also indicated that use of an index may be the most efficient means to address population monitoring objectives and that the concerns associated with
use of indices may be addressed with appropriate and thorough experimental design and data analyses. Although indices of relative abundance enjoy wide support among practitioners for their efficiency and precision (Caughley 1977, Engeman 2003), it is important to understand their limitations before utilizing one. First, indices are founded on the assumption that index values are closely associated with values of a population parameter. Because the precise relationship between the index and the parameter usually is not quantified, the reliability of this assumption is often brought into question (Thompson et al. 1998, Anderson 2001). Second, the opportunity for bias associated with indices of abundance is quite high, in part because indices are often adopted under logistical constraints. For instance, track counts could reflect animal abundance, animal activity levels, or both, and capture rates of animals over space and time may be related to animal abundance or to their vulnerability to capture in areas of differing habitat quality. If either of these techniques is used to generate the index, considerable caution must be exercised when
interpreting results. Given these concerns, the utility of a pilot study that will allow determination, with a known level of certainty, of the relationship between the index and the actual
population (or fitness) for the species being monitored is clear. Determining this relationship, however, requires an estimate of the population.
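The calibration step described above can be sketched as a simple regression of pilot-study abundance estimates on index values; all site numbers below are hypothetical.

```python
import numpy as np

# Hypothetical pilot-study data: a track-count index and mark-recapture
# abundance estimates collected at the same five sites.
index = np.array([3.1, 5.0, 7.8, 10.2, 12.9])
abundance = np.array([12.0, 21.0, 30.5, 41.0, 52.0])

slope, intercept = np.polyfit(index, abundance, 1)
r = np.corrcoef(index, abundance)[0, 1]

# A strong, well-quantified relationship (high r) supports using the
# cheaper index in subsequent monitoring; the fitted line converts
# future index values into abundance predictions.
predicted = intercept + slope * 8.0
```

If the correlation is weak or nonlinear, the index should not be substituted for the estimator without further investigation.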
Suitability of any technique, including indices, should ultimately be based on how well it addresses the study objective and the reliability of its results (Thompson 2002). It is also important to
consider that statistical analyses of relative abundance data require familiarity with the basic assumptions of parametric and non-parametric models. Some examples of the use and analysis of relative
density data can be found in James et al. (1996), Rotella et al. (1996), Knapp and Matthews (2000), Huff et al. (2000), and Rosenstock et al. (2002).
Because relative abundance data typically consist of counts, alternative statistical methods can be employed to fit the distribution of the data (e.g., Poisson or negative binomial). Although absolute abundance techniques are less tied to such distributional assumptions, they nevertheless have their own stringent assumptions that must be met.
When a researcher decides to use a monitoring index, it is important to remember that statistical power is negatively correlated with the variability of the monitoring index. This underscores the need to choose an appropriate indicator of abundance and to estimate its confidence interval accurately (Harris 1986, Gerrodette 1987, Gibbs et al. 1999). Gibbs et al. (1998) give an excellent overview of a variety of groups of animals and plants for which the variability of population estimates is known. It is also important to keep in mind that relative measures of density can be
less robust to changes in habitat than absolute measures. For instance, forest practices may significantly affect indices that rely on visual observations of organisms. Although these factors may
confound absolute measures as well, modern distance and mark-recapture analysis methods can account for variations in sightability and trapability. See Caughley (1977), Thompson et al. (1998),
Rosenstock et al. (2002), Pollock et al. (2002), and Yoccoz et al. (2001) for in-depth discussions of the merits and limitations of estimating relative vs. absolute density in population monitoring programs.
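The inverse relationship between index variability and statistical power can be illustrated with a small Monte Carlo simulation; the decline rate, coefficients of variation, and survey length below are hypothetical.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def trend_power(cv, years=10, decline=0.05, n_sim=500, alpha=0.05):
    """Monte Carlo power to detect a 5%/yr decline for a given index CV."""
    t = np.arange(years)
    expected = 100 * (1 - decline) ** t
    hits = 0
    for _ in range(n_sim):
        index = rng.normal(expected, cv * expected)   # noisy index values
        if stats.linregress(t, index).pvalue < alpha:
            hits += 1
    return hits / n_sim

low_cv_power = trend_power(cv=0.05)    # precise index: high power
high_cv_power = trend_power(cv=0.40)   # noisy index: much lower power
```

Simulations of this kind, parameterized with pilot-study variance estimates, are a practical way to evaluate whether a proposed index can detect the trend of management interest.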
Generalized Linear Models and Mixed Effects
Recently, generalized linear models (GLMs) have become increasingly popular because they take advantage of the data's true distribution rather than attempting to normalize it (Faraway 2006, Bolker 2008, McCulloch et al. 2008). The standard linear model cannot handle non-normal responses, such as counts or proportions; GLMs were developed to handle categorical, binary, and other response types (Faraway 2006, McCulloch et al. 2008). In practice, most ecological data have non-normal errors, and GLMs allow the user to specify a variety of error distributions. This can be
particularly useful with count data (e.g., Poisson errors), binary data (e.g., binomial errors), proportion data (e.g., binomial errors), data showing a constant coefficient of variation (e.g., gamma
errors), and survival analysis (e.g., exponential errors) (Crawley 2007).
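As a minimal sketch of a GLM with Poisson errors, the following fits a log-link Poisson regression by maximum likelihood on simulated count data. The covariate and coefficients are hypothetical, and in practice one would use a statistical package (e.g., R's glm or Python's statsmodels) rather than hand-coding the likelihood.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Simulated survey: bird counts as a function of shrub cover (%).
cover = rng.uniform(10, 90, size=200)
counts = rng.poisson(np.exp(0.5 + 0.02 * cover))
X = np.column_stack([np.ones_like(cover), cover])   # design matrix

def neg_loglik(beta):
    eta = np.clip(X @ beta, -20, 20)   # linear predictor, guarded vs. overflow
    mu = np.exp(eta)                   # log link
    return -(counts * eta - mu).sum()  # Poisson log-likelihood (constant dropped)

fit = minimize(neg_loglik, x0=np.zeros(2), method="Nelder-Mead")
intercept, slope = fit.x
# exp(slope) is the multiplicative change in expected count per 1% cover.
```

The same machinery extends to binomial (logit link) or gamma (log link) errors by swapping the likelihood.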
An extension of the GLM is the Generalized Linear Mixed Model (GLMM) approach. GLMMs are examples of hierarchical models and are most appropriate when dealing with nested data. What are nested data? As
an example, monitoring programs may collect data on species abundances or occurrences from multiple sites on different sampling occasions within each site. Alternatively, researchers might also
sample from a single site in different years. In both cases, the data are “nested” within a site or a year, and to analyze the data generated from these surveys without considering the “site” or
“year” effect would be considered pseudoreplication (Hurlbert 1984). That is, there is a false assumption of independence of the sampling occasions within a single site or across the sampling period.
Traditionally, researchers might avoid this problem by averaging the results of those sampling occasions across sites or years and focusing on the means, or they may simply restrict their analysis to an individual site or sampling period. The more standard statistical approaches, however, attempt to quantify the exact effect of the predictor variables (e.g., forest area, forb density), but
ecological problems often involve random effects that are a result of the variation among sites or sampling periods (Bolker et al. 2009). Random effects that come from the same group (e.g., site or
time period) will often be correlated, thus violating the standard assumption of independence of errors in most statistical models.
Hierarchical models offer an excellent way of dealing with these problems, but when using GLMMs, researchers should be able to correctly identify the difference between a fixed effect and a random
effect. In its most basic form, fixed effects have “informative” factor levels, while random effects often have “uninformative” factor levels (Crawley 2007). That is, random effects have factor
levels that can be considered random samples from a larger population (e.g., blocks, sites, years). In this case, it is more appropriate to model the added variation caused by the differences between
the levels of the random effects and the variation in the response variables (as opposed to differences in the mean). In most applied situations, random effect variables often include site names or
years. In other cases, when multiple responses are measured on an individual (e.g., survival), random effects can include individuals, genotypes, or species. In contrast, fixed effects then only
model differences in the mean of the response variable, as opposed to the variance of the response variable across the levels of the random effect, and can include predictor environmental variables
that are measured at a site or within a year. In practice, these distinctions are at times difficult to make, and mixed effects models can be challenging to apply. For example, in their review of 537 ecological studies that used GLMM analyses, Bolker et al. (2009) found that 58% used this tool inappropriately. Consequently, as is the case with many of these procedures, it is important to consult with a statistician when developing and implementing your analysis. There are several excellent reviews and books on the subject of mixed effects modeling (Gelman and Hill 2007, Bolker et al. 2009, Zuur 2009).
Analysis of Species Occurrences and Distribution
Does a species occur or not occur with reasonable certainty in an area under consideration for management? Where is the species likely to occur? These types of questions have been and continue to be
of interest for many monitoring programs (MacKenzie 2005, MacKenzie 2006). Data on species occurrences are often more cost-effective to collect than data on species abundances or demographic data.
Traditionally, information on species occurrences has been used to:
1. Identify habitats that support the lowest or highest number of species,
2. Shed light on the species distribution, and
3. Point out relationships between habitat attributes (e.g., vegetation types, habitat structural features) and species occurrence or community species richness.
For many monitoring programs, species occurrence data are often considered preliminary data, collected during the initial phase of an inventory project to gather background information
for the project area. In recent years, however, occupancy modeling and estimation (MacKenzie et al. 2002, MacKenzie et al. 2005, Mackenzie and Royle 2005, MacKenzie 2006) has become a critical aspect
of monitoring animal and plant populations. These types of categorical data have represented an important source of data for many monitoring programs that have targeted rare or elusive species, or
where resources are unavailable to collect data required for parameter estimation models (see Hayek and Buzas 1997, Thompson 2004).
For some population studies, simply determining whether a species is present in an area may be sufficient for conducting the planned data analysis. For example, biologists attempting to conserve a
threatened wetland orchid may need to monitor the extent of the species range and proportion of occupied area (POA) on a National Forest. One hypothetical approach is to map all wetlands in which the
orchid is known to be present, as well as additional wetlands that may qualify as the habitat type for the species within the Forest. To monitor changes in orchid distribution at a coarse scale, data
collection could consist of a semiannual monitoring program conducted along transects at each of the mapped wetlands to determine if at least one individual orchid (or some alternative criterion to
establish occupancy) is present. Using only a list that includes the wetland label (i.e., the unique identifier), the monitoring year, and an occupancy indicator variable, the biologists could
prepare a time series of maps displaying all of the wetlands by monitoring year and distinguish the subset of wetlands that were found to be occupied by the orchid.
Monitoring programs to determine the presence of a species typically require less sampling intensity than fieldwork necessary to collect other population statistics. It is far easier to determine if
there is at least one individual of the target species on a sampling unit than it is to count all of the individuals. Conversely, to determine with confidence that a species is not present on a
sampling unit requires more intensive sampling than collecting count or frequency data because it is so difficult to dismiss the possibility that an individual missed detection (i.e., a failure to
detect does not necessarily equate to absence). Traditionally, the use of occurrence data was considered a qualitative assessment of changes in the species distribution pattern and served as an
important first step toward formulating new hypotheses as to the cause of the observed changes. More recently, however, repeated sampling and the use of occupancy modeling and estimation have increased the
applicability of occurrence data in ecological monitoring.
Possible analysis models for occurrence data
Species diversity
The number of species per sample (e.g., 1-m^2 quadrat) can give a simple assessment of local, α diversity, or these data may be used to compare species composition among several locations (β
diversity) using simple binary formulas such as the Jaccard’s index or Sorensen coefficient (Magurran 1988). For example, the Sorensen qualitative index may be calculated as:
C[S] = 2j / (a + b),
where a and b are the numbers of species in locations A and B, respectively, and j is the number of species found at both locations. If species abundance is known (number of individuals per species), species diversity can be analyzed with a greater variety of descriptors such as numerical species richness (e.g., number of species/number of individuals), quantitative similarity indices (e.g., Sorensen
quantitative index, Morista-Horn index), proportional abundance indices (e.g., Shannon index, Brillouin index), or species abundance models (Magurran 1988, Hayek and Buzas 1997).
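Both qualitative similarity indices are straightforward to compute from species lists; the species names below are hypothetical.

```python
def sorensen(list_a, list_b):
    """Sorensen qualitative index: C_S = 2j / (a + b)."""
    a, b = set(list_a), set(list_b)
    j = len(a & b)                     # species shared by both locations
    return 2 * j / (len(a) + len(b))

def jaccard(list_a, list_b):
    """Jaccard index: shared species / total distinct species."""
    a, b = set(list_a), set(list_b)
    return len(a & b) / len(a | b)

# Hypothetical species lists for two locations:
loc_a = ["oak", "maple", "birch"]
loc_b = ["oak", "maple", "pine", "fir"]
cs = sorensen(loc_a, loc_b)   # 2*2 / (3+4) = 4/7
```
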
Binary Analyses
Since detected/not-detected data are categorical, the relationship between species occurrence and explanatory variables can be modeled with a logistic regression if values of either 1 (species
detected) or 0 (species not-detected) are ascribed to the data (Trexler and Travis 1993, Hosmer and Lemeshow 2000, Agresti 2002). Logistic regression necessitates a dichotomous (0 or 1) or a
proportional (ranging from 0 to 1) response variable. In many cases, logistic regression is used in combination with a set of predictor variables to model the detection or non-detection of a species. For
example, a logistic regression can be enriched with such predictors as the percentage of vegetation cover, forest patch area, or presence of snags to create a more informative model of the occurrence
of a forest-dwelling bird species. The resulting logistic function provides an index of probability with respect to species occurrence. There are a number of cross-validation functions that allow the
user to identify the probability value that best separates sites where a species was found from where it was not found based on the existing data (Freeman and Moisen 2008). In some cases, data points
are withheld from formal analysis (e.g., validation data) and used to test the relationships after the predictive relationships are developed using the rest of data (e.g., training data) (Harrell
2001). Logistic regression, however, is a parametric test. If the data do not meet or approximate its parametric assumptions, alternatives to standard logistic regression can be used, including Generalized Additive Models (GAMs) and variations of classification and regression tree (CART) analyses.
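A minimal sketch of the logistic regression described above, fitting detection/non-detection of a hypothetical forest bird against canopy cover by maximum likelihood on simulated data (a statistical package would normally be used instead of a hand-coded likelihood).

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)

# Simulated detection (1) / non-detection (0) of a forest bird as a
# function of percent canopy cover (hypothetical coefficients).
n = 300
canopy = rng.uniform(0, 100, n)
p_true = 1 / (1 + np.exp(-(-4.0 + 0.08 * canopy)))
detected = rng.binomial(1, p_true)
X = np.column_stack([np.ones(n), canopy])

def neg_loglik(beta):
    p = 1 / (1 + np.exp(-np.clip(X @ beta, -30, 30)))
    eps = 1e-12                        # guard the logs at p = 0 or 1
    return -(detected * np.log(p + eps)
             + (1 - detected) * np.log(1 - p + eps)).sum()

fit = minimize(neg_loglik, x0=np.zeros(2), method="Nelder-Mead")
intercept, slope = fit.x
# The fitted logistic function yields an occurrence probability for any
# canopy value, which can then be thresholded via cross-validation.
```
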
Prediction of species density
In some cases, occurrence data have been used to predict organism density if the relationship between species occurrence and density is known and the model’s predictive power is reasonably high
(Hayek and Buzas 1997). For example, one can record plant abundance and species richness in sampling quadrats. The species proportional abundance, or constancy of its frequency of occurrence (P[o]),
can then be calculated as:
P[o] = No. of species occurrences (+ or 0) / number of samples (quadrats)
Consequently, the average species density is plotted against its proportional abundance to derive a model to predict species abundance in other locations with only occurrence data. Note, however,
that the model may function reasonably well only in similar and geographically related types of plant communities (Hayek and Buzas 1997).
Occupancy Modeling
Note that without a proper design, detected/not-detected data cannot be reliably used to measure or describe species distributions (Kery and Schmid 2004, MacKenzie 2006, Kéry et al. 2008). Although
traditional methods using logistic regression and other techniques may be used to develop a biologically-based model that can predict the probability of occurrence of a site over a landscape,
occupancy modeling has developed rapidly over the past few years. As with mark-recapture analysis, changes in occupancy over time can be parameterized in terms of local extinction (ε) and
colonization (γ) processes, analogous to the population demographic processes of mortality and recruitment (Figure 11.4) (MacKenzie et al. 2003, MacKenzie 2006, Royle and Dorazio 2008). In this case,
sampling must be done in a repeated fashion within separate primary sampling periods (Figure 11.4). Occupancy models are robust to missing observations and can effectively model the variation in
detection probabilities between species. Of greatest importance, occupancy (ψ), colonization (γ), and local extinction (ε) probabilities can be modeled as functions of environmental covariate
variables that can be site-specific, count-specific, or change between the primary periods (MacKenzie et al. 2003, MacKenzie et al. 2009). In addition, detection probabilities can also be functions
of season-specific covariates and may change with each survey of a site. More recently, the program PRESENCE has made available a sophisticated, likelihood-based family of models that has become increasingly popular for monitoring based on species occurrence data (www.mbr-pwrc.usgs.gov/software/presence.html). Donovan and Hines (2007) also present an explanation of occupancy models and several
online exercises (www.uvm.edu/envnr/vtcfwru/spreadsheets/occupancy/occupancy.htm).
Figure 11.4. Representation of how the occupancy states at a site might change between primary sampling periods (T1,T2,T3). Blue circles indicate that the site is occupied (species present at some
point during a count) during the primary period (T), while red circles indicate the site is unoccupied (species absent from the count). Processes such as occupancy (ψ), colonization (γ), and local
extinction (ε) can be modeled independently and account for heterogeneous detection probabilities because of the repeated sampling within each time period (redrafted from MacKenzie et al. 2006).
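A minimal sketch of the single-season occupancy likelihood (after MacKenzie et al. 2002), estimating occupancy (ψ) and detection probability (p) from hypothetical detection histories. Note that the estimated ψ exceeds the naive proportion of sites with detections because it corrects for imperfect detection.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical detection histories: 8 sites, 3 repeat visits each
# (1 = detected, 0 = not detected).
histories = np.array([
    [1, 0, 1], [0, 0, 0], [1, 1, 0], [0, 0, 0],
    [0, 1, 0], [0, 0, 0], [1, 1, 1], [0, 0, 1],
])
K = histories.shape[1]
d = histories.sum(axis=1)

def neg_loglik(params):
    psi, p = 1 / (1 + np.exp(-params))       # logit scale keeps (0, 1)
    # Sites with detections are certainly occupied; all-zero sites are
    # either occupied-but-missed or truly unoccupied.
    ll_det = np.log(psi) + d * np.log(p) + (K - d) * np.log(1 - p)
    ll_zero = np.log(psi * (1 - p) ** K + (1 - psi))
    return -np.where(d > 0, ll_det, ll_zero).sum()

fit = minimize(neg_loglik, x0=np.zeros(2), method="Nelder-Mead")
psi_hat, p_hat = 1 / (1 + np.exp(-fit.x))
naive = (d > 0).mean()    # 5/8; psi_hat is larger, correcting for p < 1
```

Programs such as PRESENCE and MARK extend this basic likelihood with covariates on ψ and p and with multi-season colonization/extinction dynamics.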
Assumptions, data interpretation, and limitations
It is crucial to remember that failure to detect a species in a habitat does not mean that the species was truly absent (Kery and Schmid 2004, MacKenzie 2006, Kéry et al. 2008). Cryptic or rare
species, such as amphibians, are especially prone to under-detection and false absences (Thompson 2004). Keep in mind that occasional confirmations of species presence provide only limited data. For
example, the use of a habitat by a predator may reflect prey availability, which may fluctuate annually or even during one year. A more systematic approach with repeated visits is necessary to
generate more meaningful data (Mackenzie and Royle 2005).
Extrapolating density without understanding the species requirements is also likely to produce meaningless results since organisms depend on many factors that we typically do not understand.
Furthermore, limitations of species diversity measures should be recognized, especially in conservation projects. For example, replacement of a rare or keystone species by a common or exotic species
would not affect species richness of the community and could actually ‘improve’ diversity metrics. Also, the informative value of qualitative indices is rather low since they disregard species
abundance and are sensitive to differences in sample size (Magurran 1988). Rare and common species are weighted equally in community comparisons. Often this may be an erroneous assumption since the
effect of a species on the community is expected to be proportional to its abundance; keystone species are rare exceptions (Power and Mills 1995). In addition, analyses that focus on species
co-occurrences without effectively modeling or taking into account the varying detection probabilities of the species can be prone to error, although new occupancy models are beginning to incorporate
detectability in models of species richness (MacKenzie et al. 2004, Royle et al. 2007, Kéry et al. 2009).
Analysis of Trend Data
Trend models should be used if the objective of a monitoring plan is to detect a change in a population parameter over time. Most commonly, population size is repeatedly estimated at set time
intervals. Trend monitoring is crucial in the management of species since it may help:
1. Recognize population decline and focus attention on affected species,
2. Identify environmental variables correlated with the observed trend and thus help formulate hypotheses for cause-and-effect studies, and
3. Evaluate the effectiveness of management decisions (Thomas 1996, Thomas and Martin 1996).
The status of a population can be assessed by comparing observed estimates of the population size at some time interval against management-relevant threshold values (Gibbs et al. 1999, Elzinga et al.
2001). All monitoring plans, but particularly those designed to generate trend data, should emphasize that the selection of reliable indicators or estimators of population change is a key requirement
of effective monitoring efforts. Indices of relative abundance are often used in lieu of measures of actual population size, sometimes because of the relatively reduced cost and effort needed to
collect the necessary data on an iterative basis. For example, counts of frog egg masses may be performed annually in ponds (Gibbs et al. 1998) or bird point-counts may be taken along sampling routes
(Böhning-Gaese et al. 1993, Link and Sauer 1997a,b; 2007). Analysis of distribution maps, checklists, and volunteer-collected data may also provide estimates of population trends (Robbins 1990,
Temple and Cary 1990, Cunningham and Olsen 2009, Zuckerberg et al. 2009). To minimize the bias in detecting a trend, such as in studies of sensitive species, the same population may be monitored
using different methods (e.g., a series of different indices) (Temple and Cary 1990). Data may also be screened prior to analysis. For example, only monitoring programs that meet agreed-upon criteria
may be included, or species with too few observations may be excluded from the analysis (Thomas 1996, Thomas and Martin 1996).
Possible Analysis Models
Trends over space and time present many challenges for analysis to the extent that consensus does not exist on the most appropriate method to analyze the related data. This is a significant
constraint since model selection may have a considerable impact on interpretation of the results of analysis (Thomas 1996, Thomas and Martin 1996).
Poisson regression does not rely on the normality assumption and is especially appropriate for count data. Classic linear regression models in which estimates of population size are plotted against
biologically relevant sampling periods have been historically used in population studies since they are easy to calculate and interpret. However, these models are subject to parametric assumptions,
which are often violated in count data (Krebs 1999, Zar 1999). Linear regressions also assume a constant linear trend in data, and expect independent and equally spaced data points. Since individual
measurements in trend data are autocorrelated, classic regression can give skewed estimates of standard errors and confidence intervals, and inflate the coefficient of determination (Edwards and
Coull 1987, Gerrodette 1987). Edwards and Coull (1987) suggested that correlated errors in linear regression analysis can be modeled using an autoregressive process (ARIMA) model. Linear
route-regression models represent a more robust form of linear regression and are popular with bird ecologists in analyses of roadside monitoring programs (Geissler and Sauer 1990, Sauer et al. 1996,
Thomas 1996). They can handle unbalanced data by performing analysis on weighted averages of trends from individual routes (Geissler and Sauer 1990) but may be sensitive to nonlinear trends (Thomas
1996). Harmonic or periodic regressions do not require regularly spaced data points and are valuable in analyzing data on organisms that display significant daily periodic trends in abundance or
activity (Lorda and Saila 1986).
For some data where large sample sizes are not possible, or where variance structure cannot be estimated reliably, alternative analytical approaches may be necessary. This is especially true when large variance or small sample sizes create a high risk of failing to detect a real trend, the species is rare, and a missed decline could be catastrophic for the species.
Wade (2000) provides an excellent overview of the use of Bayesian analysis to address these types of problems. Thomas (1996) gives a thorough review of the most popular models fit to trend data and
assumptions associated with their use.
Assumptions, Data Interpretation, and Limitations
The underlying assumption of trend monitoring projects is that a population parameter is measured at the same sampling points (e.g., quadrats, routes) using identical or similar procedures (e.g.,
equipment, observers, time period) at regularly spaced intervals. If these requirements are violated, data may contain excessive noise, which may complicate their interpretation. Thomas (1996)
identified four sources of variation in trend data:
1. Prevailing trend – population tendency of interest (e.g., population decline),
2. Irregular disturbances – disruptions from stochastic events (e.g., drought mortality),
3. Partial autocorrelation – dependence of the current state of the population on its previous levels, and
4. Measurement error – added data noise from deficient sampling procedures.
Although trend analyses are useful in identifying population change, the results are correlative and tell us little about the underlying mechanisms. Ultimately, only well designed cause-and-effect
studies can validate causation and facilitate management decisions.
Analysis of Cause and Effect Monitoring Data
The strength of trend studies lies in their capacity to detect changes in population size. To understand the reason for population fluctuations, however, the causal mechanism behind the population
change must be determined. Manipulative experiments represent one of the strongest approaches to testing cause-and-effect relationships and are often used to assess the effects of management decisions on populations. As with trend analyses, cause-and-effect analyses may be performed on indices of relative abundance or on absolute abundance data.
Possible analysis models
Parametric and distribution-free (non-parametric) models provide countless alternatives for fitting cause-and-effect data (Sokal and Rohlf 1994, Zar 1999). Excellent introductory material on the
design and analysis of ecological experiments, specifically for ANOVA models, can be found in Underwood (1997) and Scheiner and Gurevitch (2001).
A unique design is recommended for situations where a disturbance (treatment) is applied and its effects are assessed by taking a series of measurements before and after the perturbation
(Before-After Control-Impact, BACI) (Stewart-Oaten et al. 1986) (Figure 11.5). This model was originally developed to study pollution effects (Green 1979), but it has found suitable applications in
other areas of ecology as well (Wardell-Johnson and Williams 2000, Schratzberger et al. 2002, Stanley and Knopf 2002). In its original design, an impact site would have a parallel control site.
Further, variables deemed relevant to the management actions at the outset of the study would be monitored periodically over time. Any difference between the trends of those measured variables at the impact site and those at the control site (the treatment effect) would then appear as a significant time*location interaction (Green 1979). This approach has been criticized because the design was originally limited to unreplicated impact and control sites (Figure 11.5), but it can be improved by replicating and randomly assigning sites (Hurlbert 1984, Underwood 1997).
Figure 11.5. A hypothetical example of a BACI analysis where abundance samples are taken at an impact site before and after the impact and compared to a control site (redrafted from
Stewart-Oaten et al. 1986).
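The BACI logic can be sketched with the difference-series approach of Stewart-Oaten et al. (1986): compute impact-minus-control differences at each sampling date and compare the before and after series (all abundances below are simulated).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

# Simulated abundances: 12 sampling dates before and 12 after a
# perturbation, sampled simultaneously at an impact and a control site.
before_ctrl = rng.normal(50, 5, 12)
before_imp = rng.normal(48, 5, 12)
after_ctrl = rng.normal(51, 5, 12)
after_imp = rng.normal(38, 5, 12)    # simulated impact effect

# Analyze the impact-minus-control difference series: a shift between
# the before and after differences is the time*location interaction.
diff_before = before_imp - before_ctrl
diff_after = after_imp - after_ctrl
t, pval = stats.ttest_ind(diff_before, diff_after)
```

Pairing by date removes variation shared by both sites (e.g., weather), so only an impact-specific change shifts the difference series.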
Assumptions, data interpretation, and limitations
Since cause-and-effect data are frequently analyzed with ANOVA models, which are parametric, one must pay attention to parametric assumptions. Alternative means of assessing manipulative studies may
also be employed. For example, biologically significant effect size with confidence intervals may be used in lieu of classic statistical hypothesis testing. An excellent overview of arguments in
support of this approach with examples may be found in Hayes and Steidl (1997), Steidl et al. (1997), Johnson (1999), and Steidl and Thomas (2001).
Paradigms of Inference: Saying Something With Your Data and Models
Randomization tests
These tests are not alternatives to parametric tests, but rather are unique means of estimating statistical significance. They are extremely versatile and can be used to estimate test statistics for
a wide range of models, and are especially valuable in analyzing non-randomly selected data points. It is important to keep in mind, however, that randomization tests are computationally intensive even with small sample sizes (Edgington and Onghena 2007). A statistician should be involved in choosing and implementing these techniques. More information on randomization tests and other
computation-intensive techniques can be found in Crowley (1992), Potvin and Roff (1993), and Petraitis et al. (2001).
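A basic randomization (permutation) test for a difference in mean counts between two habitat types, using hypothetical data: the null distribution is built by repeatedly reshuffling the group labels.

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical counts from two habitat types.
forest = np.array([12, 15, 11, 14, 18, 13])
meadow = np.array([8, 9, 11, 7, 10, 6])
observed = forest.mean() - meadow.mean()

pooled = np.concatenate([forest, meadow])
n_perm = 10000
count = 0
for _ in range(n_perm):
    perm = rng.permutation(pooled)              # reshuffle group labels
    diff = perm[:6].mean() - perm[6:].mean()
    if abs(diff) >= abs(observed):              # two-sided test
        count += 1
p_value = (count + 1) / (n_perm + 1)            # add-one correction
```

No distributional assumptions are required; the test conditions only on the observed values themselves.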
Information Theoretic Approaches: Akaike’s Information Criterion
Akaike’s information criterion (AIC), derived from information theory, may be used to select the best-fitting model among a number of a priori alternatives. This approach is more robust and less
arbitrary than hypothesis-testing methods since the P-value is often predominantly a function of sample size. AIC can be easily calculated for any maximum-likelihood based statistical model,
including linear regression, ANOVA, and general linear models. The model hypothesis with the lowest AIC value is generally identified as the 'best' model with the greatest support (given the data)
(Burnham and Anderson 2002). Once the best model has been identified, the results can then be interpreted based on the changes in the explanatory variables over time. For instance, if the amount of
mature forest near streams were associated with the probability of occurrence of tailed frogs, then a map generated over the scope of inference could be used to identify current and likely future
areas where tailed frogs could be vulnerable to management actions. In addition, one of the more useful advantages of using information theoretic approaches is that identifying a single, best model
is not necessary. Using the AIC metric (or any other information criteria), one can rank models and can average across models to calculate weighted parameter estimates or predictions (i.e., model
averaging). A more in depth discussion of practical uses of AIC may be found in Burnham and Anderson (2002) and Anderson (2008).
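The AIC ranking and model-averaging weights described above are simple to compute from maximized log-likelihoods. The candidate models and log-likelihood values below are hypothetical:

```python
import math

def aic(log_likelihood, k):
    """Akaike's information criterion: AIC = -2 ln(L) + 2k."""
    return -2.0 * log_likelihood + 2.0 * k

def akaike_weights(aic_values):
    """Model weights from AIC differences, usable for model averaging."""
    best = min(aic_values)
    rel = [math.exp(-0.5 * (a - best)) for a in aic_values]
    total = sum(rel)
    return [r / total for r in rel]

# Hypothetical candidates: (maximized log-likelihood, number of parameters)
models = {"null": (-120.3, 1), "forest": (-112.1, 2), "forest+stream": (-111.8, 3)}
scores = {name: aic(ll, k) for name, (ll, k) in models.items()}
weights = dict(zip(scores, akaike_weights(list(scores.values()))))
best_model = min(scores, key=scores.get)
```

Note how the extra parameter penalizes "forest+stream" despite its slightly higher likelihood; the weights, which sum to one, can then be used to average predictions across models rather than committing to a single best model.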
Bayesian Inference
Bayesian statistics refers to a distinct approach to making inference in the face of uncertainty. In general, Bayesian statistics share much with the traditional frequentist statistics with which
most ecologists are familiar. In particular, there is a similar reliance on likelihood models which are routinely applied by most statisticians and biometricians. Bayesian inference can also be used
in a variety of statistical tasks, including parameter estimation and hypothesis testing, post hoc multiple comparison tests, trend analysis, ANOVA, and sensitivity analysis (Ellison 1996). Bayesian
methods, however, test hypotheses not by rejecting or accepting them, but by calculating their probabilities of being true. Thus, P-values, significance levels and confidence intervals are moot
points (Dennis 1996). Based on existing knowledge, investigators assign a priori probabilities to alternative hypotheses and then use data to calculate (“verify”) posterior probabilities of the
hypotheses with a likelihood function (Bayes theorem). The highest probability identifies the hypothesis that is the most likely to be true given the experimental data (Dennis 1996, Ellison 1996).
Bayesian statistics has several key features that differ from classical frequentist statistics:
• Bayes is based on an explicit mathematical mechanism for updating and propagating uncertainty (Bayes theorem)
• Bayesian analyses quantify inferences in a simpler, more intuitive manner, which is especially valuable in management settings that require making decisions under uncertainty
• Bayesian analyses take advantage of pre-existing data and may be used with small sample sizes
For example, conclusions of a monitoring analysis could be framed as: “There is a 65% chance that clearcutting will negatively affect this species,” or “The probability that this population is
declining at a rate of 3% per year is 85%.” A more in-depth coverage of the use of Bayesian inference in ecology can be found in Dennis (1996), Ellison (1996), Taylor et al. (1996), Wade (2000), and
O’Hara et al. (2002). Even though Bayesian inference is easy to grasp and perform, it is still relatively rare in natural resources applications (although that is quickly changing) and sufficient
support resources for these types of tests may not be readily available. It is recommended that it only be implemented with the assistance of a consulting statistician.
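At its core, the update described above is just Bayes theorem applied over a discrete set of hypotheses. The prior and likelihood values below are hypothetical:

```python
def bayes_update(priors, likelihoods):
    """Posterior probabilities over a discrete set of hypotheses (Bayes theorem)."""
    joint = [p * l for p, l in zip(priors, likelihoods)]
    total = sum(joint)
    return [j / total for j in joint]

# Hypothetical example: H1 = "population declining", H2 = "population stable"
priors = [0.5, 0.5]            # a priori probabilities
likelihoods = [0.08, 0.02]     # P(observed monitoring data | hypothesis)
posteriors = bayes_update(priors, likelihoods)
# posteriors[0] = 0.8: an 80% chance the population is declining, given the data
```

The posterior directly supports statements of the kind quoted above ("there is an 80% chance that..."), which is the interpretive advantage over P-values.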
Retrospective Power Analysis
Does the outcome of a statistical test suggest that no real biological change took place at the study site? Did the change actually occur but was not detected due to a low power of the statistical
test used, in other words, was a Type II (missed-change) error committed in the process? It is recommended that those undertaking inventory studies should routinely evaluate the validity of results
of statistical tests by performing a post hoc, or retrospective power analysis for two important reasons:
1. The possibility of falsely accepting the null hypothesis is quite real in ecological studies, and
2. A priori calculations of statistical power are only rarely performed in practice, but are critical to data interpretation and extrapolation (Fowler 1990).
A power analysis is imperative whenever a statistical test turns out to be non-significant and fails to reject the null hypothesis (H0); for example, if P > 0.05 at the 0.05 significance level.
There are several requirements for carrying out a retrospective power analysis well. In particular, it should be performed using an effect size other than the effect size observed in the study
(Hayes and Steidl 1997, Steidl et al. 1997). In other words, a post hoc power analysis can only answer whether the study, in its original design, would have allowed detection of the newly selected
effect size.
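A retrospective power calculation of this kind can be sketched with a normal approximation for a two-sample comparison. The function and numbers below are illustrative assumptions, not values from the text:

```python
import math
from statistics import NormalDist

def power_two_sample(effect, sd, n_per_group, alpha=0.05):
    """Approximate power of a two-sample test to detect a chosen effect size.

    Normal approximation; the effect size is selected a priori (e.g., a
    management trigger point), not the effect observed in the study.
    """
    nd = NormalDist()
    se = sd * math.sqrt(2.0 / n_per_group)
    z_crit = nd.inv_cdf(1.0 - alpha / 2.0)
    return nd.cdf(effect / se - z_crit)

# Hypothetical numbers: could the design detect a 0.20 change given sd = 0.35, n = 15?
pw = power_two_sample(effect=0.20, sd=0.35, n_per_group=15)
# pw falls well below the conventional 0.80 target, so caution is warranted
```

Rerunning the calculation over a range of sample sizes also shows what it would take to reach acceptable power in a redesigned study.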
Elzinga et al. (2001) recommends the following approach to conducting a post hoc power analysis assessment (Figure 11.6). If a statistical test was declared non-significant, one could calculate a
power value to detect a biologically significant effect of interest, usually a trigger point tied to a management action. If the resulting power is low, one must take precautionary measures in the
monitoring program. Alternatively, one can calculate the minimum detectable effect size at a selected power level; an acceptable power level in wildlife studies is often set at about 0.80 (Hayes
and Steidl 1997). If the minimum detectable change at the selected power is larger than the trigger point value, the outcome of the study should again be viewed with caution.
Figure 11.6. A decision process to evaluate the significance of a statistical test (redrafted from Elzinga et al. 2001).
Monitoring plans may also encourage the use of confidence intervals as an alternative approach to performing a post hoc power analysis. This method is actually superior to power analysis since
confidence intervals not only suggest whether or not the effect was different from zero, but they also provide an estimate of the likely magnitude of the true effect size and its biological
significance. Ultimately, for scientific endeavors these are rules of thumb. In management contexts, however, where decisions are made under uncertainty and the outcomes carry costs, power
calculations and other estimates of acceptable uncertainty should be approached more rigorously.
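The confidence-interval alternative amounts to comparing an interval for the estimated effect against the trigger point. A minimal sketch with hypothetical annual-change data:

```python
from statistics import NormalDist, mean, stdev

def mean_ci(data, confidence=0.95):
    """Normal-approximation confidence interval for a sample mean."""
    z = NormalDist().inv_cdf(0.5 + confidence / 2.0)
    half = z * stdev(data) / len(data) ** 0.5
    return mean(data) - half, mean(data) + half

# Hypothetical annual changes in occupancy; the management trigger is -0.10
changes = [-0.02, -0.05, 0.01, -0.08, -0.03, -0.04, 0.00, -0.06]
lo, hi = mean_ci(changes)
# The interval shows both whether the effect differs from zero and whether
# its plausible magnitude reaches the trigger point
```

In this sketch the interval excludes zero (a decline is indicated) but also excludes the trigger point, which is exactly the magnitude information a bare P-value would not convey.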
Even before the collection of data, researchers must consider which analytical techniques will likely be appropriate to interpret their data. Techniques will be highly dependent on the design of the
monitoring program, so a monitoring plan should clearly articulate the expected analytical approaches after consulting with a biometrician. After data collection but before statistical analyses are
conducted, it is often helpful to view the data graphically to understand data structure. Assumptions upon which certain techniques are based (e.g., normality, independence of observations and
uniformity of variances for parametric analyses) should be tested. Some violations of assumptions may be addressed with transformations, while others may need different approaches.
Detected/non-detected data, count data, time series, and before-after control-impact designs all have different data structures and will need to be analyzed in quite different ways. Given the considerable room for
spurious analysis and subsequent erroneous interpretation, if possible, a biometrician/statistician should be consulted throughout the entire process of data analysis.
Agresti, A. 2002. Categorical data analysis. 2nd edition. John Wiley & Sons, New York, New York.
Anderson, D.R. 2001. The need to get the basics right in wildlife field studies. Wildlife Society Bulletin 29:1294-1297.
Anderson, D.R. 2008. Model based inference in the life sciences: a primer on evidence. Springer, New York, New York.
Anderson, D.R., K.P. Burnham, W.R. Gould, and S. Cherry. 2001. Concerns about finding effects that are actually spurious. Wildlife Society Bulletin 29:311-316.
Anscombe, F.J. 1973. Graphs in statistical-analysis. American Statistician 27:17-21.
Böhning-Gaese, K., M.L. Taper, and J.H. Brown. 1993. Are declines in North American insectivorous songbirds due to causes on the breeding range? Conservation Biology 7:76-86.
Bolker, B.M. 2008. Ecological models and data in R. Princeton University Press, Princeton, New Jersey. 408pp.
Bolker, B.M., M.E. Brooks, C.J. Clark, S.W. Geange, J.R. Poulsen, M.H.H. Stevens, and J.S.S. White. 2009. Generalized linear mixed models: a practical guide for ecology and evolution. Trends in
Ecology & Evolution 24:127-135.
Burnham, K.P., and D.R. Anderson. 2002. Model selection and inference: a practical information-theoretic approach. Springer-Verlag, New York, New York, USA. 454pp.
Carroll, R., C. Augspurger, A. Dobson, J. Franklin, G. Orians, W. Reid, R. Tracy, D. Wilcove, and J. Wilson. 1996. Strengthening the use of science in achieving the goals of the endangered species
act: An assessment by the Ecological Society of America. Ecological Applications 6:1-11.
Caughley, G. 1977. Analysis of vertebrate populations. John Wiley & Sons, New York, New York.
Caughley, G., and A. R. E. Sinclair. 1994. Wildlife management and ecology. Blackwell Publishing, Malden, MA.
Cleveland, W.S. 1985. The elements of graphing data. Wadsworth Advanced Books and Software, Monterey, Calif.
Cochran, W.G. 1977. Sampling techniques. 3rd edition. John Wiley & Sons, New York.
Conover, W.J. 1999. Practical nonparametric statistics. 3rd edition. Wiley, New York.
Crawley, M.J. 2005. Statistics: an introduction using R. John Wiley & Sons, Ltd, West Sussex, England.
Crawley, M.J. 2007. The R Book. John Wiley & Sons, Ltd, West Sussex, England.
Crowley, P.H. 1992. Resampling methods for computation-intensive data-analysis in ecology and evolution. Annual Review of Ecology and Systematics 23:405-447.
Cunningham, R.B., and P. Olsen. 2009. A statistical methodology for tracking long-term change in reporting rates of birds from volunteer-collected presence-absence data. Biodiversity and Conservation
Day, R.W., and G.P. Quinn. 1989. Comparisons of treatments after an analysis of variance in ecology. Ecological Monographs 59:433-463.
Dennis, B. 1996. Discussion: Should ecologists become Bayesians? Ecological Applications 6:1095-1103.
Donovan, T.M., and J. Hines. 2007. Exercises in occupancy modeling and estimation. www.uvm.edu/envnr/vtcfwru/spreadsheets/occupancy/occupancy.htm
Edgington, E.S., and P. Onghena. 2007. Randomization tests. 4th edition. Chapman & Hall/CRC, Boca Raton, FL.
Edwards, D., and B.C. Coull. 1987. Autoregressive trend analysis – an example using long-term ecological data. Oikos 50:95-102.
Ellison, A.M. 1996. An introduction to Bayesian inference for ecological research and environmental decision-making. Ecological Applications 6:1036-1046.
Elzinga, C.L., D.W. Salzer, and J.W. Willoughby. 1998. Measuring and monitoring plant populations. Technical Reference 1730-1., Bureau of Land Management, National Business Center, Denver, CO.
Elzinga, C.L., D.W. Salzer, J.W. Willoughby, and J.P. Gibbs. 2001. Monitoring plant and animal populations. Blackwell Science, Inc., Malden, Massachusetts.
Engeman, R.M. 2003. More on the need to get the basics right: population indices. Wildlife Society Bulletin 31:286-287.
Faraway, J. J. 2006. Extending the linear model with R: generalized linear, mixed effects and nonparametric regression models. Chapman & Hall/CRC, Boca Raton.
Fortin, M.-J., and M.R.T. Dale. 2005. Spatial analysis : a guide for ecologists. Cambridge University Press, Cambridge, UK ; New York.
Fowler, N. 1990. The 10 most common statistical errors. Bulletin of the Ecological Society of America 71:161-164.
Freeman, E.A., and G.G. Moisen. 2008. A comparison of the performance of threshold criteria for binary classification in terms of predicted prevalence and kappa. Ecological Modelling 217:48-58.
Geissler, P.H., and J.R. Sauer. 1990. Topics in route-regression analysis. Pages 54-57 in J.R. Sauer and S.Droege, editors. Survey designs and statistical methods for the estimation of avian
population trends. USDI Fish and Wildlife Service, Washington, DC.
Gelman, A., and J. Hill. 2007. Data analysis using regression and multilevel/hierarchical models. Cambridge University Press, Cambridge ; New York.
Gerrodette, T. 1987. A power analysis for detecting trends. Ecology 68:1364-1372.
Gibbs, J.P., S. Droege, and P. Eagle. 1998. Monitoring populations of plants and animals. Bioscience 48:935-940.
Gibbs, J.P., H.L. Snell, and C.E. Causton. 1999. Effective monitoring for adaptive wildlife management: Lessons from the Galapagos Islands. Journal of Wildlife Management 63:1055-1065.
Gotelli, N.J., and A.M. Ellison. 2004. A Primer of Ecological Statistics. Sinaeur, Sunderland, MA.
Green, R.H. 1979. Sampling design and statistical methods for environmental biologists. Wiley, New York.
Gurevitch, J., J.A. Morrison, and L.V. Hedges. 2000. The interaction between competition and predation: A meta-analysis of field experiments. American Naturalist 155:435-453.
Hall, D.B., and K.S. Berenhaut. 2002. Score test for heterogeneity and overdispersion in zero-inflated Poisson and Binomial regression models. The Canadian Journal of Statistics 30:1-16.
Harrell, F.E. 2001. Regression modeling strategies with applications to linear models, logistic regression, and survival analysis. Springer, New York.
Harris, R.B. 1986. Reliability of trend lines obtained from variable counts. Journal of Wildlife Management 50:165-171.
Harris, R.B., and F.W. Allendorf. 1989. Genetically effective population size of large mammals – an assessment of estimators. Conservation Biology 3:181-191.
Hayek, L.-A.C., and M.A. Buzas. 1997. Surveying natural populations. Columbia University Press, New York.
Hayes, J.P., and R.J. Steidl. 1997. Statistical power analysis and amphibian population trends. Conservation Biology 11:273-275.
Hedges, L.V., and I. Olkin. 1985. Statistical methods for meta-analysis. Academic Press, Orlando.
Heilbron, D. 1994. Zero-altered and other regression models for count data with added zeros. Biometrical Journal 36:531-547.
Hilbe, J. 2007. Negative binomial regression. Cambridge University Press, Cambridge ; New York.
Hilborn, R., and M. Mangel. 1997. The ecological detective: confronting models with data. Princeton University Press, Princeton, NJ.
Hollander, M., and D.A. Wolfe. 1999. Nonparametric statistical methods. 2nd edition. Wiley, New York.
Hosmer, D.W., and S. Lemeshow. 2000. Applied logistic regression. 2nd edition. Wiley, New York.
Huff, M.H., K.A. Bettinger, H.L. Ferguson, M.J. Brown, and B. Altman. 2000. A habitat-based point-count protocol for terrestrial birds, emphasizing Washington and Oregon. USDA Forest Service General
Technical Report, PNW-GTR-501.
Hurlbert, S.H. 1984. Pseudoreplication and the design of ecological field experiments. Ecological Monographs 54:187-211.
James, F.C., C.E. McCullogh, and D.A. Wiedenfeld. 1996. New approaches to the analysis of population trends in land birds. Ecology 77:13-27.
Johnson, D.H. 1995. Statistical sirens – the allure of nonparametrics. Ecology 76:1998-2000.
Johnson, D.H. 1999. The insignificance of statistical significance testing. Journal of Wildlife Management 63:763-772.
Kéry, M., J.A. Royle, M. Plattner, and R.M. Dorazio. 2009. Species richness and occupancy estimation in communities subject to temporary emigration. Ecology 90:1279-1290.
Kéry, M., J.A. Royle, and H. Schmid. 2008. Importance of sampling design and analysis in animal population studies: a comment on Sergio et al. Journal of Applied Ecology 45:981-986.
Kéry, M., and H. Schmid. 2004. Monitoring programs need to take into account imperfect species detectability. Basic and Applied Ecology 5:65-73.
Knapp, R.A., and K.R. Matthews. 2000. Non-native fish introductions and the decline of the mountain yellow-legged frog from within protected areas. Conservation Biology 14:428-438.
Krebs, C.J. 1999. Ecological methodology. 2nd edition. Benjamin/Cummings, Menlo Park, Calif.
Lebreton, J.D., K.P. Burnham, J. Clobert, and D.R. Anderson. 1992. Modeling survival and testing biological hypotheses using marked animals – a unified approach with case-studies. Ecological
Monographs 62:67-118.
Link, W.A., and J.R. Sauer. 1997a. Estimation of population trajectories from count data. Biometrics 53:488-497.
Link, W.A., and J.R. Sauer. 1997b. New approaches to the analysis of population trends in land birds: Comment. Ecology 78:2632-2634.
Link, W.A., and J.R. Sauer. 2007. Seasonal components of avian population change: joint analysis of two large-scale monitoring programs. Ecology 88:49-55.
Lorda, E., and S.B. Saila. 1986. A statistical technique for analysis of environmental data containing periodic variance components. Ecological Modelling 32:59-69.
MacKenzie, D.I. 2005. What are the issues with presence-absence data for wildlife managers? Journal of Wildlife Management 69:849-860.
MacKenzie, D.I., J.D. Nichols, J.A. Royle, K.H. Pollock, L.L. Bailey, and J.E. Hines. 2006. Occupancy estimation and modeling: inferring patterns and dynamics of species occurrence. Elsevier Academic
Press, Burlingame, MA.
MacKenzie, D.I., L.L. Bailey, and J.D. Nichols. 2004. Investigating species co-occurrence patterns when species are detected imperfectly. Journal of Animal Ecology 73:546-555.
MacKenzie, D.I., J.D. Nichols, J.E. Hines, M.G. Knutson, and A.B. Franklin. 2003. Estimating site occupancy, colonization, and local extinction when a species is detected imperfectly. Ecology
MacKenzie, D.I., J.D. Nichols, G.B. Lachman, S. Droege, J.A. Royle, and C.A. Langtimm. 2002. Estimating site occupancy rates when detection probabilities are less than one. Ecology 83:2248-2255.
MacKenzie, D.I., J.D. Nichols, M.E. Seamans, and R.J. Gutierrez. 2009. Modeling species occurrence dynamics with multiple states and imperfect detection. Ecology 90:823-835.
MacKenzie, D.I., J.D. Nichols, N. Sutton, K. Kawanishi, and L.L. Bailey. 2005. Improving inferences in population studies of rare species that are detected imperfectly. Ecology 86:1101-1113.
Mackenzie, D.I., and J.A. Royle. 2005. Designing occupancy studies: general advice and allocating survey effort. Journal of Applied Ecology 42:1105-1114.
Magurran, A.E. 1988. Ecological diversity and its measurement. Princeton University Press, Princeton, N.J.
McCulloch, C.E., S.R. Searle, and J.M. Neuhaus. 2008. Generalized, linear, and mixed models. 2nd edition. Wiley, Hoboken, N.J.
Nichols, J.D. 1992. Capture-recapture models. Bioscience 42:94-102.
Nichols, J.D., and W.L. Kendall. 1995. The use of multi-state capture-recapture models to address questions in evolutionary ecology. Journal of Applied Statistics 22:835-846.
O’Hara, R.B., E. Arjas, H. Toivonen, and I. Hanski. 2002. Bayesian analysis of metapopulation data. Ecology 83:2408-2415.
Petraitis, P.S., S.J. Beaupre, and A.E. Dunham. 2001. ANCOVA: nonparametric and randomization approaches. Pages 116-133 in S.M. Scheiner and J. Gurevitch, editors. Design and analysis of ecological
experiments. Oxford University Press, Oxford, New York.
Pollock, K.H., J.D. Nichols, T.R. Simons, G.L. Farnsworth, L.L. Bailey, and J.R. Sauer. 2002. Large scale wildlife monitoring studies: statistical methods for design and analysis. Environmetrics
Potvin, C., and D.A. Roff. 1993. Distribution-free and robust statistical methods – viable alternatives to parametric statistics. Ecology 74:1617-1628.
Power, M.E., and L.S. Mills. 1995. The keystone cops meet in Hilo. Trends in Ecology & Evolution 10:182-184.
Quinn, G.P., and M.J. Keough. 2002. Experimental design and data analysis for biologists. Cambridge University Press, Cambridge, UK.
Robbins, C.S. 1990. Use of breeding bird atlases to monitor population change. Pages 18-22 in J.R. Sauer and S. Droege, editors. Survey designs and statistical methods for the estimation of avian
population trends. USDI Fish and Wildlife Service, Washington, DC.
Rosenstock, S.S., D.R. Anderson, K.M. Giesen, T. Leukering, and M.F. Carter. 2002. Landbird counting techniques: Current practices and an alternative. Auk 119:46-53.
Rotella, J.J., J.T. Ratti, K.P. Reese, M.L. Taper, and B. Dennis. 1996. Long-term population analysis of gray partridge in eastern Washington. Journal of Wildlife Management 60:817-825.
Royle, J.A., and R.M. Dorazio. 2008. Hierarchical modeling and inference in ecology: the analysis of data from populations, metapopulations, and communities. Academic Press, Boston, MA.
Royle, J.A., M. Kery, R. Gautier, and H. Schmid. 2007. Hierarchical spatial models of abundance and occurrence from imperfect survey data. Ecological Monographs 77:465-481.
Sabin, T.E., and S.G. Stafford. 1990. Assessing the need for transformation of response variables. Forest Research Laboratory, Oregon State University, Corvallis, Oregon.
Sauer, J.R., G.W. Pendleton, and B G. Peterjohn. 1996. Evaluating causes of population change in North American insectivorous songbirds. Conservation Biology 10:465-478.
Scheiner, S.M., and J. Gurevitch. 2001. Design and analysis of ecological experiments. 2nd edition. Oxford University Press, Oxford, UK.
Schratzberger, M., T.A. Dinmore, and S. Jennings. 2002. Impacts of trawling on the diversity, biomass and structure of meiofauna assemblages. Marine Biology 140:83-93.
Sokal, R. R., and F. J. Rohlf. 1994. Biometry; the principles and practice of statistics in biological research. 3rd edition. W. H. Freeman, San Francisco, CA.
Southwood, R. 1992. Ecological methods, with particular reference to the study of insect populations. Methuen, London, UK.
Stanley, T.R., and F.L. Knopf. 2002. Avian responses to late-season grazing in a shrub-willow floodplain. Conservation Biology 16:225-231.
Steidl, R.J., J.P. Hayes, and E. Schauber. 1997. Statistical power analysis in wildlife research. Journal of Wildlife Management 61:270-279.
Steidl, R.J., and L. Thomas. 2001. Power analysis and experimental design. Pages 14-36 in S. M. Scheiner and J. Gurevitch, editors. Design and analysis of ecological experiments. Oxford University
Press, UK.
Stephens, P.A., S.W. Buskirk, and C. Martinez del Rio. 2006. Inference in ecology and evolution. Trends in Ecology & Evolution 22:192-197.
Stewart-Oaten, A., W.W. Murdoch, and K.R. Parker. 1986. Environmental-impact assessment – pseudoreplication in time. Ecology 67:929-940.
Taylor, B.L., P.R. Wade, R.A. Stehn, and J.F. Cochrane. 1996. A Bayesian approach to classification criteria for spectacled eiders. Ecological Applications 6:1077-1089.
Temple, S., and J.R. Cary. 1990. Using checklist records to reveal trends in bird populations. Pages 98-104 in J.R. Sauer and S. Droege, editors. Survey designs and statistical methods for the
estimation of avian population trends. USDI Fish and Wildlife Service, Washington, DC.
Thomas, L. 1996. Monitoring long-term population change: Why are there so many analysis methods? Ecology 77:49-58.
Thomas, L., and K. Martin. 1996. The importance of analysis method for breeding bird survey population trend estimates. Conservation Biology 10:479-490.
Thompson, S.K. 2002. Sampling. 2nd edition. Wiley, New York.
Thompson, W.L. 2004. Sampling rare or elusive species : concepts, designs, and techniques for estimating population parameters. Island Press, Washington.
Thompson, W.L., G.C. White, and C. Gowan. 1998. Monitoring vertebrate populations. Academic Press, San Diego, Calif.
Thomson, D.L., M.J. Conroy, and E.G. Cooch, editors. 2009. Modeling demographic processes in marked populations. Springer, New York.
Trexler, J.C., and J. Travis. 1993. Nontraditional regression analyses. Ecology 74:1629-1637.
Tufte, E.R. 2001. The visual display of quantitative information. 2nd edition. Graphics Press, Cheshire, Conn.
Tukey, J.W. 1977. Exploratory data analysis. Addison-Wesley Pub. Co., Reading, Mass.
Underwood, A.J. 1994. On beyond BACI: sampling designs that might reliably detect environmental disturbances. Ecological Applications 4:3-15.
Underwood, A.J. 1997. Experiments in ecology: Their logical design and interpretation using analysis of variance. Cambridge University Press, New York, New York.
Wade, P.R. 2000. Bayesian methods in conservation biology. Conservation Biology 14:1308-1316.
Wardell-Johnson, G., and M. Williams. 2000. Edges and gaps in mature karri forest, south-western Australia: logging effects on bird species abundance and diversity. Forest Ecology and Management
Welsh, A.H., R.B. Cunningham, C.F. Donnelly, and D.B. Lindenmayer. 1996. Modelling the abundance of rare species: Statistical models for counts with extra zeros. Ecological Modelling 88:297-308.
White, G.C., and R.E. Bennetts. 1996. Analysis of frequency count data using the negative binomial distribution. Ecology 77:2549-2557.
White, G.C., W.L. Kendall, and R.J. Barker. 2006. Multistate survival models and their extensions in Program MARK. Journal of Wildlife Management 70:1521–1529.
Williams, B.K., J.D. Nichols, and M.J. Conroy. 2002. Analysis and management of animal populations : modeling, estimation, and decision making. Academic Press, San Diego.
Yoccoz, N.G., J.D. Nichols, and T. Boulinier. 2001. Monitoring of biological diversity in space and time. Trends in Ecology & Evolution 16:446-453.
Zar, J.H. 1999. Biostatistical analysis. 4th edition. Prentice Hall, Upper Saddle River, N.J.
Zuckerberg, B., W.F. Porter, and K. Corwin. 2009. The consistency and stability of abundance-occupancy relationships in large-scale population dynamics. Journal of Animal Ecology 78:172-181.
Zuur, A. 2009. Mixed effects models and extensions in ecology with R. 1st edition. Springer, New York.
Zuur, A. F., E. N. Ieno, and G. M. Smith. 2007. Analysing ecological data. Springer, New York ; London.
Multiple Time-Step Dual-Hamiltonian Hybrid Molecular Dynamics - Monte Carlo Canonical Propagation Algorithm
HHS Public Access Author manuscript
J Chem Theory Comput. Author manuscript; available in PMC 2016 July 15. Published in final edited form as: J Chem Theory Comput. 2016 April 12; 12(4): 1449–1458. doi:10.1021/acs.jctc.5b00706.
Multiple Time-Step Dual-Hamiltonian Hybrid Molecular Dynamics — Monte Carlo Canonical Propagation Algorithm
Yunjie Chen#†, Seyit Kale#†, Jonathan Weare‡, Aaron R. Dinner†, and Benoît Roux*,¶,†,∥
†Department of Chemistry, University of Chicago, Chicago, Illinois 60637, United States
‡Department of Statistics & James Franck Institute, University of Chicago, Chicago, Illinois 60637, United States
¶Department of Biochemistry and Molecular Biology, University of Chicago, Chicago, Illinois 60637, United States
∥Center for Nanomaterials, Argonne National Laboratory, Argonne, Illinois 60439, United States
#These authors contributed equally to this work.
A multiple time-step integrator based on a dual Hamiltonian and a hybrid method combining molecular dynamics (MD) and Monte Carlo (MC) is proposed to sample systems in the canonical ensemble. The
Dual Hamiltonian Multiple Time-Step (DHMTS) algorithm is based on two similar Hamiltonians: a computationally expensive one that serves as a reference and a computationally inexpensive one to which
the workload is shifted. The central assumption is that the difference between the two Hamiltonians is slowly varying. Earlier work has shown that such dual Hamiltonian multiple time-step schemes
effectively precondition nonlinear differential equations for dynamics by reformulating them into a recursive root finding problem that can be solved by propagating a correction term through an
internal loop, analogous to RESPA. Of special interest in the present context, a hybrid MD-MC version of the DHMTS algorithm is introduced to enforce detailed balance via a Metropolis acceptance
criterion and ensure consistency with the Boltzmann distribution. The Metropolis criterion suppresses the discretization errors normally associated with the propagation according to the
computationally inexpensive Hamiltonian, treating the discretization error as an external work. Illustrative tests are carried out to demonstrate the effectiveness of the method.
*Corresponding Author: [email protected]. The authors declare no competing financial interest.
Chen et al.
I. INTRODUCTION
Molecular dynamics (MD) simulations offer a powerful avenue to study the properties of complex atomic and molecular systems.1 In classical MD the potential energy surface is represented by a
molecular mechanical (MM) force field constructed from simple differentiable functions.2 This type of approximation is necessary to achieve the desired computational efficiency needed to simulate
very large systems for long time scales. More accurate representations of the electronic behavior of a system can be obtained by using quantum mechanical (QM) or hybrid QM/MM approaches.3 However,
such sophisticated ab initio models evolve on a complex energy surface. Their dynamic propagation often requires a small time-step, which can become computationally prohibitive in MD simulations. For
this reason, various strategies have been sought to achieve an adequate sampling by accelerating the exploration of configurational space.
A common method to improve sampling in MD is to use multiple time-step algorithms such as RESPA (REference System Propagator Algorithm).4-7 RESPA assumes that a full system Hamiltonian can be
separated into its fast and slowly varying components.8-11 Fast varying forces are evaluated every step, while slowly varying forces are updated less frequently to reduce computational cost. Similar
splitting schemes have been applied to the potential energy to accelerate Monte Carlo (MC) propagators.12 Versions of RESPA have been derived for both the NVE and NVT ensembles.13 However, in
practice, the method exhibits systematic errors for larger time-steps and increased numbers of evaluations of the fast varying forces for each evaluation of the slow ones.13,14 For this reason, the
original algorithm does not obey detailed balance to acceptable accuracy15,16 and was reported to suffer from resonance issues,17-19 some of which were addressed by recent work.20-27
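As a point of reference for the discussion that follows, the basic RESPA splitting can be sketched as a velocity-Verlet scheme with an inner loop over the fast forces. This is a generic illustration, not the authors' DHMTS algorithm, and the harmonic forces in the demo are hypothetical:

```python
def respa_step(r, p, m, f_fast, f_slow, dt, n_inner):
    """One RESPA step: slow forces kick at the outer time-step dt, while the
    fast forces are integrated with n_inner velocity-Verlet substeps."""
    p += 0.5 * dt * f_slow(r)       # outer half-kick (slow forces)
    h = dt / n_inner
    for _ in range(n_inner):        # inner loop (fast forces only)
        p += 0.5 * h * f_fast(r)
        r += h * p / m
        p += 0.5 * h * f_fast(r)
    p += 0.5 * dt * f_slow(r)       # closing outer half-kick
    return r, p

# Demo: a stiff and a soft harmonic force (hypothetical); energy should be
# approximately conserved when dt stays clear of resonance with the fast period.
k_fast, k_slow, mass = 100.0, 1.0, 1.0
energy = lambda r, p: 0.5 * p * p / mass + 0.5 * (k_fast + k_slow) * r * r
r, p = 1.0, 0.0
e0 = energy(r, p)
for _ in range(1000):
    r, p = respa_step(r, p, mass, lambda x: -k_fast * x, lambda x: -k_slow * x,
                      dt=0.05, n_inner=10)
e1 = energy(r, p)
```

The resonance issues mentioned above arise when the outer step dt approaches half the fast-oscillation period; the demo deliberately keeps dt well below that threshold.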
Multiple time-step dynamics have recently been coupled with MC to address RESPA-related resonance issues; the performance improvement over conventional Langevin dynamics remained low.28 Here, we
propose a multiple time-step integrator based on a dual Hamiltonian and a hybrid method combining molecular dynamics and Monte Carlo to sample the canonical NVT ensemble. It is based on
nonequilibrium molecular dynamics (neMD), in which the system is steered from one state to another. Hybrid nonequilibrium Molecular Dynamics Monte Carlo (neMD-MC) methods were originally introduced
for constant pH sampling,29-31 but they have since been shown to be efficient general purpose algorithms.32-37 Of special interest in the present context is that the introduction of a Metropolis
acceptance criterion38 suppresses the discretization errors normally associated J Chem Theory Comput. Author manuscript; available in PMC 2016 July 15.
Chen et al.
Author Manuscript
with molecular dynamics.34,36,39-41 The essential idea is that the work done on the system by the discretization error can be thought of as an external work.
Author Manuscript
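The Metropolis step that makes this work might be sketched as follows, treating the accumulated discretization error as an external work W. The function signature is an assumption for illustration, not the paper's implementation:

```python
import math
import random

def metropolis_accept(work, beta, rng=None):
    """Accept a nonequilibrium MD proposal with probability min(1, exp(-beta*W)),
    where W is the external work, including the discretization error."""
    rng = rng or random.Random()
    if work <= 0.0:
        return True
    return rng.random() < math.exp(-beta * work)

# Downhill moves (W <= 0) are always accepted; costly moves only rarely
always = metropolis_accept(-0.5, beta=1.0)
rarely = metropolis_accept(50.0, beta=1.0, rng=random.Random(0))
```

Because rejected proposals revert to the previous state, this acceptance rule restores detailed balance with respect to the Boltzmann distribution of the reference Hamiltonian even though the trajectory itself is generated with a discretized, approximate propagator.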
Our method is based on two similar Hamiltonians: a reference one in which we are interested and an alternative one to which we aim to shift the workload. For the method to be efficient, the latter
potential should be far less computationally demanding than the former. Contrary to RESPA, we do not assume that the reference Hamiltonian has slowly varying components. This is useful because the
partitioning for a given system can be nonobvious, varying from local interactions versus long-range electrostatics in dense matter42 to electron correlation versus the underlying Hartree–Fock wave
function in ab initio molecular dynamics (AIMD).6 Instead we assume that the difference between the two Hamiltonians is slowly varying. Then we can express the reference Hamiltonian as a sum of the
second Hamiltonian and the slowly varying difference term. Approximations similar in motivation have been successfully applied to accelerate iterative optimizations, including energy minimization and
finite temperature reaction path refinement.43,44 Earlier work has shown that these schemes effectively precondition nonlinear differential equations for dynamics by reformulating them into a
recursive root finding problem45,46 that can be solved by propagating a correction term through an internal loop, analogous to RESPA.
The paper is organized as follows. In the next two sections, we outline the theory behind dual Hamiltonian based canonical sampling and describe the corresponding Langevin dynamics and the
computational setup. We first introduce the Dual Hamiltonian Multiple Time Step (DHMTS) algorithm, which can be used when the difference between a high level theory Hamiltonian and a
computationally inexpensive Hamiltonian is slowly varying. We then introduce a hybrid MD-MC version of the DHMTS algorithm to ensure consistency with the Boltzmann distribution by enforcing detailed
balance via a Metropolis acceptance criterion.38 We then discuss our results obtained from one-dimensional model systems as well as ab initio gas phase clusters. We report speedups and various
restrictions as seen in model systems as well as two different AIMD simulations. We demonstrate that computationally “inexpensive” semiempirical Hamiltonians can be used to accelerate trajectories of
computationally more demanding DFT or MP2 Hamiltonians even for reactive chemical systems far from equilibrium. We conclude by highlighting future directions, including possibilities within the
framework of emerging force matching and adaptive procedures in which the information accumulated on-the-fly progressively improves the parameters of the initial model.
II. THEORY
The standard Hamiltonian in Cartesian coordinates is

H(r, p) = ½ pᵀM⁻¹p + U(r),    (1)

where r is the coordinates, p is the momenta, M is the mass matrix, and U is the potential energy.
J Chem Theory Comput. Author manuscript; available in PMC 2016 July 15.
Chen et al.
Let the reference potential U0, forces F0, and a corresponding Liouville operator iL0 be derived from an accurate and computationally expensive Hamiltonian H0, and U1, F1, and iL1 be derived from an approximate and computationally inexpensive one, H1. Here, the Liouville operator is expressed as

iL = (p/M)·(∂/∂r) + F(r)·(∂/∂p).    (2)

The operator can be made measure-preserving to sample different ensembles.47 The reference operator, iL0, can be separated into two terms

iL0 = iL1 + i(L0 − L1),    (3)
where iL1 and i(L0−L1) are noncommuting. Using Trotter factorization,48,49 the classical propagator eiL0Δt can be expressed as

eiL0Δt = ei(L0−L1)Δt/2 eiL1Δt ei(L0−L1)Δt/2.    (4)

The form of eq 2 permits further simplification such that
The factor eiL1Δt varies rapidly relative to ei(L0−L1)Δt/2 and therefore prevents the use of a large time-step Δt. To address this issue, we divide Δt into ninner steps of length δt. We then evolve L1 for ninner steps for each evaluation of L0. To this end, we apply an additional Trotter factorization and separate iL1 into iL1r and iL1p, where iL1 = iL1r + iL1p, which gives

eiL0Δt ≈ ei(L0−L1)Δt/2 [eiL1pδt/2 eiL1rδt eiL1pδt/2]ninner ei(L0−L1)Δt/2.    (6)

When eq 6 is used as a single integration step and eiL1pδt/2 and eiL1rδt have valid functional forms that preserve temperature, the resulting propagator approximately samples the Boltzmann distribution for small enough time-steps δt.
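The update of eq 6 amounts to two outer half-kicks with the slowly varying force difference F0 − F1 wrapped around an inner loop driven only by the cheap forces. A minimal Python sketch of this structure, with placeholder force callables f0 and f1 (an illustrative reimplementation, not the authors' code):

```python
def dhmts_step(r, p, m, f0, f1, dt, n_inner):
    """One outer step of the dual Hamiltonian multiple time-step propagator.

    f0(r): expensive reference force; f1(r): cheap preconditioner force.
    The outer half-kicks apply i(L0 - L1)dt/2 through the force difference;
    the inner velocity-Verlet loop evolves only the cheap Hamiltonian H1.
    """
    h = dt / n_inner
    p += 0.5 * dt * (f0(r) - f1(r))        # outer half-kick, e^{i(L0-L1)dt/2}
    for _ in range(n_inner):
        p += 0.5 * h * f1(r)               # e^{iL1p dt/2}
        r += h * p / m                     # e^{iL1r dt}
        p += 0.5 * h * f1(r)               # e^{iL1p dt/2}
    p += 0.5 * dt * (f0(r) - f1(r))        # closing outer half-kick
    return r, p
```

When f0 and f1 coincide, the correction kicks vanish and the scheme reduces to plain velocity Verlet at the inner time-step.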
It is worth noting that when H1 corresponds to the fast varying forces of H0, then this propagator reduces to RESPA. Here we explore the general case of H0 and H1 that are similar to each other but not necessarily separated in time scales.
For most discretized propagators, the error is known to scale with the time-step δt. For a RESPA-like propagator, the error further grows with the number of inner loop steps ninner, as shown in eq 6, and with the difference between iL0 and iL1. This limitation leads to an unavoidable tradeoff between the potential speedup and the integration error. Fortunately, recent work has shown that, for a time-independent Hamiltonian, finite time-step Langevin integrators can be thought of as driven, nonequilibrium physical processes.34,36 This numerical error can typically be corrected on-the-fly by enforcing detailed balance via a Metropolis acceptance criterion. Constant temperature RESPA-like symplectic propagators are similar to Langevin integrators, except they are discretized twice. Therefore, RESPA-like propagators can also be corrected by taking into account an effective “error work”, δWe (also called the “shadow work”).36,50,51 The acceptance criterion can then be applied after each dynamics step, eq 6. The exact criterion is given by eq 18 in ref 36 as

Ta = min[1, e−β(ΔE−Qreal)],    (8)

where ΔE is the total energy change during the entire dynamics step, and Qreal is the energy difference accumulated from the temperature conserving stochastic substeps, i.e., the heat exchanged between the bath and the system.36
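In practice the criterion is a single Metropolis test on the error work δWe = ΔE − Qreal. A schematic Python version (illustrative names; β = 1/kBT):

```python
import math
import random

def metropolis_accept(delta_E, Q_real, beta, rng=random):
    """Generalized Metropolis test, T_a = min[1, exp(-beta (dE - Q_real))].

    delta_E: total energy change over one dynamics step (eq 6);
    Q_real: heat exchanged with the bath in the stochastic substeps.
    Their difference is the error ("shadow") work done by discretization.
    """
    shadow_work = delta_E - Q_real
    if shadow_work <= 0.0:
        return True                      # non-positive error work: always accept
    return rng.random() < math.exp(-beta * shadow_work)
```

On rejection the pre-step state is restored, so the accepted ensemble obeys detailed balance.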
In practice, the Langevin equations are integrated by following a predefined sequence of substeps that are repeated every time-step δt. The resulting trajectory is continuous in time and contains error characteristic of the underlying choice of splitting, known as the propagator. Here, we work with the VVVR (Velocity Verlet with Velocity Randomization) propagator51,52 and the so-called BAOAB (momenta–position–randomization–position–momenta) propagator of Leimkuhler and Matthews.52 Both VVVR and BAOAB require a single force evaluation per time-step, typically the most CPU-intensive
operation in a molecular simulation. In our VVVR implementation, one integration time-step is composed of the substeps of eq 9, where γ is the friction coefficient, mi is the mass of particle i, and N(0,1) is a vector of Gaussian random numbers with zero mean and unit standard deviation. Here, pi and ri are the momenta and position vectors of particle i, and fi is the total force exerted on particle i by all other particles in the system. Tilde and wide-tilde designate intermediate quantities. The BAOAB propagator, on
the other hand, uses an alternative half-splitting scheme, which yields improved accuracy over Velocity-Verlet.52 Both positions and momenta are updated in two half-steps, and randomization is
executed at the midpoint. Following the same notation as eq 9, the BAOAB sequence is executed as
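Both schemes can be written compactly in code. The sketch below follows the published VVVR and BAOAB updates (refs 51, 52) for a single particle in one dimension; it is an illustrative reimplementation, not the authors' in-house code, and uses the exact Ornstein–Uhlenbeck solution for the randomization ("O") substep:

```python
import math
import random

def ou_update(p, m, kT, gamma, h):
    """Exact Ornstein-Uhlenbeck momentum randomization over a time h."""
    c = math.exp(-gamma * h)
    return c * p + math.sqrt(m * kT * (1.0 - c * c)) * random.gauss(0.0, 1.0)

def vvvr_step(r, p, m, force, dt, kT, gamma):
    """VVVR: half randomization, full velocity Verlet, half randomization."""
    p = ou_update(p, m, kT, gamma, 0.5 * dt)   # O, half step
    p += 0.5 * dt * force(r)                   # B, half kick
    r += dt * p / m                            # A, full drift
    p += 0.5 * dt * force(r)                   # B, half kick
    p = ou_update(p, m, kT, gamma, 0.5 * dt)   # O, half step
    return r, p

def baoab_step(r, p, m, force, dt, kT, gamma):
    """BAOAB: half kick, half drift, randomization at the midpoint, half drift, half kick."""
    p += 0.5 * dt * force(r)                   # B
    r += 0.5 * dt * p / m                      # A
    p = ou_update(p, m, kT, gamma, dt)         # O, full step at the midpoint
    r += 0.5 * dt * p / m                      # A
    p += 0.5 * dt * force(r)                   # B
    return r, p
```

The single-force-evaluation property is preserved in a production code by caching force(r) between steps; the sketch recomputes it for clarity.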
For the sake of convenience and clarity, we used VVVR in the illustrative one-dimensional model system. The numerically superior scheme BAOAB was used for the detailed chemical simulations. It is
important to emphasize that the present approach can be coupled with any integrator. The choice, however, may affect the cumulative error and, thus, the maximum number of inner loop iterations that
can be used for a given pair of force fields. In light of this information, we can express the present approach in operator notation as well. For VVVR-based inner loops we have
and for BAOAB-based inner loops
where operators eiL1r and eiL1p correspond to position and momentum updates at the computationally fast level of theory, respectively, and eiLOU is velocity randomization. Furthermore, the effective
time-step Δt is related to the inner loop time-step δt as Δt = ninner · δt. Stochastic substeps, eq 9a, eq 9e, and eq 11c, conserve the temperature and do not affect the Metropolis acceptance
criterion, since energy changes of such substeps appear in both ΔE and Qreal. Therefore, one could alternatively propagate the inner loop deterministically and only perform stochastic substeps after
applying a Metropolis acceptance criterion. In that case, the deterministic Liouville operator is
such that the MC acceptance criterion simplifies to

Ta = min[1, e−βΔE],

where ΔE is the cumulative energy change during the inner loop. We call this setup “deterministic-inner-loop”. Our simulations show that the deterministic-inner-loop algorithm improves acceptance
rates without compromising the target distribution.
Symmetric two-end momentum reversal is used for MD-MC DHMTS to satisfy the detailed balance condition.30,35,36 Under this prescription, all momenta are reversed either both before and after the inner
loop or not reversed at all, both with equal probability. This prescription greatly reduces the chances of different regions in configurational space being isolated from one another.35 However,
symmetric momentum reversal can also lead to resonance problems, especially when using deterministic propagators. One solution is to recycle the decision of whether to reverse the momenta through several consecutive inner loops. As long as the frequency of making new decisions is not correlated with configurational changes, the detailed balance condition will be satisfied and resonance mitigated.
We will refer to this algorithm as “tandem flip”. Alternatively, one can skip applying the MC acceptance criterion and use 〈Ta〉 as an indicator of error, where 〈…〉 is a trajectory average. 〈Ta〉
close to 1 indicates that the Metropolis step can be abandoned altogether. This is the main difference between MD-MC DHMTS and DHMTS, the two algorithms illustrated in the next section. A pseudocode
is given in the Appendix.
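Schematically, one hybrid move with symmetric two-end momentum reversal looks as follows. This is an illustrative sketch with placeholder function names; the rejection bookkeeping is simplified relative to the full prescriptions of refs 30, 35, 36:

```python
import random

def nemd_mc_move(r, p, inner_loop, accept):
    """One neMD-MC move with symmetric two-end momentum reversal.

    With probability 1/2 the momenta are flipped both before and after
    the nonequilibrium inner loop; otherwise they are not flipped at all.
    inner_loop(r, p) -> (r_new, p_new, delta_E); accept(delta_E) -> bool.
    For "tandem flip", `flip` would be drawn once and reused over several
    consecutive inner loops instead of being redrawn every move.
    """
    flip = random.random() < 0.5
    r_new, p_new, delta_E = inner_loop(r, -p if flip else p)
    if flip:
        p_new = -p_new                   # second flip of the symmetric pair
    if accept(delta_E):
        return r_new, p_new
    return r, p                          # reject: restore the old state
```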
III. COMPUTATIONAL DETAILS
Here, we describe the various systems that we use to test the algorithms.
a). One-Dimensional Model System
We introduce a simple one-dimensional (1D) model system using two different potential energy surfaces U0(x) and U1(x)
as shown in Figure 1(a). U0 serves as the reference Hamiltonian, and U1 serves as the preconditioner. Both potentials are functions of the particle position x. The theoretical normalized distribution along x is

ρ(x) = exp[−U0(x)/kBT] / ∫ exp[−U0(x′)/kBT] dx′,
which we aim to reproduce using our dual Hamiltonian schemes with a time-step δt = 0.05, thermal energy kBT = 1.0, particle mass 1.0, and collision rate 4.0. All 1D model simulations are run using in-house C++ scripts. The total length of each simulation is 10⁷ steps. We explore four different propagators:
1. dissipative Langevin dynamics (VVVR) for equilibration and reference,
2. large time-step VVVR such that Δt = ninnerδt,
3. MD-MC DHMTS, and
4. DHMTS.
b). Gas Phase Water Clusters
We simulated two ab initio gas phase water clusters, the deprotonated water trimer (H2O)2OH− and a neutral water octamer (H2O)8, using MD-MC DHMTS and DHMTS. Trajectories are collected and analyzed
using in-house Python scripts. The Langevin propagator for stochastic substeps is BAOAB,52 a slightly modified version of VVVR. Deterministic substeps use Velocity-Verlet without any randomization.
Single point energies and gradients are calculated via Gaussian 09, revision A.02.53 For DFT runs, the hybrid B3LYP54 exchange-correlation functional is used together with Dunning’s correlation-consistent cc-pVDZ55 basis set. For MP2, the earlier Pople-type 6-31G(d,p) basis56,57 is used. Hartree–Fock and two semiempirical force fields, RM158 and PM6,59 are chosen as preconditioners. The HF basis set is
3-21G. RM1 is not readily available in Gaussian 09; this force field is implemented by coupling RM1 parameters with their equivalent AM1 functional forms.
To prevent evaporation, spherical boundaries are applied. Any atom farther from the origin than a cutoff distance rcutoff experiences the potential

Uboundary(r) = kboundary (r − rcutoff)²,  r > rcutoff,    (19)

where r is the distance of the particle from the origin, and kboundary = 1.0 kJ/mol Å2. Cutoff radii are 3.5 and 3.85 Å for the trimer and the octamer, respectively. Clusters are equilibrated for 2.5
and 3.0 ps to T = 298 K, respectively. For each case explored, 16 to 32 parallel trajectories are initialized from the same equilibrated positions and momenta but different random seeds.
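In code, such a restraint is a one-liner. The sketch below assumes a harmonic wall, a form consistent with the stated kJ/mol Å2 units of kboundary but not spelled out in this text:

```python
def boundary_potential(r, r_cutoff, k_boundary=1.0):
    """Spherical restraining potential to prevent evaporation.

    Assumed harmonic wall: zero inside the cutoff sphere and
    k_boundary * (r - r_cutoff)**2 outside, with k_boundary in
    kJ/mol/A^2 and distances in Angstrom. The quadratic form is an
    assumption inferred from the units, not taken from the paper.
    """
    if r <= r_cutoff:
        return 0.0
    return k_boundary * (r - r_cutoff) ** 2
```

Under this assumed form, an atom at 4.5 Å with the trimer cutoff of 3.5 Å would feel a 1.0 kJ/mol restraint.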
IV. RESULTS AND DISCUSSION
a). One-Dimensional Model System
We first tested whether all propagators reproduce the reference equilibrium distribution as we gradually increase ninner from 1 to 100. Results are shown in Figure 2. For DHMTS and large time-step VVVR, the simulated distributions agree with the reference when ninner is below 10. As ninner increases, the DHMTS distributions gradually shift toward the U1 prediction,
while large time-step VVVR distributions flatten out as a result of accumulating error. MD-MC DHMTS distributions are always correct over the range of ninner values explored. In a second application, we tested propagator efficiencies (Table 1). We calculated the effective sample size from the spectral density at frequency zero, estimated by fitting to an autoregressive model.60-62 This metric has the advantage that the effective sample size is independent of the sampling frequency. We then estimate the speedup as the effective sample size generated per unit CPU time, relative to the reference propagator, where TCPU is the CPU time required to evaluate total energies and forces. To mimic the conditions involving a pair of high level Hamiltonians, we assume that the cost of the expensive, reference level potential calculation is 10 times that of the cheaper preconditioner. For MD-MC DHMTS the speedup can reach up to 9.0 when ninner is 20. For DHMTS the speedup can reach 5.0 at ninner = 10. Higher values of ninner can cause sampling errors of more than 5%. Efficiencies in MD-MC DHMTS are higher when using the deterministic-inner-loop and/or a reversal schedule decided less frequently. In our third and final application, we tested the effect of the potential surface on propagator efficiency. We redefine our model potentials as
The added ruggedness of both potentials violates our fundamental assumption that the difference between the Hamiltonians is slowly varying. As a result, MD-MC DHMTS still generates the correct distribution but is no longer competitive with VVVR, as shown in Figure 3 and Table 2.
b). Deprotonated Water Trimer
Ambient deprotonated water trimer at the B3LYP/cc-pVDZ level of theory assumes a quasi 2-fold symmetry with two strong hydrogen bonds coordinating the hydroxide as in Figure 4. The cluster exhibits
frequent proton hops such that the “defect proton” carrier switches identity. Transfers typically occur along the shortest hydrogen bond, while the third monomer coordinates at least one of the
exchanging partners. RM1 predicts a more extended water–hydroxide–water angle and shorter hydrogen bonds to the hydroxide. Barriers to proton hops and water reorientation are higher than with B3LYP. We sampled this reactive system via the following dual propagators:
1. MD-MC DHMTS with ninner = 4,
2. deterministic-inner-loop MD-MC DHMTS with ninner = 4,
3. tandem flip MD-MC DHMTS with ninner = 4 and flip decisions propagated for 10 consecutive loops, and
4. DHMTS with ninner = 4.
The inner time-step is δt = 1 fs, and the Langevin collision frequency is γ = 100 ps−1. For the reference B3LYP and the preconditioner RM1 cases, a total of 1.28 ns of trajectories is collected for each. Dual Hamiltonian trajectories reach a total of 2.0 to 2.1 ns for each case, requiring around 60% fewer B3LYP single point calculations than the reference. Results are summarized in Figure 5. MC acceptance rates for MD-MC DHMTS and deterministic-inner-loop MD-MC DHMTS are 44.3% and 47.5%, respectively. As shown in the left and middle columns of Figure 5, both setups reproduce the reference distributions to within an agreeable noise level.
Case 4, as shown in the right column of Figure 5, corresponds to a RESPA-like multiple time-step dynamics integrator with an effective outer time-step of Δt = 4 fs. This propagator reproduces three
different geometric distributions fairly well even though the preconditioner RM1 predictions are significantly different (red graphs in Figure 5). One noticeable sensitivity occurs with respect to
the proton transfer coordinate (bottom right panel of Figure 5). This observable is subject to the fastest time scale explored, comparable in magnitude to Δt. As shown in Figure 6, DHMTS indeed samples a slightly different ensemble than the reference, while all MD-MC DHMTS schemes converge to the correct one.
MD-MC DHMTS can reproduce static observables of the reference distribution, while DHMTS can reproduce the dynamics as well. Shown in Figure 7 are the histograms of proton nontransfer intervals obtained from pure B3LYP, pure RM1, and DHMTS trajectories. These are dwell times during which a charge defect resides on the same monomer before hopping to a different partner. These plots provide insight into the free energy barriers seen by the hopping proton at the given level of theory. The preconditioner RM1 exhibits more frequent long dwell times than the reference B3LYP; however, DHMTS
dynamics is not affected significantly by this difference between the force fields.
c). Water Octamer
In the MP2 complete basis set limit, neutral water octamers can assume two high-symmetry conformations, D2d and S4.63 Starting from D2d, we collected MP2 AIMD trajectories using the following propagators:
1. Langevin dynamics (reference),
2. DHMTS using PM6 as preconditioner with ninner = 2 to 8,
3. DHMTS using RM1 as preconditioner with ninner = 2 to 8, and
4. DHMTS using HF as preconditioner with ninner = 2 to 8.
The inner time-step is δt = 1 fs and collision frequency γ = 100 ps−1. RM1 and HF preconditioned DHMTS setups yield stable trajectories at up to ninner = 4, corresponding to Δt = 4 fs. In the absence
of MC correction, PM6 preconditioning
accumulates too much error to be stable even for ninner = 2. Single point MP2 calculations for this cluster are more than 100 times more time-consuming than RM1 and approximately 30 times more than HF. Hence, the overall boost in speedup is close to the number of inner loop steps. For stable trajectories we use the following error estimator
where Etotal excludes work exerted by the thermostat or the boundaries. The convergence of the error is shown in Figure 8 for simulations run using equal amounts of wall clock CPU time. While errors are typically higher for the dual integrators than for the reference, the net speedups make the dual trajectories an attractive choice.
We introduced a dual Hamiltonian multiple time-step framework to accelerate the sampling of systems in the canonical ensemble. Our approach is motivated by the observation that the workload of one
computationally expensive Hamiltonian can be shifted to another, less expensive one, when the difference between the two Hamiltonians varies on a slow time scale. The speedups can be particularly
good with respect to the number of inner loop steps when the reference level calculations are computationally demanding, such as ab initio gradient evaluations. The underlying theory bears overlap
with the multiple time-step framework of RESPA; however, we make no strong assumptions regarding the separability of the time scales in the dynamics.
Dual Hamiltonian propagators increase the effective integration time-steps at the cost of the inner loop iterations. These iterations require gradient evaluations only at the less expensive level of
theory. This observation should be particularly valuable for AIMD and QM/MM simulations, because computational effort usually scales far more steeply with increasing sophistication of the chemical level
of theory. A typical situation involving the simulation of water molecules at the MP2/6-31G(d,p) level of theory is illustrated in Figure 9. MD simulations of such a system become very costly due to
the rapidly growing computational cost at the MP2/6-31G(d,p) level of theory. A straightforward propagation method beyond a relatively small number of water molecules is prohibitive. Using the DHMTS
algorithm, an acceleration by a factor of 4–5 is reached with only 20 water molecules in the system (Figure 9). For pairs of Hamiltonians whose difference is slowly varying, the present formulation
can shift nearly all workload to the inexpensive iterations. This opportunity has key implications for recently emerging force-matching algorithms. Such schemes typically learn from a library of
molecular configurations at one reference level of theory by trying to mimic energies and/or gradients.64,65 Others have demonstrated that dynamics and machine learning can be performed
simultaneously.66 Dual Hamiltonian formulations can be readily
coupled to similar force-matching schemes to attain accelerated sampling by improving the guiding Hamiltonian on-the-fly.
ACKNOWLEDGMENTS
This research was partially supported by the National Institutes of Health (grant 5 R01 GM109455-02) and by the National Science Foundation (grants CHE-1136709 and MCB-1517221).
Computational resources were provided by the University of Chicago Research Computing Center (RCC). We thank Charles Matthews for useful discussions.
APPENDIX
Note that, for each round, one only needs to recalculate the total energy and gradients once at the expensive level and ninner times at the cheap level. To avoid an extra expensive level calculation, energies and gradients from the previous calculation are stored in memory (not shown below for clarity).
Algorithm 0.1: DUALPOT(&r, &p)
REFERENCES
(1). Allen, M.; Tildesley, D. Computer Simulation of Liquids. Oxford Science Publications, Clarendon Press; Oxford: 1989. (2). Becker, OM.; MacKerell, AD.; Roux, B.; Watanabe, M.
Computational Biochemistry and Biophysics. Marcel Dekker, Inc., New York; New York, NY: 2001. (3). Field MJ, Bash PA, Karplus M. A combined Quantum Mechanical and molecular mechanical potential for
molecular dynamics simulations. J. Comput. Chem. 1990; 11:700–733.
(4). Gibson DA, Carter EA. Time-reversible multiple time scale ab initio molecular dynamics. J. Phys. Chem. 1993; 97:13429–13434. (5). Guidon M, Schiffmann F, Hutter J, VandeVondele J. Ab initio
molecular dynamics using hybrid density functionals. J. Chem. Phys. 2008; 128:214104. [PubMed: 18537412] (6). Steele RP. Communication: Multiple-timestep ab initio molecular dynamics with electron
correlation. J. Chem. Phys. 2013; 139:011102. [PubMed: 23822286] (7). Luehr N, Markland TE, Martínez TJ. Multiple time step integrators in ab initio molecular dynamics. J. Chem. Phys. 2014;
140:084116. [PubMed: 24588157] (8). Grubmüller H, Heller H, Windemuth A, Schulten K. Generalized Verlet algorithm for efficient molecular dynamics simulations with long-range interactions. Mol.
Simul. 1991; 6:121–142. (9). Tuckerman ME, Berne BJ, Martyna GJ. Molecular dynamics algorithm for multiple time scales: Systems with long range forces. J. Chem. Phys. 1991; 94:6811–6815. (10).
Tuckerman M, Berne BJ, Martyna GJ. Reversible multiple time scale molecular dynamics. J. Chem. Phys. 1992; 97:1990–2001. (11). Martyna GJ, Tuckerman ME, Tobias DJ, Klein ML. Explicit reversible
integrators for extended systems dynamics. Mol. Phys. 1996; 87:1117–1157. (12). Hetenyi B, Bernacki K, Berne B. Multiple time step Monte Carlo. J. Chem. Phys. 2002; 117:8203–8207. (13). Tuckerman, M.
Statistical Mechanics: Theory and Molecular Simulations. Oxford University Press; New York, NY: 2008. (14). Anglada E, Junquera J, Soler JM. Efficient mixed-force first-principles molecular dynamics.
Phys. Rev. E. 2003; 68:055701. (15). Pastor RW, Brooks BR, Szabo A. An analysis of the accuracy of Langevin and molecular dynamics algorithms. Mol. Phys. 1988; 65:1409–1419. (16). Akhmatskaya E,
Reich S. GSHMC: An efficient method for molecular simulation. J. Comput. Phys. 2008; 227:4934–4954. (17). Izaguirre JA, Catarello DP, Wozniak JM, Skeel RD. Langevin stabilization of molecular
dynamics. J. Chem. Phys. 2001; 114:2090–2098. (18). Ma, Q.; Izaguirre, JA.; Skeel, RD. Nonlinear instability in multiple time stepping molecular dynamics; Proceedings of the 2003 ACM symposium on
Applied computing; New York, NY. 2003; p. 167-171. (19). Leimkuhler, B.; Matthews, C. Molecular Dynamics: With Deterministic and Stochastic Numerical Methods. Vol. 39. Springer; Switzerland: 2015.
(20). Garcia-Archilla B, Sanz-Serna J, Skeel RD. Long-time-step methods for oscillatory differential equations. SIAM J. Sci. Comput. 1998; 20:930–963. (21). Barth E, Schlick T. Overcoming stability
limitations in biomolecular dynamics. I. Combining force splitting via extrapolation with Langevin dynamics in LN. J. Chem. Phys. 1998; 109:1617– 1632. (22). Skeel, RD.; Izaguirre, JA. Computational
Molecular Dynamics: Challenges, Methods, Ideas. Springer; 1999. p. 318-331. (23). Izaguirre JA, Reich S, Skeel RD. Longer time steps for molecular dynamics. J. Chem. Phys. 1999; 110:9853–9864. (24).
Qian X, Schlick T. Efficient multiple-time-step integrators with distance-based force splitting for particle-mesh-Ewald molecular dynamics simulations. J. Chem. Phys. 2002; 116:5971–5983. (25).
Minary P, Tuckerman M, Martyna G. Long time molecular dynamics for enhanced conformational sampling in biomolecular systems. Phys. Rev. Lett. 2004; 93:150201. [PubMed: 15524853] (26). Sweet CR,
Petrone P, Pande VS, Izaguirre JA. Normal mode partitioning of Langevin dynamics for biomolecules. J. Chem. Phys. 2008; 128:145101. [PubMed: 18412479] (27). Leimkuhler B, Margul DT, Tuckerman ME.
Stochastic, resonance-free multiple time-step algorithm for molecular dynamics with very large time steps. Mol. Phys. 2013; 111:3579–3594. (28). Escribano B, Akhmatskaya E, Reich S, Azpiroz JM.
Multiple-time-stepping generalized hybrid Monte Carlo methods. J. Comput. Phys. 2015; 280:1–20.
(29). Stern HA. Molecular simulation with variable protonation states at constant pH. J. Chem. Phys. 2007; 126:164112. [PubMed: 17477594] (30). Stern HA. Erratum: “Molecular simulation with variable
protonation states at constant pH” [J. Chem. Phys.126, 164112 (2007)]. J. Chem. Phys. 2007; 127:079901. (31). Chen Y, Roux B. Constant-pH Hybrid Non-Equilibrium Molecular Dynamics - Monte Carlo
Simulation Method. J. Chem. Theory Comput. 2015; 11:3919–3931. [PubMed: 26300709] (32). Ballard AJ, Jarzynski C. Replica exchange with non-equilibrium switches. Proc. Natl. Acad. Sci. U. S. A. 2009;
106:12224–12229. [PubMed: 19592512] (33). Nilmeier JP, Crooks GE, Minh DDL, Chodera JD. Nonequilibrium candidate Monte Carlo is an efficient tool for equilibrium simulation. Proc. Natl. Acad. Sci. U.
S. A. 2011; 108:E1009– E1018. [PubMed: 22025687] (34). Sivak DA, Chodera JD, Crooks GE. Using nonequilibrium fluctuation theorems to understand and correct errors in equilibrium and nonequilibrium
simulations of discrete Langevin dynamics. Phys. Rev. X. 2013; 3:011007. (35). Chen Y, Roux B. Efficient hybrid non-equilibrium molecular dynamics-Monte Carlo simulations with symmetric momentum
reversal. J. Chem. Phys. 2014; 141:114107. [PubMed: 25240345] (36). Chen Y, Roux B. Generalized Metropolis acceptance criterion for hybrid non-equilibrium molecular dynamics - Monte Carlo
simulations. J. Chem. Phys. 2015; 142:024101. [PubMed: 25591332] (37). Chen Y, Roux B. Enhanced Sampling of a Detailed All-Atom Model with Hybrid Nonequilibrium Molecular Dynamics Monte Carlo Guided
by Coarse-Grained Simulations. J. Chem. Theory Comput. 2015; 11:3572–3583. [PubMed: 26574442] (38). Metropolis N, Rosenbluth AW, Rosenbluth MN, Teller AH, Teller E. Equation of state calculations by
fast computing machines. J. Chem. Phys. 1953; 21:1087–1092. (39). Bussi G, Donadio D, Parrinello M. Canonical sampling through velocity rescaling. J. Chem. Phys. 2007; 126:014101. [PubMed: 17212484]
(40). Roberts GO, Tweedie RL. Geometric convergence and central limit theorems for multidimensional Hastings and Metropolis algorithms. Biometrika. 1996; 83:95–110. (41). Bou-Rabee N, Vanden-Eijnden
E. Pathwise accuracy and ergodicity of metropolized integrators for SDEs. Comm. Pure Appl. Math. 2010; 63:655–696. (42). Zhou R, Harder E, Xu H, Berne B. Efficient multiple time step method for use
with Ewald and particle mesh Ewald for large biomolecular systems. J. Chem. Phys. 2001; 115:2348–2358. (43). Tempkin JO, Qi B, Saunders MG, Roux B, Dinner AR, Weare J. Using multiscale
preconditioning to accelerate the convergence of iterative molecular calculations. J. Chem. Phys. 2014; 140:184114. [PubMed: 24832260] (44). Kale S, Sode O, Weare J, Dinner AR. Finding Chemical
Reaction Paths with a Multilevel Preconditioning Protocol. J. Chem. Theory Comput. 2014; 10:5467–5475. [PubMed: 25516726] (45). Maday Y, Turinici G. A parareal in time procedure for the control of
partial differential equations. C. R. Math. 2002; 335:387–392. (46). Bylaska EJ, Weare JQ, Weare JH. Extending molecular simulation time scales: Parallel in time integrations for high-level quantum
chemistry and complex force representations. J. Chem. Phys. 2013; 139:074114. [PubMed: 23968079] (47). Tuckerman ME, Alejandre J, López-Rendón R, Jochim AL, Martyna GJ. A Liouville-operator derived
measure-preserving integrator for molecular dynamics simulations in the isothermalisobaric ensemble. J. Phys. A: Math. Gen. 2006; 39:5629. (48). Trotter HF. On the product of semi-groups of
operators. Proc. Am. Math. Soc. 1959; 10:545–551. (49). Strang G. On the construction and comparison of difference schemes. SIAM J. Numer. Anal. 1968; 5:506–517. (50). Lechner W, Oberhofer H, Dellago
C, Geissler PL. Equilibrium free energies from fast-switching trajectories with large time steps. J. Chem. Phys. 2006; 124:044113. [PubMed: 16460155] (51). Sivak DA, Chodera JD, Crooks GE. Time step
rescaling recovers continuous-time dynamical properties for discrete-time Langevin integration of nonequilibrium systems. J. Phys. Chem. B. 2014; 118:6466–6474. [PubMed: 24555448]
(52). Leimkuhler B, Matthews C. Robust and efficient configurational molecular sampling via Langevin dynamics. J. Chem. Phys. 2013; 138:174102. [PubMed: 23656109]
(53). Frisch, MJ.; Trucks, GW.; Schlegel, HB.; Scuseria, GE.; Robb, MA.; Cheeseman, JR.; Scalmani, G.; Barone, V.; Mennucci, B.; Petersson, GA.; Nakatsuji, H.; Caricato, M.; Li, X.; Hratchian, HP.; Izmaylov, AF.; Bloino, J.; Zheng, G.; Sonnenberg, JL.; Hada, M.; Ehara, M.; Toyota, K.; Fukuda, R.; Hasegawa, J.; Ishida, M.; Nakajima, T.; Honda, Y.; Kitao, O.; Nakai, H.; Vreven, T.; Montgomery, JA., Jr.; Peralta, JE.; Ogliaro, F.; Bearpark, M.; Heyd, JJ.; Brothers, E.; Kudin, KN.; Staroverov, VN.; Kobayashi, R.; Normand, J.; Raghavachari, K.; Rendell, A.; Burant, JC.; Iyengar, SS.; Tomasi, J.; Cossi, M.; Rega, N.; Millam, JM.; Klene, M.; Knox, JE.; Cross, JB.; Bakken, V.; Adamo, C.; Jaramillo, J.; Gomperts, R.; Stratmann, RE.; Yazyev, O.; Austin, AJ.; Cammi, R.; Pomelli, C.; Ochterski, JW.; Martin, RL.; Morokuma, K.; Zakrzewski, VG.; Voth, GA.; Salvador, P.; Dannenberg, JJ.; Dapprich, S.; Daniels, AD.; Farkas, O.; Foresman, JB.; Ortiz, JV.; Cioslowski, J.; Fox, DJ. Gaussian 09, revision A.02. Vol. 19. Gaussian, Inc.; Wallingford, CT: 2009. p. 227-238.
(54). Becke AD. Density-functional thermochemistry. III. The role of exact exchange. J. Chem. Phys. 1993; 98:5648–5652.
(55). Dunning TH Jr. Gaussian basis sets for use in correlated molecular calculations. I. The atoms boron through neon and hydrogen. J. Chem. Phys. 1989; 90:1007–1023.
(56). Hehre WJ, Ditchfield R, Pople JA. Self-consistent molecular orbital methods. XII. Further extensions of Gaussian-type basis sets for use in molecular orbital studies of organic molecules. J. Chem. Phys. 1972; 56:2257–2261.
(57). Hariharan PC, Pople JA. The influence of polarization functions on molecular orbital hydrogenation energies. Theor. Chim. Acta. 1973; 28:213–222.
(58). Rocha GB, Freire RO, Simas AM, Stewart JJ. RM1: a reparameterization of AM1 for H, C, N, O, P, S, F, Cl, Br, and I. J. Comput. Chem. 2006; 27:1101–11. [PubMed: 16691568]
(59). Stewart JJP. Optimization of parameters for semiempirical methods V: Modification of NDDO approximations and application to 70 elements. J. Mol. Model. 2007; 13:1173–1213. [PubMed: 17828561]
(60). Heidelberger P, Welch PD. A spectral method for confidence interval generation and run length control in simulations. Commun. ACM. 1981; 24:233–245.
(61). Venables, WN.; Ripley, BD. Modern applied statistics with S. Springer Science & Business Media; New York, NY: 2002.
(62). Brockwell, PJ.; Davis, RA. Time series: theory and methods. Springer Science & Business Media; New York, NY: 2009.
(63). Xantheas SS, Apra E. The binding energies of the D-2d and S-4 water octamer isomers: High-level electronic structure and empirical potential results. J. Chem. Phys. 2004; 120:823–828. [PubMed: 15267918]
(64). Zhai H, Ha M-A, Alexandrova AN. AFFCK: Adaptive Force Field-Assisted ab initio Coalescence Kick Method for Global Minimum Search. J. Chem. Theory Comput. 2015; 11:2385–2393. [PubMed: 26574433]
(65). Dral PO, von Lilienfeld OA, Thiel W. Machine Learning of Parameters for Accurate Semiempirical Quantum Chemical Calculations. J. Chem. Theory Comput. 2015; 11:2120–2125. [PubMed: 26146493]
(66). Li Z, Kermode JR, De Vita A. Molecular dynamics with on-the-fly machine learning of quantum-mechanical forces. Phys. Rev. Lett. 2015; 114:096405. [PubMed: 25793835]
Figure 1.
Potential energy of 1D model system. U0 is shown in black, and U1 is shown in red. (a) First system is defined in eq 17. (b) Second system is defined in eq 21.
Figure 2.
Equilibrium distributions predicted by different propagators with respect to model particle position x. The theoretical distribution for U0 is in solid black; for U1 it is in dashed black. For each propagator, seven values of ninner are illustrated: 1 (green), 2, 5, 10 (orange), 20, 50, 100 (red). (a) DHMTS: the distribution shifts toward the left with larger ninner, and the distributions for ninner = 1, 2, 5, and 10 overlap with each other. (b) MD-MC DHMTS. (c) VVVR using Δt (= ninner δt): the distribution flattens with larger ninner, and the distributions for ninner = 1, 2, 5, and 10 overlap with each other. For (a) and (c), deviations from ρU0 become more pronounced with increasing ninner, while for (b) the deviation remains nearly constant and negligible.
Figure 3.
Distributions generated using three different propagators. Definitions and color conventions follow Figure 2. The potential energy is given by eq 21 instead of eq 17.
Figure 4.
Deprotonated water trimer.
Figure 5.
Geometric observables from the deprotonated water trimer: MD-MC DHMTS (left column), MD-MC DHMTS via randomization post Metropolis (middle column), and DHMTS (right column). Results are compared against conventional Langevin MD at the B3LYP level of theory (blue). Preconditioner RM1 predictions are shown in red in the right column. The top row shows the distribution of all oxygen–oxygen distances, the middle row the shortest oxygen–oxygen distances, and the bottom row the proton transfer coordinate defined by δ = |rO1H* – rO2H*|, where O1 and O2 are the oxygens closest to each other and H* is the shared proton.
Figure 6.
Convergence of the mean (left) and standard deviation (right) of the potential energy for the MD-MC DHMTS and DHMTS water trimer cluster simulations. Each curve is an average over 16 parallel simulations. Data are offset by the best known estimate, obtained as the average over 32 parallel reference MD trajectories (blue horizontal lines). Note that DHMTS converges to a slightly different ensemble.
Figure 7.
Distributions of defect proton residence times in water trimer: reference B3LYP (blue), DHMTS via n = 4 inner loop steps (green), preconditioner RM1 (red).
Figure 8.
Average relative absolute error (eq 22) computed from water octamer trajectories. The reference level of theory is MP2/6-31G(d,p). Trajectories are collected using equal amounts of single-CPU wall-clock time (~128.4 h). Inset: initial cubic D2d cluster (from Xantheas and Apra63).
Figure 9.
Scaling of wall-clock time (core-seconds) for 4 fs of dynamics integrated via DHMTS at the MP2/6-31G(d,p) level of theory, compared against conventional integration using 1 fs time steps. Timings correspond to DHMTS using ninner = 4, preconditioned via RM1 (black) or HF/3-21G (red), on a single 2.6 GHz Intel Sandy Bridge CPU with 2 GB memory.
Table 1
Efficiency of the MD-MC DHMTS and DHMTS
Columns: simulation, ninner, Trelax (per propagator), effective speedup η.
1. VVVR; 2. MD-MC DHMTS; 3. MD-MC DHMTS with deterministic-inner-loop; 4. MD-MC DHMTS with deterministic-inner-loop and reversal schedule decided every 10 rounds of propagation; 5. DHMTS.
Effective speedup η (eq 20), assuming TCPU(U0) = 10 TCPU(U1).
Best efficiency of each propagator is underlined.
Table 2
Efficiency of MD-MC DHMTS and DHMTS for the Model System with Eq 21
Columns: simulation, ninner, Trelax,VVVR, Trelax, Trex, effective speedup η.
1. VVVR; 2. MD-MC DHMTS; 3. DHMTS.
Effective speedup η (eq 20), assuming TCPU(U0) = 10 TCPU(U1).
Best efficiency of each propagator is underlined.
| {"url":"https://d.docksci.com/multiple-time-step-dual-hamiltonian-hybrid-molecular-dynamics-monte-carlo-canoni_5a1c7985d64ab28886d5ef3d.html","timestamp":"2024-11-12T02:02:01Z","content_type":"text/html","content_length":"101192","record_id":"<urn:uuid:8dba4273-6ce3-4b0d-bd81-0c797a342c1e>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.50/warc/CC-MAIN-20241112014152-20241112044152-00758.warc.gz"}
Metric tensor
From Encyclopedia of Mathematics
basic tensor, fundamental tensor
A twice covariant symmetric tensor field $g=g(X,Y)$ on an $n$-dimensional differentiable manifold $M^n$, $n\geq2$. The assignment of a metric tensor on $M^n$ introduces a scalar product $\langle X,Y\rangle$ of contravariant vectors $X,Y\in M_p^n$ on the tangent space $M_p^n$ of $M^n$ at $p\in M^n$, defined as the bilinear function $g_p(X,Y)$, where $g_p$ is the value of the field $g$ at the point $p$. In coordinate notation:
$$\langle X,Y\rangle=g_{ij}(p)X^iY^j,\quad X=\{X^i\},\quad Y=\{Y^j\},\quad1\leq i,j\leq n.$$
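In coordinates, this scalar product is just a matrix-weighted dot product, so it is easy to check numerically; the metric values below are illustrative and not taken from the article:

```python
import numpy as np

# An illustrative symmetric, positive-definite 2x2 metric g_ij at a point p.
g = np.array([[2.0, 0.5],
              [0.5, 1.0]])

# Coordinate components of two tangent vectors X, Y at p.
X = np.array([1.0, 2.0])
Y = np.array([3.0, -1.0])

# <X, Y> = g_ij X^i Y^j, summed over both indices.
inner = X @ g @ Y

# The metric is non-degenerate exactly when det(g_ij) != 0.
print(inner)                 # 6.5
print(np.linalg.det(g))      # close to 1.75
```

Replacing g with a matrix of zero determinant would give a degenerate metric in the sense defined below.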
The metric in $M_p^n$ with this scalar product is regarded as infinitesimal for the metric of the manifold $M^n$, which is expressed by the choice of the quadratic differential form
$$ds^2=g_{ij}(x)dx^idx^j\label{*}\tag{*}$$
as the square of the differential of the arc length of curves in $M^n$, going from $p$ in the direction $dx^1,\dots,dx^n$. With respect to its geometric meaning the form \eqref{*} is called the
metric form or first fundamental form on $M^n$, corresponding to the metric tensor $g$. Conversely, if a symmetric quadratic form \eqref{*} on $M^n$ is given, then there is a twice covariant tensor field $g(X,Y)=g_{ij}X^iY^j$ associated with it whose corresponding metric form is \eqref{*}. Thus, the specification of a metric tensor $g$ on $M^n$ is equivalent to the specification of a metric form on $M^n$ with a quadratic line element of the form \eqref{*}. The metric tensor completely determines the intrinsic geometry of $M^n$.
The collection of metric tensors $g$, and the metric forms defined by them, is divided into two classes, the degenerate metrics, when $\det(g_{ij})=0$, and the non-degenerate metrics, when $\det(g_{ij})\neq0$. A manifold $M^n$ with a degenerate metric form \eqref{*} is called isotropic. Among the non-degenerate metric tensors, in their turn, are distinguished the Riemannian metric tensors, for
which the quadratic form \eqref{*} is positive definite, and the pseudo-Riemannian metric tensors, when \eqref{*} has variable sign. A Riemannian (pseudo-Riemannian) metric introduced on $M^n$ via a
Riemannian (pseudo-Riemannian) metric tensor defines on $M^n$ a Riemannian (respectively, pseudo-Riemannian) geometry.
Usually a metric tensor, without special indication, means a Riemannian metric tensor; but if one wishes to stress that the discussion is about Riemannian and not about pseudo-Riemannian metric
tensors, then one speaks of a proper Riemannian metric tensor. A proper Riemannian metric tensor can be introduced on any paracompact differentiable manifold.
How to Cite This Entry:
Metric tensor. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Metric_tensor&oldid=44737
This article was adapted from an original article by I.Kh. Sabitov (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098.
See original article | {"url":"https://encyclopediaofmath.org/index.php?title=Metric_tensor&printable=yes","timestamp":"2024-11-08T17:41:24Z","content_type":"text/html","content_length":"16742","record_id":"<urn:uuid:b710de25-b845-4aaf-9f90-52646b23b6fb>","cc-path":"CC-MAIN-2024-46/segments/1730477028070.17/warc/CC-MAIN-20241108164844-20241108194844-00141.warc.gz"} |
regular and irregular polygons worksheets
Geometry: Chapter 6-1 and 6-2 Quiz
Interior and Exterior Angles of Polygons
1.4 Perimeter and Area Quizizz
030124 Polygons Test Review L
Regular and Irregular Polygons
Explore Worksheets by Subjects
Explore printable regular and irregular polygons worksheets
Regular and irregular polygons worksheets are essential tools for teachers who want to help their students master key concepts in Math and Geometry. These worksheets provide a variety of exercises and problems focused on the properties, classification, and identification of both regular and irregular polygons. Teachers can use them to build engaging, interactive lessons that cater to different learning styles and abilities, ensuring that students develop a strong foundation in Geometry and are well prepared to tackle more advanced topics in Math.
Quizizz is a useful platform for teachers who want to incorporate regular and irregular polygons worksheets into their lessons. It allows educators to create interactive quizzes for assessing students' understanding of Math and Geometry concepts, and these quizzes can easily be customized to include questions on regular and irregular polygons. In addition to quizzes, Quizizz offers resources such as flashcards, games, and study guides that can supplement the worksheets, helping teachers build a comprehensive learning experience and helping students excel in Math and Geometry.
Constant and Chain Calculation functions are popular on general electronic calculators because they allow repeated operations to be entered without rekeying the whole equation. Confusion may arise, however, as not all scientific calculators show the equation together with its answer in the formula display. With SHARP's scientific calculators, the figures you omit are automatically shown as K (constant) or ANS (answer). Contradictions between equations and answers are eliminated, and calculations are easier and more logical to enter.
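The K/ANS behavior described above can be imitated with a toy model; this sketch illustrates the general idea of constant calculation, not SHARP's actual firmware:

```python
class ConstantCalc:
    """Toy model of a calculator's constant (K) calculation:
    after an operation is entered once, pressing '=' again
    re-applies the stored operator and constant to ANS."""

    def __init__(self, value):
        self.ans = value        # ANS: the running answer
        self.op = None          # last operator entered
        self.k = None           # K: the stored constant operand

    def press(self, op, operand):
        # Entering "op operand =" stores the pair as the constant.
        self.op, self.k = op, operand
        return self.equals()

    def equals(self):
        # Pressing '=' re-applies the stored constant to ANS.
        if self.op == '+':
            self.ans += self.k
        elif self.op == '*':
            self.ans *= self.k
        return self.ans

calc = ConstantCalc(3)
print(calc.press('+', 2))   # 3 + 2 = 5
print(calc.equals())        # ANS + K = 7
print(calc.equals())        # 9
```

Pressing `=` repeatedly without retyping `+ 2` is exactly the convenience the constant function provides.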
Constant Calculation
Chain Calculation
(Example using EL-506TS/X/W.)
The EL-501T/X/W enables you to calculate the answer of the next equation just by pressing the | {"url":"http://global.sharp/contents/calculator/features/standard/constant/index.html","timestamp":"2024-11-06T15:18:45Z","content_type":"text/html","content_length":"12903","record_id":"<urn:uuid:4a47e7c3-4cfa-4441-9206-59e8445b826e>","cc-path":"CC-MAIN-2024-46/segments/1730477027932.70/warc/CC-MAIN-20241106132104-20241106162104-00099.warc.gz"} |
Pengaruh Pengurangan Jumlah Air Pada Beton Dengan Penambahan Polynex HE (The Effect of Reducing the Amount of Water in Concrete with the Addition of Polynex HE)
This study aims to determine the behavior of concrete mixtures that use Polynex HE. High-quality concrete requires a low water-cement factor (fas), i.e. a small amount of water, which makes workability low and the mix difficult to work. The resulting concrete can also contain air voids even after it has been compacted. For this reason admixtures are needed, one of which is Polynex HE.
The method used in this test is to make concrete mixtures according to SNI 7656:2012, with Polynex HE added at 1% of the weight of cement and the amount of water reduced by 10%, 15%, 20%, and 30% relative to normal concrete. The concrete was cured for 1, 3, 7, and 28 days.
The parameters obtained in this study are the slump value, unit weight, and concrete quality. The test results show that the greater the reduction in the amount of water, the smaller the slump value and the smaller the unit weight obtained. For concrete quality at 28 days, compressive strength values range from 26.80 to 37.99 MPa, with a maximum of 37.99 MPa at BP1-30%.
Geometric Measurement & Dimension
To find the volume of a rectangular pyramid, you need to know the length and width of the base and the height of the pyramid. Then, take those values, plug them into the formula for the volume of a
rectangular pyramid, and simplify to get your answer! Watch this tutorial to see how it's done! | {"url":"https://virtualnerd.com/common-core/hsf-geometry/HSG-GMD-measurement-dimension/","timestamp":"2024-11-01T19:26:52Z","content_type":"text/html","content_length":"37192","record_id":"<urn:uuid:b35b1c9e-8563-412c-a369-eea58afb3489>","cc-path":"CC-MAIN-2024-46/segments/1730477027552.27/warc/CC-MAIN-20241101184224-20241101214224-00635.warc.gz"} |
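The formula the tutorial applies is V = (1/3) × length × width × height; a one-function sketch:

```python
def rectangular_pyramid_volume(length, width, height):
    """Volume of a rectangular pyramid: one third of the base area
    (length * width) times the height."""
    return length * width * height / 3.0

# Example: a base of 6 by 4 and a height of 5 gives (1/3) * 24 * 5 = 40.
print(rectangular_pyramid_volume(6, 4, 5))   # 40.0
```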
Groupoids and Smarandache Groupoids
by W. B. Vasantha Kandasamy
Publisher: American Research Press 2002
ISBN/ASIN: 1931233616
ISBN-13: 9781931233613
Number of pages: 115
This book aims to give a systematic development of the basic non-associative algebraic structures, viz. Smarandache groupoids. A Smarandache groupoid exhibits simultaneously the properties of a semigroup and a groupoid. Such a combined study of an associative and a non-associative structure has not so far been carried out.
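The distinction the book studies can be made concrete: a groupoid is a set closed under a binary operation, and it is a semigroup exactly when that operation is associative. A brute-force check on small examples (the examples are illustrative, not taken from the book):

```python
from itertools import product

def is_associative(elements, op):
    """Return True iff op is associative on the given finite set,
    i.e. (a op b) op c == a op (b op c) for all triples."""
    elements = list(elements)
    return all(op(op(a, b), c) == op(a, op(b, c))
               for a, b, c in product(elements, repeat=3))

Z4 = range(4)
add_mod_4 = lambda a, b: (a + b) % 4    # a semigroup (in fact a group)
sub_mod_4 = lambda a, b: (a - b) % 4    # a groupoid that is not a semigroup

print(is_associative(Z4, add_mod_4))    # True
print(is_associative(Z4, sub_mod_4))    # False
```

For example, (0 − 1) − 1 ≡ 2 (mod 4) while 0 − (1 − 1) ≡ 0, so subtraction mod 4 fails associativity.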
Download or read it online for free here:
Download link
(570KB, PDF)
Similar books
Group theory for Maths, Physics and Chemistry
Arjeh Cohen, Rosane Ushirobira, Jan Draisma
Symmetry plays an important role in chemistry and physics. Groups capture symmetry in a very efficient manner. We focus on abstract group theory, deal with representations of groups, and deal with some applications in chemistry and physics.
Why are Braids Orderable?
Patrick Dehornoy, et al.
This book is an account of several quite different approaches to Artin's braid groups, involving self-distributive algebra, uniform finite trees, combinatorial group theory, mapping class groups, laminations, and hyperbolic geometry.
Group Theory: Birdtracks, Lie's, and Exceptional Groups
Predrag Cvitanovic
Princeton University Press
A book on the theory of Lie groups for researchers and graduate students in theoretical physics and mathematics. It answers which Lie groups preserve trilinear, quadrilinear, and higher-order invariants. Written in a lively and personable style.
Galois Groups and Fundamental Groups
David Meredith
San Francisco State University
This course brings together two areas of mathematics that each concern symmetry -- symmetry in algebra, in the case of Galois theory; and symmetry in geometry, in the case of fundamental groups. Prerequisites are courses in algebra and analysis.
Numerals, ordinal numbers and grammar - LenguaCenter
Numerals, ordinal numbers and grammar
Numerals are a type of number used to represent quantity or position. Ordinals are a specific subtype of numerals, although the two are often treated separately.
For the most part, numerals are used to represent a specific quantity or amount. There are three main types of numerals:
1. Cardinal numerals: These are numbers that are used to represent a specific quantity, such as one, two, three, etc. Cardinal numerals can be written as words (e.g. “one”) or as digits (e.g. “1”).
2. Fractional numerals: These are numbers that represent a portion or a part of a whole, such as one-half, one-third, etc. Fractional numerals are usually written using a combination of a cardinal
numeral and a fraction bar (e.g. “1/2”).
3. Ordinal numerals: These are numbers that are used to indicate position or order, such as first, second, third, etc. Ordinal numerals are usually written as words (e.g. “first”) or as digits followed by a suffix such as “-st” or “-th” (e.g. “1st”, “4th”).
Ordinals are formed from cardinal numbers by adding the suffix “-th,” “-nd,” “-rd,” or “-st” to the end of the number, depending on the number itself. For example:
– 1st (first)
– 2nd (second)
– 3rd (third)
– 4th (fourth)
– 5th (fifth)
… and so on.
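The suffix rules above can be captured in a short function. Note one detail the list does not spell out: in English, 11, 12 and 13 take “-th” even though they end in 1, 2 and 3. A sketch:

```python
def ordinal(n):
    """Return the English ordinal string for a positive integer,
    e.g. 1 -> '1st', 2 -> '2nd', 11 -> '11th'."""
    if 11 <= n % 100 <= 13:          # 11th, 12th, 13th (and 111th, ...)
        suffix = "th"
    else:
        suffix = {1: "st", 2: "nd", 3: "rd"}.get(n % 10, "th")
    return f"{n}{suffix}"

print([ordinal(n) for n in (1, 2, 3, 4, 5, 11, 21, 113)])
# ['1st', '2nd', '3rd', '4th', '5th', '11th', '21st', '113th']
```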
Ordinals are used to indicate the position or order of things in a sequence, such as the months of the year (January is the first month), the days of the week (Sunday is the first day), or the floors
of a building (the ground floor is the first floor).
So, to summarize, numerals are numbers that represent quantity or amount, while ordinals are words that indicate position or order. Numerals can be written as words (e.g. “one”) or as digits (e.g.
“1”), while ordinals are always written as words (e.g. “first”) or as digits followed by the suffix “-th” (e.g. “1st”).
You might want to try your strengths with a few tasks!
#1: You’re about to eat a specific amount of different foods. Write down how much is remaining, in fractional numerals.
I ate half of the cake. There’s still ______ remaining.As they ate three lollipops of eight in total, there’s still _____ of the whole left.I’ve saved 60% of money for the trip, so there’s still
______ to save.The professor said one in three of us thirty will pass. That’s ____!I have never been good with returning books. Out of five I’ve read, I’ve returned only two. That’s _____ .
#2: Create ordinals out of the cardinal numbers below.
5 →
15 →
55 →
100 →
113 →
#3: You are working as a data analyst and have been asked to prepare a report on the sales performance of different products in the company. You have been given the following information:
Product A: sold 20 unitsProduct B: sold 40 unitsProduct C: sold 30 units
Organise them, create a list – which product did the best? Which product did the worst?
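Task #3 is essentially a sorting problem; one way the ranking could be computed programmatically (a sketch, not the only valid answer format):

```python
sales = {"Product A": 20, "Product B": 40, "Product C": 30}

# Rank products from best to worst by units sold.
ranking = sorted(sales.items(), key=lambda item: item[1], reverse=True)

best = ranking[0][0]
worst = ranking[-1][0]
print(ranking)        # [('Product B', 40), ('Product C', 30), ('Product A', 20)]
print(best, worst)    # Product B Product A
```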
Analysis of CDM Covariance Consistency in Operational Collision Avoidance
Document details
Publishing year: 2017. Publisher: ESA Space Debris Office. Publishing type: Conference. Name of conference: 7th European Conference on Space Debris. Pages: n/a.
In the process of collision avoidance, Conjunction Data Messages (CDMs) are generated in case of a close encounter between two objects. This message includes the time of the encounter, the predicted
positions with the consequential miss distance and the covariance matrices at the time of the encounter [1]. This information is used by spacecraft operators to decide if a satellite, the target,
shall perform an avoidance manoeuvre, because in almost all cases the chaser is a non-manoeuvrable space debris object. This decision is taken after a manual analysis of the situation [2] and its
efficiency depends heavily on consistent and reliable covariance information. Therefore this paper investigates the consistency of the provided CDM covariance.
For this analysis, the ESA satellites Cryosat-2 and Swarm A/B/C are chosen. Predicted positions and covariance matrices are extracted from CDMs for a time span of one year, Oct 2015 – Oct 2016. For
the same period, the mission orbits of the satellites are obtained from ESA Flight Dynamics. This data is based on high-precision orbit determination and thus considered as ground truth. From this,
the deviations of the predicted CDM positions can be calculated in the satellite based coordinate system RTN (radial, along track, cross track) and compared to the covariance matrix given in the same
coordinates. Only the main elements of the matrix are used, without considering the orientation of the covariance, in order to investigate the effectiveness of the two most common avoidance manoeuvres: increasing along-track or radial separation [2]. It is therefore checked how well the calculated errors in the R- and T-directions agree with the corresponding uncertainties along these axes. The investigated parameter, called the consistency percentage, gives the share of predictions located within the uncertainty.
The JSpOC CDMs are consistent among all satellites and reach percentages of 50% - 60% for one sigma along the R- and T-axes. It is shown that a reasonable scaling factor around 1.5 can be applied to
the covariance to reach the desired one sigma percentage of 68%. Additionally, it can be seen that the consistency percentage has a linear relation to the scaling factor, which indicates that the
value is close to one sigma, because the second derivative of the normal distribution is zero at one sigma.
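The consistency percentage and the effect of covariance scaling can be sketched on synthetic data (the paper's actual inputs are CDM covariances and flight-dynamics orbits; the normally distributed errors below are a stand-in):

```python
import numpy as np

def consistency_percentage(errors, sigmas, scale=1.0):
    """Share (in %) of predictions whose absolute error lies within
    scale times the reported 1-sigma uncertainty."""
    errors = np.abs(np.asarray(errors, dtype=float))
    sigmas = np.asarray(sigmas, dtype=float)
    return 100.0 * np.mean(errors <= scale * sigmas)

rng = np.random.default_rng(0)
sigmas = np.ones(100_000)             # reported 1-sigma uncertainties
errors = rng.normal(0.0, sigmas)      # errors truly distributed N(0, sigma^2)

p10 = consistency_percentage(errors, sigmas, scale=1.0)
p15 = consistency_percentage(errors, sigmas, scale=1.5)
print(round(p10, 1))   # about 68.3 for a well-calibrated covariance
print(round(p15, 1))   # about 86.6
```

If the observed percentage at scale 1 falls well below 68%, the covariance is optimistic, and a scaling factor (around 1.5 in the paper's results) restores consistency.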
To conclude, this work shows the applicability of CDMs by JSpOC in operational use. It could be shown that the covariance included in JSpOC CDMs is a good representative of the uncertainty as it
reaches normal one sigma consistency percentages with a small scaling.
[1] CCSDS. Conjunction Data Message CCSDS 508.0-B-1. 2013.
[2] H. Krag et al.. ESA’s Modernised Collision Avoidance Service. 2016. | {"url":"https://conference.sdo.esoc.esa.int/proceedings/sdc7/paper/449","timestamp":"2024-11-05T10:40:41Z","content_type":"text/html","content_length":"22946","record_id":"<urn:uuid:86b33455-c11c-47bb-8b19-47b95ae03e91>","cc-path":"CC-MAIN-2024-46/segments/1730477027878.78/warc/CC-MAIN-20241105083140-20241105113140-00393.warc.gz"} |
Pie chart not showing actually percent complete
@Paul Newcome Hi Paul - I found your solution for displaying appropriate percentage for pie charts in Dashboard. You had suggested entering a "helper column" =1 - [Percent Complete]@row.
Your recommendation worked for one of my pies....but not for the others.....I am only referencing the two percentages so I am confused why it would show 100 percent complete when in fact it is 0?!
Any suggestions on this one?
• The second column is the % Remaining. If the % Complete is 0% then the % Remaining is going to be 100%.
• Got - so silly question....how do I flip it to show % complete....I am missing something super easy here I think.
• You would need to set it so that your actual % Complete is never actually 0. You would need to add maybe 0.0001 to it so it shows both on the chart.
• Ha! Thanks. I just updated color and changed title to include not started yet. Thank you!
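Outside Smartsheet, the arithmetic of this thread reduces to two helper values per row; a small sketch of the idea (the epsilon nudge mirrors the 0.0001 suggestion above):

```python
def pie_segments(percent_complete, epsilon=0.0001):
    """Return (complete, remaining) fractions for a two-slice pie chart.
    A true 0% is nudged up by epsilon so the 'complete' slice still
    exists on the chart and picks up its color and label."""
    complete = max(percent_complete, epsilon)
    remaining = 1 - complete
    return complete, remaining

print(pie_segments(0.0))    # the 0% row becomes (epsilon, 1 - epsilon)
print(pie_segments(0.4))
```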
Fall 1998-99
INTRODUCTION TO HIGHER ANALYSIS: REAL VARIABLES (Math 347)
143 Altgeld Hall, 11, MWF
Instructor: Mihail Kolountzakis
E-mail: kolount@math.uiuc.edu
Text: K.A. Ross, Elementary analysis: the theory of calculus, Springer Verlag.
Material to be covered (approximately): The entire book.
A brief review of the book (from the Mathematical Reviews) This book is intended for the student who has a good, but naive, understanding of elementary calculus and now wishes to gain "a thorough
understanding of a few basic concepts in analysis such as continuity, convergence of sequences and series of numbers, and convergence of sequences and series of functions". The basic properties of
the real number system are discussed in the first chapter; no proofs are given but the construction from Peano's axioms to the set of real numbers is excellently set out and well motivated. Assuming,
then, these basic properties, with the existence of least upper bounds as the completeness axiom, all the basic concepts referred to above are derived and discussed with exemplary thoroughness and
precision. A few of the more difficult early proofs are preceded by a "discussion", which makes the short formal proofs easily understood -- a student could actually read this book and comprehend the
material. As well as covering the basic concepts, the author also covers decimal representation, Weierstrass' approximation theorem via Bernshtein polynomials, and the Riemann-Stieltjes
integral, and there are optional sections generalizing results to metric spaces -- here the Heine-Borel theorem, compact sets, and connected sets are introduced. There are many nontrivial examples
and exercises which illuminate and extend the material. The author has tried to write in an informal but precise style, stressing motivation and methods of proof, and, in this reviewer's opinion, has
succeeded admirably. --Reviewed by William Eames
Grading policy: There will be two 2-hour-long tests plus the final. Each of the two tests counts for 20% of the grade and the remaining 60% is the final. The homework will not directly contribute to your grade.
Office hours (at 234 Illini Hall): anytime I'm in the office (or anywhere else you find me) or by appointment.
W, Aug 26: Organizational stuff, introduction to mathematical induction as a method of proving propositions with a natural number as a parameter.
F, Aug 28: More examples of the use of mathematical induction. The rational numbers. The square root of 2 is not rational. The rational zeros Theorem.
HWK 1: 1.2, 1.3, 1.6, 1.9, 1.11, 2.5
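The Aug 28 result that the square root of 2 is not rational is the course's first classic proof by contradiction; written out compactly:

```latex
% Standard parity argument, as sketched in lecture.
Suppose $\sqrt{2} = p/q$ with integers $p, q$, $q \neq 0$, and $\gcd(p, q) = 1$.
Squaring gives $p^2 = 2q^2$, so $p^2$ is even, hence $p$ is even: $p = 2k$.
Then $4k^2 = 2q^2$, i.e.\ $q^2 = 2k^2$, so $q$ is even as well.
Thus $2$ divides $\gcd(p, q)$, contradicting $\gcd(p, q) = 1$;
therefore $\sqrt{2}$ is irrational.
```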
M, Aug 31: The real numbers. Maxima/minima, upper/lower bounds and suprema/infima of sets of real numbers.
HWK 2: 4.1-4.4 (do these together), 4.6, 4.7, 4.8, 4.11, 4.12, 4.14, 4.16
There will be a short (optionally anonymous) quiz on Friday (15-20 min).
W, Sep 2: We mostly talked about hwk problem 4.1. We proved the Archimedean Property and introduced the symbols +infinity and -infinity.
Grader for this course is Dimitris Kalikakis (Illini Hall 223). Turn your hwk in to him (optional) no later than a week after it has been assigned (there will be one envelope outside his door for incoming and one for graded homework). I remind you that no grade will be earned directly from hwk and that you're doing it in order to learn; the purpose of the grading is to show you whether you're doing well or not. So do not expect too accurate a marking of the paper (or maybe there will be no numerical grade assigned at all).
HWK 3: 5.1, 5.2, 5.6 (turn these in with HWK 2).
CORRECTION: (which concerns graduate students only)
I must have been very very confused when I proposed that those of you who want extra credit (to 1 unit) could get it by writing a computer program. What I mean is that what I had in mind was my other
(!) class, Math 243, which (since it speaks about geometry in 3-space) is perfectly suitable for such a project. However Math 347 is not at all suitable for this, and I have to retract my offer of
having some of you write a program. The deal will be either to do some extra reading (which we'll choose together) and then come and explain it to me, or to take an "enhanced" final exam (and
the two tests as well) with some extra problems (which will be more difficult than the others).
Each person who wants extra credit should e-mail me with his choice of the two above methods, preferably within a week from today (9/3/98).
I apologize for being so confused when I spoke to you about this.
F, Sep 4: We spoke about countable and uncountable sets and gave Cantor's diagonal argument to prove that the set of real numbers is not countable.
W, Sep 9: Limits of sequences. We gave the definition and applied it to several concrete cases in which we proved that a certain number is a limit using the definition directly. We also saw several
sequences which have no limit. We proved the uniqueness of the limit of a sequence.
HWK 4: 7.1(b), 7.3, 7.4, 7.5 (turn these in by Monday).
F, Sep 11: We more or less covered the material in Section 8, but not the problems therein, over some of which we shall go on Monday. Please, read and try to comprehend this Section by then. As a
self-test, try to reproduce the proof of example 5 without looking at the book. Also, a week from today we shall have a short quiz, mainly to test how far along you are in learning how to write a
mathematical proof from which nothing essential is missing.
M, Sep 14: Section 9. Some theorems that help us to calculate limits. We started doing the "basic examples" in paragraph 9.7.
HWK 5: 8.1, 8.7, 8.9, 8.10
W, Sep 16: We finished Section 9 (we discussed mostly the concept of convergence to infinity and we did some of the problems in Section 8). We shall have a short quiz on Friday.
F, Sep 18: We discussed in detail the convergence of n^{1/n}. Proved that every monotone sequence has a finite or infinite limit. Had a short quiz. Most common error: when you said that the minimum
of a finite set exists, you should have added that it is NOT zero, the reason being that 0 is not in the finite set. Except for roughly 5 people (to whom it will be obvious that it is them I mean)
you did quite well on this one.
HWK 6: (Section 9) 4, 5, 8, 9, 11, 12, 14, 18.
M, Sep 21: We defined the liminf and limsup of an arbitrary sequence of real numbers and proved that a sequence is convergent (both infinities included as possible limits) if and only if its liminf
is equal to its limsup. Further we defined Cauchy sequences and saw that a sequence is Cauchy if and only if it is convergent to a finite limit.
HWK 7: (Section 10) 1, 6, 7, 8, 9, 10.
W, Sep 23: We proved that every sequence has a monotonic subsequence which converges (Bolzano-Weierstrass), defined what accumulation points are (in your book: subsequential limits) and eventually
proved that a seq. converges iff it has only one accumulation point. We also saw that the limits of accumulation points are accumulation points (of the original sequence) themselves.
HWK 8: (Section 11) 1, 2, 3(1st), 4(4th), 8.
F, Sep 25: We defined open and closed sets of real numbers and proved several properties of them. Last thing we did was to prove that a finite intersection of open sets is open and that a set is open
if and only if its complement is closed. For Monday, please write down a proof of this last statement and use it to prove that a finite union of closed sets is closed.
The first exam is tentatively scheduled for Th, Oct 8, sometime after 5pm. It will be two hours long and the material will be everything that has been covered until then. More details will follow.
M, Sep 28: We went over section 12 and did some problems as well.
HWK 9: (Section 12) 3, 4, 5, 8, 9, 10, 12.
The first exam will be on Thursday, Oct. 8, at 7-9pm in 124 Burrill Hall.
W, Sep 30: We saw the definition of metric spaces and several examples. We proved that the Euclidean distance on R^d is a metric, which involves the use of the Cauchy-Schwarz inequality (we proved
that). We defined completeness and proved that R^d is complete and the Bolzano-Weierstrass theorem for R^d.
HWK 10: (Section 13) 1, 7, 9.
F, Oct 2: We defined open and closed sets in a metric space. Also the set of limit points of a set, the interior and the boundary of a set. We proved that a decreasing sequence of bounded and closed
sets in R^n has non-empty intersection.
HWK 11: (Section 13) 10, 11, 12, 13, 14.
M, Oct 5: We finished section 13 with a discussion of compactness in general metric spaces and in the Euclidean space. The test on Thursday will cover up to section 13.
W, Oct 7: We talked about the convergence of series, absolute convergence, the ratio test and the root test and their relative strength. Finally we saw some series which converge but not absolutely.
HWK 12: (Section 14) 1, 2, 3, 5, 7, 8, 12, 14.
F, Oct 9: We talked about the integral test for series as well as about alternating series.
HWK 13: (Section 15) 1, 2, 3, 4, 6, 7.
Grades for the first test are here.
M, Oct 12: Skipped section 16 (decimal expansion). Continuous functions from one metric space to another. In particular real valued functions of a real argument.
HWK 14: (Section 17) Problems 6--14.
W, Oct 14: We discussed some properties of continuous functions, such as that they are always bounded on compact sets and that they assume their sup and inf therein, and the intermediate value
theorem and several consequences of it.
HWK 15: (Section 18) 4, 5, 7, 8, 9, 12.
F, Oct 16: We introduced the concept of uniform continuity, proved that every continuous function on a compact set is uniformly continuous and used this to define the (Riemann) integral of a
continuous function defined on a closed interval.
M, Oct 19: We proved that a function continuous in an open interval is uniformly continuous if and only if it can be extended continuously at the endpoints. We also showed a new equivalent definition of
continuity for functions from a metric space to another (preimages of all open sets must be open).
HWK 16: (Section 19) 1, 4, 5, 6, 7, 11, (Section 20) 11, 14, 18.
W, Oct 21: We finished section 21. We saw that for a continuous function from a metric space to another the preimages of open (resp. closed) sets are open (resp. closed), and that the images of
compact sets are compact. We also saw several counterexamples to several similar statements.
We shall have a short quiz on Friday.
HWK 17: (Section 21) 1, 2, 4, 5, 8, 9, 10, 11.
F, Oct 23: We discussed some homework problems and had a short quiz.
M, Oct 26: Pointwise convergence of a sequence of functions to another and how several properties of the functions in the sequence fail to propagate to the limit function. Power series and their radius of convergence. Several examples. Definition of uniform convergence of a sequence of functions to another. The metric space C[0,1] with the "sup" distance function that gives uniform convergence.
HWK 18: (Section 23) 1, 2, 6, 7, 8, 9.
W, Oct 28: We proved that a uniform limit of continuous functions is continuous. We also saw several examples and counterexamples.
HWK 19: (Section 24) 1-5, 10, 11, 14, 17.
F, Oct 30: Uniform convergence on a finite interval implies convergence of the integrals to the integral of the limit function. We saw that this is not true for unbounded intervals. Defined what it means for a sequence of functions to be uniformly Cauchy and saw that this is equivalent to the sequence possessing a uniform limit. Proved the Weierstrass M-test which helps us prove that some series
of functions converge uniformly.
HWK 20: (Section 25) 2, 3, 5, 6, 12, 13, 14.
M, Nov 2: We proved that power series can be differentiated or integrated termwise, in their interval of convergence.
For Graduate students only: For those of you who want to receive the extra credit for the course your task will be the following. I am going to put on reserve in the Math Library the book
"Introduction to topology and modern analysis", by G. F. Simmons. You should read from there Chapter Two (Metric Spaces). You should read all the material and do all problems (or, at least, all but
very few). You will then be examined (either orally or in writing, and not before December the 1st) on your understanding of the material (which includes the ability to solve problems).
Those of you who are going ahead with this project (i.e. those of you who want the extra credit), should let me know by the end of the week (11/6/98) by e-mail that you are going to do so.
The 2nd test will take place on Nov 19, 7-9pm, at 112 Chem Annex. The material to be examined is everything that I shall have taught by the previous Monday.
W, Nov 4: We proved Abel's theorem and discussed the Taylor series expansion of a function. Also stated the Weierstrass approximation theorem (did not prove it) and saw several ways in which the assumptions it makes cannot be relaxed.
HWK 21: (Section 26) 2-7.
F, Nov 6: The derivative of a function at a point. Rules of differentiation and their proof.
NO CLASS for the week of the 9th of November. We resume on Monday the 16th.
M, Nov 16: We covered the first 4 problems from the sample exam that I gave you 10 days ago.
EXTRA CLASS on T, Nov 17, at 7pm in 145 Altgeld.
W, Nov 18: The mean value theorem and its applications.
HWK 22: (Section 29) 1, 3, 5, 9, 13, 14, 16, 18.
F, Nov 20: We went over the problems that were on the test of yesterday.
NO CLASS on Monday, Nov 30.
2nd test grades are here. The maximum grade is 45 (problem 2b was not taken into account).
W, Dec 2: We started section 32 (the definition of the Riemann integral of a function). We defined the lower and upper sums corresponding to a certain partition of the interval [a,b], and the lower
and upper integrals of a bounded function on [a,b]. If these are the same the function is called Riemann integrable.
Extra class on Tuesday and Wednesday, Dec. 8 and 9, in Altgeld Hall 145 at 7pm. Please come with questions.
F, Dec 4: We completed section 32 (the Darboux and Riemann integral) and showed that the two definitions of integrability are equivalent.
HWK 23: (Section 32) 2, 3, 7, 8.
The Final Exam will take place on Wednesday, December 16, 8-11am, in the same room where the class is being taught.
M, Dec 7: We proved that monotonic and continuous functions are integrable, that sums of integrable functions are integrable, the triangle inequality for integrals, and more.
HWK 24: (Section 33) 3, 4, 5, 7, 8, 9, 10, 11, 13, 14.
W, Dec 9: The Fundamental Theorem of Calculus.
HWK 25: (Section 34) 2, 3, 5, 6, 10, 12.
Your final grade will be computed as follows. It will be the maximum of your final exam and 0.6*final + 0.2*(first test) + 0.2*(second test).
Final grades are here.
'Heintz, Tyler M.'
Searching for codes credited to 'Heintz, Tyler M.'
[ascl:1809.004] VBBINARYLENSING: Microlensing light-curve computation
VBBinaryLensing forward models gravitational microlensing events using the advanced contour integration method; it supports single and binary lenses. The lens map is inverted on a collection of
points on the source boundary to obtain a corresponding collection of points on the boundaries of the images from which the area of the images can be recovered by use of Green’s theorem. The code
takes advantage of a number of techniques to make contour integration much more efficient, including using a parabolic correction to increase the accuracy of the summation, introducing an error
estimate on each arc of the boundary to enable defining an optimal sampling, and allowing the inclusion of limb darkening. The code is written as a C++ library and wrapped as a Python package, and
can be called from either C++ or Python.
Below is a list of the most frequently asked questions we get here at Kaz Technologies. Hopefully you will find your answer here. If not, send an email to jess@kaztechnologies.com, and we will get
you the information you need.
Cost of shipping is based on weight, distance and time. After selecting your products in our online store, you can insert your location and select your shipping method, which will tell you the shipping cost. Time is based on your distance from our shipping location, how quickly you want your order, and how much you are willing to pay. An extra week must be allowed when ordering shocks custom built to your specifications.
The Kaz Tech products are only available through Kaz Technologies located in the United States. However, we ship to all parts of the world.
Sorry. At this time we are not able to offer any discount or sponsorships to any team. However, our technical assistance is available to any team without charge.
Shock drawings and other technical information can be found under downloads.
All the information about our products can be found by clicking on the products themselves. If you need additional help, please send an email to jess@kaztechnologies.com.
We accept Visa and MasterCard, or you can pay by check or bank transfer. However, there is a $40.00 bank transfer fee. No products are built, shipped or serviced until we receive full payment.
We warranty all our products against manufacturing defects. We will gladly replace them if you find any issues before you install them on your car. However, like all racing products, we do not
supervise the installation or use of our products, therefore we are not responsible for any breakage due to improper installation or use.
This is the million dollar question. Unfortunately, there is not one technique or response to this question that is the correct answer. There are nearly unlimited options when it comes to damping and
the proper result in each application is as unique as the car and the team that designed it. It is imperative to understand that proper damping design is not strictly driven by mechanical
specifications such as mass and spring rates, but also by the relative importance of overall car properties such as handling, grip, and aerodynamics. There are many compromises to be made and as a
design engineer you know your car better than anyone else, and thus you are the one most qualified to make these decisions. All you need is a little fundamental understanding so that not only will
you have a properly damped race car, but you will also know why.
This is where the journey begins. The process of designing damping starts with a basic understanding of how damping affects a dynamic system. This understanding will allow you to know what it is you
can and cannot do with shocks in general. Once you get the basic overview you must apply that knowledge to the goals and purpose of your car design. For example, you can target damping to items such
as aerodynamics, mechanical grip, and handling and you can target each in pitch, roll, heave, and warp but you cannot optimize all things at one time. It is your decision on the items of importance
and relative compromising of each.
First, it is important to understand vehicle dynamics and especially concepts pertaining to tires, grip, corner loading, chassis motion, and transient handling behaviors. Information covering these
topics is widely available online and in print. This includes items, such as text books, that are not vehicle specific but still cover the important engineering principles. Think statics, kinetics,
kinematics, and vibrations.
Second, put your engineering tools to work. You can start with the most basic analysis, which requires nothing more than applying fundamental dynamics and vibrations. If you simplify your system, or partial system, as a linear 1 degree-of-freedom (DOF) mass, spring, damper system and perform some analysis, you can identify most of the basic parameters commonly associated with suspensions and damping.
The equation of motion for the 1 DOF system (derived from its free body diagram), which you should already know, is:

m·x″ + c·x′ + k·x = f(t)

where m is the system mass, c is the actual damping coefficient, k is the spring rate, x is displacement of the mass as a function of time, and f(t) is the external force acting on the mass as a function of time.
From solving this simple 1 DOF model all the following important equations can be derived:

ωn = √(k/m)
ccritical = 2√(k·m) = 2·m·ωn
ζ = c / ccritical
ωd = ωn·√(1 − ζ²)

where ωn is the system's undamped natural frequency, ωd is the system's damped natural frequency, ζ is the system's damping ratio, and ccritical is the system's critical damping coefficient.
These equations constitute simple yet valuable information about a system that summarizes some general and specific behaviors. You should make sure you understand how the normalized parameters of
natural frequencies and damping ratios describe a simple system.
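As a minimal sketch of those relationships (the mass, rate, and damping numbers below are made-up illustrative values, not recommendations), the normalized parameters fall straight out of m, c, and k:

```python
import math

def system_params(m, c, k):
    """Normalized parameters of a linear 1 DOF mass, spring, damper system."""
    omega_n = math.sqrt(k / m)            # undamped natural frequency [rad/s]
    c_critical = 2.0 * math.sqrt(k * m)   # critical damping coefficient
    zeta = c / c_critical                 # damping ratio (unitless)
    # the damped natural frequency only exists in the under-damped case
    omega_d = omega_n * math.sqrt(1.0 - zeta**2) if zeta < 1.0 else 0.0
    return omega_n, zeta, omega_d

# Hypothetical corner: 80 kg effective mass, 30 kN/m wheel rate, 1500 N·s/m damping
omega_n, zeta, omega_d = system_params(m=80.0, c=1500.0, k=30000.0)
print(round(omega_n / (2.0 * math.pi), 2), "Hz")  # -> 3.08 Hz undamped natural frequency
print(round(zeta, 2))                             # -> 0.48 damping ratio
```

Changing mass or wheel rate moves both numbers at once, which is why damping is chosen after springs.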
It depends. The basic 1 DOF model, and the parameters derived from it, is full of assumptions and simplifications. However, it still provides system descriptions that are accurate under the
assumptions. It is up to you to determine if the simplification of a 1 DOF model provides answers that are close enough for your application. Understanding assumptions and simplifications is a large
part of the engineering problem solving process.
For example, the model assumes linear damping coefficients and spring rates and thus the more linear your shock curves and wheel rates, the more accurate the model is. You could choose to use linear
spring and damping rates purely for the ability to accurately predict your car’s behavior using the simple model. You also could choose to not to run linear rates and adjust your model for the more
complex behavior. Yet another option is to run non-linear rates while using the linear model, understanding there are variances unaccounted for.
In the end, you want to design using engineering tools, get your expected behavior close enough, and finish tuning and tweaking while driving the car. The goal here is to be close to your desired
behavior so track testing takes you from good to better and not bad to good.
The sky is the limit. A logical next step up is to create a 2 degree-of-freedom (DOF) model that includes unsprung mass and tire rates along with suspension wheel rates and the sprung mass. If you
desired a very full model you can create a 7 DOF system that includes displacement of each corner’s unsprung mass as well as chassis heave, pitch, and roll. Even further you can begin incorporating
non-linear parameters into your model for items such as damping and springing. It is your job as an engineer to know how far you need to go to get the results you require.
Generally, springs come first and damping comes second, although you should have an idea of what normalized damping you desire while designing wheel rates. Whether you design suspensions for grip,
aerodynamics, handling, or something else, springs are arguably the most dominant component. Wheel rates determine a large part of the dynamic response as well as all of the steady state response. So
the choice of spring rates is targeted to many items, none of which are actual damping rates. Once you know your masses, wheel rates, and something about your normalized damping target you can
determine where you should be with actual damping rates.
Damping ratio (ζ) is a unitless, normalized parameter that summarizes the damped response of a 2nd order system. It is the ratio of actual damping to that which would be required for critical damping
and it is the simplest way to summarize system damping with a single, concise term. While it is technically valid only for 2nd order, linear, time-invariant mass, spring, damper systems the
simplification holds useful merit even as systems become more complex.
Since damping ratio is normalized to the relationship of system mass to system spring rate, all 1 DOF systems with the same damping ratio have the same characteristic damped response, regardless of
the magnitude of the mass and spring rate. Damping ratios greater than 1 are termed over-damped, exactly 1 is termed critically damped, and less than 1 is termed under-damped. In the case of
critically damped, the system response from an initial disturbance is a return to equilibrium without any overshoot or oscillations. In the over-damped case system response from an initial
disturbance also returns to equilibrium with no overshoot or oscillations but will take more time. In the under-damped case system response from an initial disturbance returns to equilibrium after
oscillating. The closer to 1 the under-damped case is, the less it will oscillate before returning to equilibrium and the closer to 0 the longer it will oscillate.
The figure below illustrates how different damping ratios affects a system as it responds to an initial disturbance.
Tip: Understanding damping ratios and its effect on system response is the key concept to designing damped response. Know what it would take to critically damp your car and know where you are in
relation to that critically damped case.
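The over-, under-, and critically damped behaviors can be checked numerically. The rough sketch below (all values illustrative) integrates the free response x″ + 2ζωn·x′ + ωn²·x = 0 from an initial displacement and reports whether the mass ever overshoots equilibrium:

```python
def overshoots(zeta, omega_n=1.0, x0=1.0, dt=1e-3, t_end=20.0):
    """Semi-implicit Euler integration of the unforced 1 DOF system.

    Returns True if the displacement ever crosses equilibrium (overshoot)."""
    x, v, t = x0, 0.0, 0.0
    crossed = False
    while t < t_end:
        a = -2.0 * zeta * omega_n * v - omega_n**2 * x  # acceleration
        v += a * dt
        x += v * dt
        t += dt
        if x < 0.0:
            crossed = True
    return crossed

print(overshoots(0.2))  # under-damped -> True (oscillates past equilibrium)
print(overshoots(1.5))  # over-damped  -> False (slow return, no overshoot)
```

This matches the figure's behavior: only the under-damped case rings through equilibrium.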
Natural frequency is a normalized parameter often expressed in Hertz or radians per second that summarizes particular responses of an under-damped mass, spring, damper system. Similar to damping
ratios, systems with identical natural frequencies will respond with identical behaviors to external inputs. There are two common but different types of natural frequencies, the undamped natural
frequency and damped natural frequency. The undamped natural frequency of a system is the frequency at which the system would freely vibrate after a disturbance if it had no damping. The damped
natural frequency is the frequency at which the system will freely vibrate including the effect of damping. Most often the term natural frequency is assumed to refer to the undamped natural
frequency. The damped natural frequency is easily found from the undamped natural frequency and damping ratio: ωd = ωn·√(1 − ζ²).
A system may have as many natural frequencies as it has degrees-of-freedom (DOF). However, the most often applied calculation of natural frequency is only valid for a 2nd order, linear, 1 DOF system.
However, this is still a useful simplification for more complex systems.
As a tool, natural frequency summarizes important, basic dynamic factors about a system such as bandwidth, resonant frequency, 90 degree phase point, and transmissibility. These types of factors, if
understood, can then be correlated to car behavior. For instance, bandwidth and phase are general descriptions of how a car as a system will react to handling inputs and thus can characterize overall
handling response. Fundamentals of transmissibility give a good indication of how mechanical grip may be affected with a spring or damping change. Cars built with matching natural frequencies (along
with other similarities) tend to have similar behaviors and thus natural frequencies can be a good way to target a specific or known baseline.
The figure below illustrates how systems with varying natural frequencies but identical damping ratios respond in time to the same initial disturbance.
Motion ratio describes the ratio between wheel travel and shock/spring travel. Convention at Kaz Technologies is to use shock travel divided by wheel travel. Motion ratios are rarely perfectly
linear; however, for ease of use and simplicity we tend to treat it as linear. It is never a bad idea to build linearity into your motion ratio unless you have a specific reason not to do so. For the
full picture of motion ratio and motion ratio variation, evaluate the slope of the shock travel versus wheel travel curve between min and max suspension travel.
Understanding motion ratio is imperative to achieving desired wheel rates. Because it is the wheel rate that ultimately determines how the car behaves, motion ratios are used to correct for the
proper spring and damping rates in order to achieve the correct wheel rates. Do not miss this important step. The relationship describing motion ratio and the respective rates is:

kw = mr² × ks

where kw is the wheel rate, mr is the motion ratio, and ks is the rate at the shock or spring. Important: Note that this equation holds for both spring and damping rates.
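A one-line helper makes the motion-ratio correction explicit (the 200 lb/in spring and 0.8 motion ratio below are hypothetical numbers):

```python
def wheel_rate(shock_rate, motion_ratio):
    """Rate at the wheel from the rate at the shock/spring.

    motion_ratio = shock travel / wheel travel (the convention used above);
    the same squared correction applies to spring rates and damping rates alike."""
    return motion_ratio**2 * shock_rate

print(round(wheel_rate(200.0, 0.8), 3))  # 200 lb/in spring, mr = 0.8 -> 128.0 lb/in
```

Because the ratio enters squared, a shock that moves 20% less than the wheel softens the effective wheel rate by 36%, not 20%.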
Similar to spring rate, a damping coefficient is the slope of a shock’s force versus velocity curve. It is expressed as force per unit of velocity. If damping is linear then the damping coefficient
is a constant and fits nicely into basic linear models. If damping is not linear then the damping coefficient varies with velocity.
Wheel rate is the effective rate of the suspension between the wheel center and chassis. It is related to the rates of the installed springs and shocks by the motion ratio. It is wheel rates that are
designed to control how the car pitches, rolls, heaves, and responds dynamically. Installed spring and shock rates are chosen so that when corrected by the motion ratio the desired wheel rates are achieved.
Transmissibility is another concept that summarizes particular behaviors of a mass, spring, damper system. It is the steady state amplitude ratio of output to input for a given frequency ratio and
damping ratio. This amplitude ratio can be expressed in whatever units are of interest and most often will used as a measure of force transmitted or displacement transmitted. Transmissibility is a
key concept to vibration isolation. One example of using this concept with respect to force is evaluating how a vibrating piece of equipment transmits force to the structure of a building and how
this can be manipulated and reduced. A second example that would be concerned with transmitted displacement would be isolating sensitive equipment from a vibrating environment. Both of these concepts
can also be extended to cars and race cars. The transmissibility plot below shows how amplitude ratio behaves as a function of frequency ratio for various damping ratios.
Frequency ratio is the ratio of the input excitation frequency to a system’s undamped natural frequency. Resonance for a damped system occurs near a frequency ratio of 1 and depends on the damping
ratio. The frequency ratio of 1.414, the square root of 2, is the crossover point between amplification and attenuation at steady state and is clearly seen on a transmissibility plot. It is also
worth noting that more damping decreases transmissibility at frequency ratios below 1.414 but increases transmissibility at frequency ratios above 1.414.
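Assuming the standard base-excited 1 DOF transmissibility expression T = √((1 + (2ζr)²) / ((1 − r²)² + (2ζr)²)), the crossover at r = √2 and the role of damping on either side of it can be checked numerically:

```python
import math

def transmissibility(r, zeta):
    """Steady-state amplitude ratio at frequency ratio r and damping ratio zeta."""
    num = 1.0 + (2.0 * zeta * r) ** 2
    den = (1.0 - r ** 2) ** 2 + (2.0 * zeta * r) ** 2
    return math.sqrt(num / den)

# T = 1 at r = sqrt(2) regardless of damping ratio ...
for zeta in (0.1, 0.5, 1.0):
    print(round(transmissibility(math.sqrt(2.0), zeta), 6))  # -> 1.0 each time

# ... below that, more damping lowers T; above it, more damping raises T
print(transmissibility(1.0, 0.1) > transmissibility(1.0, 0.5))  # -> True
print(transmissibility(3.0, 0.1) < transmissibility(3.0, 0.5))  # -> True
```

This is the numerical form of the crossover behavior visible on the transmissibility plot.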
Resonance is the phenomenon that occurs when the steady state output of a system is at its maximum relative to a sinusoidal input. In the under-damped case this occurs when the input frequency is
near the natural frequency of the system and thus the frequency ratio is close to a value of 1. When resonance occurs and very little damping is present, the input to output amplification can be
extremely severe and quickly get to an out of control or damaging level. The basic relationship of resonance and damping is seen as the peaks on a transmissibility plot. Resonance is also easily
identified as the peak on a Bode plot. Although traditional damping is not the only way to control resonance, it is extremely effective.
The resonant frequency of a 1 degree-of-freedom system with a damping ratio between 0 and 1/√2 ≈ 0.707 can be found from the following equation:

ωr = ωn·√(1 − 2ζ²)

where ωr is the system's resonant frequency, ωn is the system's undamped natural frequency, and ζ is the system's damping ratio.
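A small helper for this peak-response frequency of the standard 2nd-order model (the bound 1/√2 ≈ 0.707 is where the resonant peak disappears; numbers below are illustrative):

```python
import math

def resonant_frequency(omega_n, zeta):
    """Frequency of maximum steady-state response; None above zeta = 1/sqrt(2)."""
    if zeta >= 1.0 / math.sqrt(2.0):
        return None  # no resonant peak for this much damping
    return omega_n * math.sqrt(1.0 - 2.0 * zeta ** 2)

print(resonant_frequency(10.0, 0.0))            # -> 10.0 (undamped peak at omega_n)
print(round(resonant_frequency(10.0, 0.5), 3))  # -> 7.071
print(resonant_frequency(10.0, 0.8))            # -> None
```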
Frequency may be measured in many different units. The most common units used are hertz (Hz), cycles per minute (cpm), and radians per second (rad/s or 1/s). Hertz is the unit of frequency in cycles
per second. While rad/s is often a unit of angular velocity it may be extended to be a measurement of angular frequency when considering one complete cycle of a sine wave subtends 2π radians. Rad/s
is an extremely critical base unit because of its fundamental requirement in deriving and solving most all equations associated with angular velocity or frequency. Therefore, rad/s are extremely
important to the engineer while Hz is more intuitive for conversations and description.
Conditional Probability Formula: Properties, Derivation and Uses
Conditional Probability Formula
Conditional probability is the likelihood of an event or outcome occurring given that a prior event or outcome has already occurred. By the multiplication rule, the probability of both events occurring is the probability of the earlier event multiplied by the conditional probability of the later event. This is where the notion of independent and dependent events comes in. Consider a student who takes leave from school twice each week, never on a Sunday. If it is known that he will be absent on Tuesday, what are the chances that he also takes leave on Saturday of the same week? Situations where the outcome of one event influences the outcome of a subsequent event are the domain of conditional probability.
What Is the Conditional Probability Formula?
One of the core concepts in probability theory is conditional probability. The conditional probability formula calculates the likelihood of an event, say B, given the occurrence of another event, say A.

Given the conditional probability of event B when event A has occurred, together with the individual probabilities of A and B, Bayes' theorem may be used to calculate the conditional probability of event A given that event B has occurred.
The conditional probability P(A | B) is undefined if P(B) = 0, that is, if event B cannot occur.
Formula for Conditional Probability
The conditional probability formula is as follows:

P(A | B) = P(A and B) / P(B)

Another way to write it is:

P(A and B) = P(A | B) × P(B)
Derivation of Conditional Probability Formula
Conditional probability, the likelihood that one event occurs given that a prior event has occurred, follows from the multiplication rule of probability.

Let:
P(A) = the probability that event A occurs
P(B) = the probability that event B occurs
P(A ∩ B) = the probability that both events A and B occur

Suppose event B has already occurred. Every outcome not contained in B is removed, narrowing the sample space to B. The only way A can still occur is if the outcome belongs to the set A ∩ B, since the possible outcomes are now limited to those in which B occurs. Dividing P(A ∩ B) by P(B) rescales the probabilities to this restricted sample space, which gives

P(A | B) = P(A ∩ B) / P(B)
Application of Conditional Probability Formula
The Conditional Probability Formula is frequently used to anticipate the results of actions such as tossing dice, picking a card from a deck, and flipping a coin. It also helps data scientists
analyse a given data set more accurately, and helps machine learning developers build more precise prediction models.
Solved examples to understand the Conditional Probability Formula
Example 1: Of a group of ten persons, four purchased apples, three purchased oranges, and two purchased both apples and oranges. Using the Conditional Probability Formula, what is the likelihood
that a randomly chosen customer who purchased apples also purchased oranges?
Let those who purchased apples be A and those who purchased oranges be O.
It follows that
P(A) = 4/10 = 0.4 (40%)
P(O) = 3/10 = 0.3 (30%)
P(A and O) = 2/10 = 0.2 (20%)
Using the Conditional Probability Formula,
P(O | A) = P(A and O) / P(A) = 0.2 / 0.4 = 0.5 = 50%
Given that a customer purchased apples, there is a 50% chance that they also purchased oranges.
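The same calculation can be sanity-checked in a couple of lines of Python, using the counts from the example:

```python
# Numbers from Example 1: 4 of 10 bought apples, 2 of 10 bought both.
p_a = 4 / 10
p_a_and_o = 2 / 10
p_o_given_a = p_a_and_o / p_a   # conditional probability formula P(O|A) = P(A and O)/P(A)
assert abs(p_o_given_a - 0.5) < 1e-12
```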
Examples Using Conditional Probability Formula
Consider yourself a furniture salesperson. On any given day, 30% of new customers to your business are likely to buy a couch. However, the likelihood may be 70% if they visit your store in the month
before the Super Bowl. The Conditional Probability Formula of selling a couch in the month before the Super Bowl may be expressed as P (Selling a couch | Super Bowl month), where the symbol | stands
for “given that”. This conditional probability gives us a mechanism to update our estimate of the likelihood that one event will occur (in this case, the sale of a couch) when another event has
occurred (in this case, the arrival of the month preceding the Super Bowl).
FAQs (Frequently Asked Questions)
1. There are 3 coins in a piggy bank: one two-headed (fake) coin, for which P(H) = 1, and 2 regular coins. A person chooses a coin at random and tosses it. How likely is it to come up heads?
Let A1 be the event that a regular coin is chosen and A2 the event that the two-headed coin is chosen. Note that A1 and A2 form a partition of the sample space.
P(H | A1) = 1/2 and P(H | A2) = 1.
By the law of total probability,
P(H) = P(H | A1) P(A1) + P(H | A2) P(A2) = (1/2)(2/3) + (1)(1/3) = 2/3.
So the probability obtained is 2/3.
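As a sanity check, the 2/3 answer can be reproduced with a short Monte Carlo simulation (a sketch; the coin indexing is arbitrary):

```python
import random

random.seed(0)
trials = 200_000
heads = 0
for _ in range(trials):
    coin = random.randrange(3)          # coins 0 and 1 are regular, coin 2 is two-headed
    p_heads = 1.0 if coin == 2 else 0.5
    heads += random.random() < p_heads
assert abs(heads / trials - 2 / 3) < 0.01   # close to 2/3
```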
Simple Movie Recommender Using SVD
Given a movie title, we’ll use Singular Value Decomposition (SVD) to recommend other movies based on user ratings.
Filtering and recommending based on information given by other users is known as collaborative filtering. The assumption is that people with similar movie tastes are most likely to give similar movie
ratings. So, if I’m looking for a new movie and I’ve watched The Matrix, this method will recommend movies that have a similar rating pattern to The Matrix across a set of users.
SVD Concept
The essence of SVD is that it decomposes a matrix of any shape into a product of 3 matrices with nice mathematical properties: $A = U S V^T$.
By loose analogy, a number can be decomposed into a product of 3 numbers arranged so that the smallest prime factor sits in the middle, e.g. $24 = 3 \times 2 \times 4$ or $57 = 1 \times 3 \times 19$.
For the interested, I previously wrote a post on SVD visualisation to view the properties of the decomposition.
The result of the decomposition leaves us with an ordered set of singular values that capture the variance associated with each direction. We assume that directions with larger variance are less
redundant, less correlated, and encode more structure about the data. This allows us to use a representative subset of user-rating directions, or principal components, to recommend movies.
I highly recommend reading Jonathon Shlens’s tutorial on PCA and SVD (2014) to fully understand the mathematical properties of the two related methods.
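As a quick concrete check of the decomposition, the product $U S V^T$ reconstructs the original matrix exactly, and the singular values arrive pre-sorted (NumPy, with a small arbitrary matrix):

```python
import numpy as np

A = np.arange(6.0).reshape(2, 3)            # any rectangular matrix will do
U, S, Vt = np.linalg.svd(A, full_matrices=False)
assert np.allclose(A, U @ np.diag(S) @ Vt)  # A = U S V^T
assert np.all(S[:-1] >= S[1:])              # singular values come sorted, largest first
```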
Simple Recommender
Python libraries we’ll be using:
import numpy as np
import pandas as pd
We’ll be using 2 files from the MovieLens 1M dataset: ratings.dat and movies.dat.
1) Read the files with pandas
data = pd.io.parsers.read_csv('data/ratings.dat',
names=['user_id', 'movie_id', 'rating', 'time'],
engine='python', delimiter='::')
movie_data = pd.io.parsers.read_csv('data/movies.dat',
names=['movie_id', 'title', 'genre'],
engine='python', delimiter='::')
2) Create the ratings matrix of shape ($m \times u$) with rows as movies and columns as users
ratings_mat = np.zeros(
    (np.max(data.movie_id.values), np.max(data.user_id.values)))  # unrated entries stay 0
ratings_mat[data.movie_id.values-1, data.user_id.values-1] = data.rating.values
3) Normalise matrix (subtract mean off)
normalised_mat = ratings_mat - np.asarray([(np.mean(ratings_mat, 1))]).T
4) Compute SVD
A = normalised_mat.T / np.sqrt(ratings_mat.shape[1] - 1)  # divide by sqrt(#users - 1) so A^T A matches np.cov
U, S, V = np.linalg.svd(A)
5) Calculate cosine similarity, sort by most similar and return the top N.
def top_cosine_similarity(data, movie_id, top_n=10):
index = movie_id - 1 # Movie id starts from 1
movie_row = data[index, :]
magnitude = np.sqrt(np.einsum('ij, ij -> i', data, data))
similarity = np.dot(movie_row, data.T) / (magnitude[index] * magnitude)
sort_indexes = np.argsort(-similarity)
return sort_indexes[:top_n]
# Helper function to print top N similar movies
def print_similar_movies(movie_data, movie_id, top_indexes):
print('Recommendations for {0}: \n'.format(
movie_data[movie_data.movie_id == movie_id].title.values[0]))
for id in top_indexes + 1:
print(movie_data[movie_data.movie_id == id].title.values[0])
6) Select $k$ principal components to represent the movies, a movie_id to find recommendations and print the top_n results.
k = 50
movie_id = 1 # Grab an id from movies.dat
top_n = 10
sliced = V.T[:, :k] # representative data
indexes = top_cosine_similarity(sliced, movie_id, top_n)
print_similar_movies(movie_data, movie_id, indexes)
Recommendations for Toy Story (1995):
Toy Story (1995)
Toy Story 2 (1999)
Babe (1995)
Bug's Life, A (1998)
Pleasantville (1998)
Babe: Pig in the City (1998)
Aladdin (1992)
Stuart Little (1999)
Secret Garden, The (1993)
Tarzan (1999)
We can change k and use a different number of principal components to represent our dataset. This is essentially performing dimensionality reduction.
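A common heuristic for choosing $k$ is the fraction of total variance the first $k$ components retain; since variance scales with the squared singular values, this can be sketched as follows (the singular values here are hypothetical, not from the MovieLens data):

```python
import numpy as np

S = np.array([5.0, 3.0, 1.0, 0.5])              # hypothetical singular values, sorted
k = 2
retained = np.sum(S[:k] ** 2) / np.sum(S ** 2)  # variance is proportional to S^2
assert abs(retained - 34 / 35.25) < 1e-12       # first 2 components keep ~96% here
```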
SVD and PCA relationship
Instead of computing SVD in step 4 above, the same results can be obtained by computing PCA using the eigenvectors of the co-variance matrix:
normalised_mat = ratings_mat - np.matrix(np.mean(ratings_mat, 1)).T
cov_mat = np.cov(normalised_mat)
evals, evecs = np.linalg.eig(cov_mat)
We re-use the same cosine similarity calculation in step 5. Instead of the matrix V from SVD, we can use the eigenvectors computed from the co-variance matrix:
k = 50
movie_id = 1 # Grab an id from movies.dat
top_n = 10
sliced = evecs[:, :k] # representative data
top_indexes = top_cosine_similarity(sliced, movie_id, top_n)
print_similar_movies(movie_data, movie_id, top_indexes)
Recommendations for Toy Story (1995):
Toy Story (1995)
Toy Story 2 (1999)
Babe (1995)
Bug's Life, A (1998)
Pleasantville (1998)
Babe: Pig in the City (1998)
Aladdin (1992)
Stuart Little (1999)
Secret Garden, The (1993)
Tarzan (1999)
Exactly the same results!
In step 4 above, our input matrix $A$ has shape $u \times m$. The computation of V from SVD is the result of the eigenvectors of $A^T A$. The columns of V are the eigenvectors that correspond to the
sorted eigenvalues in the diagonal of $S$.
By construction, $A^T A$ equals the covariance matrix of normalised_mat. Thus, the columns of $V$ are the principal components of normalised_mat. (Refer to section VI of Jonathon Shlens’s tutorial
(2014) for the full mathematical proof of this relationship.)
Why use SVD over the covariance matrix?
• It's faster (Facebook has published a fast randomized SVD implementation)
• Singular values from SVD come out sorted in descending order (with eig we would have to sort the eigenvalues ourselves)
An Amateur Solved a 60-Year-Old Maths Problem About Colours That Can Never Touch
Physics19 April 2018
Amateur mathematician Aubrey de Grey has stunned the maths world by making the first significant progress in decades towards solving a longstanding riddle - one that's perplexed mathematical thinkers
for over 60 years.
The riddle, called the Hadwiger-Nelson problem, is basically all about untouchable colours, and how many of them – or, rather, how few – are needed on a graph with a potentially infinite number of
points.
Picture a graph made up of a number of different scattered points on a plane, all connected by lines drawn between them. If each of these points (or vertices) were coloured, how many different
colours would you need so that no two connected dots shared the same hue?
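To see the colouring rule in action on a toy graph (a plain 5-cycle, not the unit-distance plane of the actual problem), here is a minimal Python sketch of greedy proper colouring; this odd cycle ends up needing three colours:

```python
def greedy_colouring(edges, n):
    """Assign each vertex the smallest colour not used by its already-coloured neighbours."""
    adj = {v: set() for v in range(n)}
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    colour = {}
    for v in range(n):
        used = {colour[u] for u in adj[v] if u in colour}
        colour[v] = next(c for c in range(n) if c not in used)
    return colour

c5 = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]   # a 5-cycle
cols = greedy_colouring(c5, 5)
assert all(cols[a] != cols[b] for a, b in c5)   # no two connected vertices share a colour
assert max(cols.values()) + 1 == 3              # this odd cycle uses 3 colours
```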
In a nutshell, that's as simple as the Hadwiger-Nelson problem is, but solving the puzzle is no easy task – especially when the question theoretically contemplates an infinite number of linked
vertices.
First comprehensively formulated by Princeton mathematician Edward Nelson in 1950, the problem has never been definitively solved, but not for lack of trying.
Soon after the question was first posed, mathematicians figured out that an infinite Hadwiger-Nelson plane would require no fewer than four colours, but wouldn't need more than seven.
Then, for decades, minimal progress was made on narrowing this range down any further – until this month, when de Grey uploaded a new proof to the pre-print research site arXiv.org.
De Grey, who only tinkers with mathematics for fun in his spare time, isn't just notable for the new solution.
He's better known for being a provocative longevity researcher, who thinks human ageing processes can actually be reversed – and helps lead a research foundation dedicated to investigating how
regenerative medicine can cure "age-related disease".
Ultimately, the group's mission is to see humans live for hundreds of years longer than our bodies currently allow – potentially to an age of 1,000, believe it or not – which is pretty heady stuff,
so it's no wonder the gerontologist likes to tinker with math puzzles in his downtime.
Fortunately, as de Grey explained to Quanta Magazine, last Christmas gave him an opportunity to do just that, and by fiddling around with the Hadwiger-Nelson problem, he figured out an assumption
mathematicians had made for decades was in fact quite wrong.
In his pre-print solution, de Grey demonstrates that a graph with 1,581 vertices requires at least five different colours – not four, as had previously been thought to be the lower end of the range
for the problem.
De Grey made the discovery by playing around with a shape called the Moser spindle, composed of seven vertices and eleven edges.
By aggregating huge numbers of these constructs together with other shapes, de Grey realised a composite of 20,425 points required more than four colours: the first time the Hadwiger-Nelson range had
been narrowed in more than 60 years.
Eventually, de Grey minimised his five-colour graph to 1,581 vertices, and along with sharing his new work, invited other mathematicians to see if they could improve upon this further, by finding
graphs with even fewer points that require at least five colours.
A number of mathematicians have taken part in the challenge, and at present, the new record seems to be 826 vertices – but now that there's a fresh revival of interest in Hadwiger-Nelson and colours
that can't touch, there's no telling how far the research could go from here.
As for de Grey, the man who thinks we'll one day live to 1,000 is pretty humble about his contribution. He told Quanta, "I got extraordinarily lucky".
The pre-print findings are available at arXiv.org.
Hardgrove grindability index of limestone
Hardgrove Grindability Index: 40–50; Size: 50 mm; Fines, 0–2 mm: ... Apart from coal, Semirara Island has substantial silica, limestone and clay resources. The company has a pending application
for a Mineral Production Sharing Agreement (MPSA) which, if approved, will allow it to mine these resources. ...
Why is cot(x) continuous in (0,pi)?
• Thread starter phymatter
In summary, cot(x) is continuous in the interval (0, pi). Although 1/tan(x) is not defined at pi/2, cot(x) is defined as cos(x)/sin(x), so at pi/2 we have cos(pi/2) = 0 and sin(pi/2) = 1, giving
cot(pi/2) = 0. The one-sided limits of 1/tan(x) at pi/2 both approach zero as well, matching this value, so the function is continuous there.
why is cot(x) continuous in (0,pi)?
I mean cot(x) = 1/tan(x). Now at pi/2, tan(x) tends to infinity, so 1/tan(x) tends to 0. But 1/tan(x) is certainly never equal to 0, so how can cot(x) be continuous?
So for values near, but less than, pi/2, tan goes to +infinity; for values near, but greater than, pi/2, tan goes to -infinity. What does that mean about the continuity of cot?
In a case like this you need to look at the two limits of 1/tan, one coming from the left, the other the right. If they are equal then cot is continuous. In this case both approach zero, so the
function is continuous and has the value zero.
Integral said:
So for values near, but less than, pi/2, tan goes to +infinity; for values near, but greater than, pi/2, tan goes to -infinity. What does that mean about the continuity of cot?
In a case like this you need to look at the two limits of 1/tan, one coming from the left, the other the right. If they are equal then cot is continuous. In this case both approach zero, so the
function is continuous and has the value zero.
For a function to be continuous at c, LHL = RHL at c and also the limit at c = f(c); but here cot(pi/2) does not exist!
Integral said:
cot(pi/2) only tends to 0, but is never 0! By definition of limit.
No, that does not follow, and limits have nothing to do with it. cot(x) is defined as "cos(x)/sin(x)". When x= [itex]\pi/2[/itex], [itex]cos(\pi/2)= 0[/itex] and [itex]sin(\pi/2)= 1[/itex], so
[itex]cot(\pi/2)= 0[/itex].
You seem to be thinking that cot(x)= 1/tan(x) for all x. It isn't – that is only true as long as both cot(x) and tan(x) exist.
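A quick numeric check in Python, using the cos/sin definition directly, confirms both the value at pi/2 and the matching one-sided limits of 1/tan:

```python
import math

# cot(x) = cos(x)/sin(x); at x = pi/2 this is 0/1 = 0 (up to float rounding)
cot = lambda x: math.cos(x) / math.sin(x)
assert abs(cot(math.pi / 2)) < 1e-15          # essentially 0
assert abs(cot(math.pi / 2 - 1e-6)) < 1e-5    # limit from the left
assert abs(cot(math.pi / 2 + 1e-6)) < 1e-5    # limit from the right
```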
FAQ: Why is cot(x) continuous in (0,pi)?
1. Why is cot(x) continuous in (0,pi)?
The function cot(x) is continuous in the interval (0,pi) because it is a trigonometric function that is defined and continuous for all values of x except for the points where sin(x) = 0, which occur
at x = 0, pi, and all integer multiples of pi. Since the interval (0,pi) does not include any of these points, the function remains continuous throughout.
2. What is the definition of continuity in mathematics?
In mathematics, continuity is a property of a function where the function's output changes only slightly when the input changes slightly. This means that the function is smooth and has no sudden
jumps or breaks in its graph.
3. How does the continuity of cot(x) in (0,pi) affect its behavior?
The continuity of cot(x) in the interval (0,pi) means that the function has a smooth and continuous graph in this interval. This allows us to make predictions and analyze its behavior without any
sudden changes or disruptions in the function's output.
4. Can cot(x) be continuous in other intervals besides (0,pi)?
Yes, cot(x) can be continuous in other intervals besides (0,pi). It is continuous in any interval that does not include the points where sin(x) = 0, as these points cause the function to be undefined
and discontinuous.
5. What is the importance of studying the continuity of functions?
Studying the continuity of functions is important in many areas of mathematics and science. It helps us understand the behavior of functions and make predictions about their output. Continuity is
also a fundamental concept in calculus, as it is necessary for the application of many important theorems and techniques.
IIFT Data Sufficiency Questions and Answers for Entrance Test
16 November 2020
Directions: Each of the questions below consists of a question and two statements numbered I and II given below it. You have to decide whether the data provided in the statements are sufficient to
answer the question. Read both the statements and give answer
(1) if the data in statement I alone are sufficient to answer the question, while the data in statement II alone are not sufficient to answer the question;
(2) if the data in statement II alone are sufficient to answer the question, while the data in statement I alone are not sufficient to answer the question
(3) if the data either in statement I alone or in statement II alone are sufficient to answer the question;
(4) if the data given in both the statements I and II together are not sufficient to answer the question
(5) If the data in both the statements I and II together are necessary to answer the question.
1. What does ‘Ta’ mean in a code language?
I. ‘Pe Bo Ta’ means ‘have that cover’ and ‘Ki Bp Se lie’ means ‘apply that blue cover’ in the code language.
II. ‘Ne Ka Pa Ia’ means have something to eat’ and ‘Si Li Ta Wa’ means ‘have they gone wrong’ in the code language.
2. Which village is to the North-East of village ‘A’?
I. Village ‘B’ is to the North of village ‘A’, and village ‘C’ and ‘D’ are to the East and West of village ‘B’, respectively.
II. Village ‘P’ is to the South of village ‘A’, and village ‘E’ is to the East of village ‘P’, village ‘K’ is to the North of village ‘P’.
3. Can Rohan retire from office ‘X’ in January 2000 with full pension benefits?
I. Rohan will complete 30 years of service in office ‘X’ in April 2000 and desires to retire.
II. As per office ‘X’ rules, an employee has to complete minimum 30 years of service and attain age of 60. Rohan has 3 years to complete age of 60.
1. What is the code for the word ‘CHANDRAKANTA’? One number can be used as a code for more than one letter.
I. The code for MAHABHARATA is 5810871089878 and the codes for different letters in the word are in the same sequence as the letters themselves.
II. The code for RAMAYANA is 98582848 and the codes for different letters in the word are in the same sequence as the letters themselves.
4. XTV has decided to launch its new channel. This channel would broadcast 3 types of programmes, viz. music shows, news and serials. Different time slots are allotted for these programmes:
morning slot 7 a.m. – 1 p.m., afternoon slot 1 p.m. – 5 p.m. and evening slot 5 p.m. – 11 p.m. What is the time span for serials in the evening slot? Each programme in the morning and the evening
slot spans 2 hours.
I. The morning programme starts with the news which spans from 7 to 9 in the morning and is broadcasted at the same time in the evening as well.
II. The end time of serials in the evening slot and the starting time of the serial in the morning differ by 12 hours.
5. Anand, Babu, Senthil and Tarun are four friends. How many of them are married?
I. True statement : Senthil and Tarun are not married.
II. False statement : Neither Anand nor Babu are married.
6. From a group of seven boys A, B, C, D, E, F and G and six girls H, I, J, K, L and M a team of seven members is to be chosen such that:
1) A and H are always together
2) C can’t be chosen with H.
3) D, E and F have to be together.
4) G can’t be chosen with M.
5) B and I have to be together
6) B and J cannot be chosen together. f‘
Is B in the team?
I. A, E and L are chosen in the team.
II. A, L and D are not chosen while M is chosen.
7. There are fifteen boxes of three different sizes – large, small and medium – and of five different colours – red, white, black, yellow and orange. Medium boxes are in between one large box and
one small box. No two boxes of the same size are kept together. There are five boxes of each size. Each size has boxes of all five colours. White boxes are not together. The first box is a large
white box and the last box is a small black box. No small box is kept to the right of a large box. If a white box is in the third place from the left, then which box is in the seventh place?
I. The Black and White boxes are not together.
II. The Red and Orange boxes are always together. Red boxes are not in places which are a prime number or a perfect cube.
8. How many workers are employed in the factory?
I. If 4 more people join, there are more than 14 people in the factory.
II. If 4 people are removed, there are more than 14 people in the factory.
9. A, B, C, D, E and F are to be arranged around a circular table. B does not sit opposite E. D and C are together. What is the arrangement?
I. E sits to the left of A.
II. E doesn’t want to sit near F.
10. There are nine people A, B, C, D, E, F, G, H and I to be arranged in a row. G is at the center. H and A are not at the corners. C and D are together. E and F are not together. Who definitely
occupies a corner position?
I. B can occupy only odd positions.
II. I can occupy only prime numbered positions.
11. 4 friends A, B, C and D need to cross a bridge in the night. They have only one torch and two friends can cross the bridge at a time. What is the minimum time in which they can cross the bridge?
The time taken by the slower person is considered while crossing the bridge. The travel time for each person is a distinct integer value. The bridge cannot be crossed without a torch.
I. A takes l minute to cross the bridge and it is the smallest time required.
II. B takes twice as much time as A. D takes maximum time i.e., 10 minutes to cross the bridge.
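Statement I fixes A's time at 1 minute, and statement II fixes B at 2 and D at 10 minutes, but C's time is never given, so even both statements together cannot pin down a single answer. The sketch below (Python, Dijkstra over torch/side states) shows how the minimum would be computed once all four times were known; C = 5 is a purely hypothetical value, under which the classic optimum of 17 minutes appears:

```python
import heapq
from itertools import combinations

def min_crossing_time(times):
    """Minimum total time for everyone to cross: at most two cross at once,
    a pair moves at the slower person's speed, and the torch must travel."""
    n = len(times)
    start = ((1 << n) - 1, True)          # bitmask of people on the start side, torch there
    dist = {start: 0}
    heap = [(0, start)]
    while heap:
        d, (mask, torch_left) = heapq.heappop(heap)
        if mask == 0:
            return d                      # everyone has crossed
        if d > dist[(mask, torch_left)]:
            continue
        if torch_left:                    # send one or two people across
            people = [i for i in range(n) if mask >> i & 1]
            moves = [(i,) for i in people] + list(combinations(people, 2))
        else:                             # someone brings the torch back
            people = [i for i in range(n) if not mask >> i & 1]
            moves = [(i,) for i in people]
        for move in moves:
            nm = mask
            for i in move:
                nm ^= 1 << i              # toggle each mover's side
            nd = d + max(times[i] for i in move)
            state = (nm, not torch_left)
            if nd < dist.get(state, float('inf')):
                dist[state] = nd
                heapq.heappush(heap, (nd, state))

assert min_crossing_time([1, 2, 5, 10]) == 17   # with the hypothetical C = 5
assert min_crossing_time([1, 2]) == 2
```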
12. Seven friends A, B, C, D, E, F and G go for a movie. Out of these, AD, FC and BG are couples of which A, F and B are husbands. The couples sit with husbands and wives in the left to right order.
Who sit at the two corners?
I. E sits at the centre with D sitting 2 places to the left of E.
II. B does not sit at the corner and nor does G.
13. In a family of 7 members Reema, Dev, Lakshmi, Krishna, Suja, Adi and Anita, state the relationship between Suja and Adi. Reema has a son and a daughter. Krishna and Dev are males. Suja and
Lakshmi are females.
I. Reema has a grandson and a granddaughter. Anita is Krishna’s niece.
II. Suja is Lakshmi’s sister-in-law. Dev has two children Adi and Anita.
14. There are four men Ankit, Ajay, Amit and Ajit married to Reshma, Ragini, Renu and Ranjani. Ankit is not married to Reshma. To whom is Amit married if he is not married to Renu?
I. Ajay is married to Ragini.
II. Amit is not married to Reshma.
15. In an intercom system of a certain office, all phone numbers consist of four digits. The manager of that office has a number which has distinct digits. What is his phone number?
I. The digits of the phone number are in A.P from left to right.
II. The product of the digits is divisible by 5 and 7. The sum of the digits is less than 19.
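For Question 15, the two statements can be checked by brute force. The sketch below (Python) searches all four-digit arithmetic progressions with distinct digits; a zero digit is excluded here on the assumption that a zero product, though technically divisible by everything, is not intended. Both statements together still leave two candidates, so they cannot single out a unique number:

```python
from itertools import product

candidates = []
for a, d in product(range(10), range(-9, 10)):
    if d == 0:
        continue                                  # distinct digits need a non-zero step
    digits = [a + i * d for i in range(4)]        # four digits in A.P., left to right
    if any(x < 1 or x > 9 for x in digits):       # valid digits; zero excluded (see above)
        continue
    p = digits[0] * digits[1] * digits[2] * digits[3]
    if p % 5 == 0 and p % 7 == 0 and sum(digits) < 19:
        candidates.append(''.join(map(str, digits)))

assert sorted(candidates) == ['1357', '7531']     # two candidates remain
```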
Tanja Lange's homepage
Slides of recent (well..) talks
The talks come in chronological order.
• Fast arithmetic on hyperelliptic Koblitz curves ,
invited talk at the MAGiC conference in Urbana/Champaign
M1.ps M2.ps M3.ps Slides (3 files, middle one containing picture of Diffie-Hellman key-exchange)
• Efficient arithmetic on (hyper-)elliptic curves over finite fields,
talk at UCL Crypto Group - Seminar Series
• Efficient arithmetic on (hyper-)elliptic curves over finite fields,
invited talk at 2003 International Symposium on Next Generation Cryptography and Related Mathematics, Japan
• Efficient arithmetic on (hyper-)elliptic curves over finite fields,
invited talk at Computational Aspects of Algebraic Curves, and Cryptography, Gainesville
• Efficient arithmetic on (hyper-)elliptic curves over finite fields,
talk at Cryptography Seminar in Rennes
• Improved Algorithms for Efficient Arithmetic on Elliptic Curve using Fast Endomorphisms,
talk given by Francesco Sica at Eurocrypt 2003
Slides in pdf
• Efficient arithmetic on (hyper-) elliptic curves over finite fields,
invited talk at ECC 2003
• Cryptographic Applications of Trace Zero Varieties,
invited talk at Mathematics of Discrete Logarithms, Essen
• Cryptographic Applications of Trace Zero Varieties,
talk at Dagsthul Seminar -- Algorithms and Number Theory
• Mathematical Countermeasures Against Side-Channel Attacks on ECC/HECC,
talk at YACC 2004
• Introduction to Side-Channel Attacks on elliptic and hyperelliptic curves,
talk at ANTS VI 2004
• Montgomery Addition for Genus Two Curves,
talk at ANTS VI 2004
• Mathematical Countermeasures against Side-Channel Attacks on Elliptic and Hyperelliptic Curves,
invited talk at WARTACRYPT '04
• Pseudorandom Number Generators Based on Elliptic Curves,
invited talk at Number Theoretic Algorithms and Related Topics
• Hyperelliptic curves in cryptography,
talk at the "Seminar on zeta functions'' at the Technical University Tokyo
• Efficient arithmetic on (hyper-)elliptic curves over finite fields,
talk at the "COSIC Seminar" KU Leuven
• Curve Cryptography - suitable primitives for embedded device ,
invited talk at Cryptologie et Algorithmique En Normandie (CAEN'05)
• Pairings on ordinary hyperelliptic curves,
invited talk at Pairings in Cryptography
• Arithmetic on Binary Genus 2 Curves Suitable for Small Devices,
talk at Workshop on RFID and Lightweight Crypto
• Arithmetic of hyperelliptic curves over finite fields,
talk at Discrete Mathematics Seminar, University of Calgary
• Efficient computation of pairings on non-supersingular hyperelliptic curves,
talk at Number Theory Inspired by Cryptography, Banff
• Efficient computation of pairings on non-supersingular hyperelliptic curves,
invited talk at Algebraic Methods in Cryptography, Bochum
• Cryptographic Applications of Trace Zero Varieties,
seminar talk at Cryptology Research Group at the Indian Statistical Institute, Kolkata
• Pairings in Cryptography,
tutorial at ASIACRYPT 2005
• Distribution of Some Sequences of Points on Elliptic Curves,
invited talk at AMS Sectional Meeting Program by Special Session, Special Session on Number Theory
• Arithmetic of hyperelliptic curves over finite fields, part of lecture at
Summer School on "Computational Number Theory and Applications to Cryptography"
• Pairing Based Cryptography, part of lecture at
Summer School on "Computational Number Theory and Applications to Cryptography"
• Analysis of pseudo-random number generators based on elliptic curves,
talk at 31st Australasian Conference on Combinatorial Mathematics & Combinatorial Computing(ACCMCC)
• Fast bilinear maps from the Tate-Lichtenbaum pairing on hyperelliptic curves,
talk at ANTS VII, Berlin
• Efficient arithmetic on (hyper-)elliptic curves over finite fields,
invited talk at 2006 Workshop on Cryptography and Related Mathematics
• Hyperelliptic Curves,
talk at Information Security Summer School (ISSS) 2006. Taiwan.
• Efficient arithmetic on hyperelliptic curves over finite fields,
talk at Information Security Summer School (ISSS) 2006. Taiwan.
• Pairing Based Cryptography,
talk at Information Security Summer School (ISSS) 2006. Taiwan.
• Public Key Cryptography - Performance Comparison and Benchmarking,
keynote at Simpósio Brasileiro em Segurança da Informação e de Sistemas Computacionais (SBSeg)
• Index Calculus in Finite Fields & Hyperelliptic Curves,
tutorial at WCAP 2006 - III Workshop on Cryptographic Algorithms and Protocols
• Efficient arithmetic on hyperelliptic curves over finite fields & Pairings,
tutorial at WCAP 2006 - III Workshop on Cryptographic Algorithms and Protocols
• Elliptic vs. hyperelliptic, part 2,
invited talk at ECC 2006
Slides and Slides in ps.gz
Part 1 of the fight was excecuted by Daniel J. Bernstein, his slides can be found here.
• Open Problems in Pairings,
invited talk at Number Theory and Cryptography - Open Problems
• Tanja Lange,
on the occasion of presenting the new employees of the faculty for mathematics and computer science of the Technische Universiteit Eindhoven
Slides (pdf)
• Cryptographic applications of curves over finite fields,
invited talk at General Mathematical Colloquium Utrecht
• Unified addition formulae for elliptic curves,
invited talk at AMS Special Session on Mathematical Aspects of Cryptography, 2007 Spring AMS Eastern Section Meeting
• Elliptic vs. hyperelliptic, part 2,
talk at EIDMA Seminar Combinatorial Theory
• Mathematical Background of Pairings,
talk at ECRYPT PhD Summer School on Emerging Topics in Cryptographic Design and Cryptanalysis
• Fast scalar multiplication on elliptic curves,
invited talk at Conference on Algorithmic Number Theory
• Elliptic vs. hyperelliptic, part 3 - Elliptic Strikes Back,
talk at Eurocrypt 07 Rump Session
• Side-channel attacks and countermeasures for curve based cryptography,
invited talk at Quo vadis cryptology ? - Threat of Side-Channel Attacks
Slides in ps
Slides in pdf
• Fast scalar multiplication on elliptic curves,
talk at 8th International Conference on Finite Fields and Applications
• Elliptic vs. Hyperelliptic, part 3: Elliptic strikes back,
invited presentation at 11th Workshop on Elliptic Curve Cryptography 2007
Slides for my half
Slides for Dan Bernstein's half
• The EFD thing,
presentation at the rump session of CHES 2007 given jointly with Dan Bernstein
• Edwards Curves for Cyptography,
presentation at EIDMA/DIAMANT Cryptography Working Group
• Edwards coordinates for elliptic curves, part 1,
invited presentation at Explicit Methods in Number Theory In honour of Henri Cohen
Part 2 was given by Dan Bernstein
Dan's slides
• Edwards Coordinates for Elliptic Curves, part 1,
invited presentation at SAGE Days 6: Cryptology, Number theory, and Arithmetic Geometry
Part 2 was given by Dan Bernstein
Dan's slides
• Edwards Curves for Cryptography,
invited key-note presentation at Kolloquium über Kombinatorik
• Faster Addition and Doubling on Elliptic Curves,
joint presentation with Dan Bernstein at ASIACRYPT 2007
• Edwards Coordinates,
invited key-note presentation at Applied Algebra, Algebraic Algorithms, and Error Correcting Codes (AAECC-17)
• The power of mathematics to protect data and to break data protection,
presentation at Research day at TU/e
• Revisiting pairing based group key exchange,
presentation at Financial Cryptography and Data Security 2008
• Binary Edwards Curves,
presentation at the Eurocrypt 2008 Rump Session
• Faster arithmetic on elliptic curves -- blessing to ECC, harm to RSA,
presentation at EIPSI Grand Opening
• Binary Edwards Curves,
invited presentation at Seminario Matematico, Universidad Autonoma Madrid
• Scalar Multiplication and Weierstrass Curves,
presentation at 3rd ECRYPT PhD SUMMER SCHOOL Advanced Topics in Cryptography
• Shapes of Elliptic Curves,
rump-session presentation at ANTS 2008
See also the zoo pictures.
• Twisted Edwards Curves,
talk given by Daniel J. Bernstein on our joint paper at Africacrypt 2008
• Binary Edwards Curves,
invited presentation at Séminaire de Cryptographie de Rennes
• Binary Edwards Curves,
presentation at CHES 2008
• ECM on Graphics Cards,
presentation at CADO workshop on integer factorization
• Post-Quantum Cryptography,
invited presentation at INDOCRYPT 2008
Slides in ps
Slides in pdf
• Elliptic Curve Cryptography,
invited presentation at Malaviya National Institute of Technology, Jaipur, India (Department of Computer Engineering)
• Models of Elliptic Curves,
joint invited presentation with Dan Bernstein at Curves, Coding Theory, And Cryptography, ESF Exploratory Workshop - PESC
• Pairings on Edwards Curves,
invited presentation at Arithmétique, géométrie, cryptographie et théorie des codes
Preprint with more details
• Pairings on Edwards curves,
invited presentation at Fields Cryptography Retrospective Meeting
• Pairings on Edwards Curves,
presentation at Dagstuhl Seminar Algorithms and Number Theory
• Post-Quantum Cryptography,
presentation at Dagstuhl Seminar 09311: Classical and Quantum Information Assurance Foundations and Practice
• Efficient Implementation of Pairings,
invited key note presentation at Pairing 2009
• Applied cryptanalysis - or - how to win an iPhone,
rump session presentation at ECC
• ECM using Edwards curves,
invited presentation at Workshop on Factoring Large Integers
• ECM using Edwards curves,
invited presentation at Workshop on Discovery and Experimentation in Number Theory held at Fields Institute, Toronto and Simon Fraser University, Burnaby.
See also here for audio recordings.
• What is a use case for quantum key exchange? Part I,
invited presentation at Workshop on quantum information
Part II was given by Daniel J. Bernstein, see his talks page.
• Coppersmith's factorization factory,
invited presentation at General Colloquium Leiden
• Post-Quantum Cryptography,
invited presentation at ISI Seminar at QUT Brisbane
• Breaking ECC2K-130,
presentation at Early Symmetric Crypto (ESC) seminar
• Small high-security public-key encryption and signatures,
invited presentation at The First Taiwanese Workshop on Security and System-on-Chip
• ECC minicourse,
talk given jointly with Daniel J. Bernstein in a minicourse following Africacrypt 2010.
• Starfish on strike,
talk given jointly with Daniel J. Bernstein at Latincrypt.
• Why CHES is better than CRYPTO,
talk given jointly with Daniel J. Bernstein at the CHES 2010 Rump Session.
• Attacking Elliptic Curve Challenges,
presentation at European Cryptography Day
• Breaking ECC2K-130,
invited presentation at Workshop on Elliptic Curves and Computation
• Elliptic Curve Cryptography,
key note presentation at 81. Arbeitstagung Allgemeine Algebra
• Code-based cryptography,
invited presentation at Workshop on: Solving polynomial equations
• On the correct use of the negation map in the Pollard rho method,
talk given jointly with Daniel J. Bernstein at PKC 2011
• Minicourse on the ECDLP,
given jointly with Daniel J. Bernstein at CSIT (Centre for Strategic Infocomm Technologies), Singapore
day 1
my part of day 2
my part of day 3
my part of day 4
• Breaking ECC2K-130,
key note presentation at CrossFyre
• Syndrome-based hash functions,
invited presentation at International Workshop on Coding & Cryptology (IWCC)
• Code-based cryptography,
key note presentation at The 10th International Conference on Finite Fields and their Applications
• Code-based cryptography,
invited presentation at Aspects of Coding Theory
• Advances in Elliptic-Curve Cryptography,
invited presentation at International Conference on Coding and Cryptography
• State-of-the-art branchless techniques for elliptic curve scalar multiplication,
talk at Dagstuhl seminar on Quantum Cryptanalysis
blackboard snapshots
• Advances in Elliptic-Curve Cryptography,
plenary talk at SIAM Conference on Applied Algebraic Geometry
• Advances in Elliptic-Curve Cryptography,
invited talk at the General Mathematics Colloquium
• Code-based cryptography,
invited talk at the Spanish Cryptography Days
• Elliptic curves for applications,
tutorial at Indocrypt 2011
• A battle of bits: building confidence in cryptography,
invited talk given jointly with Daniel J. Bernstein at Mathematical and Statistical Aspects of Cryptography
• The new SHA-3 software shootout,
Talk given jointly with Daniel J. Bernstein at the Third SHA-3 Candidate Conference
• Two grumpy giants and a baby,
Talk given jointly with Daniel J. Bernstein at the Ei/PSI Cryptography Working Group
• Factorization (tutorial),
given jointly with Daniel J. Bernstein at CSIT (Centre for Strategic Infocomm Technologies), Singapore.
• The security impact of a new cryptographic library.,
Talk given jointly with Daniel J. Bernstein at the ACNS 2012
• Never trust a bunny,
Talk given jointly with Daniel J. Bernstein at the RFIDsec 2012
• Two grumpy giants and a baby,
Talk given jointly with Daniel J. Bernstein at the ANTS 2012
• The security impact of a new cryptographic library.,
Invited talk given jointly with Daniel J. Bernstein at the "Short Subjects in Security seminar" at Qualcomm, San Diego.
• Two grumpy giants and a baby,
Talk given jointly with Daniel J. Bernstein at YACC 2012
• Post-quantum cryptography -- long-term confidentiality and integrity for communication,
invited presentation at This Week's Discoveries - science faculty Leiden
• Advances in Elliptic-Curve Cryptography,
invited presentation at Academia Sinica, IIS seminar (Taiwan)
• High-speed high-security cryptography on ARMs,
Talk given jointly with Daniel J. Bernstein at the escar 2012
Secure cryptography does not need to be big and slow. This talk explains the cryptographic primitives behind the record-setting software in the NaCl library (http://nacl.cr.yp.to), reports
timings on a variety of CPUs, and then focuses on ARM processors, with an emphasis on the popular ARM Cortex A8 CPU core.
• Computing small discrete logarithms faster,
Talk given jointly with Daniel J. Bernstein at Indocrypt 2012
• FactHacks - RSA factorization in the real world,
Talk given jointly with Daniel J. Bernstein and Nadia Heninger at 29C3
See also our related webpage http://facthacks.cr.yp.to/ and the video.
• The state of factoring algorithms and other cryptanalytic threats to RSA,
Talk given jointly with Daniel J. Bernstein and Nadia Heninger.
See also our related webpage http://facthacks.cr.yp.to/
• Non-uniform cracks in the concrete: the power of free precomputation,
Talk given jointly with Daniel J. Bernstein at ESC 2013
• Post-Quantum Cryptography,
presentation at Crypto for 2020
• Crypto for Security and Privacy,
panel at Crypto for 2020
• Modeling the Security of Cryptography, Part 2: Public-Key Cryptography,
invited presentation at Modeling Intractability workshop
Part 1: secret-key cryptography. was given by Daniel J. Bernstein.
• Never trust a bunny,
rump session presentation at Modeling Intractability workshop
• The security impact of a new cryptographic library.,
Invited talk given jointly with Daniel J. Bernstein at the Security seminar at the University of Haifa and, 2 days later, at the Theory Seminar at the Weizmann Institute of Science
• Security dangers of the NIST curves,
Invited talk given jointly with Daniel J. Bernstein at the International State of the Art in Cryptography - Security workshop in Athens.
• Public-key cryptography and the Discrete-Logarithm Problem,
first lecture in Summer School - Number Theory for Cryptography
• Signatures and DLP-I,
second lecture in Summer School - Number Theory for Cryptography
• DLP-II and curves with endomorphisms,
third lecture in Summer School - Number Theory for Cryptography
• Pairings and DLP-III,
fourth lecture in Summer School - Number Theory for Cryptography
• Factoring RSA keys from certified smart cards: Coppersmith in the wild,
invited presentation at Number Theory, Geometry and Cryptography
• Post-Quantum Cryptography,
presentation at SIAM conference on Applied Algebraic Geometry
• Spyin' NSA,
song with several others at Crypto 2013 Rump Session
YouTube recording.
• Under Surveillance,
song with several others at Crypto 2013 Rump Session
YouTube recording.
• Factoring RSA keys from certified smart cards: Coppersmith in the wild,
presentation at Cryptography Working Group, Sep 6, 2013
• Factoring RSA keys from certified smart cards: Coppersmith in the wild,
presentation at ECC Rump session
• Benchmarking of post-quantum cryptography,
invited presentation at ETSI Quantum-Safe-Crypto Workshop
• Factoring RSA keys from certified smart cards: Coppersmith in the wild,
invited presentations at Computer Science Colloquium (Macquarie University) and Computational Algebra Seminar (Sydney University)
• Presentation at Department Dialog (TU/e)
• Some Elliptic CUrve REsults presentation at the Asiacrypt 2013 rump session
• Non-uniform cracks in the concrete: the power of free precomputation,
Talk given jointly with Daniel J. Bernstein at Asiacrypt 2013
• Factoring RSA keys from certified smart cards: Coppersmith in the wild
Talk given jointly with Nadia Heninger at Asiacrypt 2013
• Cleaning up crypto,
Invited talk given jointly with Daniel J. Bernstein at the International View of the State-of-the-Art of Cryptography and Security and its Use in Practice (IV) in Bangalore
• The year in crypto,
Talk given jointly with Daniel J. Bernstein and Nadia Heninger at 30C3
Slides video
• (Tweet)NaCl,
Invited talk given jointly with Daniel J. Bernstein and Peter Schwabe at 30C3 during the #youbroketheinternet assembly.
Last modified: 2024.07.19
PARCC Grade 7 Math Practice Test Questions - Practice Test Geeks
1. Edward spins the spinner below three times. If the spinner lands on a different number each time what is the highest total he could get?
1. A: If the spinner must land on a different number each time, then the highest three numbers are 12, 14, and 18. Added together they equal 44.
2. Which of the following is equivalent to 4^3 + 12 ÷ 4 + 8^2 × 3?
2. D: The order of operations states that numbers with exponents must be evaluated first. Thus, the expression can be rewritten as 64 + 12 ÷ 4 + 64 × 3. Next, multiplication and division must be computed as they appear from left to right in the expression. Thus, the expression can be further simplified as 64 + 3 + 192, which equals 259.
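This evaluation can be checked mechanically; Python's `**` operator follows the same precedence rules (exponents before multiplication and division, which come before addition):

```python
# Exponents first, then multiplication/division left to right, then addition.
expression_value = 4**3 + 12 / 4 + 8**2 * 3

print(expression_value)  # 259.0
```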
3. Given the figure below what is the area of the shaded regions? Figure is not to scale.
3. 45 square inches: The top left shaded region can be found by first finding the width. Since 6 in. is given as the width of the whole rectangle and 4 in. is given for the width of the non-shaded region, the width of the shaded region is the difference, 2 inches. So, the area of that region is 7 in. × 2 in. = 14 square inches. The other shaded region can be broken into a 3 in. by 3 in. square and a 4 in. by 6 in. rectangle. So, 3 in. × 3 in. = 9 square inches and 4 in. × 6 in. = 24 square inches. Added together, the total area is 45 square inches.
4. The number 123 is the 11th term in a sequence with a constant rate of change. Which of the following sequences has this number as its 11th term?
4. B: All given sequences have a constant difference of 12. Subtracting 12 from the starting term given for Choice B gives a y-intercept of −9. The equation 123 = 12x − 9 can thus be written. Solving for x gives x = 11; therefore, 123 is indeed the 11th term of this sequence. Manual computation of the 11th term by adding the constant difference of 12 also reveals 123 as the value of the 11th term of this sequence.
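The answer choices are not reproduced above, but the explanation implies a sequence with first term 3 and common difference 12 (so that the nth term is 12n − 9); a quick check:

```python
# nth term of an arithmetic sequence: a_n = first + (n - 1) * diff.
# With first term 3 and difference 12 this simplifies to 12n - 9.
def nth_term(n, first=3, diff=12):
    return first + (n - 1) * diff

print(nth_term(11))  # 123
```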
5. Ashton draws the parallelogram shown below. How many square units represent the area of the parallelogram?
5. 84: The area of a parallelogram can be found by using the formula A = bh, where b represents the length of the base and h represents the height of the parallelogram. The base and the height of the parallelogram are 12 units and 7 units, respectively. Therefore, the area can be written as A = 12 × 7, which equals 84.
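The formula A = bh can be checked with a one-liner:

```python
# Area of a parallelogram: A = b * h.
base, height = 12, 7
area = base * height
print(area)  # 84
```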
Complex number
complex number
is a number that can be expressed in the form a + bi, where a and b are real numbers and i is the imaginary unit, which satisfies the equation i² = −1. In this expression, a is the
real part
and b is the
imaginary part
of the complex number. Complex numbers extend the concept of the one-dimensional number line to the two-dimensional complex plane by using the horizontal axis for the real part and the vertical axis
for the imaginary part. The complex number a + bi can be identified with the point (a, b) in the complex plane. A complex number whose real part is zero is said to be purely imaginary, whereas a complex number
whose imaginary part is zero is a real number.
The above text is a snippet from Wikipedia: Complex number
and as such is available under the Creative Commons Attribution/Share-Alike License.
complex number
1. A number of the form a + bi, where a and b are real numbers and i denotes the imaginary unit
The above text is a snippet from Wiktionary: complex number
and as such is available under the Creative Commons Attribution/Share-Alike License.
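As a small illustration, Python supports complex numbers natively, writing the imaginary unit as `j`:

```python
# The imaginary unit i (written 1j in Python) satisfies i**2 == -1.
i = 1j
print(i**2 == -1)      # True

# A complex number a + bi: real part a, imaginary part b.
z = 3 + 4j
print(z.real, z.imag)  # 3.0 4.0
```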
Lesson 21
Rational Equations (Part 2)
• Let’s write and solve some more rational equations.
21.1: Math Talk: Adding Rationals
Solve each equation mentally:
\(\dfrac{x}{2} = \dfrac{3}{4}\)
\(\dfrac{3}{x} = \dfrac{1}{6}\)
\(\dfrac{1}{4} = \dfrac{1}{x^2}\)
\(\dfrac{2}{x} = \dfrac{x}{8}\)
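One way to confirm these mental solutions is to substitute them back into each equation; a quick check in Python (solutions assumed: \(x=\frac32\), \(x=18\), \(x=\pm2\), \(x=\pm4\)):

```python
# Checking each proposed solution by substitution.
assert 1.5 / 2 == 3 / 4     # x/2 = 3/4   ->  x = 3/2
assert 3 / 18 == 1 / 6      # 3/x = 1/6   ->  x = 18
assert 1 / 4 == 1 / 2**2    # 1/4 = 1/x^2 ->  x = 2 (or x = -2)
assert 2 / 4 == 4 / 8       # 2/x = x/8   ->  x = 4 (or x = -4)
print("all four check out")
```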
21.2: A Rational River
Noah likes to go for boat rides along a river with his family. In still water, the boat travels about 8 kilometers per hour. In the river, it takes them the same amount of time \(t\) to go upstream 5
kilometers as it does to travel downstream 10 kilometers.
1. If the speed of the river is \(r\), write an expression for the time it takes to travel 5 kilometers upstream and an expression for the time it takes to travel 10 kilometers downstream.
2. Use your expressions to calculate the speed of the river. Explain or show your reasoning.
21.3: Rational Resistance
Circuits in parallel follow this law: The inverse of the total resistance is the sum of the inverses of each individual resistance. We can write this as: \(\displaystyle \frac{1}{R_T}=\frac{1}{R_1} +
\frac{1}{R_2}+ . . . + \frac{1}{R_n}\) where there are \(n\) parallel circuits and \(R_T\) is the total resistance. Resistance is measured in ohms.
1. Two circuits are placed in parallel. The first circuit has a resistance of 40 ohms and the second circuit has a resistance of 60 ohms. What is the total resistance of the two circuits?
2. Two circuits are placed in parallel. The second circuit has a resistance of 150 ohms more than the first. Write an equation for this situation showing the relationships between \(R_T\) and the
resistance \(R\) of the first circuit.
3. For this circuit, Clare wants to use graphs to estimate the resistance of the first circuit \(R\) if \(R_T\) is 85 ohms. Describe how she could use a graph to determine the value of \(R\) and
then follow your instructions to find \(R\).
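For question 1, the parallel-resistance law can be evaluated directly; exact rational arithmetic with the standard-library `fractions` module avoids floating-point rounding:

```python
from fractions import Fraction

# Total resistance of circuits in parallel:
# 1/R_T = 1/R_1 + 1/R_2 + ... + 1/R_n
def total_resistance(*resistances):
    return 1 / sum(Fraction(1, r) for r in resistances)

print(total_resistance(40, 60))  # 24
```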
Two circuits with resistances of 40 ohms and 60 ohms have a combined resistance of 24 ohms when connected in parallel. If we had used two circuits that each had a resistance of 48 ohms, they would
have had that same combined resistance. 48 is called the harmonic mean of 40 and 60. A more familiar way to find the mean of two numbers is to add them up and divide by 2. This is the arithmetic
mean. Here is how each kind of mean is calculated:
Harmonic mean of \(a\) and \(b\): \(\frac{2ab}{a+b}\)

Arithmetic mean of \(a\) and \(b\): \(\frac{a+b}{2}\)
The harmonic mean of 40 and 60 was 48, and their arithmetic mean is (40+60)/2=50. Experiment with other pairs of numbers. What can you conclude about the relationship between the harmonic mean and
arithmetic mean?
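A short script makes it easy to experiment with pairs of numbers, as the task suggests:

```python
# Harmonic mean: 2ab/(a+b); arithmetic mean: (a+b)/2.
def harmonic_mean(a, b):
    return 2 * a * b / (a + b)

def arithmetic_mean(a, b):
    return (a + b) / 2

print(harmonic_mean(40, 60), arithmetic_mean(40, 60))  # 48.0 50.0
```

Trying other pairs suggests that for two unequal positive numbers the harmonic mean is always the smaller of the two means.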
A boat travels about 6 kilometers per hour in still water. If the boat is on a river that flows at a constant speed of \(r\) kilometers per hour, it can travel at a speed of \(6+r\) kilometers per
hour downstream and \(6-r\) kilometers per hour upstream. (And if the river current is the same speed as the boat, the boat wouldn’t be able to travel upstream at all!)
On one particular river, the boat can travel 4 kilometers upstream in the same amount of time it takes to travel 12 kilometers downstream. Since time is equal to distance divided by speed, we can
express the travel time as either \(\frac{12}{6+r}\) hours or \(\frac{4}{6-r}\) hours. If we don’t know the travel time, we can make an equation using the fact that these two expressions are equal to
one another, and figure out the speed of the river.
\(\displaystyle \begin{aligned} \frac{12}{6+r} &= \frac{4}{6-r} \\ \frac{12}{6+r} \boldcdot (6+r)(6-r) &= \frac{4}{6-r} \boldcdot (6+r)(6-r) \\ 12(6-r) &= 4(6+r) \\ 72-12r &= 24 + 4r \\ 48 &= 16r \\ 3 &= r \end{aligned}\)
Substituting this value into the original expressions, we have \(\frac{12}{6+3}=\frac43\) and \(\frac{4}{6-3}=\frac43\), so these two expressions are equal when \(r=3\). This means that when the
water flow in the river is about 3 kilometers per hour, it takes the boat 1 hour and 20 minutes to go 4 kilometers upstream and 1 hour and 20 minutes to go 12 kilometers downstream.
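The substitution check can also be scripted; with \(r=3\), both trips take \(\frac43\) hours:

```python
# With r = 3, both travel-time expressions give 4/3 hours (1 h 20 min).
r = 3
upstream_time = 4 / (6 - r)     # 4 km against the current
downstream_time = 12 / (6 + r)  # 12 km with the current
assert upstream_time == downstream_time
print(upstream_time)  # 1.333... hours
```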
Even though we started out with a rational expression on each side of the equation, multiplying each side by the product of the denominators, \((6+r)(6-r)\), resulted in an equation similar to ones
we have solved before. Multiplying to get an equation with no variables in denominators is sometimes called “clearing the denominators.”
Probability Formula » Formula In Maths
Formulas for all chapters of Probability in Maths are available on this website, helping students solve even the toughest maths problems. Every maths student must remember these important formulas, because it is not possible to solve maths problems without them. The formulas for all chapters are given below: Basic Concepts of Probability, Total Probability, Compound & Conditional Probability, Binomial Probability, and Bayes’ Theorem.
S.No. Name of Chapters
1 Basic Concepts of Probability
2 Total Probability
3 Compound & Conditional Probability
4 Binomial Probability
5 Bayes’ Theorem
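As a quick illustration of the last chapter in the table, Bayes’ theorem P(A|B) = P(B|A)·P(A)/P(B) can be applied numerically (all probabilities below are made-up example values):

```python
# Bayes' theorem: P(A|B) = P(B|A) * P(A) / P(B),
# where P(B) = P(B|A)*P(A) + P(B|not A)*P(not A)  (total probability).
p_a = 0.01          # prior probability of A
p_b_given_a = 0.9   # likelihood of B when A holds
p_b_given_not_a = 0.05

p_b = p_b_given_a * p_a + p_b_given_not_a * (1 - p_a)
p_a_given_b = p_b_given_a * p_a / p_b
print(round(p_a_given_b, 4))  # 0.1538
```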
Safety stock optimization for ship-from-store
March 8, 2019 • 16 min read
Speed of delivery is one of the key success factors in online retail. Retailers fiercely compete with each other in this space, implementing new order fulfillment options such as Same-Day Shipping,
Buy Online Pick Up in Store (BOPUS) and Ship from Store (SFS). These capabilities help to both improve customer experience, and utilize the in-store inventory more efficiently by making it available
for online customers.
Working with a number of retail clients on implementation of BOPUS and SFS capabilities, we figured out that these use cases bring in a number of interesting optimization challenges related to
inventory reservation. Moreover, the quality of these reservation decisions directly impacts both customer experience and inventory utilization, so that the business benefits offered by new
fulfillment options can be severely damaged by a lack of analysis and optimization.
In this post, we describe the inventory optimization problem for BOPUS and SFS use cases, and provide a case study that shows how this problem has been solved in real-life settings using machine
learning methods.
Problem overview
The main idea of BOPUS and SFS is to make in-store inventory available for purchase through digital channels, so that online customers can either reserve items in a local brick-and-mortar store for
pick up, or get them shipped and delivered promptly. Both scenarios assume that orders are served directly from store shelves, and thus order fulfillment personnel (order pickers) compete with
customers for available units:
Order processing has some latency, so the units sold online can become sold out by the time the order is being fulfilled, even if the inventory data is perfectly synchronized between the store and
online systems (e.g. in-store points of sale update the availability database used by the e-commerce system in real time). This leads to the inventory reservation problem: given that the store has a
certain number of units of some product on-hand, how many units can be made available for online purchase, and how many units need to be reserved for store customers? Let us use the terms Available
to Promise (ATP) for the first threshold, and Safety Stock (SS) for the second one. The relationship between these thresholds is shown below:
We assume that the online order placement process checks ATP for products in the shopping cart, and accepts the order only if the ordered quantity is less than the product’s ATP. If the ordered
quantity is greater than the ATP, the order is rejected to avoid a fulfillment exception.
We also assume that the system works in discrete time: the inventory is replenished by some external process at the beginning of each time interval, and the on-hand inventory in the beginning of the
interval is known to the order placement process. Thus, the problem boils down to the optimization of safety stock levels for individual products, stores and time intervals:
ATP(product, store, time) = onhand(product, store, time) − SS(product, store, time)
For example, we worked with a retailer that was able to replenish their inventory overnight, so the goal was to set a safety stock level for each product, in each store, each day. The order placement
process uses the ATP calculated through the above formula as a daily quota, and compares it with the running total of units ordered online.
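A minimal sketch of this daily-quota logic (function and field names are illustrative, not from any particular system; we also floor ATP at zero, which the text does not spell out):

```python
# ATP = on-hand inventory at the start of the interval minus safety stock,
# floored at zero so a large safety stock never produces a negative quota.
def available_to_promise(on_hand, safety_stock):
    return max(on_hand - safety_stock, 0)

def accept_order(on_hand, safety_stock, units_already_ordered, quantity):
    """Accept an online order only if it fits within the remaining ATP quota."""
    atp = available_to_promise(on_hand, safety_stock)
    return units_already_ordered + quantity <= atp

print(available_to_promise(10, 3))  # 7
print(accept_order(10, 3, 5, 2))    # True:  5 + 2 <= 7
print(accept_order(10, 3, 5, 3))    # False: 5 + 3 > 7
```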
If our algorithm produces safety stock estimates that exactly match the in-store demand, it is a perfect solution – we do not have order fulfillment exceptions caused by stockouts, and all the
merchandise that is unsold in the store is available for online customers. If our safety stock estimate is above the true in-store demand, we still do not have fulfillment exceptions, but our online
sales can be worse than potentially possible, as some fulfillable orders will be rejected. If our estimate is below the true demand, we will have fulfilment exceptions, because some units will be
double-sold. In other words, the quality of our safety stock estimates can be measured using the following two conflicting metrics:
• Pick rate: The ratio between the number of successfully fulfilled items and the total number of ordered items over a certain time interval.
• Exposure rate: The ratio between the number of items potentially available for online ordering (the difference between on-hand inventory and true in-store demand) and the actual ATP.
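Both metrics can be written down directly. Note that the exposure rate is implemented here as ATP divided by the potentially available units (on-hand minus true in-store demand), i.e. the fraction of that inventory actually exposed, which is the orientation consistent with zero safety stock maximizing exposure:

```python
def pick_rate(fulfilled_items, ordered_items):
    """Fraction of ordered items that were successfully picked."""
    return fulfilled_items / ordered_items

def exposure_rate(atp, on_hand, in_store_demand):
    """Fraction of the units left over after true in-store demand
    that were actually exposed for online ordering."""
    return atp / (on_hand - in_store_demand)

print(pick_rate(95, 100))       # 0.95
print(exposure_rate(6, 10, 3))  # 6 exposed out of 7 potentially available
```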
A retailer can balance these two metrics differently depending on their business goals, the cost of fulfillment exceptions and the cost of underexposed inventory. For example, some retailers can
redirect SFS orders from one store to another in case of exceptions at a low cost, and thus can accept a low pick rate to achieve high exposure rates. On the other hand, some retailers might be
concerned about customer dissatisfaction caused by order fulfillment exceptions, and choose to maintain high pick rates at the expense of exposure rates. Consequently, it is important to develop a
safety stock optimization model that enables retailers to seamlessly balance the two metrics.
The most basic approach to the optimization problem defined above is to set some fixed safety stock level for all products (such as 1 unit, 2 units, and so on). While this clearly provides some
flexibility in terms of balancing between the pick and exposure rates (the level of zero maximizes the exposure, the level of infinity maximizes the pick rate, etc.), differences between products and
demand fluctuations are not taken into account.
We can expect to build a better solution for safety stock settings using predictive models that forecast the demand, taking into account product properties and sales data, as well as external signals
such as weather. In the remainder of this article, we will discuss a case study on how such a model has been built, and then explain how it was wrapped into another econometric model to estimate and
optimize pick and exposure rates.
Data exploration
The first step towards building a safety stock optimization model was to explore the available data sets to understand the gaps and challenges in the data.
The first challenge apparent from the preliminary data analysis was high sparsity of sales data at a store level. We had the sales history for several years for a catalog with tens of thousands
products, so that the total number of product-date pairs was close to a hundred million. However, the sales rate for the absolute majority of the products was far less than one unit a day, as shown
in the table below, and thus only about 0.7% of product-date pairs had non-zero sales numbers.
The second major challenge was that although we had store sales and catalog data at our disposal, the historical inventory data was not available. Consequently, it was not possible to distinguish
between zero demand and out of stock situations, which made it even more challenging to deal with zeros in the sales data.
The third challenge was that we had data for only one store for the pilot project, so it was not possible to improve the quality of demand forecasting by learning common patterns across different stores.
The sparsity of the sales data makes it difficult to predict the demand using standard machine learning methods, which is the key to a good safety stock model. One possible approach to deal with this
problem is to calculate special statistics that quantify the variability of the demand timing and magnitude, and then use these statistics to switch between different demand prediction models. [1][2]
Consider the following two metrics that can be used for this purpose:
• Average demand interval (ADI): The average interval between two non-zero demand observations. This metric quantifies the time regularity of the demand. It is measured in the demand aggregation
intervals (days in our case, because we worked with daily demand and safety stock values).
• Squared coefficient of variation (CV2): The standard deviation of the demand divided by the average demand. This metric quantifies the variability of the demand by magnitude.
These metrics can be used to classify the demand histories into the following four categories or patterns that can often be observed in practice:
• Smooth demand (low ADI and low CV2): The demand history has low variability both in time and magnitude. This pattern in often easy to forecast with good accuracy.
• Intermittent demand (high ADI and low CV2): The demand magnitude is relatively constant, but there are large and irregular gaps between non-zero demand values that make forecasting more
challenging. Specialized forecasting methods can be applied in this case.
• Erratic demand (low ADI and high CV2): The demand does not have a lot of gaps, but the magnitudes vary significantly and irregularly, which makes forecasting more challenging. Specialized
forecasting methods can be applied in this case.
• Lumpy demand (high ADI and high CV2): The demand is both intermittent and erratic, with irregular gaps and sharp changes in the magnitude. This pattern is very challenging or impossible to forecast.
The cut-off values that are commonly used to differentiate between high and low ADI and CV2 values are 1.32 and 0.49, respectively. In our case, the distribution of demand histories according to the
above classification was as follows:
Only 0.94% of demand histories were smooth, 0.16% were erratic, 3.56% were lumpy, and the absolute majority of histories, or 95.33%, were intermittent.
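A sketch of this classification logic (here CV² is computed over the non-zero demand sizes, one common convention; the article does not specify which variant was used):

```python
# Classify a demand history by ADI and CV^2, using the common
# cut-off values of 1.32 and 0.49 quoted above.
def classify_demand(demand):
    nonzero_idx = [t for t, d in enumerate(demand) if d != 0]
    sizes = [demand[t] for t in nonzero_idx]
    gaps = [b - a for a, b in zip(nonzero_idx, nonzero_idx[1:])]
    adi = sum(gaps) / len(gaps) if gaps else float("inf")
    mean = sum(sizes) / len(sizes)
    variance = sum((s - mean) ** 2 for s in sizes) / len(sizes)
    cv2 = variance / mean**2  # squared coefficient of variation
    if adi < 1.32:
        return "smooth" if cv2 < 0.49 else "erratic"
    return "intermittent" if cv2 < 0.49 else "lumpy"

print(classify_demand([5, 0, 0, 4, 0, 0, 6]))  # intermittent
print(classify_demand([5, 4, 6, 5, 5]))        # smooth
```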
The demand classification model above can be used to switch between different demand forecasting models and techniques. However, we eventually chose to use the notion of intermittent demand in a
different way, and create features that help to combine individual zeros in the demand history into groups, then use these features as inputs to demand prediction models to combat data sparsity. We
will discuss this approach in the next section in further detail.
Design of the demand prediction model
We built several preliminary demand models using XGBoost and LightGBM, and found that it is quite challenging to achieve good forecast accuracy using just basic techniques. However, this preliminary
analysis helped to identify several areas of improvement:
• The large number of intermittent demand histories significantly impacts the accuracy of the forecast, and needs to be addressed.
• The absence of inventory data needs to be addressed and, ideally, stockouts need to be identified.
• The model should be able to balance pick and exposure rates.
• Weather has a significant impact on the model accuracy, so weather signals (historical data or forecasts) have to be incorporated into the model as well.

Each of these four issues deserves a detailed discussion, and we will go through them one by one in the following sections.
Intermittent demand statistics
As was mentioned earlier, we concluded that the problem with intermittent demand can be sufficiently mitigated by calculating the following three statistics for each date in the demand history, and
using these series of statistics as inputs to the demand forecasting model:
• Number of zeros before the target date
• Number of non-zero demand samples before the target date (length of the series)
• Interval between the two previous non-zero demand samples
These metrics help the forecasting model to group zeros together and find some regularities, which decreases the noise in the original data created by the sparsity of non-zero demand observations.
This approach is conceptually similar to latent variables in Bayesian statistics.
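As a sketch (with hypothetical helper names; the exact feature definitions in the production model may differ), all three statistics can be computed for every date in a single pass over the history:

```python
def intermittency_features(history):
    """For each date t, return (zeros so far, non-zero samples so far,
    gap between the two most recent non-zero observations before t)."""
    feats, zeros, nonzeros = [], 0, 0
    nz_positions = []  # indices of non-zero demand observations seen so far
    for t, d in enumerate(history):
        gap = (nz_positions[-1] - nz_positions[-2]) if len(nz_positions) >= 2 else 0
        feats.append((zeros, nonzeros, gap))
        if d > 0:
            nonzeros += 1
            nz_positions.append(t)
        else:
            zeros += 1
    return feats
```

Each tuple can then be appended to the feature vector for the corresponding date in the training set.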
Stockout model
Intermittent demand statistics improve the accuracy of the forecast by finding regularities in zero demand samples, so we expected to get even better results by building a model that discovers even
more regularities caused by stockouts (which were not observed explicitly because we lacked inventory data).
We started with a heuristic assumption that a long series of zeros in the demand history indicates that the item is out of stock. We used this assumption to attribute each date in the demand
histories from the training set with a binary label that indicates whether or not the product is considered out of stock. This label was then used as a training label to build a classification model
that uses demand history, product attributes and intermittent demand statistics as input features to predict stockouts for a given pair of a product and date.
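A minimal sketch of this labeling heuristic follows; the run-length threshold `min_run` is an illustrative assumption, not the value tuned for the project:

```python
def label_stockouts(history, min_run=14):
    """Heuristic training labels: a date is marked out-of-stock when it
    falls inside a run of at least min_run consecutive zero-demand days."""
    labels = [False] * len(history)
    i = 0
    while i < len(history):
        if history[i] == 0:
            j = i
            while j < len(history) and history[j] == 0:
                j += 1                      # scan to the end of the zero run
            if j - i >= min_run:
                for k in range(i, j):
                    labels[k] = True        # long run -> presumed stockout
            i = j
        else:
            i += 1
    return labels
```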
We implemented this model using LightGBM and Bayesian optimization from the GPyOpt package for hyperparameter tuning. This model in isolation achieved quite good accuracy on the test set, as shown
in the confusion matrix below:
Balancing pick and exposure rates
There are two elements to the problem of balancing pick and exposure rates:
• First, we need to provide some mechanism to trade the pick rate for the exposure rate and vice versa using some hyperparameter.
• Second, we need to estimate the actual values of pick and exposure rates for a selected tradeoff. Note that this problem is different from the first one, as the tradeoff hyperparameter can be
just a number that is not explicitly connected with the absolute values of the rates.
The first problem can be solved by introducing a controllable bias into the demand prediction model. A model that systematically underestimates the demand will be biased towards higher exposure rate,
and a model that systematically overestimates will be biased toward higher pick rate.
This idea can be implemented using an asymmetric loss function where the asymmetry is controlled by a hyperparameter. We decided to use the following loss function, which can be readily implemented
in LightGBM:
$$
L(x) = \begin{cases}
\beta \cdot x^2, \quad & x \le 0 \\
x^2, \quad & x > 0
\end{cases}
$$
where $\beta$ is the asymmetry parameter that controls the model bias. This asymmetric loss function enabled us to build a family of models for different values of the penalty parameter, and switch
between them to achieve arbitrary tradeoffs.
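This loss plugs naturally into gradient boosting as a custom objective. The sketch below shows the per-sample gradient and hessian in the LightGBM-style objective interface, assuming the error is defined as x = y_true − y_pred, so that β > 1 penalizes overestimation (x ≤ 0) and pushes the model toward higher exposure rates:

```python
def asymmetric_l2(beta):
    """Return a custom objective for L(x) = beta*x^2 if x <= 0 else x^2,
    where x = y_true - y_pred. Yields gradient and hessian of the loss
    with respect to the prediction, as gradient boosting expects."""
    def objective(y_true, y_pred):
        grads, hesss = [], []
        for t, p in zip(y_true, y_pred):
            x = t - p
            w = beta if x <= 0 else 1.0
            grads.append(-2.0 * w * x)   # dL/dpred = -dL/dx
            hesss.append(2.0 * w)
        return grads, hesss
    return objective
```

Sweeping `beta` over a grid produces the family of biased models described above.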
The second problem with estimating the actual business metrics is much more challenging, and it required us to develop a special mathematical model which we will discuss later in this article. The
model for business metrics is clearly important for safety stock optimization, but it does not directly impact the design of the demand prediction model.
Weather signals
The in-store demand is obviously influenced by weather conditions, so we used the data set provided by the National Oceanic and Atmospheric Administration (NOAA) to pull features like the following:
• Average temperature
• Average daily wind speed
• Precipitation
• Direction of fastest 2-minute and 5-minute wind
• Snowfall, fog, thunder, hail and rime flags
The final design of the demand model
The final architecture includes two predictive models – the stockout classification model described above, and a second one for the final demand prediction. This layout is shown in the following diagram:
Both models are implemented using LightGBM and the GPyOpt library for hyperparameter tuning. This architecture was used to produce 20 different demand predictions for 20 different values of the
asymmetry parameter.
The following chart shows how the accuracy of prediction gradually improved as more components and features were added (for the non-biased variant of the model):
Model for exposure and pick rates
The demand prediction model produces the outputs that can be used for setting safety stock. However, safety stock values alone are not enough to estimate the exposure and pick rates. In this section,
we show how we developed a model that helps to estimate these business metrics. The estimation of exposure and pick rates is important because the inventory optimization system has to guarantee
certain service level agreements (SLAs) that are defined or constrained in terms of such metrics, and cannot just blindly pick some value of the asymmetry parameter.
Let us denote the observed in-store demand (quantity sold) for product $i$ at time interval $t$ as $d_{it}$. The demand prediction model estimates this value as $\widehat{d}_{it}$, and we set safety stock based on this estimate:
$$\text{safety stock}_{it} = T(\widehat{d}_{it})$$
where $T(x)$ is a rounding function, as we can stock and sell only integer numbers of product units. The estimation error is given by the following expression:
$$\varepsilon_{it} = d_{it} - \widehat{d}_{it}$$
A positive error means that safety stock is underestimated (too few units allocated for in-store customers) and generally leads to double selling in-store and online. A negative error means an
overestimate (too many units allocated for in-store customers), leading to low exposure rates. Let us also denote the on-hand inventory as $q_{it}$. Thus, the actual in-store residuals available for
online sales (which is the ideal ATP) will be as follows:
$$x_{it} = q_{it} - d_{it}$$
In our case, the on-hand inventory was not known, so we made an assumption that this quantity is proportional to several past demand observations:
$$\widehat{q}_{it} = \alpha \cdot \frac{1}{n} \sum_{\tau=1}^n d_{i,\, t-\tau}$$
where $\alpha$ is a model parameter that controls or reflects the product replenishment policy. Smaller values of this parameter correspond to lower average stock levels (compared to the average demand), and greater values correspond to higher stock levels. From a business perspective, this parameter is linked to the inventory turnover rate, and reflects how conservative the inventory management strategy is. We can set the inventory level parameter based on our estimate of what this level actually is for the given replenishment policy, or we can evaluate the model for different levels, see how the stock level influences the pick and exposure rates, and adjust the replenishment policy based on the results.
The above assumption enables us to estimate the residuals for online sales as follows:
$$\widehat{x}_{it} = \max \left[ T(\widehat{q}_{it} - d_{it}),\; 0 \right]$$
Using our models, we can now easily estimate the online exposure rate as the expected ratio between the predicted ATP (which includes the prediction error) and the ideal ATP (residuals) based on the
historical data:
$$\text{ER}(\alpha) = \mathbb{E}_{it} \left[ \frac{\widehat{x}_{it} + T(\varepsilon_{it})}{\widehat{x}_{it}} \right]$$
Next, we need to estimate the pick rate, which is slightly more complicated, as it generally depends on the online demand. Let us start with the observation that an order fulfillment exception occurs
when the online demand (let’s denote it as $d_{it}^o$) exceeds the in-store residual:
$$\text{fulfilment exception:} \quad d_{it}^o > x_{it}$$
The pick rate is essentially the probability of fulfilment without any exception, which can be expanded by conditioning on non-positive and positive prediction errors:
$$
\begin{aligned}
\text{PR}(\alpha) &= p(d_{it}^o \le x_{it}) \\
&= p(\varepsilon_{it} \le 0) \cdot p(d_{it}^o \le x_{it} \mid \varepsilon \le 0) +
p(\varepsilon_{it} > 0) \cdot p(d_{it}^o \le x_{it} \mid \varepsilon > 0)
\end{aligned}
$$
In the first term, the probability of fulfillment without an exception given the non-positive prediction error equals 1 (the exception cannot occur because we overestimate the demand and thus set
safety stock too conservatively).
The probability in the second term generally depends on the online demand distribution. If this distribution is known, then the term can be estimated straightforwardly. In our case, the distribution of online demand was unknown because the safety stock model was developed in parallel with the transactional systems for the ship from store functionality. We worked around this issue by making certain assumptions about the demand distribution. First, let us note that the online demand is bounded by the estimated ATP (the online system will simply start to reject orders once the ATP is exhausted):
$$d_{it}^o \le x_{it} + T(\varepsilon_{it})$$
Second, we can choose some parametric demand distribution over the interval from zero to the ATP based on what we know about the online demand. The simplest choice will be a uniform distribution:
$$d_{it}^o \sim \text{uniform}(0,\; x_{it} + T(\varepsilon_{it}))$$
This assumption can be illustrated as follows:
Consequently, the probability that a demand sample falls into the availability zone is given by:
$$p\left(d_{it}^o \le x_{it} \mid \varepsilon > 0\right) = \frac{x_{it}}{x_{it} + T(\varepsilon_{it})}$$
Collecting everything together, we get the following expression for the pick rate that can be evaluated as empirical probabilities based on the historical data:
$$\text{PR}(\alpha) = \mathbb{E}_{it}\left[ p(\varepsilon_{it} \le 0) + p(\varepsilon_{it} > 0) \cdot \frac{x_{it}}{x_{it} + T(\varepsilon_{it})} \right]$$
The above model is a convenient and flexible framework for safety stock evaluation. This model makes a number of assumptions regarding the online demand and inventory distributions to work around the
data gaps we had, but it is quite easy to adjust for other scenarios depending on the available information and data about the environment.
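Under these assumptions, both rates can be estimated empirically from the demand history and the model's predictions. The sketch below uses hypothetical names; `alpha` and the trailing window `n` are model parameters, and it mirrors the formulas above: the inventory estimate is α times the trailing mean demand, the in-store residual is max(T(q̂ − d), 0), and the pick rate uses the uniform-demand assumption:

```python
def estimate_rates(demand, demand_hat, alpha, n=7):
    """Empirical exposure and pick rates for one product's history."""
    er_terms, pr_terms = [], []
    for t in range(n, len(demand)):
        d, d_hat = demand[t], demand_hat[t]
        eps = round(d - d_hat)                     # T(eps): rounded prediction error
        q_hat = alpha * sum(demand[t - n:t]) / n   # assumed on-hand inventory
        x = max(round(q_hat - d), 0)               # ideal in-store residual (ATP)
        if x > 0:
            er_terms.append((x + eps) / x)         # predicted-to-ideal ATP ratio
        # eps <= 0: overestimate, no exception possible; else uniform-demand term
        pr_terms.append(1.0 if eps <= 0 else x / (x + eps))
    er = sum(er_terms) / len(er_terms) if er_terms else 0.0
    pr = sum(pr_terms) / len(pr_terms) if pr_terms else 0.0
    return er, pr
```

Averaging these per-product estimates over the assortment gives the ER(α) and PR(α) curves evaluated in the next section.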
The predictive and econometric model defined above can be used to estimate the pick and exposure rates for different values of model parameters, and then choose the optimal tradeoff. Recall that
these models have two major parameters:
• The replenishment policy parameter $\alpha$ that reflects or defines the ratio between the in-store demand and the number of product units stocked after each replenishment.
• The loss function asymmetry parameter $\beta$ that controls the bias of the demand prediction model, either towards the pick rate or the exposure rate.
In this section, we show how the full model was evaluated for different values of these two parameters to build safety stock optimization profiles, find optimal values, and estimate the uplift
delivered by the model.
First, we plot the dependency between the pick rate and the replenishment parameter $\alpha$, averaged over all products. The following plot shows this dependency for different values of the model bias parameter $\beta$, and the curves that correspond to the baseline (naive) policy of setting safety stock to 1, 2 or 3 units:
We can achieve arbitrarily high pick rates by choosing large values of the replenishment parameter – the safety stock policy becomes immaterial when the stock level significantly exceeds the in-store
demand. Consequently, all curves in the chart are monotonically increasing, and are approaching the ideal pick rate as the replenishment parameter increases. The steepness of the curves depends on
how aggressive the reservation policy is – the higher the value of the safety stock, the steeper the curve. The curves produced by our predictive model are in between the curves for the baseline
policies, as the predictive model is essentially trying to differentiate safety stock values by products that result in lower average safety stock levels.
Next, we can make a similar plot for the dependency between the mean exposure or the exposure rate and the replenishment parameter $\alpha$. Again, the exposure metrics can be made arbitrarily good by increasing the average stock level controlled by the parameter $\alpha$. For example, the mean ATP (measured in the number of product units) can be visualized as follows:
The above charts can help to make operational decisions, such as selecting the best parameter $\beta$ for a given replenishment SLA parameter $\alpha$. The drawback of this representation is that the pick and exposure rates are directly related, and the separate charts do not visualize the tradeoff between the two metrics. From that perspective, it makes sense to plot the pick and exposure rates on one plot, so that each curve corresponds to a certain value of $\alpha$, and each point on a curve corresponds to a certain pair of parameters $\alpha$ and $\beta$:
This chart clearly shows the overall performance of the solution – the family of curves that correspond to our optimization model are well above the curves that correspond to the baseline policies
(fixed safety stock values). This means that the optimization model consistently provides a better tradeoff between the pick rate and exposure objectives than the baseline policy. For example, the
pick rate uplift in the above chart is about 10% for a wide range of exposure rates. We used this representation to find the optimal tradeoff based on the pick and exposure rate preferences, as well
as stock level constraints reflected by the curves for different values of $\alpha$. We were then able to choose the optimal value for the model bias parameter $\beta$.
Innovations in the area of order delivery are critically important for online retail. These innovations often come with new operational and optimization challenges that can be addressed using
advanced analytics and machine learning. In this article, we presented a case study on how such an optimization apparatus was developed for Ship from Store and Buy Online Pick Up in Store use cases.
We showed that demand prediction at the level of individual stores has major challenges caused by sporadic demand patterns, and these challenges can be partly addressed by specialized demand modeling
techniques and choice of input signals. We also showed that predictive modeling does not fully solve the problem, and one needs to build an econometric model to properly interpret the outputs of the
predictive model and the benefits from it. The econometric model we developed also illustrates some techniques that can be used to work around data gaps, which can be caused by various reasons,
including parallel development of optimization and transactional systems, and organizational challenges.
1. A. Ghobbar and C. Friend, Evaluation of forecasting methods for intermittent parts demand in the field of aviation: a predictive model, 2003 ↩︎
RSA_generate_key — generate RSA key pair
#include <openssl/rsa.h>
RSA *RSA_generate_key(int num, unsigned long e, void (*callback)(int, int, void *), void *cb_arg);
RSA_generate_key() generates a key pair and returns it in a newly allocated RSA structure. The pseudo-random number generator must be seeded prior to calling RSA_generate_key().
The modulus size will be num bits, and the public exponent will be e. Key sizes with num < 1024 should be considered insecure. The exponent is an odd number, typically 3, 17 or 65537.
A callback function may be used to provide feedback about the progress of the key generation. If callback is not NULL, it will be called as follows:
• While a random prime number is generated, it is called as described in BN_generate_prime(3).
• When the n-th randomly generated prime is rejected as not suitable for the key, callback(2, n, cb_arg) is called.
• When a random p has been found with p-1 relatively prime to e, it is called as callback(3, 0, cb_arg).
The process is then repeated for prime q with callback(3, 1, cb_arg).
If key generation fails, RSA_generate_key() returns NULL; the error codes can be obtained by ERR_get_error(3).
callback(2, x, cb_arg) is used with two different meanings.
RSA_generate_key() goes into an infinite loop for illegal input values.
ERR_get_error(3), rand(3), rsa(3), RSA_free(3)
The cb_arg argument was added in SSLeay 0.9.0.
Latin Squares
Orthogonal Latin Squares
The following story is told by W. McWorter.
It was the early 1960's. A famous conjecture of Leonhard Euler on orthogonal latin squares had recently been proven false after more than 100 years of valiant effort. Now it seemed that everybody was
scrambling to construct orthogonal latin squares, including me. Even when a French visitor to Ohio State University, Dominique Foata, invited me to accompany him on his European vacation, I took along my trusty clipboard to jot down any ideas that might come to me suddenly.
Even when I was sitting in the beautiful garden of a friend of Dominique's in northern France, I was testing ideas on orthogonal latin squares. The lady of the house asked me, mercifully in English,
what I was doing. I told her I was working on a problem in mathematics but she wanted to know more. So I described the problem of the "trente six officiers" as a diplomatic problem for France and
Algeria. Algeria had won its independence in 1962 but problems remained and there were angry demonstrations on both sides.
We have six representatives from Algeria, six representatives from France, and six mediators. We want to schedule six tours of Paris to let the negotiating parties get to know each other as follows. Each representative from Algeria goes on all six tours with a French representative and a mediator in such a way that the Algerian representative tours with each of the six mediators and no mediator has to go on the same tour twice. The same conditions must also hold for the French representatives.
The lady wanted more details. I illustrated the situation with a schedule for three Algerian representatives A[1], A[2], and A[3], French representatives F[1], F[2], and F[3], tours t[1], t[2], and t[3], and mediators m[1], m[2], and m[3]. The table below provides a schedule.
F[1] F[2] F[3]
A[1] | t[1] m[1] t[2] m[2] t[3] m[3]
A[2] | t[2] m[3] t[3] m[1] t[1] m[2]
A[3] | t[3] m[2] t[1] m[3] t[2] m[1]
The table says that A[1] (row 1) and F[1] (column 1) go on tour 1 (t[1]) with mediator 1 (m[1]), A[2] (row 2) and F[3] (column 3) go on tour 1 with mediator 2, A[3] and F[2] go on tour 1 with
mediator m[3], etc. Since each row and column of the table have all tours and all mediators, all representatives go on all tours with all mediators. Also, the table is designed so that no mediator
goes on the same tour twice because each goes on all three tours.
I wish I had told the lady that the problem I described with six tours has no solution, but I had no idea that she would try to solve the problem. For, later, while I was attending the Salzburg festival in Austria, I received a long letter from the lady detailing her 'solution' to the problem! That a housewife had attempted to solve an abstract mathematical problem - this kind of thing was unheard of in the America of the early 1960s. I don't recall what I wrote to her in response, but I felt terrible about misleading her.
A pair of latin squares A=(a[ij]) and B=(b[ij]) are orthogonal iff the ordered pairs (a[ij],b[ij]) are distinct for all i and j. Here are a pair of orthogonal latin squares of order 3.
[Figure: latin squares A and B of order 3, shown side by side and superimposed.]
A and B are clearly latin squares and, when superimposed, you can see that all ordered pairs from corresponding square entries are distinct.
Orthogonal latin squares are generally hard to come by. There are no orthogonal latin squares of order 2 because there are only two latin squares of order 2 in the same symbols and they are not
orthogonal. There are orthogonal latin squares of order 3 as exemplified above. Orthogonal latin squares of order 4 exist but won't be exposed without a little struggle. Orthogonal latin squares of
order 5, or any odd order, on the other hand, are not so hard to find. Let A=(i+j) be the addition table for the integers modulo 2n+1 and let B=(2i+j), entries taken modulo 2n+1. Then A and B are
orthogonal latin squares. For, A is a latin square because it is the addition table for the integers modulo 2n+1 and B is a latin square because it is just A with its rows rearranged. To show that A
and B are orthogonal, suppose that for some i, j, k, l, the ordered pairs (i+j, 2i+j) and (k+l, 2k+l) are equal. Then i+j = k+l and 2i+j = 2k+l. Subtracting (modulo 2n+1) the first equation from the second, we get i = k, from which it follows that j = l. Hence all ordered pairs from different cells of the two latin squares are distinct, and the squares are orthogonal. Here is an example for order 5.
[Figure: the two orthogonal latin squares of order 5.]
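The construction and the orthogonality check translate directly into a few lines of code; here is a quick verification for order 5 (any odd modulus works the same way):

```python
n = 5
A = [[(i + j) % n for j in range(n)] for i in range(n)]       # addition table mod n
B = [[(2 * i + j) % n for j in range(n)] for i in range(n)]   # rows of A rearranged
# A and B are orthogonal iff superimposing them yields n^2 distinct pairs.
pairs = {(A[i][j], B[i][j]) for i in range(n) for j in range(n)}
assert len(pairs) == n * n
```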
There are no orthogonal latin squares of order 6, but it took a long, long time to find this out. Leonhard Euler came to believe that there are no orthogonal latin squares for orders 2, 6, 10, 14, and indeed for all orders of the form 4n+2. In 1900, G. Tarry proved that no orthogonal squares of order 6 exist, thus lending credibility to Euler's conjecture. Sixty years later, in 1960, it was shown by Bose, Shrikhande, and Parker that, except for this one case, the conjecture was false.
With Euler's conjecture settled, interest shifted to finding how many pairs of mutually orthogonal latin squares there are for a given order. It turns out that the maximum number of mutually
orthogonal latin squares of order n is at most n-1. In particular, there are at most 3 mutually orthogonal latin squares of order 4. We construct three such using a generalization of the integers
modulo n.
The integers modulo two under addition and multiplication form a number system like the real number system in that every nonzero number has a multiplicative inverse, that is, a reciprocal. In this system the quadratic equation x^2+x+1=0 has no solution. However, we can "adjoin" a solution, say a, of this equation to our number system to obtain a larger number system called the Galois
field of order 4 (named after the mathematician Evariste Galois who singlehandedly invented a whole field of mathematics before he was killed in a senseless duel when he was only 21 years old). If
you feel that a is not really a number, then you can understand why the ancient mathematicians balked at accepting the 'incommensurable' square root of 2 as a legitimate number. The addition table
for this number system is
0 1 a a+1
1 0 a+1 a
a a+1 0 1
a+1 a 1 0
Let A=(s+t) be the above latin square with s and t ranging over {0,1,a,a+1}. As we did earlier with modular arithmetic, but now with this new arithmetic, form the latin squares B=(as+t) and C=((a+1)s+t). Then, by the same argument used earlier with the integers modulo 2n+1, the latin squares A, B, and C are mutually orthogonal. Here are B and C.
0 1 a a+1 0 1 a a+1
a a+1 0 1 a+1 a 1 0
a+1 a 1 0 1 0 a+1 a
1 0 a+1 a a a+1 0 1
B C
(a0=0; a1=a; aa=a+1; a(a+1)=1)
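For readers who want to verify this mechanically, here is a small sketch encoding 0, 1, a, a+1 as 0, 1, 2, 3, with GF(4) addition as bitwise XOR and multiplication from the rules a·a = a+1, a·(a+1) = 1, (a+1)·(a+1) = a:

```python
def gf4_mul(s, t):
    # Multiplication in GF(4); elements encoded as 0, 1, a -> 2, a+1 -> 3.
    if s == 0 or t == 0:
        return 0
    if s == 1:
        return t
    if t == 1:
        return s
    return {(2, 2): 3, (2, 3): 1, (3, 2): 1, (3, 3): 2}[(s, t)]

# A = (s+t), B = (a*s+t), C = ((a+1)*s+t); addition in GF(4) is XOR.
squares = [[[gf4_mul(c, s) ^ t for t in range(4)] for s in range(4)]
           for c in (1, 2, 3)]
# Every pair of the three squares should be orthogonal: 16 distinct pairs.
for m in range(3):
    for k in range(m + 1, 3):
        pairs = {(squares[m][s][t], squares[k][s][t])
                 for s in range(4) for t in range(4)}
        assert len(pairs) == 16
```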
Euler began his research in this area with what he called a new kind of magic square. Let's end by applying his kind of magic squares to the old magic squares, those n by n squares which contain all
numbers from 1 to n^2, or equivalently, 0 to n^2-1, in such a way that all rows, columns, and diagonals have the same sum. Let's replace the symbols in the latin squares of order 4 above with 0, 1,
2, and 3, and use the squares B and C to form an array of two-digit numbers written in base 4.
00 11 22 33   converted    0  5 10 15
23 32 01 10   to base 10  11 14  1  4
31 20 13 02               13  8  7  2
12 03 30 21                6  3 12  9
Voila! An old fashioned magic square of size 4! All row, column, and diagonal sums are equal to the magic number 30. The row and column sums are all the same because latin squares B and C are
orthogonal. The diagonal sums are the same as those of the rows and columns because latin square A is orthogonal to both B and C, and one diagonal of A has all entries 0 and the other diagonal has
all entries a+1.
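We can check the whole construction in a few lines (digits 0-3 standing for 0, 1, a, a+1, each cell read as a two-digit base-4 number with the digit from B first):

```python
# Latin squares B and C over {0,1,2,3}, as in the tables above.
B = [[0, 1, 2, 3], [2, 3, 0, 1], [3, 2, 1, 0], [1, 0, 3, 2]]
C = [[0, 1, 2, 3], [3, 2, 1, 0], [1, 0, 3, 2], [2, 3, 0, 1]]
M = [[4 * B[i][j] + C[i][j] for j in range(4)] for i in range(4)]  # base-4 digits
sums = ([sum(row) for row in M] +                                  # rows
        [sum(M[i][j] for i in range(4)) for j in range(4)] +       # columns
        [sum(M[i][i] for i in range(4)),                           # main diagonal
         sum(M[i][3 - i] for i in range(4))])                      # anti-diagonal
assert all(s == 30 for s in sums)   # the magic number
```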
Copyright © 1996-2018
Alexander Bogomolny
Questions for PLOTTING
Answer the following questions
1. What is the average track (°M) and distance between KER NDB (N5210.9 W00931.5) and CRN NDB (N5318.1 W00856.5) on the Jeppesen E(LO)1 chart?
2. Given: SHA VOR (N5243.3 W00853.1) DME 50 NM, CRK VOR (N5150.4 W00829) DME 41 NM, aircraft heading 270°(M), both DME distances increasing. What is the aircraft position on the Jeppesen E(LO)1 chart?
3. Given: SHA VOR (N5243.3 W00853.1) radial 143°, CRK VOR (N5150.4 W00829.7) radial 050° on the Jeppesen E(LO)1 chart. What is the aircraft position? To solve this type of problem
4. Given : CRK VOR / DME (N5150.4 W00829.7) Kerry aerodrome (N5210.9 W00931.4) What is the CRK radial and DME distance when overhead Kerry aerodrome on the Jeppesen E(LO)1 chart ?
5. At position N53 40 W008 00 on the Jeppesen E(LO)1 chart, what is the magnetic bearing from the SHA VOR (N52 43.3 W008 53.1)?
6. Given: SHA VOR (N5243.3 W00853.1) radial 120°, CRK VOR (N5150.4 W00829.7) radial 033° on the Jeppesen E(LO)1 chart. What is the aircraft position?
7. What feature is shown on the chart at position N5311 W00637 on the Jeppesen E(LO)1 chart?
8. What is the radial and DME distance from CRK VOR/DME (N5150.4 W00829.7) to position N5220 W00810 on the Jeppesen E(LO)1 chart?
9. Given: SHA VOR/DME (N5243.3 W00853.1) radial 165°/36 NM on the Jeppesen E(LO)1 chart. What is the aircraft position?
10. Which of the following lists all the aeronautical chart symbols shown at position N5150.4 W00829.7 on the Jeppesen E(LO)1 chart?
11. What is the radial and DME distance from CRK VOR/DME (N5150.4 W00829.7) to position N5140 W00730 on the Jeppesen E(LO)1 chart?
12. SHA VOR/DME (N5243.3 W00853.1) radial 120°/35 NM on the Jeppesen E(LO)1 chart. What is the aircraft position?
13. What is the average track (°T) and distance between SHA VOR (N5243.3 W00853.1) and CON VOR (N5354.8 W00849.1) on the Jeppesen E(LO)1 chart?
14. What feature is shown on the chart at position N5212 W00612 on the Jeppesen E(LO)1 chart?
15. What is the average track (°T) and distance between BAL VOR (N5318.0 W00626.9) and CRN NDB (N5318.1 W00856.5) on the Jeppesen E(LO)1 chart?
16. Given : SHA VOR (N5243.3 W00853.1) CON VOR N5354.8 W00849.1 Aircraft position N5320 W00950 on the Jeppesen E(LO)1 chart, which of the following lists two radials that are applicable to the
aircraft position?
17. What is the radial and DME distance from CRK VOR/DME (N5150.4 W00829.7) to position N5210 W00920 on the Jeppesen E(LO)1 chart?
18. Which aeronautical chart symbol on page 186 indicates an exceptionally high lighted obstacle?
19. Which of the aeronautical chart symbols on page 186 indicates a DME?
20. What is the average track (°T) and distance between WTD NDB (N5211.3 W00705.0) and FOY NDB (N5234.0 W00911.7) on the Jeppesen E(LO)1 chart?
21. What is the average track (°M) and distance between WTD NDB (N5211.3 W00705.0) and KER NDB (N5210.9 W00931.5) on the Jeppesen E(LO)1 chart?
22. Given: SHA VOR (N5243.3 W00853.1) radial 223°, CRK VOR (N5150.4 W00829.7) radial 322° on the Jeppesen E(LO)1 chart. What is the aircraft position?
23. What is the average track (°T) and distance between SLG NDB (N5416.7 W00836.0) and CFN NDB (N5502.6 W00820.4) on the Jeppesen E(LO)1 chart?
24. An aircraft is on the 025° radial from SHA VOR (N52 43.3 W008 53.1) at 49 DME on the Jeppesen E(LO)1 chart. What is its position?
25. What is the average track (°T) and distance between CRN NDB (N5318.1 W00856.5) and EKN NDB (N5423.6 W00738.7) on the Jeppesen E(LO)1 chart?
26. What is the mean true track and distance from the BAL (N53 18.0 W006 26.9) to CRN (N53 18.1 W008 56.5) on the Jeppesen chart E(LO)?
27. What is the airport at N52 11 W009 32 on the Jeppesen E(LO)1 chart?
28. Given: SHA VOR (N5243.3 W00853.1) DME 41 NM, CRK VOR (N5150.4 W00829.7) DME 30 NM, aircraft heading 270°(M), both DME distances decreasing. What is the aircraft position on the Jeppesen E(LO)1 chart?
29. What is the average track (°M) and distance between BAL VOR (N5318.0 W00626.9) and SLG NDB (N5416.7 W00836.0) on the Jeppesen E(LO)1 chart?
30. What is the meaning of aeronautical chart symbol No 3 on page 186?
31. Given the following information determine the aircraft position using the Jeppesen chart E(LO)1? CRN (N53 18.1 W008 56.5) 18 DME SHA VOR (N52 43.3 W008 53.1) 20 DME Heading 270°M. Both DME ranges
32. What is the magnetic track and distance from Connaught VOR/DME (CON, N53 54.8 W008 49.1) to overhead Abbey Shrule aerodrome (N5336 W007 39)?
33. Which aeronautical chart symbol on page 186 indicates an aeronautical ground light?
34. The aircraft positiin is at N53 30 W008 00. At that position the VOR radial from SHA(N52 43.3 W008 53.1) on the Jeppesen E(LO)1 chart, would be (1) and the radial from CON (N53 54.8 W008 49.1)
would be (2). The correct combination is?
35. What feature is shown on the chart at position N5351 W00917 on the Jeppesen E(LO)1 chart?
36. Which aeronautical chart symbol on page 186 indicates a group of unlighted obstacles?
37. Given : SHA VOR (N5243.3 W00853.1) CRK VOR N5150.4 W00829.7 Aircraft position N5220 W00910 on the Jeppesen E(LO)1 chart, which of the following lists two radials that are applicable to the
aircraft position?
38. Given : SHA VOR/DME (N5243.3 W00853.1) radial 232°/32 NM on the Jeppesen E(LO)1 chart. What is the aircraft position?
39. What is the average track (°T) and distance between WTD NDB (N5211.3 W00705.0) and FOY NDB (N5234.0 W00911.7) on the Jeppesen E(LO)1 chart?
40. Which of the aeronautical chart symbols on page 186 indicates a VOR?
41. Given : CON VOR/DME (N5354.8 W00849.1) Abbey Shrule aerodrome (N5335 W00739) What is the CON radial and DME distance when overhead Abbey Shrule aerodrome on the Jeppesen E(LO)1 chart?
42. What is the radial and DME distance from SHA VOR/DME (N5243.3 W00853.3) to position N5300 W00940 on the Jeppesen E(LO)1 chart?
43. What is the approximate course (T) and distance Waterford NDB (WTD N52 11.3 W007 04.9) and Sligo NDB (SLG N54 16.7 W008 36.0) on the Jeppesen E(LO)1 chart?
44. Given: SHA VOR (N5243.3 W00853.1) radial 205°, CRK VOR (N5150.4 W00829.7) radial 317° on the Jeppesen E(LO)1 chart. What is the aircraft position?
45. What is the radial and DME distance from CRK VOR/DME (N51 50.4 W008 29.7) to position N5140 W00730 on the Jeppesen E(LO)1 chart?
46. Which aeronautical chart symbol on page 186 indicates a group of lighted obstacles?
47. What is the average track (°M) and distance between CRN NDB (N5318.1 W00856.5) and BEL VOR (N5439.7 W00613.8) on the Jeppesen E(LO)1 chart?
48. What is the radial and distance from SHA (N52 43.3 W008 53.1) to Birr (N53 04.0 W007 54.0) on the Jeppesen chart E(LO)1?
49. Which aeronautical chart symbol on page 186 indicates a lighted obstacle?
50. Given: CON VOR (N5354.8 W00849.1) DME 30 NM, CRN VOR (N5318.1 W00856.5) DME 25 NM, aircraft heading 270°(M), both DME distances decreasing. What is the aircraft position on the Jeppesen E(LO)1 chart?
How to Calculate a Planet’s Mass
Planets are big. Strike that, they're massive. It's hard to comprehend just how much stuff there is packed into one planet. Take Earth, for instance. The mass of our home planet is 5.972 × 10^24 kg.
Can you even comprehend that number? It gets even crazier when you consider how much bigger Jupiter is. These sizes are staggering, but how do we know the exact masses? What do we use to calculate
the mass of not just our planet, but all the planets in our solar system? Here’s how it works.
1. Gravity has to be measured.
Before you can calculate mass, you need to know the force of gravity. But how on earth do you measure something that’s invisible? The answer is in the question. We’re on Earth, and we experience
gravity on a daily basis. It keeps us firmly rooted to our home world. Surely we should be able to measure the force, right? That’s exactly what Henry Cavendish thought, and he succeeded. He was able
to measure what we now call the gravitational constant, represented as G, which is equal to 6.67 × 10^-11 m^3/(kg·s^2).
2. Newton’s Laws tell us the relationship between gravity and mass.
Mass and gravity are directly related, and the laws derived by Newton prove this. He figured out that the gravitational pull between any two objects is dependent on their distance and masses. Using
the gravitational constant and the weight of something on Earth, such as yourself (since your weight is a measurement of the force of gravity exerted between your body and the center of the Earth),
you can determine the mass of Earth.
3. Observe, record, and plug in the numbers.
While we can stand on the Earth and directly measure gravity and mass here, it’s a little harder for other planets. What we do instead is observe objects orbiting around other planets, such as their
moons. By observing how closely the moon orbits and at what speed, we can determine the force of gravity between the planet and its moon. Just as on Earth, once you know the force of gravity you
can figure out the mass of both objects. To do so, use this formula (Newton's version of Kepler's third law): p^2 = 4π^2 a^3 / (G (M1 + M2)).
In this formula, “p” is the orbital period, “G” is the gravitational constant, “M1” and “M2” are the masses of the two objects, and “a” is the average distance between the centers of the two objects.
In certain cases, when one object is far more massive than the other, you can just use the mass of the larger object since the combined mass of the two objects will be a close approximation of the
mass of the larger one alone.
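As a quick illustration of the method, here is a sketch that recovers the Earth's mass from the Moon's orbit using the relation above; the orbital values are rounded reference figures, not taken from the article.

```python
import math

# Gravitational constant and the Moon's orbit about the Earth
# (semi-major axis and sidereal period; rounded reference values).
G = 6.674e-11           # m^3 / (kg * s^2)
a = 3.844e8             # m, average Earth-Moon distance
p = 27.322 * 86400.0    # s, sidereal orbital period

# Kepler's third law: M1 + M2 = 4 * pi^2 * a^3 / (G * p^2).
# The Moon is so much lighter than the Earth that the sum is
# essentially the Earth's mass alone.
earth_mass = 4 * math.pi**2 * a**3 / (G * p**2)
print(f"{earth_mass:.3e} kg")
# About 6.0e24 kg; the ~1% excess over the quoted 5.972e24 kg is
# mostly the Moon's own mass, which the sum M1 + M2 includes.
```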
Principles of Projectile Motion (1.1.4) | IB DP Physics 2025 SL Notes | TutorChase
The study of projectile motion provides insights into the behaviour of objects launched into the air, tracing a path dictated by the gravitational force while ignoring air resistance. This intricate
dance between vertical and horizontal movements manifests in distinct yet interconnected ways, offering a rich field for exploration and understanding.
Comprehensive Analysis of Projectiles
The quest to understand projectile motion begins with an intricate examination of its fundamentals. Here, gravitational force reigns supreme, drawing objects towards the Earth and shaping their trajectories.
Vertical Motion
In the realm of vertical motion, gravity plays the leading role.
• Influence of Gravity: Every projectile, irrespective of its size, shape, or mass, is subject to a constant acceleration due to gravity of approximately 9.8 m/s^2 on Earth's surface. This
universal pull dictates the ascent and descent of all projectiles.
• Initial Vertical Velocity: It depends on both the speed of launch and the angle of projection. This component diminishes as the object ascends, reaching zero at the peak of the trajectory before
increasing again during descent, under the persistent pull of gravity.
• Time of Flight: This is the duration for which the projectile remains in the air. It is intimately tied to the initial vertical velocity and the unyielding acceleration due to gravity.
Horizontal Motion
On the horizontal plane, a different set of rules applies.
• Constant Velocity: The absence of air resistance ensures that the horizontal component of velocity remains unaltered throughout the flight. It's a world of uniform motion, unimpeded by external forces.
• Displacement: This measure of the horizontal distance covered is a product of the unchanging horizontal velocity and the time of flight, offering insights into the projectile’s journey from
launch to landing.
Horizontal Projectile Motion
Image Courtesy CK12
Application of Motion Equations
Equations of motion are instrumental in dissecting the behaviour of projectiles, casting light on their vertical and horizontal components.
Vertical Component
• Equation of Motion: The constant pull of gravity lends itself to analysis through the equations of motion. For example, the final vertical velocity can be elucidated using v = u + gt.
• Maximum Height: This pinnacle of the projectile's journey is unveiled when the final vertical velocity pauses at zero. The vertical kinetic energy is momentarily exhausted, with potential energy at its maximum.
Horizontal Component
• Uniform Motion: A world untainted by air resistance sees the horizontal velocity maintaining its initial fervour, making the horizontal equation of motion a tale of uniform movement.
Understanding the Parabolic Nature
In the grand theatre of projectile motion, the parabolic trajectory is a spectacle of mathematical and physical harmony.
Key Features
• Symmetry: Like a reflection in still waters, the ascending and descending paths of the projectile mirror each other when launched at an angle.
• Maximum Height: This zenith in the vertical sojourn occurs at the midpoint of the horizontal displacement.
• Range: Dependent on both the launch speed and angle, it unveils the total horizontal expanse conquered by the projectile.
Trajectory Equation
Diving deeper, the equation of the trajectory, though not a requisite, offers profound insights.
y = x * tan(theta) - (gx^2) / (2v0^2 * cos^2(theta))
Here, every symbol, every variable, weaves into the narrative of the projectile's flight, from the gravitational pull to the initial velocity's dual dance of magnitude and direction.
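The trajectory equation can be sanity-checked against the separate horizontal and vertical equations of motion, since eliminating time from x(t) and y(t) is exactly how it is derived; the launch speed and angle below are arbitrary illustrative values.

```python
import math

g = 9.8                    # m/s^2
v0 = 25.0                  # m/s, assumed launch speed
theta = math.radians(40)   # assumed launch angle

def y_of_x(x):
    """Height from the closed-form trajectory equation."""
    return x * math.tan(theta) - g * x**2 / (2 * v0**2 * math.cos(theta)**2)

# Cross-check: the same point from the parametric equations of motion.
t = 1.2                                  # s, any time before landing
x = v0 * math.cos(theta) * t             # uniform horizontal motion
y = v0 * math.sin(theta) * t - 0.5 * g * t**2
assert abs(y_of_x(x) - y) < 1e-9         # both routes agree
```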
Studying Various Launch Angles
The angle at which a projectile is launched is not just a geometric entity but a profound influencer of the trajectory and range.
Horizontal Launch
• Zero Launch Angle: In the world of horizontal launches, the vertical velocity at the onset is nil, and gravity immediately asserts its pull.
• Trajectory: Even here, the path carved is parabolic, a testament to the constancy of horizontal motion and the accelerated dance of vertical movement.
Angled Launch
• Initial Velocities: Here, the initial speed unfurls into horizontal and vertical components, each telling a tale of motion influenced by the angle of launch.
• Optimal Angle for Maximum Range: At 45 degrees, the range reaches its pinnacle, a sweet spot where the horizontal and vertical velocities collaborate to extend the flight.
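The optimal-angle claim can be checked numerically with the level-ground range formula R = v0^2 sin(2θ)/g; the launch speed is an arbitrary assumed value, and the equal ranges of complementary angles (e.g. 30 and 60 degrees) fall out of the same formula.

```python
import math

v0, g = 20.0, 9.8   # assumed launch speed (m/s) and gravity

def level_range(angle_deg):
    """Range on level ground: R = v0^2 * sin(2*theta) / g."""
    return v0**2 * math.sin(math.radians(2 * angle_deg)) / g

# Sweep integer launch angles: the range peaks at 45 degrees, and
# complementary angles share the same range.
best = max(range(1, 90), key=level_range)
assert best == 45
assert abs(level_range(30) - level_range(60)) < 1e-9
```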
Image Courtesy Khan Academy
Practical Applications and Insights
The theoretical constructs and mathematical equations governing projectile motion are not confined to textbooks but breathe life into real-world scenarios. In the absence of air resistance,
principles of projectile motion find applications in an array of fields.
Sports Science
• Optimising Performance: Athletes, especially in sports like basketball and football, can harness insights from projectile principles to enhance their performance. Understanding the effect of
launch angles and speeds on the range and trajectory can inform strategies for optimal goal scoring and passing.
• Structural Analysis: Engineers often turn to the principles of projectile motion to predict the trajectories of objects, aiding in the design of structures that can withstand impacts or
facilitate specific types of motion.
• Space Exploration: The launch of satellites and space vehicles demands a nuanced understanding of projectile motion. It’s a dance of physics and engineering, where calculations of speed, angle,
and trajectory are pivotal in transcending Earth’s gravitational pull and venturing into space.
Through a meticulous exploration of these principles, learners step into a world where mathematics and physics intertwine, laying a robust foundation for advanced studies and real-world applications
in the awe-inspiring dance of projectiles under gravity’s unyielding gaze. Each equation, each principle, echoes the symphony of forces and motions painting the canvas of the physical universe.
Yes, it is possible for two different launch angles to result in the same range, a concept often referred to as complementary angles. For example, angles of 30 and 60 degrees can yield the same range
but different maximum heights and times of flight. This is because the horizontal and vertical components of the initial velocity have effectively swapped, maintaining the same overall kinetic energy
and, consequently, the same range. However, the projectile's flight characteristics, such as the maximum height reached and the time of flight, will vary due to the different distributions of the
velocity components.
The symmetrical nature of the trajectory when a projectile is launched at an angle is a consequence of the consistent acceleration due to gravity and the initial velocity components. The horizontal
component of velocity remains constant throughout the flight. In contrast, the vertical component is influenced by gravity, causing the projectile to slow down as it ascends and speed up at the same
rate as it descends. The equal influence of gravity on the upward and downward motions results in symmetrical time intervals and distances for the ascent and descent, creating a mirror-like trajectory.
The launch height significantly impacts the range of a projectile when launched horizontally. Since there is no initial vertical velocity, the time of flight is solely determined by how long it takes
for the projectile to fall from its launch height to the ground under gravity. The greater the height, the more time it will take for the projectile to reach the ground, and consequently, the greater
the horizontal distance it will cover, assuming a constant horizontal velocity. It's essential to note that this is under the assumption of no air resistance and a constant gravitational pull
throughout the projectile's flight.
Altering the launch angle directly affects both the time of flight and maximum height attained by a projectile. A larger launch angle increases the initial vertical velocity component, leading to a
higher maximum height and a longer time of flight. However, there is an optimal angle, typically 45 degrees, where the range is maximised due to a balanced contribution from both the vertical and
horizontal velocity components. Beyond this angle, although the projectile reaches a greater height and stays in the air longer, the horizontal distance covered (or range) starts to decrease due to a
reduced horizontal velocity component.
The initial speed and angle of projection together play a pivotal role in determining the subsequent motion of a projectile. The initial speed directly impacts the magnitude of the horizontal and
vertical components of velocity. A greater initial speed results in a longer range, higher maximum height, and longer time of flight. Simultaneously, the angle of projection determines the
distribution of this speed into its horizontal and vertical components. A balanced angle, like 45 degrees, often maximises the range by optimally distributing the initial speed between the two
directions, ensuring that the projectile covers the maximum possible horizontal distance before hitting the ground.
Practice Questions
A projectile is launched horizontally from a cliff with an initial speed of 20 m/s. Calculate the time it takes for the projectile to hit the ground 50 m below. Also, determine the horizontal
distance it travels during this time. Neglect air resistance.
The time it takes for the projectile to hit the ground can be calculated using the second equation of motion, s = ut + 1/2at^2, where s is the vertical distance, u is the initial vertical velocity, t
is the time, and a is the acceleration due to gravity. Since the projectile is launched horizontally, the initial vertical velocity is zero. Substituting the given values, 50 = 0 + 1/2 * 9.8 * t^2,
we find t ≈ 3.2 s. Now, using the horizontal motion equation, s = ut, where u is the horizontal velocity, and t is the time, we have s = 20 * 3.2 ≈ 64 m. So, the projectile hits the ground after
approximately 3.2 seconds and travels about 64 meters horizontally.
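The same arithmetic in a few lines, with the values taken straight from the question:

```python
import math

# Horizontal launch: u = 20 m/s, cliff height 50 m, g = 9.8 m/s^2,
# no air resistance.
g, u, h = 9.8, 20.0, 50.0

t = math.sqrt(2 * h / g)   # from h = (1/2) g t^2, zero initial vertical speed
x = u * t                  # horizontal distance at constant velocity

print(round(t, 1), round(x, 1))   # about 3.2 s and 63.9 m
```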
A ball is kicked with an initial velocity of 25 m/s at an angle of 40 degrees to the horizontal. Determine the maximum height attained by the ball. Ignore air resistance.
To find the maximum height, we first need to calculate the initial vertical velocity using the formula u = u0 * sin(θ), where u0 is the initial velocity and θ is the angle of projection. Substituting
in the given values, u = 25 * sin(40) ≈ 16 m/s. Now, we use the third equation of motion, v^2 = u^2 + 2as, where v is the final vertical velocity, u is the initial vertical velocity, a is the
acceleration (due to gravity in this case), and s is the distance. At the maximum height, the final vertical velocity is zero. Rearranging and solving the equation, we get s = u^2 / (2 * g) = (16^2)
/ (2 * 9.8) ≈ 13 m. Therefore, the maximum height attained by the ball is approximately 13 meters.
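The arithmetic for this question can likewise be checked in a few lines:

```python
import math

# Maximum height: v0 = 25 m/s at 40 degrees, g = 9.8 m/s^2.
g, v0, theta = 9.8, 25.0, math.radians(40)

u = v0 * math.sin(theta)   # initial vertical velocity, about 16 m/s
h_max = u**2 / (2 * g)     # from v^2 = u^2 - 2 g s with v = 0 at the top

print(round(h_max, 1))     # about 13.2 m
```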
[Resource Topic] 2024/1716: Rate-1 Statistical Non-Interactive Zero-Knowledge
Welcome to the resource topic for 2024/1716
Rate-1 Statistical Non-Interactive Zero-Knowledge
Authors: Pedro Branco, Nico Döttling, Akshayaram Srinivasan
We give the first construction of a rate-1 statistical non-interactive zero-knowledge argument of knowledge. For the \mathsf{circuitSAT} language, our construction achieves a proof length of |w| + |w|^\epsilon \cdot \mathsf{poly}(\lambda), where w denotes the witness, \lambda is the security parameter, \epsilon is a small constant less than 1, and \mathsf{poly}(\cdot) is a fixed polynomial that
is independent of the instance or the witness size. The soundness of our construction follows from either the LWE assumption, or the O(1)-\mathsf{LIN} assumption on prime-order groups with
efficiently computable bilinear maps, or the sub-exponential DDH assumption. Previously, Gentry et al. (Journal of Cryptology, 2015) achieved NIZKs with statistical soundness and computational
zero-knowledge with the aforementioned proof length by relying on the Learning with Errors (LWE) assumption.
ePrint: https://eprint.iacr.org/2024/1716
See all topics related to this paper.
Feel free to post resources that are related to this paper below.
Example resources include: implementations, explanation materials, talks, slides, links to previous discussions on other websites.
For more information, see the rules for Resource Topics.
Pradipto Das - SUNY Buffalo, CSE Dept. Homepage
(+) I am happy to declare University at Buffalo, the State University of New York to be my alma mater!
(+) Important results on matrix algebra for statistics can be found in this
Matrix Cookbook
(+) Jonathan Shewchuk's
Painless Conjugate Gradient (PCG)
≈ 110 word summary
from PCG: The intuition behind CG is easier to understand for a quadratic function f(x) as follows: In the case where the coefficient matrix A in f(x) with x ∈ R^P is not diagonal (for e.g., if we flatten an f(x) with x ∈ R^2 onto a two-dimensional plane one latitude at a time), the rings or contours corresponding to the latitudes on the surface of f will not be axes-aligned ellipses. CG transforms x such that the latter constitutes a basis formed by a set of P orthogonal search directions obtained from searching for the optimum x. This new basis makes the contours of the convex f(x) axes-aligned so that minimization can happen in P steps.
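A minimal sketch of the conjugate gradient algorithm the summary describes; the 3×3 system is an arbitrary illustrative example, and in exact arithmetic the loop would terminate within P = 3 steps.

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10):
    """Minimize f(x) = 1/2 x^T A x - b^T x for symmetric positive
    definite A, i.e. solve A x = b, by building A-conjugate search
    directions; exact arithmetic finishes in at most P steps."""
    x = np.zeros_like(b)
    r = b - A @ x          # residual = negative gradient of f
    d = r.copy()           # first search direction
    for _ in range(len(b)):
        if np.linalg.norm(r) < tol:
            break
        alpha = (r @ r) / (d @ A @ d)   # exact line search along d
        x = x + alpha * d
        r_new = r - alpha * (A @ d)
        beta = (r_new @ r_new) / (r @ r)
        d = r_new + beta * d            # next A-conjugate direction
        r = r_new
    return x

# A non-diagonal SPD system in R^3: its contours are tilted ellipsoids.
A = np.array([[4.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 2.0]])
b = np.array([1.0, 2.0, 3.0])
x = conjugate_gradient(A, b)
```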
Next generation challenge
Can a machine automatically generate this summary from PCG?
This is similar to making an artificial assistant who can study and sum up a quantum phenomenon saying "Everything that can happen does happen."
All views expressed in this subset of web pages are my own and do not necessarily reflect the views of my employer.
Woodworking Tools??
I am building a robot out of wood. Could you tell me what tools would be best to use to cut 1/4" or 1/8" wood? This is a small home project, and thus, I want to use small and easy-to-use tools.
I read that the 'jigsaw' is pretty useful. Can anyone elaborate on that? How flexible is it? Is it small enough to be used in my garage?
And also... is it easier to manipulate HDPE than it is to work with wood? (Consider the fact that I am working at home.)
Thanks a lot...
Waiting for your reply.
PrepTest 79, Game 4, Question 18
The content provides a detailed strategy for solving a global question in a logic game, focusing on the process of elimination using given rules to find a possible route of a virus from the first
computer to Q.
• Introduction to a global question in a logic game, emphasizing the importance of using rules to eliminate incorrect answers.
• Application of the third rule to eliminate an answer choice that incorrectly shows S passing to R.
• Utilization of the fourth and fifth rules to further narrow down the possible answers by identifying which computers can pass the virus to Q and P.
• Detailed analysis of the remaining answer choices, leading to the elimination of one that violates the rule regarding S's transmission capabilities.
• Confirmation that answer choice D is correct through logical deduction and rule application, demonstrating a methodical approach to solving the question.
Understanding the Global Question
Applying Rules to Eliminate Answers
Analyzing Remaining Choices
Confirming the Correct Answer
Why do we use the Gauss-Seidel method in power systems?
The reason the Gauss–Seidel method is commonly known as the successive displacement method is that the second unknown is determined from the first unknown in the current iteration, the third unknown is determined from the first and second unknowns, and so on.
What is the formula for Gauss-Seidel method?
Let's apply the Gauss-Seidel method to the system from Example 1: x1(1) = 3/4 = 0.750, x2(1) = [9 + 2(0.750)] / 6 = 1.750.
What are the methods of power flow?
In this paper three methods for load flow analysis have been used: the Gauss-Seidel method, the Newton-Raphson method and the Fast-Decoupled method. The three load flow methods have been compared on the basis of the number of iterations obtained. In a power system, power flows from generating stations to load centres.
What are the advantages of Newton Raphson power flow method over Gauss Seidel power flow method?
Compared to Gauss-Seidel method, Newton-Raphson method takes
• less number of iterations and more time per iteration.
• less number of iterations and less time per iteration.
• more number of iterations and more time per iteration.
• more number of iterations and less time per iteration.
How does Gauss-Seidel work?
The Gauss-Seidel method is a modification of the Gauss (Jacobi) iteration method that reduces the number of iterations. In this method the newly calculated value of each unknown is used immediately within the same iteration, whereas in the Jacobi method the calculated values replace the earlier values only at the end of an iteration.
What is Gauss-Seidel method with example?
Example: 2x + 5y = 21, x + 2y = 8. The coefficient matrix of the given system is not diagonally dominant, so we re-arrange the equations to bring the larger coefficients onto the diagonal before solving by the Gauss-Seidel method.
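A minimal sketch of the iteration on this example, with the equations re-ordered as x + 2y = 8 and 2x + 5y = 21. Note that strict diagonal dominance actually still fails in row one (|1| < |2|), but the re-ordered matrix is symmetric positive definite, which is also sufficient for Gauss-Seidel to converge.

```python
def gauss_seidel_2x2(iterations=120):
    """Gauss-Seidel on x + 2y = 8, 2x + 5y = 21."""
    x, y = 0.0, 0.0          # initial guess
    for _ in range(iterations):
        x = 8.0 - 2.0 * y            # solve equation 1 for x
        y = (21.0 - 2.0 * x) / 5.0   # solve equation 2 for y, using the new x
    return x, y

x, y = gauss_seidel_2x2()
print(round(x, 4), round(y, 4))   # approaches the exact solution x = -2, y = 5
```

Each pass uses the freshly updated x when computing y, which is exactly the "successive displacement" behaviour described above.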
Why Gauss-Seidel method is better than Gauss Jacobi method?
The Gauss-Seidel method is more efficient than the Jacobi method because it requires fewer iterations to converge to the actual solution with a certain degree of accuracy.
What is the main disadvantage of Gauss Seidel method?
sensitivity to the choice of slack bus.
What is power flow diagram?
The Power Flow Diagram is used to determine the efficiency of a generator or motor. In the power flow diagram of a DC generator, the mechanical power given as input is converted, and the output is obtained in the form of electrical power.
Why is Newton-Raphson better than Gauss Seidel?
Due to its good computational characteristics, the Gauss-Seidel method is useful for small systems with low computational complexity, whereas the Newton-Raphson method is the most effective and reliable one owing to its fast convergence and accuracy.
Does Gauss-Seidel always work?
These methods do not always work. However, there is a class of square matrices for which we can prove they do work: the class of strictly diagonally dominant matrices. One should also expect the method to converge if the matrix is diagonally dominant.
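Strict diagonal dominance is easy to test programmatically; a small sketch:

```python
def is_strictly_diagonally_dominant(A):
    """True when each diagonal entry exceeds, in absolute value, the
    sum of the absolute values of the other entries in its row."""
    n = len(A)
    return all(
        abs(A[i][i]) > sum(abs(A[i][j]) for j in range(n) if j != i)
        for i in range(n)
    )

# A classic sufficient (not necessary) condition for Gauss-Seidel
# and Jacobi to converge.
print(is_strictly_diagonally_dominant([[4, 1, 1], [1, 5, 2], [0, 1, 3]]))  # True
print(is_strictly_diagonally_dominant([[1, 2], [2, 5]]))                   # False
```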
Which method is similar to Gauss-Seidel method?
Yes, Gauss Jacobi or Jacobi method is an iterative method for solving equations of diagonally dominant system of linear equations.
Does Gauss-Seidel method always converge?
The Gauss-Seidel method is an iterative technique whose solution may or may not converge. Convergence is only ensured if the n×n coefficient matrix A is diagonally dominant; otherwise the method may or may not converge.
What are the applications of Gauss-Seidel method?
The application of the Gauss–Seidel diagonal element isolation method is examined for obtaining an iterative solution of the system of thermal-radiation transfer equations for absorbing, radiating,
and scattering media.
Why is Gauss-Seidel less accurate?
The Gauss Seidel method requires the fewest number of arithmetic operations to complete an iteration. This is because of the sparsity of the network matrix and the simplicity of the solution
Which method is best for fast load flow solution?
The effective and most reliable amongst the three load flow methods is the Newton-Raphson method because it converges fast and is more accurate.
What is power flow equation?
Real and reactive power can be calculated from the following equations:
P_i = Re{ V_i* ( V_i Σ_{j=0, j≠i} y_ij − Σ_{j=1, j≠i} y_ij V_j ) } = Re{ V_i* ( V_i Y_ii + Σ_{j=1, j≠i} Y_ij V_j ) }
Q_i = −Im{ V_i* ( V_i Σ_{j=0, j≠i} y_ij − Σ_{j=1, j≠i} y_ij V_j ) } = −Im{ V_i* ( V_i Y_ii + Σ_{j=1, j≠i} Y_ij V_j ) }
or, in polar form,
P_i = Σ_{j=1}^{n} |V_i| |V_j| |Y_ij| cos(θ_ij − δ_i + δ_j)
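The rectangular and polar forms above are two views of the same complex power injection S_i = V_i (Σ_j Y_ij V_j)*; this sketch checks them against each other on a small made-up two-bus system (all numbers are illustrative per-unit values, not from the text).

```python
import numpy as np

# Made-up 2-bus admittance matrix and bus voltages (per unit).
Y = np.array([[2.0 - 6.0j, -2.0 + 6.0j],
              [-2.0 + 6.0j, 2.0 - 6.0j]])
delta = np.array([0.0, -0.05])   # bus voltage angles delta_i (rad)
Vm = np.array([1.0, 0.95])       # bus voltage magnitudes |V_i|
V = Vm * np.exp(1j * delta)

S = V * np.conj(Y @ V)           # P_i + j Q_i at each bus
P, Q = S.real, S.imag

# Polar form: P_i = sum_j |V_i||V_j||Y_ij| cos(theta_ij - delta_i + delta_j)
theta = np.angle(Y)              # admittance angles theta_ij
Ym = np.abs(Y)
P_polar = np.array([
    sum(Vm[i] * Vm[j] * Ym[i, j] * np.cos(theta[i, j] - delta[i] + delta[j])
        for j in range(2))
    for i in range(2)
])
assert np.allclose(P, P_polar)   # both forms agree
```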
What is the difference between power flow and load flow?
A load flow study is especially valuable for a system with multiple load centers, such as a refinery complex. The power-flow study is an analysis of the system’s capability to adequately supply the
connected load. The total system losses, as well as individual line losses, also are tabulated.
What is limitation of Gauss-Seidel method?
9. What is the limitation of the Gauss-Seidel method? Explanation: It does not guarantee convergence for each and every matrix. Convergence is only guaranteed if the matrix is strictly diagonally dominant or symmetric positive definite.
What is the limitation of Gauss method?
Answer: c) It doesn't guarantee convergence for each and every matrix. Extra information: convergence is not guaranteed in general because it is only ensured when the matrix is strictly diagonally dominant or symmetric positive definite.
Where is error in Gauss-Seidel method?
Basic Procedure:
1. Algebraically solve each linear equation for x_i.
2. Assume an initial guess solution array.
3. Solve for each xi and repeat.
4. Use absolute relative approximate error after each iteration to check if error is within a pre-specified tolerance.
What are the techniques that can be used to improve Gauss-Seidel convergence?
Multigrid methods instead apply a few iterations of e.g. Gauss-Seidel on your original mesh, then approximate the smooth residual on a coarser mesh and apply a few iterations there, approximate on an even coarser mesh, and so on until finally the mesh is so coarse that a direct solver is efficient and the problem is solved on …
2.3 Time, Velocity, and Speed
Learning Objectives
By the end of this section, you will be able to do the following:
• Explain the relationships between instantaneous velocity, average velocity, instantaneous speed, average speed, displacement, and time
• Calculate velocity and speed given initial position, initial time, final position, and final time
• Derive a graph of velocity vs. time given a graph of position vs. time
• Interpret a graph of velocity vs. time
The information presented in this section supports the following AP® learning objectives and science practices:
• 3.A.1.1 The student is able to express the motion of an object using narrative, mathematical, and graphical representations. (S.P. 1.5, 2.1, 2.2)
• 3.A.1.3 The student is able to analyze experimental data describing the motion of an object and is able to express the results of the analysis using narrative, mathematical, and graphical
representations. (S.P. 5.1)
There is more to motion than distance and displacement. Questions such as “How long does a foot race take?” and “What was the runner's speed?” cannot be answered without an understanding of other
concepts. In this section, we add definitions of time, velocity, and speed to expand our description of motion.
As discussed in Physical Quantities and Units, the most fundamental physical quantities are defined by how they are measured. This is the case with time. Every measurement of time involves measuring
a change in some physical quantity. It may be a number on a digital clock, a heartbeat, or the position of the Sun in the sky. In physics, the definition of time is simple—time is change, or the
interval over which change occurs. It is impossible to know that time has passed unless something changes.
The amount of time or change is calibrated by comparison with a standard. The SI unit for time is the second, abbreviated s. We might, for example, observe that a certain pendulum makes one full
swing every 0.75 s. We could then use the pendulum to measure time by counting its swings or, of course, by connecting the pendulum to a clock mechanism that registers time on a dial. This allows us
to not only measure the amount of time but also to determine a sequence of events.
How does time relate to motion? We are usually interested in elapsed time for a particular motion, such as how long it takes an airplane passenger to get from his seat to the back of the plane. To
find elapsed time, we note the time at the beginning and end of the motion and subtract the two. For example, a lecture may start at 11:00 a.m. and end at 11:50 a.m., so that the elapsed time would
be 50 min. Elapsed time $\Delta t$ is the difference between the ending time and beginning time,
2.4 $\Delta t = t_f - t_0,$
where $\Delta t$ is the change in time or elapsed time, $t_f$ is the time at the end of the motion, and $t_0$ is the time at the beginning of the motion. As usual, the delta symbol, $\Delta$, means the change in the quantity that follows it.
Life is simpler if the beginning time $t_0$ is taken to be zero, as when we use a stopwatch. If we were using a stopwatch, it would simply read zero at the start of the lecture and 50 min at the end. If $t_0 = 0$, then $\Delta t = t_f \equiv t.$
In this text, for simplicity's sake
• motion starts at time equal to zero $(t_0 = 0)$, and
• the symbol $t$ is used for elapsed time unless otherwise specified $(\Delta t = t_f \equiv t).$
Your notion of velocity is probably the same as its scientific definition. You know that if you have a large displacement in a small amount of time you have a large velocity, and that velocity has
units of distance divided by time, such as miles per hour or kilometers per hour.
Average Velocity
Average velocity is displacement (change in position) divided by the time of travel,
2.5 $\bar{v} = \frac{\Delta x}{\Delta t} = \frac{x_f - x_0}{t_f - t_0},$
where $\bar{v}$ is the average (indicated by the bar over the $v$) velocity, $\Delta x$ is the change in position (or displacement), and $x_f$ and $x_0$ are the final and beginning positions at times $t_f$ and $t_0$, respectively. If the starting time $t_0$ is taken to be zero, then the average velocity is simply
2.6 $\bar{v} = \frac{\Delta x}{t}.$
Notice that this definition indicates that velocity is a vector because displacement is a vector. It has both magnitude and direction. The SI unit for velocity is meters per second or m/s, but many
other units, such as km/h, mi/h (also written as mph), and cm/s, are in common use. Suppose, for example, an airplane passenger took 5 seconds to move −4 m—the minus sign indicates that displacement
is toward the back of the plane. His average velocity would be
2.7 $\bar{v} = \frac{\Delta x}{t} = \frac{-4\ \text{m}}{5\ \text{s}} = -0.8\ \text{m/s}.$
The minus sign indicates the average velocity is also toward the rear of the plane.
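The sign arithmetic above can be checked with a short calculation. The following Python sketch (the function name is illustrative, not from the text) applies Equation 2.6 to the airplane-passenger example:

```python
def average_velocity(delta_x, delta_t):
    """Average velocity = displacement / elapsed time (Equation 2.6)."""
    return delta_x / delta_t

# Passenger moves -4 m (toward the rear of the plane) in 5 s
v_bar = average_velocity(-4.0, 5.0)
print(v_bar)  # -0.8 (m/s); the minus sign carries the direction
```

Because displacement is signed, the direction of motion survives the division, which is exactly what distinguishes velocity from speed in the next paragraphs.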
The average velocity of an object does not tell us anything about what happens to it between the starting point and ending point, however. For example, we cannot tell from average velocity whether
the airplane passenger stops momentarily or backs up before he goes to the back of the plane. To get more details, we must consider smaller segments of the trip over smaller time intervals.
The smaller the time intervals considered in a motion, the more detailed the information. When we carry this process to its logical conclusion, we are left with an infinitesimally small interval.
Over such an interval, the average velocity becomes the instantaneous velocity, or the velocity at a specific instant. A car's speedometer, for example, shows the magnitude (but not the direction) of
the instantaneous velocity of the car. Police give tickets based on instantaneous velocity, but when calculating how long it will take to get from one place to another on a road trip, you need to use
average velocity. Instantaneous velocity $v$ is the average velocity at a specific instant in time or over an infinitesimally small time interval.
Mathematically, finding instantaneous velocity, $v$, at a precise instant $t$ can involve taking a limit, which is a calculus operation beyond the scope of this text.
However, under many circumstances, we can find precise values for instantaneous velocity without calculus.
In everyday language, most people use the terms speed and velocity interchangeably. In physics, however, they do not have the same meaning and they are distinct concepts. One major difference is that
speed has no direction. Thus speed is a scalar. Just as we need to distinguish between instantaneous velocity and average velocity, we also need to distinguish between instantaneous speed and average speed.
Instantaneous speed is the magnitude of instantaneous velocity. For example, suppose the airplane passenger at one instant had an instantaneous velocity of −3.0 m/s—the minus meaning toward the rear
of the plane. At that same time, his instantaneous speed was 3.0 m/s. Or suppose that at one time during a shopping trip your instantaneous velocity is 40 km/h due north. Your instantaneous speed at
that instant would be 40 km/h—the same magnitude but without a direction. Average speed, however, is very different from average velocity. Average speed is the distance traveled divided by elapsed time.
We have noted that distance traveled can be greater than displacement. So average speed can be greater than average velocity, which is displacement divided by time. For example, if you drive to a
store and return home in half an hour, and your car's odometer shows the total distance traveled was 6 km, then your average speed was 12 km/h. Your average velocity, however, was zero, because your
displacement for the round trip is zero. Displacement is change in position and, thus, is zero for a round trip. Thus, average speed is not simply the magnitude of average velocity.
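The round-trip example can be sketched the same way; the variable names below are illustrative:

```python
# Round trip to the store in half an hour
distance_km = 6.0        # odometer: total path length traveled
displacement_km = 0.0    # you end up exactly where you started
elapsed_h = 0.5

average_speed = distance_km / elapsed_h          # distance / time
average_velocity = displacement_km / elapsed_h   # displacement / time
print(average_speed, average_velocity)  # 12.0 0.0
```

The two quantities divide different numerators by the same elapsed time, which is why average speed is not simply the magnitude of average velocity.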
Another way of visualizing the motion of an object is to use a graph. A plot of position or of velocity as a function of time can be very useful. For example, for this trip to the store, the
position, velocity, and speed-vs.-time graphs are displayed in Figure 2.11. Note that these graphs depict a very simplified model of the trip. We are assuming that speed is constant during the trip,
which is unrealistic given that we'll probably stop at the store. But for simplicity's sake, we will model it with no stops or changes in speed. We are also assuming that the route between the store
and the house is a perfectly straight line.
Making Connections: Take-Home Investigation—Getting a Sense of Speed
If you have spent much time driving, you probably have a good sense of speeds between about 10 and 70 miles per hour. But what are these in meters per second? What do we mean when we say that
something is moving at 10 m/s? To get a better sense of what these values really mean, do some observations and calculations on your own:
• Calculate typical car speeds in meters per second
• Estimate jogging and walking speed by timing yourself; convert the measurements into both m/s and mi/h
• Determine the speed of an ant, snail, or falling leaf
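As a starting point for the first item, a one-line conversion turns familiar highway speeds into meters per second. The exact factor 1 mi = 1609.344 m is an assumption of this sketch, not a value taken from the text:

```python
# 1 mile = 1609.344 m (exact definition), 1 hour = 3600 s
MPH_TO_MS = 1609.344 / 3600  # about 0.447 (m/s) per (mi/h)

for mph in (10, 30, 70):
    print(f"{mph} mi/h is about {mph * MPH_TO_MS:.1f} m/s")
```

So the 10–70 mi/h range of everyday driving corresponds to roughly 4.5–31 m/s.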
Check Your Understanding
A commuter train travels from Baltimore to Washington, DC, and back in 1 hour and 45 minutes. The distance between the two stations is approximately 40 miles. What is (a) the average velocity of the
train and (b) the average speed of the train in m/s?
(a) The average velocity of the train is zero because $x_f = x_0$; the train ends up at the same place it starts.
(b) The average speed of the train is calculated below. Note that the train travels 40 miles one way and 40 miles back, for a total distance of 80 miles.
2.8 $\dfrac{\text{distance}}{\text{time}} = \dfrac{80\ \text{miles}}{105\ \text{minutes}},$
2.9 $\dfrac{80\ \text{miles}}{105\ \text{minutes}} \times \dfrac{5280\ \text{feet}}{1\ \text{mile}} \times \dfrac{1\ \text{meter}}{3.28\ \text{feet}} \times \dfrac{1\ \text{minute}}{60\ \text{seconds}} = 20\ \text{m/s}.$
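The chain of conversion factors in Equation 2.9 can be reproduced directly in Python (using the same rounded factor 1 m = 3.28 ft as the text):

```python
miles = 80.0
minutes = 105.0

# Apply the conversion factors of Equation 2.9 in order:
# miles/min -> ft/min -> m/min -> m/s
speed_ms = miles / minutes * 5280 * (1 / 3.28) * (1 / 60)
print(round(speed_ms, 1))  # 20.4, i.e. about 20 m/s
```

Keeping the factors in the same order as the equation makes it easy to audit each unit cancellation.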
19-03-2024: Electronics 12#
Date: Tuesday, March 19 2024
Location: Chip
Time: 10:45 - 12:30
Question of the day
How do we relate the controller requirements to the amplifier requirements?
Knowledge Test#
Press the button(s) below to test your knowledge and understanding of the topics covered this lecture.
Port impedance of single-loop feedback amplifiers#
Accuracy, bandwidth and frequency stability of negative feedback amplifiers#
Bandwidth of a negative feedback amplifier
For design purposes it is convenient to decouple the definition of the bandwidth of a negative feedback amplifier from its desired frequency characteristic. This can be achieved by defining the
bandwidth of a negative feedback amplifier by that of its servo function.
The presentation Bandwidth of a negative feedback amplifier shows that the bandwidth of a negative feedback amplifier will be defined as that of its servo function.
Presentation in parts
Bandwidth of a negative feedback amplifier (parts)
Bandwidth definition for negative feedback amplifiers (3:40)
Chapter 11.4.1
Butterworth or Maximally Flat Magnitude (MFM) responses
The -3dB cut-off frequency of systems with a Butterworth or MFM transfer equals the Nth root of the magnitude of the product of their N poles, where N is the order of the system.
In this course we will design the frequency response of a feedback amplifier in such a way that the servo function obtains an MFM or Butterworth filter characteristic over the frequency range of
interest. Design procedures for other filter characteristics, such as Bessel or Chebyshev, do not differ. Only the numeric relation between the -3dB bandwidth and the gain-poles product of the loop
gain will be different.
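The Nth-root rule can be verified numerically. The sketch below assumes a second-order Butterworth servo function with its poles on a circle of radius omega0 at 135 and 225 degrees (the standard second-order Butterworth pole pattern; the numeric radius is an arbitrary choice for illustration, not a value from the course):

```python
import cmath
import math

omega0 = 2 * math.pi * 1e4  # assumed pole-circle radius in rad/s

# Second-order Butterworth: poles on a circle of radius omega0,
# at 135 and 225 degrees in the left half-plane
poles = [omega0 * cmath.exp(1j * math.radians(a)) for a in (135, 225)]

# -3dB cut-off = Nth root of the magnitude of the product of the N poles
N = len(poles)
pole_product = 1 + 0j
for p in poles:
    pole_product *= p
omega_3db = abs(pole_product) ** (1 / N)

print(omega_3db / omega0)  # about 1.0: the cut-off equals the pole-circle radius
```

For an Nth-order MFM response the same computation returns the pole-circle radius for any N, which is what makes the rule convenient for relating loop-gain poles to achievable bandwidth.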
The presentation Butterworth or Maximally Flat Magnitude (MFM) responses shows the Laplace transfer functions, the pole patterns and the magnitude characteristics of first, second and third order
Butterworth transfers.
Presentation in parts
Butterworth or Maximally Flat Magnitude (MFM) responses (parts)
Butterworth frequency responses (4:07)
Chapter 11.4.3
Continue with homework 9 and:
1. Evaluate the frequency characteristics of the asymptotic-gain, the loop gain and the servo function for the transmitter equipped with the TLV4111, designed to deliver 100mA peak into the transmit
coil. Use SLiCAP to plot the frequency characteristics.
□ If the voltage drive capability, the midband accuracy and the bandwidth of the transmitter amplifier with the TLV4111, designed to deliver 100mA peak into the transmit coil are OK, finalize
the transmitter design (prepare your poster).
2. Evaluate the frequency characteristics of the asymptotic-gain, the loop gain and the servo function for the receiver equipped with the OPA209, designed with a transmitter that delivers 100mA peak
into the transmit coil. Use SLiCAP to plot the frequency characteristics.
    □ If the noise, the midband accuracy and the bandwidth of the receiver amplifier with the OPA209, combined with the above transmitter are OK, finalize the receiver design (prepare your poster).
Friday, November 15, 2024
Critical well-posedness for the derivative nonlinear Schrödinger equation: dispersion and integrability
The focus of this talk is the derivative nonlinear Schrödinger equation (DNLS), a PDE arising as a model in magnetohydrodynamics. It is a nonlinear dispersive equation, meaning that, in the absence
of a boundary, solutions to the underlying linear flow tend to spread out in space as they evolve in time. It is also known to be completely integrable: in addition to a conserved mass and energy, it
has an infinite hierarchy of conserved quantities. These intriguing features have captured the interest of mathematicians and have played an important role in the investigation of the well-posedness
of DNLS, that is, the question of whether solutions exist, are unique, and depend continuously on the initial data. However, until recently not much was known regarding the evolution of rough and
slowly decaying initial data. We will discuss why previous methods failed to solve this problem and recent progress towards closing this gap, culminating in our proof of sharp well-posedness in the
critical space in joint work with Benjamin Harrop-Griffiths, Rowan Killip, and Monica Visan.
Date: November 15, 2024, 3:30 PM
Place: Hybrid - Centre de recherches mathématiques, Pavillon André-Aisenstadt, Université de Montréal, Room 5340
Friday, November 8, 2024
From spacetime quanta to quantum spacetime (Conférence du Prix ACP-CRM 2024)
General relativity taught us that spacetime is dynamical and quantum theory posits that dynamical objects are quantum. Hence the Newtonian notion of spacetime as a passive stage where physics takes
place needs to be replaced by a notion of quantum space time. Using techniques from topological quantum field theory I will explain how to construct quantizations of geometry that lead to a notion of
spacetime quanta. We will then address two key questions: Firstly, how a smooth spacetime could emerge from these spacetime quanta, and secondly whether the dynamics imposed on these spacetime quanta
can lead to general relativity in the classical limit.
Date: November 8, 2024, 3:30 PM
Place: Hybrid - Centre de recherches mathématiques, Pavillon André-Aisenstadt, Université de Montréal, Room 6214
Wednesday, November 6, 2024
This is the story of a meeting between two women: Ingrid Daubechies, an internationally renowned mathematician, and Dominique Ehrmann, a Quebec artist.
With more than 20 accomplices passionate about the arts and mathematics, they created a giant, poetic art installation based on mathematical concepts: a storm of Koch flakes, a cryptographic quilt, a
turtle on Zeno's path. On the occasion of the launch of the Mathemalchemy exhibition at UQAM, they tell us about their creative adventure and some beautiful mathematical stories.
The Mathemalchemy art installation will be on display at UQAM's Complexe des sciences Pierre-Dansereau from November 6, 2024 to May 2, 2025.
Conference: Wednesday, November 6, 2024, 19:00
Place : Auditorium, UQAM Sherbrooke Building, 200 Sherbrooke Street West
See the event site for ticket prices and reservations.
The conference will be in French.
Friday, November 1, 2024
On mixture of experts in large-scale statistical machine learning applications
Mixtures of experts (MoEs), a class of statistical machine learning models that combine multiple models, known as experts, to form more complex and accurate models, have been combined into deep
learning architectures to improve the ability of these architectures and AI models to capture the heterogeneity of the data and to scale up these architectures without increasing the computational
cost. In mixtures of experts, each expert specializes in a different aspect of the data, which is then combined with a gating function to produce the final output. Therefore, parameter and expert
estimates play a crucial role by enabling statisticians and data scientists to articulate and make sense of the diverse patterns present in the data. However, the statistical behaviors of parameters
and experts in a mixture of experts have remained unsolved, which is due to the complex interaction between gating function and expert parameters.
In the first part of the talk, we investigate the performance of the least squares estimators (LSE) under a deterministic MoEs model where the data are sampled according to a regression model, a
setting that has remained largely unexplored. We establish a condition called strong identifiability to characterize the convergence behavior of various types of expert functions. We demonstrate that
the rates for estimating strongly identifiable experts, namely the widely used feed-forward networks with activation functions sigmoid(·) and tanh(·), are substantially faster than those of
polynomial experts, which we show to exhibit a surprising slow estimation rate.
In the second part of the talk, we show that the insights from theories shed light into understanding and improving important practical applications in machine learning and artificial intelligence
(AI), in- cluding effectively scaling up massive AI models with several billion parameters, efficiently finetuning large-scale AI models for downstream tasks, and enhancing the performance of
Transformer model, state-of-the-art deep learning architecture, with a novel self-attention mechanism.
Date: November 1, 2024, 3:30 PM
Place: Hybrid - McGill University, Burnside Hall, Room 1104
Friday, November 1, 2024
Recent progress in mean curvature flow (Nirenberg Lectures in Geometric Analysis)
An evolving surface in space is a mean curvature flow if its (normal) velocity vector field is given by its mean curvature, at every time. Mean curvature flow is the steepest descent (gradient) flow
for the area functional, and the natural analog of the heat equation for an evolving surface. It first appeared in the physics literature in the 1950's in the context of annealing, and it has served
as a mathematical model for a variety of physical situations where an interface (or surface) has an energy proportional to area, and inertial effects are negligible. It also has applications in
mathematics, engineering and computation. Mean curvature flow has been studied intensely by mathematicians since the late 70s using a wide range of tools from analysis and geometry, with the aim of
developing a rigorous mathematical theory incorporating the formation and resolution of singularities. The lecture will discuss the developments over the decades, including recent breakthroughs in
the last 5-10 years.
Date: November 1, 2024, 3:30 PM
Place: Hybrid - Centre de recherches mathématiques, Pavillon André-Aisenstadt, Université de Montréal, Room 5340
Friday, October 18, 2024
Escaping the curse of dimensionality through interpolation of distributions (CRM-SSC Prize Lecture)
The "curse of dimensionality" is often thought to be incurable, but in fact, in the context of integration, it can often be escaped using methods based on interpolation of distributions. These interpolation methods can even be used when the notion of dimensionality is not defined, that is, in the abstract setting of Lebesgue integration, which has practical consequences for Bayesian data analysis.
In this talk I will first give an overview of integral-approximation methods based on interpolation of distributions. This introduction is accessible to undergraduate students. I will also present recent results that connect and compare two families of distribution interpolation: on the one hand, MCMC-type approaches (parallel tempering, simulated tempering, etc.), and on the other, AIS/SMC-type approaches (Annealed Importance Sampling, Sequential Monte Carlo).
Date: October 18, 2024, 3:30 PM
Place: Hybrid - Centre de recherches mathématiques, Pavillon André-Aisenstadt, Université de Montréal, Room 6214
Friday, October 4, 2024
Is the world smooth or non-smooth? Can mathematics tell us?
(Lecture for a general mathematical audience - will be followed by a wine and cheese reception) In modeling different dynamical processes in physics, engineering, and the life sciences we often use
models that have some kind of smoothness associated with their evolution. Meanwhile, models with abrupt changes - non-smoothness - have seen more frequent application, success, and necessity. These
changes raise a number of questions, such as, which should we use? Are the non-smooth models driving new mathematics, or has our preference for certain types of mathematical properties kept us from
considering certain types of models? We consider some non-smooth models on completely different scales, in the areas climate, energy transfer, and neural feedback, to see what these models can
capture about the systems they describe. These also provide some perspectives about the mathematics needed and available in understanding and applying these models.
Date: October 4, 3:30 PM
Place: Hybrid - Centre de recherches mathématiques, Pavillon André-Aisenstadt, Université de Montréal, Room 6214
Thursday, September 26, 2024
Extremely amenable automorphism groups of countable structures
Extreme amenability is a remarkable property for a topological group to have. Ever since the first example of such groups was constructed in 1975, many groups have been shown to be extremely
amenable. In this talk I will address the question: How many pairwise non-isomorphic extremely amenable groups are there? We demonstrate that there are continuum many pairwise non-isomorphic
extremely amenable groups which are automorphism groups of countable structures, and in particular Polish. I will also talk about some related results from the point of view of descriptive set
theory. This is joint work with Mahmood Etedadialiabadi and Feng Li.
A coffee break will be served before the colloquium at 2pm in the graduate lounge at McGill
Date: September 26, 2:30 pm
Place: Hybrid - McGill University, Burnside Hall, Room 920
Friday, September 20, 2024
What is an L-function?
Zeta and L-functions are fundamental in the study of the distribution of prime numbers. We will give a survey of the evolution of the concept of an L-function from Euler to Langlands. The talk will
be aimed at a general mathematical audience and will not be too technical.
Date: September 20, 3:30 PM
Place: Hybrid -Centre de recherches mathématiques, Pavillon André-Aisenstadt, Université de Montréal, Room 6214
Friday, September 13, 2024
Polynomial inequalities and combinatorial interpretations
In combinatorics, many inequalities can be proved by an explicit injection. Such proofs can be both elegant and insightful. As an application,
they give a combinatorial interpretation for the difference between two sides. But what happens to the inequalities when this naive approach doesn't work? Can you actually prove that this approach is
impossible? In the first half of the talk I will give a broad overview of what's known and give many examples. In the second half, I will discuss how to prove negative results for algebraic
inequalities and what does all of that mean. This is joint work with Christian Ikenmeyer.
Date: Friday, September 13, 2024, 3:30 PM
Place: Hybrid - UQAM, Room PK-5115, Pavillon Président-Kennedy
Friday, September 6, 2024
Mathematics and Machines: Navigating New Frontiers
Over the past 50 years, computing power has undergone tremendous growth. However, understanding physical phenomena, such as climate change or fluid dynamics, and their mathematical formulation
remains a long-standing challenge in mathematical and computational sciences. In recent years, with the development of new hardware and software, a new paradigm has emerged in physics and engineering
through machine learning, and we are beginning to see machines being employed in increasingly creative ways in our work. In this talk, I will explore how traditional and computational mathematics can
intersect and how computers may play a transformative role in mathematics in the coming years. I will also discuss and speculate on exciting future directions. No prior background is assumed.
Date: September 6, 3:30 PM
Place: Hybrid - Centre de recherches mathématiques, Pavillon André-Aisenstadt, Université de Montréal, Room 6214
Thursday, June 20, 2024
On Lusztig's Asymptotic Algebra (in affine type A)
Kazhdan-Lusztig theory plays a fundamental role in the representation theory of Coxeter groups, Hecke algebras, groups of Lie type, and algebraic groups. One of the most fascinating objects in the
theory is the "asymptotic Hecke algebra" introduced by Lusztig in 1987. This algebra is "simpler" than the associated Hecke algebra, yet still encapsulates essential features of the representation
theory. This apparent simplicity is somewhat offset by the considerable difficulty one faces in explicitly realising the asymptotic algebra for a given Coxeter group, because on face value it
requires a detailed understanding of the entire Kazhdan-Lusztig basis, and the structure constants with respect to this basis. A significant part of this talk will be a gentle introduction to the
basic setup of Kazhdan-Lusztig theory (the Kazhdan-Lusztig basis, cells, and the asymptotic algebra). We will then report on a new approach (joint with N. Chapelier, J. Guilhot, and E. Little) to
construct the asymptotic algebra for affine type A, focusing on some of the main novelties of this approach, including the notion of a balanced system of cell modules, combinatorial formulae for
induced representations, and an asymptotic version of Opdam's Plancherel Theorem.
Date: Thursday, June 21, 2024, 3:30 PM
Place: Hybrid - UQAM, Room PK-5115, Pavillon Président-Kennedy
Monday, June 3, 2024
This summer school will explore some of the emerging applications of representation theory of quivers and algebras, particularly their fruitful connections to Topological Data Analysis (TDA) and
Geometric Invariant Theory (GIT). Through two mini-courses offered by distinguished experts in the aforementioned areas, these new connections will be introduced to a wider range of researchers and
junior participants. Additionally, the program includes some research talks related to the mini-courses, a poster session, as well as several discussion sessions to stimulate collaborative research.
Friday, May 24, 2024
The organizing commitee is pleased to invite you to the XXVIth ISM Graduate Student Conference! The colloquium will be held at the Université du Québec à Montréal (UQAM) from Friday May 24 to Sunday
May 26, 2024.
This annual colloquium brings together the community of graduate students in mathematics from all of Quebec’s universities. This year, participants will have the opportunity to attend five plenary
sessions, to present their own work through a 40-minute presentation, and to take part in social activities. As in the previous edition, we invite participants to create mathematical postcards which
will be printed and displayed at the colloquium.
During the event, the ISM will award the Carl Herz prize and the recipient will give a plenary talk about their research.
The registration fee (30$) includes all meals from Friday evening to Sunday morning: a wine and cheese event, a lunch, a dinner at a restaurant, two breakfasts, as well as coffee and snacks
throughout the weekend.
To register and to obtain more information, please visit our website : https://event.fourwaves.com/ismcolloque2024/
If you have any questions, please feel free to contact us at : colloque.ism2024@gmail.com
Saturday, May 11, 2024
Join us on May 11, 2024 at UQAM's Pavillon Président-Kennedy for a day of inspiring exchanges on the links between the arts and mathematics at the inter-level event "Points de Convergence". Attend an opening lecture by Eva Knoll on mathematics in the arts, take part in interactive workshops, discover a panel on "Making sense of mathematics teaching", and join a plenary discussion on how to bring mathematics beyond our circles. Teachers, researchers and students, come explore the fascinating intersection between these two disciplines! And the icing on the cake: lunch is provided! For more information and to register, visit our website: https://sites.google.com/view/les-points-de-convergence/. #PointsDeConvergence #MathsEtArts #Éducation
Friday, May 10, 2024
Free resolutions of powers of square-free monomial ideals
Free resolutions, introduced by David Hilbert over a century ago, provide us with a tool to study relations between polynomials via a sequence of free modules, or vector spaces. The numerical
information provided by free resolutions is invaluable in the study of solution sets of those polynomials.
The study of free resolutions is a central problem is commutative algebra, and depending on the type of polynomials in question, different tools are applied to study or construct free resolutions or
find bounds on their numerical invariants. Some examples are, combinatorics, geometry, topology, homological and computational algebra.
This talk will give a gentle introduction to resolutions of ideals generated by monomials, and the methods of discrete topology used to study them. Our focus will be taking powers of these ideals,
and the combinatorics associated to these powers. We will report on joint work with (subsets of): Trung Chau, Susan M. Cooper, Art Duval, Sabine El Khoury, Tai Ha, Thiago Holleben, Takayuki Hibi,
Sarah Mayes-Tang, Susan Morey, Liana M. Sega, Sandra Spiroff.
Date: May 10, 3:30 PM
Place: Hybrid - Centre de recherches mathématiques, Pavillon André-Aisenstadt, Université de Montréal, Room 5340
Thursday, May 9, 2024
This is the captivating story of the life and work of Maryam Mirzakhani, an Iranian mathematician who immigrated to the United States. In 2014, she was the first woman to receive the prestigious
Fields Medal, the highest award in mathematics. Her journey, her successes and her constant passion for her discipline make her an inspiring role model. In this film, mathematicians from around the
world, former teachers, classmates and students from today's Iran, express the profound influence of Maryam's achievements.
Secrets of the surface : The Mathematical Vision of Maryam Mirzakhani is a documentary written and directed by George Csicsery.
In English and Persian, sub-titled in French, United States, 2020, 59 minutes.
The screening will be followed by a discussion with :
• Termeh Kousha, Iranian-born mathematician and Executive Director of the Canadian Mathematical Society;
• Nadia Lafrenière, assistant professor in the Department of Mathematics and Statistics at Concordia University and graduate of UQAM;
• Mathilde Gerbelli-Gauthier, postdoctoral fellow in the Department of Mathematics and Statistics at McGill University.
Friday, April 26, 2024
Isolated and parametrized points on curves
Let C be an algebraic curve over Q, i.e., a 1-dimensional complex manifold defined by polynomial equations with rational coefficients. A celebrated result of Faltings implies that all algebraic
points on C come in families of bounded degree, with finitely many exceptions. These exceptions are known as isolated points. We explore how these isolated points behave in families of curves and
deduce consequences for the arithmetic of elliptic curves. We also explore the beginnings of how the geometry of the parametrized points can be used to explore the arithmetic of the curve.
This talk is on joint work with A. Bourdon, Ö. Ejder, Y. Liu, and F. Odumodu, with I. Vogt, and with work in progress with I. Balçik, S. Chan, and Y. Liu.
Date: April 26, 3:30 PM
Place: Hybrid - Centre de recherches mathématiques, Pavillon André-Aisenstadt, Université de Montréal, Room 5340
Friday, April 19, 2024
Functional Neural Networks
Functional data analysis (FDA) is a growing statistical field for analyzing curves, images, or any multidimensional functions, in which each random function is treated as a sample element. Functional
data is found commonly in many applications such as longitudinal studies and brain imaging. In this talk, I will present a methodology for integrating functional data into deep neural networks. The
model is defined for scalar responses with multiple functional and scalar covariates. A by-product of the method is a set of dynamic functional weights that can be visualized during the optimization
process. This visualization leads to greater interpretability of the relationship between the covariates and the response relative to conventional neural networks. The model is shown to perform well
in a number of contexts including prediction of new data and recovery of the true underlying relationship between the functional covariate and scalar response; these results were confirmed through
real data applications and simulation studies.
Date: April 19, 3:30 PM
Place: Hybrid - UQAM, Room PK-5115, Pavillon Président-Kennedy
Friday, April 12, 2024
Classification using sets of reals as invariants
The theory of Borel equivalence relations provides a rigorous framework to analyze the complexity of classification problems in mathematics, to determine when a successful classification is possible,
and if so, to determine the optimal classifying invariants. Central to this theory are the iterated Friedman-Stanley jumps, which capture the complexity of classification using invariants which are
countable sets of reals, countable sets of countable sets of reals, and so on. In this talk I will present structural dichotomies for the Friedman-Stanley jumps. This in turn provides a general tool
for proving that a given classification problem is more difficult than the k'th Friedman-Stanley jump, for k=1,2,3,.... This extends results previously only known for the case k=1. The talk will
begin by discussing the basic definitions and general goals behind the theory of Borel equivalence relations. We will discuss some known structure and non-structure results, and motivate these new dichotomies.
Date: April 12, 3:30 PM
Place: Hybrid - Centre de recherches mathématiques, Pavillon André-Aisenstadt, Université de Montréal, Room 5340
Friday, April 5, 2024
Bipermutohedral combinatorics
The classical permutohedron plays a central role in the mathematics where ideas of convexity, matroid theory and type-A combinatorics overlap and intersect. A newer construction, the bipermutohedron,
has related structures and applications that are still being understood. I will give an introduction to this object and describe two ways in which it can be used. One is in matroid combinatorics
(with Ardila and Huh) and the other for some geometry of Feynman amplitudes (with Schulze and Walther).
Date: April 5, 3:30 PM
Place: Hybrid - UQAM, Room PK-5115, Pavillon Président-Kennedy
Friday, March 22, 2024
How to prove isoperimetric inequalities by random sampling
In the class of convex sets, the isoperimetric inequality can be derived from several different affine inequalities. Fundamental constructions of convex sets, such as polar bodies and centroid
bodies, all satisfy strengthened isoperimetric theorems, as proved by Blaschke, Busemann and Petty. A powerful analytic framework for kindred problems was developed by Lutwak, Yang and Zhang, with
their introduction of Lp affine isoperimetric inequalities. Establishing isoperimetric inequalities for the highly non-convex Lp objects when p&lt;1 (or p is even negative) has proved to be a challenge
due to the lack of convexity. However, this range of p is important to bridge inequalities between Brunn-Minkowski theory and dual Brunn-Minkowski theory. I will discuss a probabilistic approach to
proving Lp affine isoperimetric inequalities in the non-convex range. Gems from geometric probability, going back to Sylvester's famous four point problem, motivate empirical definitions of polar
bodies and their Lp-analogues. These empirical versions turn out to be more susceptible to convex analytic methods, and in turn provide a bridge between the convex and non-convex worlds. Based on
joint work with R. Adamczak, G. Paouris, and P. Simanjuntak.
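Sylvester's four point problem, mentioned above, asks for the probability that four random points in a convex body are in convex position; for the unit square the answer is 25/36. A small Monte Carlo sketch (an editorial illustration with our own helper names, not from the talk):

```python
import random

def sign(o, a, b):
    """Twice the signed area of triangle (o, a, b)."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def in_triangle(p, a, b, c):
    """True if p lies inside (or on the boundary of) triangle abc."""
    d1, d2, d3 = sign(p, a, b), sign(p, b, c), sign(p, c, a)
    neg = any(d < 0 for d in (d1, d2, d3))
    pos = any(d > 0 for d in (d1, d2, d3))
    return not (neg and pos)

def convex_position(pts):
    # Four points are in convex position iff none lies inside the triangle of the others.
    for i in range(4):
        others = [pts[j] for j in range(4) if j != i]
        if in_triangle(pts[i], *others):
            return False
    return True

random.seed(0)
trials = 20000
hits = sum(convex_position([(random.random(), random.random()) for _ in range(4)])
           for _ in range(trials))
print(hits / trials)  # roughly 25/36, about 0.694, for the unit square
```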
Date: March 22, 3:30 PM
Place: Hybrid - Centre de recherches mathématiques, Pavillon André-Aisenstadt, Université de Montréal, Room 5340
Friday, March 15, 2024
Skew-symmetric approximations of posterior distributions
A broad class of regression models that routinely appear in several fields of application can be expressed as partially or fully discretized Gaussian linear regressions. Besides incorporating the
classical Gaussian response setting, this class crucially encompasses probit, multinomial probit and tobit models, among others. The relevance of these representations has motivated decades of active
research within the Bayesian field. A main reason for this constant interest is that, unlike for the Gaussian response setting, the posterior distributions induced by these models do not seem to
belong to a known and tractable class, under the commonly-assumed Gaussian priors. In this seminar, I will review, unify and extend recent advances in Bayesian inference and computation for such a
class of models, proving that unified skew-normal (SUN) distributions (which include Gaussians as a special case) are conjugate to the general form of the likelihood induced by these formulations.
This result opens new avenues for improved sampling-based methods and more accurate and scalable deterministic approximations from variational Bayes. These results are further extended via a general
and provably-optimal strategy to improve, via a simple perturbation, the accuracy of any symmetric approximation of a generic posterior distribution. Crucially, such a novel perturbation is derived
without additional optimization steps and yields a similarly-tractable approximation within the class of skew-symmetric densities that provably enhances the finite-sample accuracy of the original
symmetric approximation. Theoretical support is provided, in asymptotic settings, via a refined version of the Bernstein–von Mises theorem that relies on skew-symmetric limiting densities.
Date: March 15, 11:00
Place: webinar
Friday, March 8, 2024
Degeneracy loci in geometry and combinatorics
Given a matrix of homogeneous polynomials, there is a “degeneracy locus” of points where specified submatrices drop rank. These loci are ubiquitous, and formulas for their degrees go back to Cayley
and Salmon in the mid-1800s. The search for more general and refined degree formulas led to a rich interaction between geometry and combinatorics in the late 20th century, and that interplay
continues today. I will describe recent and new formulas relating the geometry of degeneracy loci with the combinatorics of Schubert polynomials, including some ongoing joint work with William
Date: March 8, 3:30 PM
Place: Hybrid - UQAM, Room PK-5115, Pavillon Président-Kennedy
Friday, March 1, 2024
Corona Rigidity
This story started with Weyl’s work on compact perturbations of pseudo-differential operators. The Weyl-von Neumann theorem asserts that two self-adjoint operators on a complex Hilbert space are
unitarily equivalent modulo compact perturbations if and only if their essential spectra coincide. This was extended to normal operators by Berg and Sikonia. New impetus was given in the work of
Brown, Douglas, and Fillmore, who replaced single operators with (separable) C*-algebras and compact perturbations with extensions by the ideal of compact operators. After passing to the quotient
(the Calkin algebra, Q) and identifying an extension with a *-homomorphism into Q, analytic methods have to be supplemented with methods from algebraic topology, homological algebra, and (most
recently) logic. Some attention will be given to the (still half-open) question of Brown-Douglas-Fillmore, whether Q has an automorphism that flips the Fredholm index. It is related to a very
general question about isomorphisms of quotients, asking under what additional assumptions such an isomorphism can be lifted to a morphism between the underlying structures. As general as it is, many
natural instances of this question have surprisingly precise (and surprising) answers. This talk will be partially based on the preprint Farah, I., Ghasemi, S., Vaccaro, A., and Vignati, A. (2022).
Corona rigidity. arXiv preprint arXiv:2201.11618 https://arxiv.org/abs/2201.11618 and some more recent results.
Date: March 1, 3:30 PM
Place: Hybrid - Centre de recherches mathématiques, Pavillon André-Aisenstadt, Université de Montréal, Room 5340
Friday, February 23, 2024
Adaptive Bayesian predictive inference
Bayesian predictive inference provides a coherent description of entire predictive uncertainty through predictive distributions. We examine several widely used sparsity priors from the predictive (as
opposed to estimation) inference viewpoint. Our context is estimating a predictive distribution of a high-dimensional Gaussian observation with a known variance but an unknown sparse mean under the
Kullback–Leibler loss. First, we show that LASSO (Laplace) priors are incapable of achieving rate-optimal performance. This new result contributes to the literature on negative findings about
Bayesian LASSO posteriors. However, deploying the Laplace prior inside the Spike-and-Slab framework (for example with the Spike-and-Slab LASSO prior), rate-minimax performance can be attained with
properly tuned parameters (depending on the sparsity level sn). We highlight the discrepancy between prior calibration for the purpose of prediction and estimation. Going further, we investigate
popular hierarchical priors which are known to attain adaptive rate-minimax performance for estimation. Whether or not they are rate-minimax also for predictive inference has, until now, been
unclear. We answer affirmatively by showing that hierarchical Spike-and-Slab priors are adaptive and attain the minimax rate without the knowledge of sn. This is the first rate-adaptive result in the
literature on predictive density estimation in sparse setups. This finding celebrates benefits of a fully Bayesian inference.
Date: February 23, 3:30 PM
Place: Webinar
Friday, February 16, 2024
Special points on moduli spaces
The study of moduli spaces, and special points in moduli spaces, has been of arithmetic interest. In this talk, I will speak about results pertaining to the algebraic and analytic distribution of
special points, and touch upon topics such as the Andre-Oort conjecture and the p-adic distribution of special points in these moduli spaces.
Date: February 16, 3:30 PM
Place: Centre de recherches mathématiques, Pavillon André-Aisenstadt, Université de Montréal, Room 5340
Tuesday, February 13, 2024
On Tuesday, February 13, 2024, join the Department of Mathematics and Statistics at Concordia University for a screening of Journeys of Black Mathematicians: Forging Resilience by George Csicsery
in celebration of Black History Month. Light refreshments will be served after the screening. The event is free for all to attend.
Please share with your network and colleagues.
Film Screening on
Tuesday, February 13, 2024 @ 6:30pm
Followed by a reception
Concordia University
J.A De Sève Cinema (1st floor LB building)
1400 De Maisonneuve Blvd. W
Visit the event page for more information.
Friday, February 2, 2024
The local-global conjecture for Apollonian circle packings is false
Primitive integral Apollonian circle packings are fractal arrangements of tangent circles with integer curvatures. The curvatures form an orbit of a 'thin group,' a subgroup of an algebraic group
having infinite index in its Zariski closure. The curvatures that appear must fall into one of six or eight residue classes modulo 24. The twenty-year-old local-global conjecture states that every
sufficiently large integer in one of these residue classes will appear as a curvature in the packing. We prove that this conjecture is false for many packings, by proving that certain quadratic and
quartic families are missed. The new obstructions are a property of the thin Apollonian group (and not its Zariski closure), and are a result of quadratic and quartic reciprocity, reminiscent of a
Brauer-Manin obstruction. Based on computational evidence, we formulate a new conjecture. This is joint work with Summer Haag, Clyde Kertzer, and James Rickards. Time permitting, I will discuss
some new results, joint with Rickards, that extend these phenomena to certain settings in the study of continued fractions.
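The mod-24 obstruction is easy to observe numerically. In the packing with root quadruple (-1, 2, 2, 3), each tangent quadruple yields a new one by the Descartes relation, replacing one curvature k by 2(a+b+c) - k where a, b, c are the other three. A small enumeration (an illustrative sketch with our own function names) finds exactly eight residue classes mod 24:

```python
def descartes_ok(q):
    """Descartes circle theorem: 2(a^2+b^2+c^2+d^2) = (a+b+c+d)^2."""
    a, b, c, d = q
    return 2 * (a * a + b * b + c * c + d * d) == (a + b + c + d) ** 2

def packing_curvatures(root, bound):
    """All curvatures up to `bound` in the Apollonian packing generated by `root`."""
    seen_quads = {tuple(sorted(root))}
    curvatures = set(root)
    stack = [tuple(sorted(root))]
    while stack:
        quad = stack.pop()
        for i in range(4):
            k = 2 * (sum(quad) - quad[i]) - quad[i]   # swap the i-th curvature
            new = tuple(sorted(quad[:i] + (k,) + quad[i + 1:]))
            if k <= bound and new not in seen_quads:
                assert descartes_ok(new)              # the relation is preserved
                seen_quads.add(new)
                curvatures.add(k)
                stack.append(new)
    return curvatures

curv = packing_curvatures((-1, 2, 2, 3), 2000)
print(sorted({k % 24 for k in curv if k > 0}))  # [2, 3, 6, 11, 14, 15, 18, 23]
```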
Date: February 2, 3:30 PM
Place: Centre de recherches mathématiques, Pavillon André-Aisenstadt, Université de Montréal, Room 5340
Friday, January 26, 2024
An exploration-agnostic characterization of the ergodicity of parallel tempering
Non-reversible parallel tempering (NRPT) is an effective algorithm for sampling from target distributions with complex geometry, such as those arising from posterior distributions of weakly
identifiable and high-dimensional Bayesian models. In this talk I will establish the uniform geometric ergodicity of NRPT under an efficient local exploration hypothesis, which avoids the intricacies
of dealing with kernel-specific properties. The rates that we obtain are bounded in terms of an easily-estimable divergence, the global communication barrier (GCB), that was recently introduced in
the literature. We obtain analogous ergodicity results for classical reversible parallel tempering, providing new evidence that NRPT dominates its reversible counterpart. I will also present some
general properties of the GCB and bound it in terms of the total variation distance and the inclusive/exclusive Kullback-Leibler divergences. I will conclude the talk with simulations that validate
the new theoretical analysis. This is based on joint work with Nikola Surjanovic, Saifuddin Syed, and Alexandre Bouchard-Côté.
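For intuition, here is a toy version of parallel tempering with the deterministic even/odd (non-reversible) swap scheme on a 1D bimodal target. The temperatures, target, and tuning constants are our own illustrative choices, not the setting analyzed in the talk:

```python
import math, random

def logp(x):
    """Log-density (up to a constant) of a bimodal target with modes near -3 and 3."""
    a, b = -(x - 3) ** 2, -(x + 3) ** 2
    m = max(a, b)
    return m + math.log(math.exp(a - m) + math.exp(b - m))

random.seed(1)
betas = [1.0, 0.5, 0.2, 0.05]   # beta = 1 is the cold (target) chain
xs = [0.0] * len(betas)
cold = []
for sweep in range(5000):
    # local exploration: one Metropolis step per chain at its temperature
    for i, b in enumerate(betas):
        prop = xs[i] + random.gauss(0, 1)
        if math.log(random.random()) < b * (logp(prop) - logp(xs[i])):
            xs[i] = prop
    # communication: alternate even and odd adjacent swaps (the non-reversible scheme)
    for i in range(sweep % 2, len(betas) - 1, 2):
        delta = (betas[i] - betas[i + 1]) * (logp(xs[i + 1]) - logp(xs[i]))
        if math.log(random.random()) < delta:
            xs[i], xs[i + 1] = xs[i + 1], xs[i]
    cold.append(xs[0])

# the cold chain should visit both modes, which plain Metropolis at beta = 1 rarely does
print(min(cold) < -2 and max(cold) > 2)
```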
Date: January 26, 3:30 PM
Place: Centre de recherches mathématiques, Pavillon André-Aisenstadt, Université de Montréal, Room 5340
Friday, January 19, 2024
Why p-adic numbers are better than real for representation theory
The p-adic numbers, discovered over a century ago, unveil aspects of number theory that the real numbers alone can’t. In this talk, we introduce p-adic fields and their fractal geometry, and then
apply this to the (complex!) representation theory of the p-adic group SL(2). We describe a surprising conclusion: that close to the identity, all representations are a sum of finitely many rather
simple building blocks arising from nilpotent orbits in the Lie algebra.
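As a quick taste of the p-adic world (an editorial aside, not from the abstract): the p-adic absolute value reverses ordinary intuition about size, since integers divisible by high powers of p are p-adically small. A minimal sketch:

```python
def vp(n, p):
    """p-adic valuation: the exponent of p in a nonzero integer n."""
    v = 0
    while n % p == 0:
        n //= p
        v += 1
    return v

def abs_p(n, p):
    """p-adic absolute value |n|_p = p^(-v_p(n)); set to 0 for n = 0 by convention."""
    return p ** -vp(n, p) if n != 0 else 0.0

print(vp(96, 2))     # 96 = 2^5 * 3, so v_2(96) = 5
print(abs_p(96, 2))  # |96|_2 = 2^-5 = 0.03125: 96 is 2-adically small
print(abs_p(97, 2))  # 97 is odd, so |97|_2 = 1
```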
Date: January 19, 3:30 PM
Place: Centre de recherches mathématiques, Pavillon André-Aisenstadt, Université de Montréal, Room 5340
Friday, January 12, 2024
Hamiltonian geometry of fluids
In the '60s V. Arnold suggested a group-theoretic approach to ideal hydrodynamics via the geodesic flow of the right-invariant energy metric on the group of volume-preserving diffeomorphisms of the
flow domain. We describe several recent ramifications of this approach related to compressible fluids, optimal mass transport, as well as Newton's equations on diffeomorphism groups and smooth
probability densities. It turns out that various important PDEs of hydrodynamical origin can be described in this geometric framework in a natural way.
Date: January 12, 3:30 PM
Place: Centre de recherches mathématiques, Pavillon André-Aisenstadt, Université de Montréal, Room 5340
Friday, January 5, 2024
The Seminars on Undergraduate Mathematics in Montreal (SUMM) is organized by undergraduate students from Montreal universities. The main objective is to create an environment facilitating exchange of
ideas and interests as well as allowing students to network.
The SUMM weekend is aimed at undergraduate students in mathematics or related domains. This year, the conference will be held from January 5-7, 2024.
The weekend consists of two days of presentations given by undergraduate students and invited professors. The presentations can cover a broad range of subjects, from mathematical physics to the
applications of artificial intelligence as well as the history and philosophy of mathematics.
During the SUMM, students can choose to give a talk, or simply to attend presentations given by their peers. It's an occasion to share the passion for mathematics in a stimulating environment, while
networking with other passionate students over the weekend.
We hope to see you there!
Friday, December 8, 2023
Computational Complexity in Algebraic Combinatorics
Algebraic Combinatorics studies objects and quantities originating in Algebra, Representation Theory and Algebraic Geometry via combinatorial methods, finding formulas and neat interpretations. Some
of its feats include the hook-length formula for the dimension of an irreducible symmetric group ($S_n$) module, or the Littlewood-Richardson rule to determine multiplicities of GL irreducibles in
tensor products. Yet some natural multiplicities elude us, among them the Kronecker coefficients for the decomposition of tensor products of $S_n$ irreducibles, and the plethysm coefficients for
compositions of GL modules. Answering those questions could help Geometric Complexity Theory establish lower bounds for the far-reaching goal of showing that $P \neq NP$.
We will discuss how Computational Complexity Theory provides a theoretical framework for understanding what kind of formulas or rules we could actually have. We will use this to show that the square
of a symmetric group character could not have a combinatorial interpretation.
Based on joint works with Christian Ikenmeyer and Igor Pak.
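The hook-length formula mentioned above fits in a few lines: the dimension of the irreducible S_n module indexed by a partition equals n! divided by the product of the hook lengths of its cells. A small sketch (the helper name is ours):

```python
from math import factorial

def hook_dim(shape):
    """Hook-length formula: dimension of the irreducible S_n module for a partition.

    `shape` is a weakly decreasing tuple of row lengths.
    """
    n = sum(shape)
    prod = 1
    for i, row in enumerate(shape):
        for j in range(row):
            arm = row - j - 1                               # cells to the right
            leg = sum(1 for r in shape[i + 1:] if r > j)    # cells below
            prod *= arm + leg + 1
    return factorial(n) // prod                             # always an integer

print(hook_dim((2, 1)))  # 2: the standard representation of S_3
print(hook_dim((3, 2)))  # 5
print(hook_dim((4, 4)))  # 14, the Catalan number C_4
```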
Date: December 8, 2023, 3:30 PM
Place: UQAM, President-Kennedy Building, 201, ave du Président-Kennedy, room PK-5115
Friday, November 24, 2023
State-dependent Sampling in Observational Cohort Studies
Observational cohort studies of chronic disease involve the recruitment and follow-up of a sample of individuals with the goal of learning about the course of the disease and the effects of fixed and
time-varying risk factors. Analysis of this information is often facilitated by using multistate models with intensity functions governing transitions between disease states. Chronic disease studies
often involve conditions for recruitment: for example, an incident cohort involves individuals who are healthy at accrual, a prevalent cohort samples individuals who have already developed the
disease, and length-biased sampling includes individuals who are alive at the time of recruitment. In this talk we discuss the impact of ignoring state-dependent sampling in life history analysis and the ways of
addressing the issue using auxiliary information. A longitudinal study of aging and cognition among religious sisters is used to illustrate the related methodology.
Date: November 24, 2023, 3:30 PM
Place: Centre de recherches mathématiques, Pavillon André-Aisenstadt, Université de Montréal, Room 5340
Friday, November 17, 2023
The local Langlands program and characteristic p
Let n be an integer greater than or equal to 2. The Langlands program for GLn connects n-dimensional representations of Galois groups and infinite dimensional representations of GLn. For the
purposes of number theory, we are led to consider all these representations on vector spaces over fields of characteristic p for p an arbitrary prime number. On the GLn side, this leads in
particular to the following problem: understand and construct the - or some - representations of GLn(K) in characteristic p where K is a finite extension of the field of p-adic numbers Qp (for the
same prime number p!), and most importantly those of these representations which appear on cohomology spaces. This problem has challenged experts for more than 20 years and is still largely open even
for GL2(K). I will recall the history, the difficulties encountered, and will state some recent results for GL2(K).
Date: November 17, 2023, 3:30 PM
Place: Centre de recherches mathématiques, Pavillon André-Aisenstadt, Université de Montréal, Room 5340
Friday, November 10, 2023
Model Based optimization with convex-composite optimization
Optimization problems have an enormous variety and complexity and solving them requires techniques for exploiting their underlying mathematical structure. The modeler needs to balance model
complexity with computational tractability as well as viable techniques for post optimal analysis and stability measures.
In this talk we describe the convex-composite modeling framework which covers a broad range of optimization problems including nonlinear programming, feasibility, minimax optimization, sparsity
optimization, feature selection, Kalman smoothing, parameter selection, and nonlinear maximum likelihood to name a few. The goal is to identify and exploit the underlying convexity that a given
problem may possess since convexity allows one to tap into the very rich theoretical foundation as well as the wide range of highly efficient numerical methods available for convex problems. The
systematic study of convex-composite problems began in the 1970's concurrent with the emergence of modern nonsmooth variational analysis. The synergy between these ideas was natural since
convex-composite functions are neither convex nor smooth. The recent resurgence in interest for this problem class is due to emerging methods for approximation, regularization and smoothing as well
as the relevance to a number of problems in global health, environmental modeling, image segmentation, dynamical systems, signal processing, machine learning, and AI. In this talk we review the
convex-composite problem structure and variational properties. We then discuss algorithm design and if time permits, we discuss applications to filtering methods for signal processing.
Date: November 10, 2023, 3:30 PM
Place: Centre de recherches mathématiques, Pavillon André-Aisenstadt, Université de Montréal, Room 5340
Friday, November 3, 2023
Point Counting over finite fields and the cohomology of moduli spaces of curves
Algebraic geometry studies solution sets of polynomial equations. For instance, over the complex numbers, one may examine the topology of the solution set, whereas over a finite field, one may count
its points. For polynomials with integer coefficients, these two fundamental invariants are intimately related via cohomological comparison theorems and trace formulas for the action of Frobenius. I
will present recent results regarding point counts over finite fields and the cohomology of moduli spaces of curves that resolve longstanding questions in algebraic geometry and confirm more recent
predictions from the Langlands program.
Date: November 3, 2023, 3:30 PM
Place: Centre de recherches mathématiques, Pavillon André-Aisenstadt, Université de Montréal, Room 5340
Friday, October 27, 2023
(Conférence Chaire Aisenstadt) On the hunt for chirps
The detection of specific oscillatory behaviours in signals is a key issue in turbulence, gravitational waves, or physiological data, where they turn out to be the signature of important phenomena
localized in time or space. We will show how some recently introduced methods of harmonic analysis allow us to characterize and classify such behaviors, and ultimately yield numerical methods to
perform their detection.
Date: October 27, 2023, 3:30 PM
Place: Centre de recherches mathématiques, Pavillon André-Aisenstadt, Université de Montréal, Room 6214
Friday, October 20, 2023
Billiards in conics revisited
Optical properties of conics have been known since classical antiquity. The reflection in an ideal mirror is also known as the billiard reflection and, in modern terms, the billiard inside an
ellipse is completely integrable. The interior of an ellipse is foliated by confocal ellipses that are its caustics: a ray of light tangent to a caustic remains tangent to it after reflection
(“caustic” means burning).
I shall explain these classic results and some of their geometric consequences, including the Ivory lemma asserting that the diagonals of a curvilinear quadrilateral made by arcs of confocal ellipses
and hyperbolas are equal (this lemma is at the heart of Ivory's calculation of the gravitational potential of a homogeneous ellipsoid). Other applications include the Poncelet Porism, a famous
theorem of projective geometry that has celebrated its bicentennial, and its lesser known ramifications, such as the Poncelet Grid theorem and the related circle patterns and configuration theorems.
Date: October 20, 2023, 3:30 PM
Place: Centre de recherches mathématiques, Pavillon André-Aisenstadt, Université de Montréal, Room 5340
Friday, October 13, 2023
Patterns in tri-block co-polymers: droplets, double-bubbles and core-shells, and a "new" partitioning problem
We study the Nakazawa-Ohta ternary inhibitory system, which describes domain morphologies in a triblock copolymer as a nonlocal isoperimetric problem for three interacting phase domains. The free
energy consists of two parts: the local interface energy measures the total perimeter of the phase boundaries, while a longer-range Coulomb interaction energy reflects the connectivity of the
polymer chains and promotes splitting into micro-domains. We consider global minimizers on the two-dimensional torus, in a limit in which two of the species have vanishingly small mass but the
interaction strength is correspondingly large. In this limit there is splitting of the masses, and each vanishing component rescales to a minimizer of an isoperimetric problem for clusters in 2D.
Depending on the relative strengths of the coefficients of the interaction terms we may see different structures for the global minimizers, ranging from a lattice of isolated simple droplets of each
minority species to double-bubbles or core-shells. These results have led to a new type of partitioning problem that I will also introduce. These represent work with S. Alama, with X. Lu, and C.
Wang, as well as with S. Vriend.
Date: October 13, 2023, 3:30 PM
Place: Centre de recherches mathématiques, Pavillon André-Aisenstadt, Université de Montréal, Room 5340
Friday, October 6, 2023
(CRM-SSC Prize) Auto-regressive approximations to non-stationary time series, with inference and applications
Understanding the time-varying structure of complex temporal systems is one of the main challenges of modern time series analysis. In this talk, I will demonstrate that a wide range of short-range
dependent non-stationary and nonlinear time series can be well approximated globally by a white-noise-driven auto-regressive (AR) process of slowly diverging order. Uniform statistical inference of
the latter AR structure will be discussed through a class of high-dimensional L2 tests. I will further discuss applications of the AR approximation theory to globally optimal short-term forecasting,
efficient estimation, and resampling inference under complex temporal dynamics.
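In its very simplest form, the idea of approximating a time series by a white-noise-driven autoregression can be illustrated with an AR(1) fit by least squares. This is a toy sketch with a fixed order, not the slowly diverging-order theory of the talk:

```python
import random

# Simulate x_t = phi * x_{t-1} + eps_t with standard Gaussian noise,
# then recover phi by least squares on the lagged regression.
random.seed(42)
phi, n = 0.6, 5000
x = [0.0]
for _ in range(n):
    x.append(phi * x[-1] + random.gauss(0, 1))

num = sum(x[t] * x[t - 1] for t in range(1, len(x)))
den = sum(x[t - 1] ** 2 for t in range(1, len(x)))
phi_hat = num / den
print(round(phi_hat, 2))  # close to the true phi = 0.6
```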
Date: October 6, 2023, 3:30 PM
Place: Centre de recherches mathématiques, Pavillon André-Aisenstadt, Université de Montréal, Room 6214
Wednesday, October 4, 2023
October 4-7, 2023
The relative Langlands program was born out of the methods used to study automorphic L-functions by representing them by various integrals of automorphic forms. The work of Jacquet, D. Prasad, and
others, highlighted the connections between Langlands functoriality and the problem of distinction - or, harmonic analysis on certain (almost) homogeneous G-spaces X, such as spherical varieties. The
conjectures of Gan-Gross-Prasad and Ichino-Ikeda, based on work of Waldspurger and many others, revealed a pattern that relates global integrals of automorphic forms to local harmonic analysis. The
work of Gaitsgory-Nadler and Sakellaridis-Venkatesh allowed the formulation of a general program, based on the dual group of a spherical variety. The goal of this workshop will be to introduce the
more recent work of Ben-Zvi-Sakellaridis-Venkatesh, which takes a step further, introducing a categorical version of the relative Langlands program.
Friday, September 29, 2023
On strong solutions of time inhomogeneous Itô's equations with Morrey diffusion gradient and drift and related PDEs results. A supercritical case.
We prove the existence of strong solutions of Itô's time-dependent stochastic equations with irregular diffusion and drift terms in Morrey spaces. Strong uniqueness is also discussed. The results
are new even if there is no drift, and are based on the solvability of parabolic equations with Morrey drift in Morrey spaces, which is also new.
Date: September 29, 2023, 3:30 PM
Place: Centre de recherches mathématiques, Pavillon André-Aisenstadt, Université de Montréal, Room 5340
Friday, September 15, 2023
Branching processes in random matrix theory and analytic number theory
The limiting distributions for maxima of independent random variables have been classified during the first half of last century. This classification does not extend to strong interactions, in
particular to the flurry of processes with natural logarithmic (or multiscale) correlations. These include branching random walks or the 2d Gaussian free field. More recently, Fyodorov, Hiary and
Keating (2012) exhibited new examples of log-correlated phenomena in number theory and random matrix theory. As a result (and as a testing ground of their observations) they have formulated very
precise conjectures about maxima of the characteristic polynomial of random matrices, and the maximum of L-functions on a typical interval of the critical line. I will describe the recent progress towards
these conjectures in both the random and deterministic setting.
Date: September 15, 2023, 3:30 PM
Place: Centre de recherches mathématiques, Pavillon André-Aisenstadt, Université de Montréal, Room 5340
ISM Discovery School: Geometry and spectra of random hyperbolic surfaces
Monday, June 12, 2023
June 12-16, 2023
The study of lengths of closed geodesics and of the Laplace spectrum on hyperbolic surfaces was initiated by work of Huber in the 1970s, where he used the Selberg trace formula to determine various
asymptotics and bounds. Another major development came with Mirzakhani’s thesis and subsequent works, in which she developed new techniques to integrate functions over moduli spaces of hyperbolic
surfaces. This opened the way for proving results about random surfaces or for proving existence results based on probabilistic arguments. In the past few years, there has been a renewed interest for
the Selberg trace formula, especially in combination with Mirzakhani’s integration technique. Various other models of random surfaces have also been studied with great success, and we now understand
the behaviour of several geometric invariants thanks to recent breakthroughs, but many open problems remain.
The goal of this discovery school is to introduce graduate students and advanced undergraduate students to the above circle of ideas and give them the chance to learn about recent advances from
leading experts in the field.
We support the statement of inclusiveness. Everyone is welcome to attend.
Friday, June 9, 2023
The organizing committee of the XXVth Graduate Student Conference of the Institut des Sciences Mathématiques is pleased to welcome you to the Université de Sherbrooke in June 2023! This conference
brings together the community of graduate students in mathematics from all of Quebec’s universities for a weekend each year. During the event, participants have the opportunity to attend five plenary
sessions, to present their own work if they wish, and to take part in social activities.
ISM Discovery School: Random trees, graphs and maps
Monday, June 5, 2023
June 5-9, 2023
Discrete probability explores the structure of the objects studied in discrete mathematics. In theory, the study of large random discrete structures can frequently be reduced to understanding a
uniform sample from a finite set. In practice, however, for structured discrete objects (such as trees, graphs and maps), understanding what a uniform sample typically looks like frequently involves
a rich interplay between combinatorial, probabilistic and algorithmic arguments. The courses in this discovery school will showcase this interplay, presenting both classical and recent results on the
asymptotic behaviour of large random structures. The schedule of lectures will be relatively light, leaving plenty of time for students to discuss together and deepen their understanding of the material.
Friday, May 12, 2023
Estimating individualized treatment rules without individual data in multicentre studies
Estimating individualized treatment rules is challenging, as the treatment effect heterogeneity of interest often suffers from low power. This motivates the use of very large datasets such as those
from multiple health systems or multicentre studies, which may raise concerns of data privacy. In this talk, I will introduce a statistical framework for the estimation of individualized treatment rules
and show how distributed regression can be used in combination with dynamic weighted regression to find an optimal individualized treatment rule whilst obscuring individual-level data. The robustness
of this approach and its flexibility to address local treatment practices will be shown in simulation. The work is motivated by, and illustrated with, an analysis of the U.K.’s Clinical Practice
Research Datalink on the treatment of depression.
Date: May 12, 2023, 3:30 PM
Place: Centre de recherches mathématiques, Pavillon André-Aisenstadt, Université de Montréal, Room 6214
Friday, May 5, 2023
The 2023 McGill (Bio)Statistics Research and Career Day will be held in person on Friday, May 5th. Students from McGill and other universities in Montreal are invited to give oral presentations about
their recent or ongoing research projects in statistics-related fields. We will also have three keynote speakers and an exciting career panel of professionals with different types of related work experience.
Friday, May 5, 2023
Eigenvalues and minimal surfaces
Eigenvalues of the Laplace operator of Euclidean domains govern many physical phenomena, including heat flow and sound propagation. In particular, various inequalities for Laplace eigenvalues have
fascinated mathematicians since the nineteenth century. The following question was first formulated by Lord Rayleigh in his “Theory of sound”: which planar domain of given area has the lowest first Dirichlet
eigenvalue? This is an example of an isoperimetric eigenvalue problem for planar domains. The focus of the present talk is on more general isoperimetric problems, where one considers surfaces
equipped with Riemannian metrics. More specifically, sharp upper bounds for Laplace and Steklov eigenvalues have been an active area of research for the past decade, largely due to their fascinating
connection to fundamental geometric objects, minimal surfaces. We will survey recent results exploring the applications of this connection both to minimal surface theory and to isoperimetric
eigenvalue problems, culminating in a surprising link between Laplace and Steklov spectra.
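Rayleigh's question was answered by the Faber-Krahn inequality: among planar domains of fixed area, the disk minimizes the first Dirichlet eigenvalue. As an illustrative numerical sketch (not part of the talk), the two closed-form eigenvalues for unit area can be compared directly:

```python
import math

# First Dirichlet eigenvalue of the unit-area square:
# lambda_1 = pi^2 (1/a^2 + 1/b^2) with side lengths a = b = 1.
lam_square = 2 * math.pi ** 2

# First Dirichlet eigenvalue of the unit-area disk:
# lambda_1 = (j_{0,1} / r)^2 with area pi r^2 = 1,
# where j_{0,1} is the first positive zero of the Bessel function J_0.
J01 = 2.404825557695773
r = 1 / math.sqrt(math.pi)
lam_disk = (J01 / r) ** 2

print(f"square: {lam_square:.4f}, disk: {lam_disk:.4f}")
# Faber-Krahn: the disk minimizes lambda_1 among unit-area domains
assert lam_disk < lam_square
```

The gap (about 19.74 versus 18.17) quantifies how much "rounder" domains ring at a lower fundamental tone.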
Date: May 5, 2023, 3:30 PM
Place: Centre de recherches mathématiques, Pavillon André-Aisenstadt, Université de Montréal, Room 5340
Friday, April 28, 2023
Aggregation-Diffusion and Kinetic PDEs for collective behavior: applications in the sciences
I will present a survey of micro, meso and macroscopic models where repulsion and attraction effects are included through pairwise potentials. I will discuss their interesting mathematical features
and applications in mathematical biology and engineering. Qualitative properties of local minimizers of the interaction energies are crucial in order to understand these complex behaviors. I will
showcase the breadth of possible applications with three different phenomena in applications: segregation, phase transitions and consensus.
Date: April 28, 2023, 3:30 PM
Place: Centre de recherches mathématiques, Pavillon André-Aisenstadt, Université de Montréal, Room 5340
Friday, April 21, 2023
The Search for New Physics
The standard model (SM) of particle physics explains nearly all experimental results to date. There is no doubt that it is correct. However, for a variety of reasons it is understood to be incomplete
– there must be physics beyond the SM. In this talk, I provide a brief review of the SM, discuss the reasons we believe it is incomplete, and present some examples of my contributions over the years
to this search for new physics.
Date: April 21, 2023, 3:30 PM
Place: Centre de recherches mathématiques, Pavillon André-Aisenstadt, Université de Montréal, Room 6214
Friday, March 31, 2023
Set Theory and Topological Algebra
We shall discuss some recent applications of set-theoretic and model-theoretic methods to the study of topological groups. In particular, we shall outline how ultrapowers can be used to solve old
problems of Comfort and van Douwen and introduce a new set-theoretic axiom to study convergence properties in topological groups. If time permits we may also briefly mention the use of Fraïssé theory
in the study of groups of homeomorphisms.
Date: March 31, 2023, 3:30 PM
Place: Centre de recherches mathématiques, Pavillon André-Aisenstadt, Université de Montréal, Room 5340
Friday, March 24, 2023
Laplace Eigenfunctions and the Frequency Function Method
A classical idea in the study of eigenfunctions of the Laplace-Beltrami operator is that they behave like polynomials of degree corresponding to the eigenvalue. In the first part of the talk we
present some recent results on eigenfunctions which confirm this idea. As a corollary, we formulate a local version of the celebrated Courant theorem on the number of nodal domains of
eigenfunctions. The second part of the talk is devoted to Dirichlet-Laplace eigenfunctions in subdomains of the Euclidean space. We give a sharp bound of the size of the zero set of eigenfunctions
under some mild assumptions on the regularity of the boundary. Versions of the almost monotonicity of the frequency function are important tools for both parts of the talk.
Date: March 24, 2023, 3:30 PM
Place: Centre de recherches mathématiques, Pavillon André-Aisenstadt, Université de Montréal, Room 5340
Friday, March 17, 2023
Sums of the Divisor Function and Random Matrix Distributions
The divisor function gives the number of positive divisors of a natural number. How can we go about understanding the behavior of this function when going over the natural numbers? In this talk we
will discuss strategies to better understand this function, issues related to the distribution of these values, and connections to the Riemann zeta function and some groups of random matrices.
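One concrete handle on the average behavior of the divisor function is Dirichlet's classical result that the mean of d(n) up to N is log N + 2γ − 1 plus a small error. A short sketch (illustrative only, unrelated to the talk's random matrix results) checks this numerically:

```python
import math

def divisor_counts(N):
    """Sieve d(n) for 1 <= n <= N by counting multiples of each divisor."""
    d = [0] * (N + 1)
    for i in range(1, N + 1):
        for m in range(i, N + 1, i):
            d[m] += 1
    return d

N = 10_000
d = divisor_counts(N)
mean = sum(d[1:]) / N

gamma = 0.5772156649015329  # Euler-Mascheroni constant
dirichlet = math.log(N) + 2 * gamma - 1  # Dirichlet's main term

print(f"mean d(n) for n <= {N}: {mean:.4f}, main term: {dirichlet:.4f}")
```

The two numbers agree to a few parts in a thousand; the fluctuations of the error term are where the connections to the zeta function and random matrices appear.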
Date: March 17, 2023, 3:30 PM
Place: Centre de recherches mathématiques, Pavillon André-Aisenstadt, Université de Montréal, Room 5340
Friday, March 10, 2023
Some expected and unexpected applications of Riemann surface theory in mathematical physics
Riemann surfaces and algebraic curves are ubiquitous in mathematics. Without attempting a review of the variety of ways in which they are useful, we will look at some specific examples and discuss
some curious links with physics and mathematical physics, including a link between the quantum harmonic oscillator and moduli spaces of Riemann surfaces, and a link between the periodic Toda chain,
Chebyshev polynomials, and Painlevé equations.
Date: March 10, 2023, 3:30 PM
Place: Centre de recherches mathématiques, Pavillon André-Aisenstadt, Université de Montréal, Room 5340
Wednesday, March 8, 2023
The CRM’s EDI committee warmly invites all mathematicians and statisticians who identify as women (students, postdocs or professors), and their friends, for a casual get together honouring
International Women’s Day and women in science. It will be an informal occasion to (re-) connect with each other and share experiences and ideas with friends and colleagues. We are happy to have as
distinguished guest Professor Vasilisa Shramchenko from Sherbrooke University who will also be giving the colloquium on Friday, March 10^th.
Where: Salon Maurice Labbé, 6^th Floor, Pavillon André-Aisenstadt, Université de Montréal
Date: March 8, 2023
Time: 4:00 PM – 6:00 PM
Friday, February 24, 2023
An introduction to rigid systems
A representation of a group G is said to be rigid, if it cannot be continuously deformed to a non-isomorphic representation. If G happens to be the fundamental group of a complex projective manifold,
rigid representations are conjectured (by Carlos Simpson) to be of geometric origin. In this talk I will outline the basic properties of rigid local systems and discuss several consequences of
Simpson‘s conjecture. I will then outline recent progress on these questions (joint work with Hélène Esnault) and briefly mention applications to geometry and number theory such as the recent
resolution of the André-Oort conjecture by Pila-Shankar-Tsimerman.
Date: February 24, 2023
Place: Centre de recherches mathématiques, Pavillon André-Aisenstadt, Université de Montréal, Room 5340
Friday, February 10, 2023
The Mathematical Foundations of Deep Learning: From Rating Impossibility to Practical Existence Theorems
Deep learning is having a profound impact on industry and scientific research. Yet, while this paradigm continues to show impressive performance in a wide variety of applications, its mathematical
foundations are far from being well established. In this talk, I will present recent developments in this area by illustrating two case studies.
First, motivated by applications in cognitive science, I will present “rating impossibility” theorems. They identify frameworks where deep learning is provably unable to generalize outside the
training set for the seemingly simple task of learning identity effects, i.e. classifying whether pairs of objects are identical or not.
Second, motivated by applications in scientific computing, I will illustrate “practical existence” theorems. They combine universal approximation results for deep neural networks with compressed
sensing and high-dimensional polynomial approximation theory. As a result, they yield sufficient conditions on the network architecture, the training strategy, and the number of samples able to
guarantee accurate approximation of smooth functions of many variables.
Time permitting, I will also discuss work in progress and open questions.
Date: February 10, 2023
Place: Centre de recherches mathématiques, Pavillon André-Aisenstadt, Université de Montréal, Room 5340
Friday, January 27, 2023
The distribution of Selmer groups of elliptic curves
The Goldfeld and Katz--Sarnak conjectures predict that 50% of elliptic curves have rank 0, that 50% have rank 1, and that the average rank of elliptic curves is 1/2 (the remaining 0% of elliptic
curves not interfering in the average). Successive works of Brumer, Heath-Brown, and Young, have approached this problem by studying the central values of the L functions of elliptic curves. In this
talk, we will take an algebraic approach, in which we study the ranks of elliptic curves via studying their Selmer groups.
Poonen and Stoll developed a beautiful model for the behaviours of p-Selmer groups of elliptic curves, and gave heuristics for all moments of the sizes of these groups.
In this talk, I will describe joint work with Manjul Bhargava and Ashvin Swaminathan, in which we prove that the second moment of the size of the 2-Selmer groups of elliptic curves is bounded above
by 15 (which is the constant predicted by Poonen--Stoll).
The conference will not be broadcast on Zoom.
Date: January 27, 2023
Place: McGill University, Burnside Hall, Room 1205
Friday, January 20, 2023
Ramsey Theory, Sparsity and Limits (Combinatorics and Model Theory)
Several combinatorial problems are treated in the context of model theory. We survey three such instances which were investigated recently, coming from Ramsey theory, sparsity of graphs and limits
of sequences of structures. These are diverse areas but share some properties where the connection to model theory is non-trivial and interesting. It also presents several open problems of interest
to both combinatorics and model theory.
Date: January 20, 2023, 3:30 pm
Place: Pavillon André Aisenstadt, Room 5340, 2920, chemin de la tour, Montréal
Friday, January 13, 2023
Sticky Kakeya sets, and the sticky Kakeya conjecture
A Kakeya set is a compact subset of R^n that contains a unit line segment pointing in every direction. The Kakeya conjecture asserts that such sets must have dimension n. This conjecture is closely
related to several open problems in harmonic analysis, and it sits at the base of a hierarchy of increasingly difficult questions about the behavior of the Fourier transform in Euclidean space.
There is a special class of Kakeya sets, called sticky Kakeya sets. Sticky Kakeya sets exhibit an approximate self-similarity at many scales, and sets of this type played an important role in Katz,
Łaba, and Tao's groundbreaking 1999 work on the Kakeya problem. In this talk, I will discuss a special case of the Kakeya conjecture, which asserts that sticky Kakeya sets must have dimension n. I
will discuss the proof of this conjecture in dimension 3. This is joint work with Hong Wang.
Date: January 13, 2023, 3:30 pm
Place: Pavillon André Aisenstadt, Room 5340, 2920, chemin de la tour, Montréal
Friday, January 6, 2023
The Seminars on Undergraduate Mathematics in Montreal (SUMM) is organized by undergraduate students from Montreal universities. The main objective is to create an environment facilitating exchange of
ideas and interests as well as allowing students to network.
The SUMM weekend is aimed at undergraduate students in mathematics or related domains. This year, the conference will be held from January 6 to January 8, 2023.
The weekend consist of two days of presentations given by undergraduate students and invited professors. The presentations can cover a broad range of subjects, from mathematical physics to the
applications of artificial intelligence as well as the history and philosophy of mathematics.
During the SUMM, students can choose to give a talk, or simply to attend presentations given by their peers. It's an occasion to share the passion for mathematics in a stimulating environment, while
networking with other passionate students over the weekend.
We hope to see you there!
Friday, December 9, 2022
Ergodic theory of the stochastic Burgers equation
I am interested in stationary distributions for the Burgers equation with random forcing. I will first consider an oversimplified random dynamical system to illustrate the power of a general approach
based on the so-called pullback procedure. For the Burgers equation, which is a basic evolutionary stochastic PDE of Hamilton-Jacobi type related to fluid dynamics, growth models, and the KPZ
equation, one can realize this approach via studying long-term properties of random Lagrangian action minimizers and directed polymer measures in random environments. The compact space case was
studied in the 2000s. This talk is based on my work on the noncompact case, joint with Eric Cator, Kostya Khanin, and Liying Li.
Date: December 9, 2022, 3:30 pm
Place: Pavillon André Aisenstadt, Room 5340, 2920, chemin de la tour, Montréal
Friday, December 2, 2022
p-adic methods for solving Diophantine equations
The problem of finding rational or integer solutions to polynomial equations is one of the oldest problems in mathematics and is one of the key driving forces in the development of Number Theory. In
the last 15 years new methods were developed that can sometimes effectively solve this problem. These methods attempt to find the solutions inside the larger set of solutions of the same equation in
the field of p-adic numbers as the vanishing set of some computable function. When these methods work they give the rational solutions to arbitrarily large p-adic precision, which usually suffices to
rigorously recover the full set of solutions.
I will survey the new methods, originating from the work of Kim and from the more recent work of Lawrence and Venkatesh. I will then explain my work with Muller and Srinivasan that uses a p-adic
version of the notion of norms on line bundles and associated heights, as used for example in arithmetic dynamics, to give a new approach to some Kim type results.
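The classical mechanism for computing solutions to arbitrarily large p-adic precision is Hensel/Newton lifting: a root modulo p lifts to a root modulo p^k with quadratically growing precision. The sketch below (illustrative only; it is not the Chabauty-Kim or Lawrence-Venkatesh machinery of the talk) lifts a square root of 2 modulo 7 up to modulus 7^10:

```python
def hensel_sqrt(a, p, k):
    """Lift a square root of a mod p to a root mod p^k via Newton iteration,
    assuming p is odd and a is a nonzero quadratic residue mod p."""
    # find a root mod p by brute force
    x = next(r for r in range(1, p) if (r * r - a) % p == 0)
    mod = p
    while mod < p ** k:
        mod = min(mod * mod, p ** k)  # precision doubles each step
        # Newton step: x <- x - (x^2 - a) / (2x)  (mod `mod`)
        inv = pow(2 * x, -1, mod)
        x = (x - (x * x - a) * inv) % mod
    return x

x = hensel_sqrt(2, 7, 10)
print(x, (x * x - 2) % 7 ** 10)  # second value is 0
```

Since 7^10 exceeds any reasonable height bound in this toy setting, the lifted residue pins down the 7-adic square roots of 2 exactly, mirroring how the p-adic methods recover rational points from finite-precision computations.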
Date: December 2, 2022, 3:30 pm
Place: Concordia University, Library Building, 9th floor, room LB 921-4
Friday, November 25, 2022
(Almost) all of Entity Resolution
Whether the goal is to estimate the number of people that live in a congressional district, to estimate the number of individuals that have died in an armed conflict, or to disambiguate individual
authors using bibliographic data, all these applications have a common theme — integrating information from multiple sources. Before such questions can be answered, databases must be cleaned and
integrated in a systematic and accurate way, commonly known as record linkage, de-duplication, or entity resolution. In this article, we review motivational applications and seminal papers that have
led to the growth of this area. Specifically, we review the foundational work that began in the 1940’s and 50’s that have led to modern probabilistic record linkage. We review clustering approaches
to entity resolution, semi- and fully supervised methods, and canonicalization, which are being used throughout industry and academia in applications such as human rights, official statistics,
medicine, citation networks, among others. Finally, we discuss current research topics of practical importance. This is joint work with Olivier Binette.
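A toy sketch of the entity-resolution task is given below. It is illustrative only: real probabilistic record linkage uses per-field match weights in the Fellegi-Sunter style rather than a single string-similarity threshold, and the clustering here is a naive transitive closure.

```python
from difflib import SequenceMatcher

def similarity(a, b):
    """Normalized edit-based similarity in [0, 1]."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def resolve(records, threshold=0.85):
    """Group records whose pairwise similarity exceeds the threshold,
    taking the transitive closure via union-find."""
    parent = list(range(len(records)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    for i in range(len(records)):
        for j in range(i + 1, len(records)):
            if similarity(records[i], records[j]) >= threshold:
                parent[find(i)] = find(j)

    clusters = {}
    for i in range(len(records)):
        clusters.setdefault(find(i), []).append(records[i])
    return list(clusters.values())

names = ["John A. Smith", "Jon A. Smith", "John Smith", "Olivier Binette"]
print(resolve(names))  # the three Smith variants form one cluster
```

Even this crude version shows the central difficulty of the field: the threshold trades false links against missed links, and transitive closure can chain unrelated records together.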
Date: November 25, 2022
To obtain the zoom link: https://forms.gle/8cqLPYuXwiqdVU1L8
Friday, November 18, 2022
Quantitative Stability in the Calculus of Variations
Among all subsets of Euclidean space with a fixed volume, balls have the smallest perimeter. Furthermore, any set with nearly minimal perimeter is geometrically close, in a quantitative sense, to a
ball. This latter statement reflects the quantitative stability of balls with respect to the perimeter functional. We will discuss recent advances in quantitative stability and applications in
various contexts. The talk includes joint work with several collaborators and will be accessible to a broad research audience.
Date: November 18, 2022, 3:30 pm
Place: Pavillon André Aisenstadt, Room 5340, 2920, chemin de la tour, Montréal
Friday, November 11, 2022
An equation of Diophantus
This is the story of an old equation of Diophantus, which will take us on an excursion along a branch of number theory that stretches over a large part of the subject's history. It will be our
motivation for a friendly introduction to some modern developments on torsion and ranks of elliptic curves.
Date: November 11, 2022, 3:30 pm
Place: Pavillon André Aisenstadt, Room 5340, 2920, chemin de la tour, Montréal
Friday, November 4, 2022
Complexity of Submanifolds and Colding-Minicozzi Entropy
Given a submanifold of Euclidean space, Colding and Minicozzi defined its entropy to be the supremum of the Gaussian weighted surface areas of all of its translations and dilations. While initially
introduced to study singularities of mean curvature flow, it has proven to be an interesting geometric measure of complexity. In this talk I will survey some of the recent progress made on studying
the Colding-Minicozzi entropy of hypersurfaces. In particular, I will discuss a series of work by Lu Wang and myself showing closed hypersurfaces with small entropy are simple in various senses.
Date: November 4, 2022, 3:30 pm
Place: Pavillon André Aisenstadt, Room 5340, 2920, chemin de la tour, Montréal
Friday, October 28, 2022
Computer Model Emulation using Deep Gaussian Processes
Computer models are often used to explore physical systems. Increasingly, there are cases where the model is fast, the code is not readily accessible to scientists, but a large suite of model
evaluations is available. In these cases, an “emulator” is used to stand in for the computer model. This work was motivated by a simulator for the chirp mass of binary black hole mergers where no
output is observed for large portions of the input space and more than 10^6 simulator evaluations are available. This poses two problems: (i) the need to address the discontinuity when observing no
chirp mass; and (ii) performing statistical inference with a large number of simulator evaluations. The traditional approach for emulation is to use a stationary Gaussian process (GP) because it
provides a foundation for uncertainty quantification for deterministic systems. We explore the impact of the choices when setting up the deep GP on posterior inference, apply the proposed approach to
the real application and propose a sequential design approach for identifying new simulations.
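As a baseline for what the deep GP generalizes, a minimal stationary GP emulator can be sketched as follows. This is an illustration with an assumed toy simulator f and hand-picked kernel parameters, not the authors' implementation:

```python
import numpy as np

def rbf(X1, X2, ls=0.3, var=1.0):
    """Squared-exponential (stationary) covariance kernel."""
    d2 = (X1[:, None] - X2[None, :]) ** 2
    return var * np.exp(-0.5 * d2 / ls ** 2)

def gp_emulate(x_train, y_train, x_test, noise=1e-6):
    """Posterior mean and variance of a zero-mean GP emulator."""
    K = rbf(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = rbf(x_test, x_train)
    Kss = rbf(x_test, x_test)
    alpha = np.linalg.solve(K, y_train)
    mean = Ks @ alpha
    cov = Kss - Ks @ np.linalg.solve(K, Ks.T)
    return mean, np.diag(cov)

# emulate an "expensive" simulator f from a handful of runs
f = lambda x: np.sin(2 * np.pi * x)
x_train = np.linspace(0, 1, 10)
x_test = np.array([0.25, 0.5])
mean, var = gp_emulate(x_train, f(x_train), x_test)
print(mean)  # close to f(x_test) = [1, 0]
```

A stationary kernel like this assumes the same smoothness everywhere, which is exactly what breaks down when the simulator output is discontinuous over part of the input space; deep GPs address this by warping the inputs through intermediate GP layers.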
Date: October 28, 2022, 3:30 pm
Place: HEC, room Hélène-Desmarais, chemin de la Côte-Sainte-Catherine
Friday, October 14, 2022
New isoperimetric inequalities and their applications to systolic geometry and minimal surfaces (André Aisenstadt Prize 2022)
I will describe two new isoperimetric inequalities for k-dimensional submanifolds of R^n or a Banach space. As a consequence of one we obtain a new systolic inequality that was conjectured by Larry
Guth. As a consequence of another, we obtain an asymptotic formula for volumes of minimal submanifolds that was conjectured by Mikhail Gromov. The talk is based on joint works with Boris Lishak,
Alexander Nabutovsky and Regina Rotman; Fernando Marques and Andre Neves; Larry Guth.
Date: October 14, 2022, 3:30 pm
Place: Pavillon André Aisenstadt, Room 1355, 2920, chemin de la tour, Montréal
Register for zoom link.
Friday, October 7, 2022
Random plane geometry - a gentle introduction
Consider Z^2, and assign a random length of 1 or 2 to each edge based on independent fair coin tosses. The resulting random geometry, first passage percolation, is conjectured to have a scaling limit.
Most random plane geometric models (including hidden geometries) should have the same scaling limit.
I will explain the basics of the limiting geometry, the "directed landscape", the central object in the class of models named after Kardar, Parisi and Zhang.
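The model in the first paragraph can be simulated directly: fix i.i.d. coin-toss edge lengths on a finite grid and compute passage times with Dijkstra's algorithm. This is a sketch for illustration; the conjectured KPZ-scale fluctuations of order n^(1/3) are far beyond what such small experiments can confirm.

```python
import heapq
import random

def fpp_distance(n, seed=0):
    """First-passage distance from (0,0) to (n,n) on the (n+1)x(n+1) grid,
    with i.i.d. edge lengths 1 or 2 from fair coin tosses."""
    rng = random.Random(seed)
    weight = {}  # lazily sampled, but fixed once drawn (quenched randomness)

    def w(u, v):
        e = (u, v) if u < v else (v, u)
        if e not in weight:
            weight[e] = rng.choice((1, 2))
        return weight[e]

    dist = {(0, 0): 0}
    pq = [(0, (0, 0))]
    while pq:
        d, u = heapq.heappop(pq)
        if u == (n, n):
            return d
        if d > dist[u]:
            continue  # stale heap entry
        x, y = u
        for v in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= v[0] <= n and 0 <= v[1] <= n:
                nd = d + w(u, v)
                if nd < dist.get(v, float("inf")):
                    dist[v] = nd
                    heapq.heappush(pq, (nd, v))
    return dist[(n, n)]

d = fpp_distance(50, seed=1)
print(d)  # necessarily between 100 (all edges length 1) and 200
assert 100 <= d <= 200
```

Averaging such distances over many seeds gives the linear growth rate; the directed landscape describes the universal structure of the fluctuations around it.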
Date: October 7, 2022, 3:30 pm
Place: Pavillon André Aisenstadt, Room 6214, 2920, chemin de la tour, Montréal
Register for zoom link.
Friday, September 30, 2022
(Prix SSC) Full likelihood inference for abundance from capture-recapture data: semiparametric efficiency and EM-algorithm
Capture-recapture experiments are widely used to collect data needed to estimate the abundance of a closed population. To account for heterogeneity in the capture probabilities, Huggins (1989) and
Alho (1990) proposed a semiparametric model in which the capture probabilities are modelled parametrically and the distribution of individual characteristics is left unspecified. A conditional
likelihood method was then proposed to obtain point estimates and Wald-type confidence intervals for the abundance. Empirical studies show that the small-sample distribution of the maximum
conditional likelihood estimator is strongly skewed to the right, which may produce Wald-type confidence intervals with lower limits that are less than the number of captured individuals or even negative.
In this talk, we present a full likelihood approach based on Huggins and Alho's model. We show that the null distribution of the empirical likelihood ratio for the abundance is asymptotically
chi-square with one degree of freedom, and the maximum empirical likelihood estimator achieves semiparametric efficiency. We further propose an expectation–maximization algorithm to numerically
calculate the proposed point estimate and empirical likelihood ratio function. Simulation studies show that the empirical-likelihood-based method is superior to the conditional-likelihood-based
method: its confidence interval has much better coverage, and the maximum empirical likelihood estimator has a smaller mean square error.
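For readers new to capture-recapture, a minimal two-occasion illustration with homogeneous capture probabilities is sketched below. The talk concerns the harder heterogeneous setting and empirical likelihood inference; this sketch only shows the classical bias-corrected Chapman estimator on simulated data.

```python
import random

def chapman(n1, n2, m):
    """Chapman's bias-corrected Lincoln-Petersen abundance estimator:
    n1, n2 = captures on occasions 1 and 2, m = recaptures."""
    return (n1 + 1) * (n2 + 1) / (m + 1) - 1

def simulate(N, p, seed=0):
    """Capture each of N individuals independently with probability p
    on each of two occasions; return (n1, n2, m)."""
    rng = random.Random(seed)
    s1 = {i for i in range(N) if rng.random() < p}
    s2 = {i for i in range(N) if rng.random() < p}
    return len(s1), len(s2), len(s1 & s2)

n1, n2, m = simulate(N=1000, p=0.3, seed=42)
print(chapman(n1, n2, m))  # should be near the true N = 1000
```

When capture probabilities vary across individuals, estimators like this one are biased, which is precisely why the semiparametric Huggins-Alho model and the full likelihood approach of the talk are needed.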
Date: September 30, 2022, 3:30 pm
Register for zoom link.
Friday, September 23, 2022
A story about pointwise ergodic theorems
Pointwise ergodic theorems provide a bridge between the global behaviour of a dynamical system and the local combinatorial statistics of the system at a point. Such theorems have been proven in
different contexts, but typically for actions of semigroups on a probability space. Dating back to Birkhoff (1931), the first known pointwise ergodic theorem states that for a measure-preserving
ergodic transformation T on a probability space, the mean of a function (its global average) can be approximated by taking local averages of the function at a point x over finite sets in the
forward-orbit of x, namely {x, Tx, ..., T^n x}. Almost a century later, we revisit Birkhoff's theorem and turn it backwards, showing that the averages along trees of possible pasts also approximate
the global average. This backward theorem for a single transformation surprisingly has applications to actions of free groups, which we will also discuss. This is joint work with Jenna Zomback.
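Birkhoff's theorem can be illustrated with the simplest ergodic system, an irrational rotation of the circle, where time averages along the forward orbit converge to the space average. This is a sketch of the classical forward theorem only; the backward/tree averages of the talk are not shown.

```python
import math

def birkhoff_average(f, x, alpha, n):
    """Average of f along the forward orbit x, Tx, ..., T^(n-1)x
    of the circle rotation T(y) = y + alpha (mod 1)."""
    total = 0.0
    for _ in range(n):
        total += f(x)
        x = (x + alpha) % 1.0
    return total / n

alpha = math.sqrt(2) - 1          # irrational, so the rotation is ergodic
f = lambda y: math.cos(2 * math.pi * y)
space_mean = 0.0                  # integral of f over [0, 1)
time_mean = birkhoff_average(f, x=0.1, alpha=alpha, n=100_000)
print(time_mean)  # converges to the space mean 0
```

For this rotation the convergence is very fast; for a general ergodic transformation Birkhoff's theorem only guarantees almost-everywhere convergence, with no rate.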
Date: September 23, 2022, 3:30 pm
Place: Pavillon André Aisenstadt, Room 5340, 2920, chemin de la tour, Montréal
Register for zoom link.
Friday, September 16, 2022
Modularity of Galois representations, from Ramanujan to Serre's conjecture and beyond
Ramanujan made a series of influential conjectures in his 1916 paper "On some arithmetical functions" on what is now called the Ramanujan τ function. A congruence Ramanujan observed for τ(n)
modulo 691 in the paper led to Serre and Swinnerton-Dyer developing a geometric theory of mod p modular forms. It was in the context of the theory of mod p modular forms that Serre made
his modularity conjecture, which was initially formulated in a letter of Serre to Tate in 1973.
I will describe the path from Ramanujan's work in 1916, to the formulation of a first version of Serre's conjecture in 1973, to its resolution in 2009 by Jean-Pierre Wintenberger and myself. I will
also try to indicate why this subject is very much alive and, in spite of all the progress, still in its infancy.
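The congruence Ramanujan observed, τ(n) ≡ σ₁₁(n) (mod 691) where σ₁₁(n) is the sum of the 11th powers of the divisors of n, can be checked directly from the product expansion Δ = q ∏ (1 − q^n)^24. A self-contained sketch:

```python
def tau_coeffs(N):
    """Coefficients tau(1..N) of Delta = q * prod_{n>=1} (1 - q^n)^24."""
    # P holds the power series prod (1 - q^n)^24 up to q^(N-1)
    P = [0] * N
    P[0] = 1
    for n in range(1, N):
        for _ in range(24):               # multiply by (1 - q^n), 24 times
            for k in range(N - 1, n - 1, -1):
                P[k] -= P[k - n]
    # Delta = q * P, so tau(m) is the coefficient of q^(m-1) in P
    return {m: P[m - 1] for m in range(1, N + 1)}

def sigma11(n):
    """Sum of the 11th powers of the divisors of n."""
    return sum(d ** 11 for d in range(1, n + 1) if n % d == 0)

tau = tau_coeffs(30)
assert tau[1] == 1 and tau[2] == -24 and tau[3] == 252
# Ramanujan's congruence: tau(n) = sigma_11(n) (mod 691)
for n in range(1, 31):
    assert (tau[n] - sigma11(n)) % 691 == 0
print("congruence verified for n <= 30")
```

That a coefficient of a modular form knows about divisor sums modulo 691 is exactly the kind of phenomenon the theory of mod p modular forms, and ultimately Serre's conjecture, was built to explain.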
Date: September 16, 2022, 3:30 pm
Place: Pavillon André Aisenstadt, Room 5340, 2920, chemin de la tour, Montréal
Register for zoom link.
Friday, September 2, 2022
Cluster algebras and knot theory
Cluster algebras are algebras of Laurent polynomials whose generators are obtained by a recursive process called mutation. One starts with a seed, a pair consisting of a set of
n variables, called a cluster, and an oriented graph on n points. The mutation of a seed replaces one variable at a time and modifies the graph, yielding a new seed. The cluster algebra is
generated by all the variables obtained by successive mutations, called cluster variables.
Cluster algebras were defined 20 years ago, and since then, links between them and various fields of research have been discovered. Recently, with my collaborator, we established a link
between knot theory and cluster algebras. This talk will first introduce cluster algebras and then present the connection between them and the Alexander polynomial of a knot.
Date: September 2, 2022, 3:30 pm
Place: Pavillon André Aisenstadt, room 6214/6254, Université de Montréal
Monday, July 4, 2022
The aim of this ISM discovery school is to explore different topics in representation theory through the lens of mutations.
UQAM, Montréal
July 4-8, 2022
Monday, June 6, 2022
There are relatively new developments connecting the geometry of Hessenberg varieties and symmetric functions and their associated combinatorics, which have shown that the (equivariant) geometry and
topology of Hessenberg varieties are intimately connected with a deep unsolved problem in the theory of symmetric (and quasisymmetric) functions called the Stanley-Stembridge conjecture.
The subject of Hessenberg varieties lies in the fruitful intersection of algebraic geometry, combinatorics, and geometric representation theory. A fundamental contribution in this area, over a decade
ago, was Julianna Tymoczko’s construction of an action of the symmetric group on the cohomology rings of regular semisimple Hessenberg varieties. Tymoczko’s action provided the first link between
Hessenberg varieties and symmetric functions because representations of symmetric groups give rise to symmetric functions via the Frobenius characteristic map.
The second link was developed via the notion of the chromatic symmetric function of a graph, introduced by Richard Stanley in 1995 as a generalization of the classic chromatic polynomial of a graph.
The Stanley-Stembridge conjecture concerns the structure of the chromatic symmetric functions of a special family of graphs; it states that the chromatic symmetric functions of these graphs are
non-negative linear combinations of elementary symmetric functions. This conjecture is still open.
A close relationship between chromatic symmetric functions and Hessenberg varieties was discovered by John Shareshian and Michelle Wachs, who associated a graph with each regular semisimple
Hessenberg variety and formulated a conjecture relating the chromatic symmetric function of the graph with the symmetric function associated with the Hessenberg variety via Tymoczko’s action. Their
conjecture has since been proved, and there is reason to hope that further progress on the Stanley-Stembridge conjecture can be made by better understanding the relation between these two areas.
The Summer School is aimed at graduate students specializing in geometry or combinatorics, and the goal is to introduce both the theories of Hessenberg varieties and of symmetric functions in such a
way that a student can have access to the exciting developments linking these areas.
Friday, May 27, 2022
The 24th edition of the Colloque Panquébécois de l’Institut des Sciences Mathématiques (ISM) will be held in person at Université Laval this May 27-29, 2022. The goal of this annual conference is to
bring together graduate students in mathematics from all of Quebec’s universities.
Participants are invited to give a 20 minute talk on a mathematical subject of their choice. In addition, four plenary talks will be given by professors. The event will also feature a talk by the
recipient of the Carl Herz prize, awarded by the ISM, and many social activities to give participants the opportunity to meet and network with other students in mathematics.
Friday, May 20, 2022
Mathematical analysis of dilute gases: derivation of the Boltzmann equation, fluctuations and large deviations
The evolution of a gas can be described by different models depending on the observation scale. A natural question, raised by Hilbert in his sixth problem, is whether these models provide consistent
predictions. In particular, for rarefied gases, it is expected that continuum laws of kinetic theory can be obtained directly from molecular dynamics governed by the fundamental principles of
mechanics. In the case of hard sphere gases, Lanford showed in 1975 that the Boltzmann equation emerges as the law of large numbers in the low density limit, at least for very short times. The goal
of this talk is to explain the heuristics of his proof and present recent progress in the understanding of this limiting process.
Date: May 20, 2022, 3:30 pm
Friday, May 6, 2022
Generic measure preserving transformations and descriptive set theory
The behavior of a measure preserving transformation, even a generic one, is highly non-uniform. In contrast to this observation, a different picture of a very uniform behavior of the closed group
generated by a generic measure preserving transformation has emerged. This picture included substantial evidence that pointed to these groups being all topologically isomorphic to a single group,
namely, L^0, the non-locally compact topological group of all Lebesgue measurable functions from [0,1] to the circle. In fact, Glasner and Weiss asked if this was the case.
We will describe the background touched on above, including the connections with Descriptive Set Theory. Further, we will indicate a proof of the following theorem that answers the Glasner--Weiss
question in the negative: for a generic measure preserving transformation T, the closed group generated by T is not topologically isomorphic to L^0.
Date: May 6, 2022, 3:30 pm
Friday, April 29, 2022
COVID-19 transmission models in the real world: models, data, and policy
Simple mathematical models of COVID-19 transmission gained prominence in the early days of the pandemic. These models provided researchers and policymakers with qualitative insight into the dynamics
of transmission and quantitative predictions of disease incidence. More sophisticated models incorporated new information about the natural history of COVID-19 disease and the interaction of
infected individuals with the healthcare system, to predict diagnosed cases, hospitalization, ventilator usage, and death. Models also provided intuition for discussions about outbreaks, vaccination,
and the effects of non-pharmaceutical interventions like social distancing guidelines and stay-at-home orders. But as the pandemic progressed, complex real-world interventions took effect, people
everywhere changed their behavior, and the usefulness of simple mathematical models of COVID-19 transmission diminished. This challenge forced researchers to think more broadly about empirical data
sources that could help predictive models regain their utility for guiding public policy. In this presentation, I will describe my view of the successes and failures of population-level transmission
models in the context of the COVID-19 pandemic. I will outline the evolution of a project to predict COVID-19 incidence in the state of Connecticut, from development of a transmission model to
engagement with public health policymakers and initiation of a new data collection effort. I argue that a new data source – passive measurement of close interpersonal contact via mobile device
location data – is a promising way to overcome many of the shortcomings of traditional transmission models. I conclude with a summary of the impact this work has had on the COVID-19 response in
Connecticut and beyond.
Date: April 29, 2022, 3:30 pm
Friday, April 22, 2022
Cactus groups and monodromy
The cactus group is a cousin of the braid group and shares many of its beautiful properties. It is the fundamental group of the moduli space of points on RP^1. It also acts on many collections of
combinatorial objects. I will explain how we use the cactus group to understand monodromy of eigenvectors for Gaudin algebras.
This conference will be held in hybrid mode (on site and by Zoom).
Date: April 22, 2022, 3:30 pm
Place: Pavillon André Ainsenstadt, room 6214/6254, Université de Montréal
For the Zoom link: Register
Friday, April 15, 2022
Some aspects of mean field games
Mean field game theory is the study of the dynamical behavior of a large number of agents in interaction. For instance, it can model the dynamics of a crowd, or the production of a renewable resource by
a large number of producers. The analysis of these models, first introduced in the economic literature under the terminology of "heterogeneous agent models", has known a spectacular development with
the pioneering works of Lasry and Lions and of Caines, Huang and Malhamé. The aim of the talk will be to illustrate the theory through a few models and present some of the main results and open problems.
Date: April 15, 2022, 3:00 pm
Friday, April 8, 2022
Hidden Variable Model for Universal Quantum Computation with Magic States on Qubits
We show that every quantum computation can be described by a probabilistic update of a probability distribution on a finite phase space. Negativity in a quasiprobability function is not required in
states or operations, which is a very unusual feature. Nonetheless, our result is consistent with Gleason's Theorem and the Pusey-Barrett-Rudolph theorem.
The reason I have chosen this subject for my talk is two-fold: (i) It gives the audience a glimpse of the quest to understand the quantum mechanical cause for speed-up in quantum computation, which
is one of the central questions on the theory side of the field, and (ii) Maybe there can be feedback from the audience. The structures underlying the above probabilistic model are the so-called
Lambda-polytopes, which are highly symmetric objects. At present we only know very few general facts about them. Help with analysing them would be appreciated!
Joint work with Michael Zurel and Cihan Okay.
Journal reference: Phys. Rev. Lett. 125, 260404 (2020)
Date: April 8, 2022, 3:30 pm
Friday, April 1, 2022
Gentle algebras, surfaces and a glimpse of homological mirror symmetry
Derived categories are in general not easy to parse. However, in certain cases, combinatorial models give a good picture of these categories. One such case is the bounded derived categories of
gentle algebras which can be represented in terms of curves and crossings of curves on surfaces. In this talk, we will give the construction of these surface models and briefly explain how they are
connected to the homological mirror symmetry programme. We will show how a combination of surface combinatorics and representation theory can give new insights into the associated categories.
Date: April 1, 2022, 2:00 pm
Friday, March 25, 2022
Making mathematics computer-checkable
In the last thirty years, computer proof verification has become a mature technology, with successes including the checking of the Four-Colour Theorem, the Odd Order Theorem, and Hales' proof of the
Kepler Conjecture. Recent advances such as the "Liquid Tensor Experiment" verifying a recent theorem of Scholze have provided further momentum, as likewise have promising experiments integrating
this technology with machine learning.
I will briefly describe some of these developments. I will then try to describe, more generally, what it *feels* like to carry out research-level computer verifications of mathematics proofs: the
level of expression one has access to, the ways one finds oneself interrogating and reorganizing a paper proof, the kinds of arguments which are more tedious (or less tedious!) than on paper.
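As a tiny, hypothetical taste of the level of expression involved (not an example from the talk), here is what fully machine-checked mathematics looks like in Lean 4:

```lean
-- Both statements are verified by Lean's kernel; `rfl` asks it to
-- check that the two sides compute to the same value.
example : 2 + 2 = 4 := rfl
example (n : Nat) : n + 0 = n := rfl
```

Research-level verifications are built from thousands of such checked steps, leaning on large libraries of formalized mathematics.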
Date: March 25, 2022, 3:30 pm
Friday, March 18, 2022
The importance of large deviations in non-equilibrium systems
At the end of the 19th century, Statistical Physics unified Newton's mechanics and thermodynamics. It gave a way to predict the amplitude of fluctuations around the physical laws which were
known at that time. Einstein, in his very first works, showed that measuring these fluctuations allowed one to estimate the size of atoms. His reasoning, which was at the origin of linear
response theory, applied to the black body, gave one of the first pieces of evidence of wave-particle duality in Quantum Mechanics. Statistical Physics also gives a framework to predict large deviations
for systems at equilibrium. In the last two decades, major efforts were devoted to extending our understanding of the statistical laws of fluctuations and large deviations to non-equilibrium systems.
This talk will present some of the recent progress.
Date: March 18, 2022, 3:30 pm
International Mathematics Day activities at the Centre de recherches mathématiques (CRM)
Monday, March 14, 2022
Happy International Math Day 2022 with the theme "Math Unites"! You can watch the 48-hour online coverage (March 14 around the world) and visit the best photos of the photo challenge.
Here are two events organized by the CRM.
1. Virtual Public Lecture by Francis Su (Harvey-Mudd College):
Mathematics for Human Flourishing
In particular, Francis Su has developed a friendship with a prisoner, Christopher, in a maximum security prison in the United States and this prisoner is waking up to mathematics.
2. This conference is preceded by the launch of a UNESCO toolkit that the CRM has developed, entitled "Mathematics for Action: Supporting Science-Based Decision", at 18:30 EDT. The kit can be viewed online.
The Canadian launch will be held on Monday, March 14, 2022, 18:30-19:15 EDT in hybrid mode. See program and registration
Friday, March 11, 2022
Algebra, geometry and combinatorics of link homology
Khovanov and Rozansky defined in 2005 a triply graded link homology theory which generalizes the HOMFLY-PT polynomial. In this talk, I will outline some known results and structures in Khovanov-Rozansky
homology, describe its connection to q,t-Catalan combinatorics and present several geometric models for some classes of links.
Date: March 11, 2022, 3:30 pm
Friday, February 18, 2022
Structure learning for Extremal graphical models
Extremal graphical models are sparse statistical models for multivariate extreme events. The underlying graph encodes conditional independencies and enables a visual interpretation of the complex
extremal dependence structure. For the important case of tree models, we provide a data-driven methodology for learning the graphical structure. We show that sample versions of the extremal
correlation and a new summary statistic, which we call the extremal variogram, can be used as weights for a minimum spanning tree to consistently recover the true underlying tree. Remarkably, this
implies that extremal tree models can be learned in a completely non-parametric fashion by using simple summary statistics and without the need to assume discrete distributions, existence of
densities, or parametric models for marginal or bivariate distributions. Extensions to more general graphs are also discussed.
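The tree-learning step described above reduces to a minimum spanning tree computation over pairwise summary statistics. The sketch below is an illustrative assumption, not the authors' code: a plain Prim implementation, with a hypothetical weight matrix `W` standing in for the empirical extremal variogram estimates.

```python
import heapq

def minimum_spanning_tree(weights):
    """Prim's algorithm on a dense symmetric weight matrix.

    Returns a list of edges (i, j) forming an MST. In the tree-learning
    method sketched here, weights[i][j] would be an empirical summary
    statistic such as the extremal variogram between variables i and j.
    """
    n = len(weights)
    in_tree = [False] * n
    in_tree[0] = True
    edges = []
    heap = [(weights[0][j], 0, j) for j in range(1, n)]
    heapq.heapify(heap)
    while heap and len(edges) < n - 1:
        w, i, j = heapq.heappop(heap)
        if in_tree[j]:
            continue  # j was already connected by a lighter edge
        in_tree[j] = True
        edges.append((i, j))
        for k in range(n):
            if not in_tree[k]:
                heapq.heappush(heap, (weights[j][k], j, k))
    return edges

# Hypothetical pairwise "extremal variogram" estimates for 4 variables:
W = [[0, 1, 4, 5],
     [1, 0, 2, 6],
     [4, 2, 0, 3],
     [5, 6, 3, 0]]
print(sorted(minimum_spanning_tree(W)))  # recovers the chain 0-1-2-3
```

The consistency result quoted above says that, with weights estimated from data, this spanning tree recovers the true underlying tree of the extremal graphical model.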
Date: February 18, 2022, 3:30 pm
Friday, February 11, 2022
Sticky particle dynamics
I will discuss the time evolution of a collection of particles that interact primarily through perfectly inelastic collisions. I will explain why this problem is tractable if the particles are
constrained to lie on a line versus if they are allowed to move freely in space. In particular, I'll also describe an equation at the heart of this difficulty which some researchers believe has been
solved and others do not. This topic has motivations in astronomy and connections with optimal mass transportation which I will touch upon if time permits.
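As a toy illustration of the tractable one-dimensional case, here is a minimal event-driven sketch (hypothetical code, not from the talk): particles move at constant velocity, and each colliding pair merges while conserving mass and momentum.

```python
def sticky_collide(particles):
    """Evolve point particles on a line under perfectly inelastic collisions.

    particles: list of (mass, position, velocity) tuples.
    Repeatedly finds the earliest collision between adjacent particles and
    merges the pair, conserving mass and momentum. Returns the final list
    once no two particles are still approaching each other.
    """
    parts = sorted(particles, key=lambda p: p[1])
    while True:
        # earliest collision time among adjacent, approaching pairs
        best = None
        for k in range(len(parts) - 1):
            m1, x1, v1 = parts[k]
            m2, x2, v2 = parts[k + 1]
            if v1 > v2:  # approaching
                t = (x2 - x1) / (v1 - v2)
                if best is None or t < best[0]:
                    best = (t, k)
        if best is None:
            return parts
        t, k = best
        # advance every particle to the collision time, then merge the pair
        parts = [(m, x + v * t, v) for (m, x, v) in parts]
        m1, x1, v1 = parts[k]
        m2, x2, v2 = parts[k + 1]
        merged = (m1 + m2, x1, (m1 * v1 + m2 * v2) / (m1 + m2))
        parts = parts[:k] + [merged] + parts[k + 2:]

# Two unit masses approaching head-on merge into one particle at rest:
print(sticky_collide([(1.0, 0.0, 1.0), (1.0, 1.0, -1.0)]))
```

In higher dimensions no such simple ordering of collisions exists, which is one face of the difficulty the talk describes.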
Date: February 11, 2022, 3:30 pm
Friday, February 4, 2022
Euler Systems and the Birch--Swinnerton-Dyer conjecture
L-functions are one of the central objects of study in number theory. There are many beautiful theorems and many more open conjectures linking their values to arithmetic problems. The most famous
example is the conjecture of Birch and Swinnerton-Dyer, which is one of the Clay Millennium Prize Problems. I will discuss this conjecture and some related open problems, and I will describe some
recent progress on these conjectures, using tools called "Euler systems".
Date: February 4, 2022, 12:00 pm
Friday, January 28, 2022
Risk assessment, heavy tails, and asymmetric least squares techniques
Statistical risk assessment, in particular in finance and insurance, requires estimating simple indicators to summarize the risk incurred in a given situation. Of most interest is to infer extreme
levels of risk so as to be able to manage high-impact rare events such as extreme climate episodes or stock market crashes. A standard procedure in this context, whether in the academic, industrial
or regulatory circles, is to estimate a well-chosen single quantile (or Value-at-Risk). One drawback of quantiles is that they only take into account the frequency of an extreme event, and in
particular do not give an idea of what the typical magnitude of such an event would be. Another issue is that they do not induce a coherent risk measure, which is a serious concern in actuarial and
financial applications. In this talk, after giving a leisurely tour of extreme quantile estimation, I will explain how, starting from the formulation of a quantile as the solution of an optimization
problem, one may come up with two alternative families of risk measures, called expectiles and extremiles, in order to address these two drawbacks. I will give a broad overview of their properties,
as well as of their estimation at extreme levels in heavy-tailed models, and explain why they constitute sensible alternatives for risk assessment using real data applications. This is based on
joint work with Abdelaati Daouia, Irène Gijbels, Stéphane Girard, Simone Padoan and Antoine Usseglio-Carleve.
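To make the optimization formulation concrete: the tau-expectile of a sample minimizes an asymmetrically weighted least-squares criterion, which is equivalent to a weighted-mean fixed-point equation. The sketch below is hypothetical illustration code (not the extreme-level estimators of the talk), solving that equation by naive fixed-point iteration.

```python
def expectile(xs, tau, iters=200):
    """Sample tau-expectile via the asymmetric least squares first-order
    condition: e solves  tau * sum(max(x - e, 0)) = (1 - tau) * sum(max(e - x, 0)),
    i.e. e is a weighted mean with weight tau above e and (1 - tau) below.
    Solved here by simple fixed-point iteration (a sketch, not an
    extreme-value estimator).
    """
    e = sum(xs) / len(xs)  # start from the sample mean
    for _ in range(iters):
        num = sum((tau if x > e else 1 - tau) * x for x in xs)
        den = sum((tau if x > e else 1 - tau) for x in xs)
        e = num / den
    return e

data = [1.0, 2.0, 3.0, 10.0]
print(expectile(data, 0.5))   # tau = 0.5 recovers the sample mean: 4.0
print(expectile(data, 0.9))   # a high-tau expectile leans toward the upper tail
```

Unlike a quantile, the expectile depends on the magnitude of all observations above it, which is exactly the property motivating its use as a risk measure.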
Date: January 28, 2022, 3:30 pm
Friday, January 21, 2022
The commuting variety and generic pipe dreams
Nobody knows whether the scheme "pairs of commuting n×n matrices" is reduced. I'll show how this scheme relates to matrix Schubert varieties, and give a formula for its equivariant cohomology class
(and that of many other varieties) using "generic pipe dreams" that I'll introduce. These interpolate between ordinary and bumpless pipe dreams. With those, I'll rederive both formulae (ordinary and
bumpless) for double Schubert polynomials. This work is joint with Paul Zinn-Justin.
Date: January 21, 2022, 2:00 pm
Friday, January 14, 2022
Looking at hydrodynamics through a contact mirror: From Euler to Turing and beyond
What physical systems can be non-computational? (Roger Penrose, 1989). Is hydrodynamics capable of calculations? (Cris Moore, 1991). Can a mechanical system (including the trajectory of a fluid)
simulate a universal Turing machine? (Terence Tao, 2017).
The movement of an incompressible fluid without viscosity is governed by the Euler equations. Its viscous analogue is given by the Navier-Stokes equations, whose regularity is one of the Millennium
Prize problems of the Clay Foundation. The trajectories of a fluid are complex. Can we measure their levels of complexity (computational, logical and dynamical)?
In this talk, we will address these questions. In particular, we will show how to construct a 3-dimensional Euler flow which is Turing complete. Undecidability of fluid paths is then a consequence of
the classical undecidability of the halting problem proved by Alan Turing back in 1936. This is another manifestation of complexity in hydrodynamics which is very different from the theory of chaos.
Our solution of Euler equations corresponds to a stationary solution or Beltrami field. To address this problem, we will use a mirror [5] reflecting Beltrami fields as Reeb vector fields of a contact
structure. Thus, our solutions import techniques from geometry to solve a problem in fluid dynamics. But how general are Euler flows? Can we represent any dynamics as an Euler flow? We will address
this universality problem using the Beltrami/Reeb mirror again and Gromov's h-principle. We will also consider the non-stationary case. These universality features illustrate the complexity of Euler
flows. However, this construction is not "physical" in the sense that the associated metric is not the Euclidean metric. We will announce a Euclidean construction and its implications for complexity
and undecidability.
These constructions [1,2,3,4] are motivated by Tao's approach to the problem of Navier-Stokes [7,8,9] which we will also explain.
[1] R. Cardona, E. Miranda, D. Peralta-Salas, F. Presas. Universality of Euler flows and flexibility of Reeb embeddings. https://arxiv.org/abs/1911.01963.
[2] R. Cardona, E. Miranda, D. Peralta-Salas, F. Presas. Constructing Turing complete Euler flows in dimension 3. Proc. Natl. Acad. Sci. 118 (2021) e2026818118.
[3] R. Cardona, E. Miranda, D. Peralta-Salas. Turing universality of the incompressible Euler equations and a conjecture of Moore. Int. Math. Res. Notices, 2021, rnab233, https://doi.org/10.109/
[4] R. Cardona, E. Miranda, D. Peralta-Salas. Computability and Beltrami fields in Euclidean space. https://arxiv.org/abs/2111.03559
[5] J. Etnyre, R. Ghrist. Contact topology and hydrodynamics I. Beltrami fields and the Seifert conjecture. Nonlinearity 13 (2000) 441–458.
[6] C. Moore. Generalized shifts: unpredictability and undecidability in dynamical systems. Nonlinearity 4 (1991) 199–230.
[7] T. Tao. On the universality of potential well dynamics. Dyn. PDE 14 (2017) 219–238.
[8] T. Tao. On the universality of the incompressible Euler equation on compact manifolds. Discrete Cont. Dyn. Sys. A 38 (2018) 1553–1565.
[9] T. Tao. Searching for singularities in the Navier-Stokes equations. Nature Rev. Phys. 1 (2019) 418–419.
Date: January 14, 2022, 11:00 am
Friday, December 17, 2021
Nonparametric causal mediation in a time-to-event setting
A causal mediation model with multiple time-to-event mediators is exemplified by the natural course of human disease marked by sequential milestones with a time-to-event nature. For example, from
hepatitis B infection to death, patients may experience intermediate events such as liver cirrhosis and liver cancer. The sequential events of hepatitis, cirrhosis, cancer, and death are susceptible
to right censoring; moreover, the latter events may preclude the former events. Casting the natural course of human diseases in the framework of causal mediation modeling, we establish a model with
intermediate and terminal events as the mediators and outcomes, respectively. We define the interventional analog of path-specific effects (iPSEs) as the effect of an exposure on a terminal event
mediated (or not mediated) by any combination of intermediate events without parametric models. The expression of a counting process-based counterfactual hazard is derived under the sequential
ignorability assumption. We employ composite nonparametric likelihood estimation to obtain maximum likelihood estimators for the counterfactual hazard and iPSEs. Our proposed estimators achieve
asymptotic unbiasedness, uniform consistency, and weak convergence. Applying the proposed method, we show that hepatitis B induced mortality is mostly mediated through liver cancer and/or cirrhosis
whereas hepatitis C induced mortality may be through extrahepatic diseases.
Date: December 17, 2021, 10:00 am
Friday, December 10, 2021
Stark's Conjectures and Hilbert's 12th Problem
In this talk we will discuss two central problems in algebraic number theory and their interconnections: explicit class field theory and the special values of L-functions. The goal of explicit class
field theory is to describe the abelian extensions of a ground number field via analytic means intrinsic to the ground field; this question lies at the core of Hilbert's 12th Problem. Meanwhile,
there is an abundance of conjectures on the values of L-functions at certain special points. Of these, Stark's Conjecture has relevance toward explicit class field theory. I will describe two
recent joint results with Mahesh Kakde on these topics. The first is a proof of the Brumer-Stark conjecture away from p=2. This conjecture states the existence of certain canonical elements in
abelian extensions of totally real fields. The second is a proof of an exact formula for Brumer-Stark units that has been developed over the last 15 years. We show that these units together with
other easily written explicit elements generate the maximal abelian extension of a totally real field, thereby giving a p-adic solution to the question of explicit class field theory for these fields.
Date: December 10, 2021, 2:00 pm
Friday, December 3, 2021
K3 surfaces: geometry and dynamics
K3 surfaces are a class of compact complex manifolds that enjoys many special properties and play an important role in several areas of mathematics. In this colloquium I will discuss a new interplay
between complex geometry and analysis on K3 surfaces equipped with their Calabi-Yau metrics, and dynamics of holomorphic diffeomorphisms of these surfaces, that Simion Filip and I have been
investigating recently.
Date: December 3, 2021, 3:30 pm
Friday, November 26, 2021
Adventures with Partial Identifications in Studies of Marked Individuals
Monitoring marked individuals is a common strategy in studies of wild animals (referred to as mark-recapture or capture-recapture experiments) and hard to track human populations (referred to as
multi-list methods or multiple-systems estimation). A standard assumption of these techniques is that individuals can be identified uniquely and without error, but this can be violated in many
ways. In some cases, it may not be possible to identify individuals uniquely because of the study design or the choice of marks. Other times, errors may occur so that individuals are incorrectly
identified. I will discuss work with my collaborators over the past 10 years developing methods to account for problems that arise when individuals are only partially identified. I will
present theoretical aspects of this research, including an introduction to the latent multinomial model and algebraic statistics, and also describe applications to studies of species ranging from the
golden mantella (an endangered frog endemic to Madagascar measuring only 20 mm) to the whale shark (the largest known species of fish, measuring up to 19 m).
Date: November 26, 2021, 3:30 pm
Friday, November 19, 2021
Exploring string vacua through geometric transitions
A fundamental problem in string theory is the multitude of distinct geometries which give rise to consistent solutions of the vacuum equations of motion. One possible resolution of this "vacuum
degeneracy" problem is the "fantasy" that the moduli space of string vacua is connected through the process of "geometric transitions". I will discuss some geometric problems associated to this
fantasy and their applications.
Date: November 19, 2021, 3:30 pm
Friday, November 12, 2021
Estimating the mean of a random vector
One of the most basic problems in statistics is the estimation of the mean of a random vector, based on independent observations. This problem has received renewed attention in the last few years,
both from statistical and computational points of view. In this talk, we review some recent results on the statistical performance of mean estimators that allow heavy tails and adversarial
contamination in the data. In particular, we are interested in estimators that have a near-optimal error in all directions in which the variance of the one dimensional marginal of the random vector
is not too small. The material of this talk is based on a series of joint papers with Shahar Mendelson.
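One classic construction behind such heavy-tail- and contamination-tolerant estimators is the median-of-means. The one-dimensional sketch below is a standard textbook illustration, not code from the papers with Mendelson.

```python
def median_of_means(xs, num_blocks):
    """Median-of-means: split the sample into blocks, average each block,
    and return the median of the block means. A single gross outlier can
    corrupt at most one block mean, so it cannot move the median much.
    """
    n = len(xs)
    size = n // num_blocks
    means = [sum(xs[i * size:(i + 1) * size]) / size for i in range(num_blocks)]
    means.sort()
    mid = num_blocks // 2
    if num_blocks % 2:
        return means[mid]
    return 0.5 * (means[mid - 1] + means[mid])

# 29 well-behaved points plus one gross outlier:
sample = [1.0] * 29 + [1000.0]
print(sum(sample) / len(sample))   # the empirical mean is ruined (~34.3)
print(median_of_means(sample, 5))  # median-of-means returns 1.0
```

The multivariate estimators discussed in the talk aim for this kind of robustness simultaneously in every direction of the random vector.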
Date: November 12, 2021, 3:30 pm
Place : HYBRIDE
This conference will be held in hybrid mode with a limited number of 21 participants on site. To reserve your place on a first-come, first-served basis, please use the link below.
On-site : CRM - Pavillon André Aisenstadt: Salle/ Room 5340
Vaccination passport and ID will be required
Friday, November 5, 2021
Mathematics has a history and a geography
The presentation is in two parts. First, we present the main results of our study on the "mathematical portrait" of Quebec students, carried out as part of the project En
avant math! (a joint CRM-CIRANO project supported by the Ministère des finances). This report is based, on the one hand, on the results of Quebec primary and secondary school students on the
international TIMSS and PISA tests and, on the other hand, on the situation of mathematics in Quebec universities as drawn from the data of the Bureau de coopération interuniversitaire
(BCI) (evolution of student enrolment and a portrait of students by both gender and status). The BCI data show that the typical student enrolled in mathematics in Quebec
universities is a white, male, Canadian citizen, and that total enrolment drops every year (except, perhaps, at the doctoral level). Where are the girls? Where are the students from
recent immigration? And yet, on the PISA and TIMSS tests, students from immigrant backgrounds perform better in Canada than Canadian-born students (the opposite holds for the OECD average).
In the second part, in light of this portrait of the student body, we will discuss the social stakes of making mathematics more inclusive. A collaborative research project with the
Inuit communities of Nunavik will illustrate our point.
Date: November 5, 2021, 3:30 pm
Friday, October 29, 2021
Opinionated practices for teaching reproducibility: motivation, guided instruction and practice
In the data science courses at the University of British Columbia, we define data science as the study, development and practice of reproducible and auditable processes to obtain insight from data.
While reproducibility is core to our definition, most data science learners enter the field with other aspects of data science in mind, for example predictive modelling, which is often one of the
most interesting topics to novices. This fact, along with the highly technical nature of the industry-standard reproducibility tools currently employed in data science, presents out-of-the-gate
challenges in teaching reproducibility in the data science classroom. Put simply, students are not as intrinsically motivated to learn this topic, and it is not an easy one for them to learn. What
can a data science educator do? Over several iterations of teaching courses focused on reproducible data science tools and workflows, we have found that providing extra motivation, guided
instruction and lots of practice are key to effectively teaching this challenging, yet important subject. Here we present examples of how we deeply motivate, effectively guide and provide ample
practice opportunities to data science students to effectively engage them in learning about this topic.
Date: October 29, 2021, 3:30 pm
Friday, October 15, 2021
Entropy along the Mandelbrot set
The notion of topological entropy, arising from information theory, is a fundamental tool to understand the complexity of a dynamical system. When the dynamical system varies in a family, the
natural question arises of how the entropy changes with the parameter.
In the last decade, W. Thurston introduced these ideas in the context of complex dynamics by defining the "core entropy" of a quadratic polynomial as the entropy of a certain forward-invariant set
of the Julia set (the Hubbard tree).
As we shall see, the core entropy is a purely topological/combinatorial quantity which nonetheless captures the richness of the fractal structure of the Mandelbrot set. In particular, we will relate
the variation of such a function to the geometry of the Mandelbrot set. We will also prove that the core entropy on the space of polynomials of a given degree varies continuously, answering a
question of Thurston.
Finally, we will provide a new interpretation of core entropy in terms of measured laminations and discuss its finer regularity properties such as its Hölder exponent.
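For a postcritically finite parameter, the core entropy can be computed as the logarithm of the spectral radius of the Markov transition matrix of the map acting on its Hubbard tree. The power-iteration sketch below uses a hypothetical 2×2 transition matrix as illustration (not a computation from the talk).

```python
import math

def spectral_radius(M, iters=100):
    """Power iteration for the leading eigenvalue of a nonnegative matrix.
    For a postcritically finite map, the core entropy equals the log of
    the spectral radius of its Markov transition matrix on the Hubbard tree.
    """
    n = len(M)
    v = [1.0] * n
    lam = 1.0
    for _ in range(iters):
        w = [sum(M[i][j] * v[j] for j in range(n)) for i in range(n)]
        lam = max(abs(x) for x in w)   # rescale by the max-norm each step
        v = [x / lam for x in w]
    return lam

# A hypothetical 2x2 transition matrix; its leading eigenvalue is the
# golden ratio, giving entropy log((1 + sqrt(5)) / 2) ~ 0.4812.
M = [[1, 1], [1, 0]]
print(math.log(spectral_radius(M)))
```

Tracking how this leading eigenvalue varies with the parameter is one concrete way to see the continuity and regularity questions addressed in the talk.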
Date: October 15, 2021, 3:30 pm
Friday, September 24, 2021
Deep down, everyone wants to be causal
Most researchers in the social, behavioral, and health sciences are taught to be extremely cautious in making causal claims. However, causal inference is a necessary goal in research for addressing
many of the most pressing questions around policy and practice. In the past decade, causal methodologists have increasingly been using and touting the benefits of more complicated machine learning
algorithms to estimate causal effects. These methods can take some of the guesswork out of analyses, decrease the opportunity for “p-hacking,” and may be better suited for more fine-tuned tasks such
as identifying varying treatment effects and generalizing results from one population to another. However, should these more advanced methods change our fundamental views about how difficult it is
to infer causality? In this talk I will discuss some potential advantages and disadvantages of using machine learning for causal inference and emphasize ways that we can all be more transparent in
our inferences and honest about their limitations.
Date: September 24, 2021, 3:00 pm
Monday, June 21, 2021
June 21-25, 2021
Fully Online
The aim of this school is to advance the participants' knowledge of, and enthusiasm for, algebraic combinatorics. Through high-level presentations, the students will learn multiple combinatorial
aspects linked to representation theory. Every day, a postdoctoral researcher will introduce a research topic tied to the introductory classes.
Schubert calculus, symmetric functions, cluster algebra, Tamari lattices, frieze combinatorics and cluster categories are not only ways to study representation theory, but have many links between
them. On one hand, cluster algebras, introduced by Sergey Fomin and Andrei Zelevinsky, can be studied using the combinatorics of friezes, on the other hand, they can be studied algebraically using
cluster categories. Moreover, they have a correspondence with double Bruhat cells. In the case of flag varieties and Grassmannians, the decomposition into Bruhat cells gives way to decomposition into
Schubert cells. These can be obtained using Schubert calculus. Schubert polynomials are a generalization of Schur functions, which are symmetric functions. Using subword complexes, Schubert
varieties are tied to the study of Tamari Lattices. These lattices correspond to exchange graphs of some cluster algebra.
Finally, our goal is to promote the visibility and accomplishment of women in mathematics. Even though the school is open to people of all genders, only women were invited to give lectures and talks.
It seems important to us to give students the occasion to interact with accomplished women in mathematics, since women are underrepresented among university mathematics teachers.
Monday, June 21, 2021
Narrowing the Gap: Addressing Mathematical Inequity in Indigenous Education
on Monday, June 21, 2021, National Indigenous Peoples Day
at 4:00 p.m. - 5:00 p.m. (Eastern time)
SPEAKER: Melania Alvarez (Pacific Institute for the Mathematical Sciences)
This lecture will be delivered in English.
In order to positively narrow the educational gap between the Indigenous communities and the rest of the population, there needs to be a continuous and long-term intervention for change. By leaving
behind the philosophy of reduced expectations, mathematical scientists and educators in Western Canada have introduced a variety of interesting and challenging programs. Our first step has been to
build partnerships with elders and schools run by Indigenous communities, as well as with urban public schools with a high concentration of at-risk students and Indigenous Students. With their input
and support, a variety of outreach programs have been implemented, which will be described in this talk.
Also visit the page of the CRM Equity, Diversity and Inclusion Committee
Monday, May 31, 2021
May 31 - June 3, 2021
The school will consist of four days of courses aimed primarily at upper undergraduate and MSc students who are interested in pursuing further university education, and curious about modern topics in
statistics that they are unlikely to have encountered in their training.
Saturday, May 22, 2021
The 23rd edition of the Colloque Panquébécois de l’Institut des Sciences Mathématiques (ISM) will be held online May 22-23, 2021. The goal of this annual conference is to bring together graduate
students in mathematics from all of Quebec’s universities.
Participants are invited to give a 20-minute talk on a subject of their interest within mathematics. In addition, four plenary talks will be given by professors. The conference will also feature a
talk by the recipient of the Carl Herz prize, awarded by the ISM.
Friday, April 30, 2021
Knots, polynomials and signatures
After a historical introduction to knot theory, the talk will be centered around two knot invariants, the Alexander polynomial and the signature. The aim is to introduce a finite abelian group that
controls their relationship, and to illustrate this by several examples. Using Seifert matrices, the geometric questions are translated into arithmetic ones.
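The signature invariant can be computed directly from a Seifert matrix V as the signature of the symmetrized matrix V + V^T, i.e. the number of positive minus negative eigenvalues. A minimal sketch for 2×2 Seifert matrices follows; the trefoil matrix and sign conventions are illustrative (conventions for chirality vary in the literature).

```python
def signature_from_seifert(V):
    """Knot signature from a 2x2 Seifert matrix V: the signature of the
    symmetric matrix S = V + V^T, computed here via trace/determinant
    (the eigenvalues of a 2x2 symmetric matrix in closed form).
    """
    S = [[V[i][j] + V[j][i] for j in range(2)] for i in range(2)]
    tr = S[0][0] + S[1][1]
    det = S[0][0] * S[1][1] - S[0][1] * S[1][0]
    disc = (tr * tr - 4 * det) ** 0.5   # real, since S is symmetric
    eigs = [(tr + disc) / 2, (tr - disc) / 2]
    return sum(1 for e in eigs if e > 0) - sum(1 for e in eigs if e < 0)

# A standard genus-1 Seifert matrix for the trefoil:
V_trefoil = [[-1, 1], [0, -1]]
print(signature_from_seifert(V_trefoil))  # -2
```

The Alexander polynomial arises from the same matrix, as det(V - t·V^T) up to units, which is one way the geometric questions become arithmetic ones.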
Date: April 30, 2021, 3:00 pm
Friday, April 23, 2021
Date: April 23, 2021, 3:00 pm
Friday, April 16, 2021
Reflected Brownian motion in a wedge: from probability theory to Galois theory of difference equations
We consider a reflected Brownian motion in a two-dimensional wedge. Under standard assumptions on the parameters of the model (opening of the wedge, angles of the reflections on the axes, drift), we
study the algebraic and differential nature of the Laplace transform of its stationary distribution. We derive necessary and sufficient conditions for this Laplace transform to be
rational, algebraic, differentially finite or more generally differentially algebraic. These conditions are explicit linear dependencies among the angles involved in the definition of the model.
To prove these results, we start from a functional equation that the Laplace transform satisfies, to which we apply tools from diverse horizons. To establish differential algebraicity, a key
ingredient is Tutte's invariant approach, which originates in enumerative combinatorics. To establish differential transcendence, we turn the functional equation into a difference equation and apply
Galoisian results on the nature of the solutions to such equations.
This is a joint work with M. Bousquet-Mélou, A. Elvey Price, S. Franceschi and C. Hardouin (https://arxiv.org/abs/2101.01562).
Date: April 16, 2021, 3:00 pm
Friday, April 9, 2021
Insect Flight from Newton's law to Neurons
Why do animals move the way they do? Bacteria, insects, birds, and fish share with us the necessity to move so as to live. Although each organism follows its own evolutionary course, it also obeys a
set of common laws. At the very least, the movement of animals, like that of planets, is governed by Newton’s law: All things fall. On Earth, most things fall in air or water, and their motions are
thus subject to the laws of hydrodynamics. Through trial and error, animals have found ways to interact with fluid so they can float, drift, swim, sail, glide, soar, and fly. This elementary struggle
to escape the fate of falling shapes the development of motors, sensors, and mind. Perhaps we can deduce parts of their neural computations by understanding what animals must do so as not to fall.
We have been seeking mechanistic explanations of the complex movement of insect flight. Starting from the Navier-Stokes equations governing the unsteady aerodynamics of flapping flight, we worked to
build a theoretical framework for computing flight and for studying the control of flight. I will discuss our recent computational and experimental studies of the balancing act of dragonflies and
fruit flies: how a dragonfly recovers from falling upside-down and how a fly balances in air. In each case, the physics of flight informs us about the neural feedback circuitries underlying
their fast reflexes.
Date: April 9, 2021, 3:00 pm
Friday, March 19, 2021
ABCD asymptotic expansion for lattice Boltzmann schemes and application to compressible Navier-Stokes equations
After recalling various elements of the history of the construction of lattice Boltzmann schemes, we present our "ABCD" approach, based on the fact that the numerical scheme is exact for the
advection equation with the lattice velocities. This asymptotic analysis makes it possible to write, at each order, the conservative partial differential equations equivalent to the scheme. In
favorable cases, a suitable tuning of parameters yields an accurate approximation of the equations of compressible fluids.
Date: March 19, 2021, 3:00 pm
Friday, March 12, 2021
With its 2021 theme "Mathematics for a Better World", UNESCO's International Mathematics Day highlights the role of the mathematical sciences in facing challenges in areas such as artificial
intelligence, prediction models, climate change, screening, and equitable sharing, as well as in improving the quality of life in many unexpected ways. The CRM has brought together outstanding
speaker-organizers who each explore this year's theme in their own way.
Friday, March 12, 2021
Nonparametric Tests for Informative Selection in Complex Surveys
Informative selection, in which the distribution of response variables given that they are sampled is different from their distribution in the population, is pervasive in complex surveys. Failing to
take such informativeness into account can produce severe inferential errors, including biased and inconsistent estimation of population parameters. While several parametric procedures exist to test
for informative selection, these methods are limited in scope and their parametric assumptions are difficult to assess. We consider two classes of nonparametric tests of informative selection. The
first class is motivated by classic nonparametric two-sample tests. We compare weighted and unweighted empirical distribution functions and obtain tests for informative selection that are analogous
to Kolmogorov-Smirnov and Cramer-von Mises. For the second class of tests, we adapt a kernel-based learning method that compares distributions based on their maximum mean discrepancy. The
asymptotic distributions of the test statistics are established under the null hypothesis of noninformative selection. Simulation results show that our tests have power competitive with existing
parametric tests in a correctly specified parametric setting, and better than those tests under model misspecification. A recreational angling application illustrates the methodology.
This is joint work with Teng Liu, Colorado State University.
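The first class of tests above can be sketched in a few lines: compare the unweighted empirical distribution function of the sampled responses with its survey-weighted (Hájek) counterpart, and take the largest gap, in the spirit of Kolmogorov-Smirnov. The following is a minimal illustration of that contrast, not the authors' implementation; in practice the critical value would come from the asymptotic theory or a resampling scheme.

```python
def ks_informative_selection(y, w):
    """Kolmogorov-Smirnov-type contrast between the unweighted EDF and the
    Hajek (weight-normalized) EDF of sampled responses y with survey
    weights w. A large value suggests informative selection."""
    n = len(y)
    wsum = sum(w)
    order = sorted(range(n), key=lambda i: y[i])
    f_unw = 0.0  # unweighted EDF, evaluated at the order statistics
    f_haj = 0.0  # Hajek-weighted EDF
    gap = 0.0
    for i in order:
        f_unw += 1.0 / n
        f_haj += w[i] / wsum
        gap = max(gap, abs(f_unw - f_haj))
    return gap

# Equal weights make the two EDFs coincide, so the statistic vanishes.
print(ks_informative_selection([3, 1, 2], [1, 1, 1]))  # -> 0.0
```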
Date: March 12, 2021, 3:30 pm
Monday, March 8, 2021
This event, which will be held on International Women's Day, aims to raise awareness of the career challenges experienced by women mathematicians during the pandemic.
It will also serve as an occasion to bring together the CRM community to honour the achievements of women in mathematics.
Four outstanding mathematicians will highlight their recent work, as well as the unusual circumstances that either led to it or challenged it. These talks will be followed by an informal panel
discussion held in the evening, with a focus on the perspective of junior women mathematicians.
Friday, March 5, 2021
Secrets of the Surface: The Mathematical Vision of Maryam Mirzakhani examines the life and mathematical work of Maryam Mirzakhani, an Iranian immigrant to the United States who became a superstar in
her field. In 2014, she was both the first woman and the first Iranian to be honored by mathematics’ highest prize, the Fields Medal.
Mirzakhani’s contributions are explained in the film by leading mathematicians and illustrated by animated sequences. Her mathematical colleagues from around the world, as well as former teachers,
classmates, and students in Iran today, convey the deep impact of her achievements. She is a true inspiration.
Date/Time: March 5, 2021, 7:30 PM
Length: 60 min
Online: https://webinaires.uqam.ca/mathflix
Friday, February 26, 2021
Analytic solutions to algebraic equations, and a conjecture of Kobayashi
A projective algebraic variety is defined as the zero locus of a finite family of homogeneous polynomials. Over the field of complex numbers, the geometry of such varieties is governed to a large
extent by the sign, in a suitable sense, of the Ricci curvature form. When this sign is negative, the variety is expected to exhibit certain hyperbolicity properties in the sense of Kobayashi - as
well as further very deep number-theoretic properties that are mostly conjectural, in the arithmetic situation. In particular, all entire holomorphic curves drawn on it should be contained in a
proper algebraic subvariety: this is a famous conjecture of Green-Griffiths and Lang. Following recent ideas of D. Brotbek, we will try to explain here a rather elementary proof of a related
conjecture of Kobayashi, stating that a general algebraic hypersurface of sufficiently high degree is hyperbolic, i.e. does not contain any entire holomorphic curve.
Date: February 26, 2021, 3:00 pm
Friday, February 19, 2021
Local smoothing for the wave equation
The local smoothing problem asks how much solutions to the wave equation can focus. It was formulated by Chris Sogge in the early 90s. Hong Wang, Ruixiang Zhang, and I recently proved the
conjecture in two dimensions. In the talk, we will build up some intuition about waves to motivate the conjecture, and then discuss some of the obstacles and some ideas from the proof.
Date: February 19, 2021, 3:30 pm
Friday, February 12, 2021
Spatio-temporal methods for estimating subsurface ocean thermal response to tropical cyclones
Tropical cyclones (TCs), driven by heat exchange between the air and sea, pose a substantial risk to many communities around the world. Accurate characterization of the subsurface ocean thermal
response to TC passage is crucial for accurate TC intensity forecasts and for understanding the role TCs play in the global climate system, yet that characterization is complicated by the high-noise
ocean environment, correlations inherent in spatio-temporal data, relative scarcity of in situ observations and the entanglement of the TC-induced signal with seasonal signals. We present a general
methodological framework that addresses these difficulties, integrating existing techniques in seasonal mean field estimation, Gaussian process modeling, and nonparametric regression into a
functional ANOVA model. Importantly, we improve upon past work by properly handling seasonality, providing rigorous uncertainty quantification, and treating time as a continuous variable, rather
than producing estimates that are binned in time. This functional ANOVA model is estimated using in situ subsurface temperature profiles from the Argo fleet of autonomous floats through a multi-step
procedure, which (1) characterizes the upper ocean seasonal shift during the TC season; (2) models the variability in the temperature observations; (3) fits a thin plate spline using the variability
estimates to account for heteroskedasticity and correlation between the observations. This spline fit reveals the ocean thermal response to TC passage. Through this framework, we obtain new
scientific insights into the interaction between TCs and the ocean on a global scale, including a three-dimensional characterization of the near-surface and subsurface cooling along the TC storm
track and the mixing-induced subsurface warming on the track's right side. Joint work with Addison Hu, Ann Lee, Donata Giglio and Kimberly Wood.
Date: February 12, 2021, 3:30 pm
Friday, February 5, 2021
Symmetry, Barcodes, and Hamiltonian dynamics
In the early 60s Arnol'd conjectured that Hamiltonian diffeomorphisms, the motions of classical mechanics, often possess more fixed points than required by classical topological considerations.
In the late 80s and early 90s Floer developed a powerful theory to approach this conjecture, considering fixed points as critical points of a certain functional. Recently, in joint work with L.
Polterovich, we observed that Floer theory filtered by the values of this functional fits into the framework of persistence modules and their barcodes, originating in data sciences. I will review
these developments and their applications, which arise from a natural time-symmetry of Hamiltonians. This includes new constraints on one-parameter subgroups of Hamiltonian diffeomorphisms, as well
as my recent solution of the Hofer-Zehnder periodic points conjecture. The latter combines barcodes with equivariant cohomological operations in Floer theory recently introduced by Seidel to form a
new method with further consequences.
Date: February 5, 2021, 3:00 pm
Friday, January 29, 2021
Small Area Estimation in Low- and Middle-Income Countries
The under-five mortality rate (U5MR) is a key barometer of the health of a nation. Unfortunately, many people living in low- and middle-income countries are not covered by civil registration
systems. This makes estimation of the U5MR, particularly at the subnational level, difficult. In this talk, I will describe models that have been developed to produce the official United Nations
(UN) subnational U5MR estimates in 22 countries. Estimation is based on household surveys, which use stratified, two-stage cluster sampling. I will describe a range of area- and unit-level models
and describe the rationale for the modeling we carry out. Data sparsity in time and space is a key challenge, and smoothing models are vital. I will discuss the advantages and disadvantages of
discrete and continuous spatial models, in the context of estimation at the scale at which health interventions are made. Other issues that will be touched upon include: design-based versus
model-based inference; adjustments for HIV epidemics; the inclusion of so-called indirect (summary birth history) data; reproducibility through software availability; benchmarking; how to deal with
incomplete geographical data; and working with the UN to produce estimates.
Date: January 29, 2021, 3:00 pm
Friday, January 22, 2021
Mean curvature flow through neck-singularities
A family of surfaces moves by mean curvature flow if the velocity at each point is given by the mean curvature vector. Mean curvature flow first arose as a model of evolving interfaces and has been
extensively studied over the last 40 years. In this talk, I will give an introduction and overview for a general mathematical audience. To gain some intuition we will first consider the
one-dimensional case of evolving curves. We will then discuss Huisken’s classical result that the flow of convex surfaces always converges to a round point. On the other hand, if the initial
surface is not convex we will see that the flow typically encounters singularities. Getting a hold of these singularities is crucial for most striking applications in geometry, topology and
physics. Specifically, singularities can be either of neck-type or conical-type. We will discuss examples from the 90s, which show, both experimentally and theoretically, that flow through conical
singularities is utterly non-unique. In the last part of the talk, I will report on recent work with Kyeongsu Choi, Or Hershkovits and Brian White, where we proved that mean curvature flow through
neck-singularities is unique. The key for this is a classification result for ancient asymptotically cylindrical flows that describes all possible blowup limits near a neck-singularity. In
particular, this confirms the mean-convex neighborhood conjecture. Assuming Ilmanen’s multiplicity-one conjecture, we conclude that for embedded two-spheres mean curvature flow through singularities
is well-posed.
Date: January 22, 2021, 3:00 pm
Saturday, January 9, 2021
The Seminars on Undergraduate Mathematics in Montreal (SUMM) is organized by undergraduate students from Montreal universities. The main objective is to create an environment facilitating exchange of
ideas and interests as well as allowing students to network. For the twelfth edition, we aim to unite the undergraduate mathematical community in Montreal, Quebec, and their surroundings.
SUMM is for undergraduate students in mathematics or related domains. This year, the conference will be held on January 9 and 10, online due to the pandemic.
The weekend consists of two days of presentations given by undergraduate students and invited professors. The presentations cover a broad range of subjects, from mathematical physics to the
applications of artificial intelligence, as well as the history and philosophy of mathematics.
During the SUMM, students can give a talk or simply attend presentations from their peers. It's an occasion to share the passion for mathematics in a stimulating environment, while networking with
other passionate students over the weekend.
Friday, November 27, 2020
Moduli of unstable objects in algebraic geometry
Moduli spaces arise naturally in classification problems in geometry. The study of the moduli spaces of nonsingular complex projective curves (or equivalently of compact Riemann surfaces) goes back
to Riemann himself in the nineteenth century. The construction of the moduli spaces of stable curves of fixed genus is one of the classical applications of Mumford's geometric invariant theory (GIT),
developed in the 1960s; many other moduli spaces of 'stable' objects can be constructed using GIT and in other ways. A projective curve is stable if it has only very mild singularities (nodes) and
its automorphism group is finite; similarly in other contexts stable objects are usually better behaved than unstable ones.
The aim of this talk is to explain how recent methods from a version of GIT for non-reductive group actions can help us to classify singular curves in such a way that we can construct moduli spaces
of unstable curves (of fixed type). More generally our aim is to use suitable 'stability conditions' to stratify other moduli stacks into locally closed strata with coarse moduli spaces. The talk is
based on joint work with Gergely Berczi, Vicky Hoskins and Joshua Jackson.
Date: November 27, 2020, 3:00 pm
Friday, November 20, 2020
Hodge Theory of p-adic varieties
p-adic Hodge Theory is one of the most powerful tools in modern Arithmetic Geometry. In this talk, I will review p-adic Hodge Theory of algebraic varieties, present current developments in p-adic
Hodge Theory of analytic varieties, and discuss some of its applications to problems in Number Theory.
Date: November 20, 2020, 3:00 pm
Friday, November 13, 2020
Approximate Cross-Validation for Large Data and High Dimensions
The error or variability of statistical and machine learning algorithms is often assessed by repeatedly re-fitting a model with different weighted versions of the observed data. The ubiquitous tools
of cross-validation (CV) and the bootstrap are examples of this technique. These methods are powerful in large part due to their model agnosticism but can be slow to run on modern, large data sets
due to the need to repeatedly re-fit the model. We use a linear approximation to the dependence of the fitting procedure on the weights, producing results that can be faster than repeated re-fitting
by orders of magnitude. This linear approximation is sometimes known as the "infinitesimal jackknife" (IJ) in the statistics literature, where it has mostly been used as a theoretical tool to prove
asymptotic results. We provide explicit finite-sample error bounds for the infinitesimal jackknife in terms of a small number of simple, verifiable assumptions. Without further modification, though,
we note that the IJ deteriorates in accuracy in high dimensions and incurs a running time roughly cubic in dimension. We additionally show, then, how dimensionality reduction can be used to
successfully run the IJ in high dimensions when data is sparse or low rank. Simulated and real-data experiments support our theory.
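As a concrete toy version of the idea, for an M-estimator with a smooth loss the leave-one-out refit can be approximated by one Newton step from the full-data fit: the infinitesimal jackknife replaces re-optimization with the correction H^{-1} times the gradient of the removed observation's loss. The sketch below does this for one-dimensional ridge-penalized mean estimation, where both the exact refit and the approximation are available in closed form; it illustrates the IJ idea only and is not the authors' code.

```python
def ridge_mean(y, lam):
    """Full-data fit: argmin over t of sum (y_i - t)^2 / 2 + lam * t^2 / 2."""
    return sum(y) / (len(y) + lam)

def loo_exact(y, lam, i):
    """Exact leave-one-out refit, dropping observation i."""
    return (sum(y) - y[i]) / (len(y) - 1 + lam)

def loo_ij(y, lam, i):
    """Infinitesimal-jackknife approximation: one Newton step from the
    full-data fit, t + H^{-1} * l_i'(t), with Hessian H = n + lam and
    per-observation gradient l_i'(t) = t - y[i]."""
    t = ridge_mean(y, lam)
    return t + (t - y[i]) / (len(y) + lam)

y, lam = [1.0, 2.0, 3.0, 10.0], 1.0
for i in range(len(y)):
    print(i, round(loo_exact(y, lam, i), 3), round(loo_ij(y, lam, i), 3))
```

The approximation is a single linear correction, so it is cheapest exactly where repeated re-fitting is most expensive; the outlying observation (y = 10) shows the largest approximation error, consistent with the accuracy caveats discussed in the abstract.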
Date: November 13, 2020, 3:30 pm
Friday, October 16, 2020
Hyperplane arrangements and modular symbols
In his fantastic book "Elliptic Functions According to Eisenstein and Kronecker," Weil writes: "As Eisenstein shows, his method for constructing elliptic functions applies beautifully to the
simpler case of the trigonometric functions. Moreover, this case provides [...] the simplest proofs for a series of results, originally discovered by Euler." The results Weil alludes to are relations
between product of trigonometric functions. I will first explain how these relations are quite surprisingly governed by relations between modular symbols (whose elementary theory I will sketch). I
will then show how this story fits into a wider picture that relates the topological world of group homology of some linear groups to the algebraic world of trigonometric and elliptic functions. To
conclude I will briefly describe a number theoretical application. This is based on a work-in-progress with Pierre Charollois, Luis Garcia and Akshay Venkatesh.
Date: October 16, 2020, 3:00 PM
Friday, October 9, 2020
Hodge Theory and Moduli
The theory of moduli is an important and active area in algebraic geometry. For varieties of general type the existence of a moduli space with a canonical completion has been proved by
Kollár/Shepherd-Barron/Alexeev. Aside from the classical case of algebraic curves, very little is known about the structure of this moduli space, especially its boundary. The period mapping from
Hodge theory provides a tool for studying these issues.
In this talk, we will discuss some aspects of this topic with emphasis on I-surfaces, which provide one of the first examples where the theory has been worked out in some detail. Particular notice
will be made of how the extension data in the limiting mixed Hodge structures that arise from singular surfaces on the boundary of moduli may be used to guide the desingularization of that boundary.
Date: October 9, 2020, 3:00 PM
Friday, October 2, 2020
Data Science, Classification, Clustering and Three-Way Data
Data science is discussed along with some historical perspective. Selected problems in classification are considered, either via specific datasets or general problem types. In each case, the
problem is introduced before one or more potential solutions are discussed and applied. The problems discussed include data with outliers, longitudinal data, and three-way data. The proposed
approaches are generally mixture model-based.
Zoom: If you haven't already, to receive the zoom link for the series please register at: http://crm.umontreal.ca/quebec-mathematical-sciences-colloquium/index.html#csmq
Friday, September 11, 2020
Machine Learning for Causal Inference
Date: September 11, 2020, 4:00 pm
Friday, June 19, 2020
Quantitative approaches to understanding the immune response to SARS-CoV-2 infection
COVID-19 is typically characterized by a range of respiratory symptoms that, in severe cases, progress to acute respiratory distress syndrome (ARDS). These symptoms are also frequently accompanied by
a range of inflammatory indications, particularly hyper-reactive and dysregulated inflammatory responses in the form of cytokine storms and severe immunopathology. Much remains to be uncovered about
the mechanisms that lead to disparate outcomes in COVID-19. Here, quantitative approaches, especially mechanistic mathematical models, can be leveraged to improve our understanding of the immune
response to SARS-CoV-2 infection. Building upon our prior work modelling the production of innate immune cell subsets and the viral dynamics of HIV and oncolytic viruses, we are developing a
quantitative framework to interrogate open questions about the innate and adaptive immune reaction in COVID-19. In this talk, I will outline our recent work modelling SARS-CoV-2 viral dynamics and
the ensuing immune response at both the tissue and systemic levels. A portion of this work is done as part of an international and multidisciplinary coalition working to establish a comprehensive
tissue simulator (physicell.org/covid19 [1]), which I will also discuss in more detail.
Date: June 19, 2020, 4:00 pm
Friday, April 17, 2020
Observable events and typical trajectories in finite and infinite dimensional dynamical systems
The terms "observable events" and "typical trajectories" in the title should really be between quotation marks, because what is typical and/or observable is a matter of interpretation. For dynamical
systems on finite dimensional spaces, one often equates observable events with positive Lebesgue measure sets, and invariant distributions that reflect the large-time behaviors of positive Lebesgue
measure sets of initial conditions (such as Liouville measure for Hamiltonian systems) are considered to be especially important. I will begin by introducing these concepts for general dynamical
systems -- including those with attractors -- describing a simple dynamical picture that one might hope to be true. This picture does not always hold, unfortunately, but a small amount of random
noise will bring it about. In the second part of my talk I will consider infinite dimensional systems such as semi-flows arising from dissipative evolutionary PDEs. I will discuss the extent to
which the ideas above can be generalized to infinite dimensions, and propose a notion of "typical solutions".
Date / Time: Friday, April 17, 2020 - 16:00
Venue: Zoom meeting link:
Meeting ID: 170 851 981
Password: 942210
Sunday, March 15, 2020
Mirzakhani’s contributions are explained in the film by leading mathematicians and illustrated by animated sequences. Her mathematical colleagues from around the world, as well as former teachers,
classmates, and students in Iran today, convey the deep impact of her achievements. The path of her education, success on Iran’s Math Olympiad team, and her brilliant work, make Mirzakhani an ideal
role model for girls looking toward careers in science and mathematics.
J.A. De Sève Cinema
Room 125
J.W. McConnell Building
1400 De Maisonneuve W.
Sir George Williams Campus
This event is free.
Friday, March 13, 2020
Optimal shapes arising from pair interactions
In many physical and social situations, pair interactions determine how a large group of particles or individuals arranges itself in space under constraints on the overall mass, density, and
geometry. Typical examples are capacitor problems (where the interaction is purely repulsive), and flocking (where the interaction tends to be attractive at large distances and repulsive as
individuals get too close). Mathematically, this leads to non-local shape optimization problems, where a density interacts with itself by a pair potential. Under what conditions is there aggregation,
and when do individuals disperse? Is the optimal shape always round? Can multiple flocks co-exist? I will discuss some toy models, symmetrization techniques, recent results, and open questions.
Date / Time: Friday, March 13, 2020 - 4:00 pm
Venue: CRM, Université de Montréal, André-Aisenstadt Building, room 1175
Friday, February 28, 2020
Neyman-Pearson classification: parametrics and sample size requirement
The Neyman-Pearson (NP) paradigm in binary classification seeks classifiers that achieve a minimal type II error while enforcing the prioritized type I error controlled under some user-specified
level alpha. This paradigm serves naturally in applications such as severe disease diagnosis and spam detection, where people have clear priorities among the two error types. Recently, Tong, Feng
and Li (2018) proposed a nonparametric umbrella algorithm that adapts all scoring-type classification methods (e.g., logistic regression, support vector machines, random forest) to respect the given
type I error (i.e., conditional probability of classifying a class 0 observation as class 1 under the 0-1 coding) upper bound alpha with high probability, without specific distributional assumptions
on the features and the responses. Universal as the umbrella algorithm is, it demands an explicit minimum sample size requirement on class 0, which is often the scarcer class, as in rare
disease diagnosis applications. In this work, we employ the parametric linear discriminant analysis (LDA) model and propose a new parametric thresholding algorithm, which does not need the minimum
sample size requirements on class 0 observations and thus is suitable for small sample applications such as rare disease diagnosis. Leveraging both the existing nonparametric and the newly proposed
parametric thresholding rules, we propose four LDA-based NP classifiers, for both low- and high-dimensional settings. On the theoretical front, we prove NP oracle inequalities for one proposed
classifier, where the rate for excess type II error benefits from the explicit parametric model assumption. Furthermore, as NP classifiers involve a sample splitting step of class 0 observations,
we construct a new adaptive sample splitting scheme that can be applied universally to NP classifiers, and this adaptive strategy reduces the type II error of these classifiers. The proposed NP
classifiers are implemented in the R package nproc.
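The key step of the nonparametric umbrella algorithm is choosing the threshold on class-0 scores as an order statistic whose rank is calibrated by a binomial tail bound, so that the type I error exceeds alpha with probability at most delta. A minimal sketch of that rank selection follows the published description in Tong, Feng and Li (2018); the function names are illustrative, and the real implementation is the R package nproc mentioned above.

```python
from math import comb

def umbrella_rank(n, alpha, delta):
    """Smallest order-statistic rank k (1-indexed, ascending) such that
    thresholding at the k-th smallest of n class-0 scores guarantees
    P(type I error > alpha) <= delta. The violation probability equals
    the binomial tail P(Bin(n, 1 - alpha) >= k)."""
    for k in range(1, n + 1):
        tail = sum(comb(n, j) * (1 - alpha) ** j * alpha ** (n - j)
                   for j in range(k, n + 1))
        if tail <= delta:
            return k
    return None  # n too small: no rank achieves the guarantee

def np_threshold(class0_scores, alpha, delta):
    """Classify as class 1 when score > threshold; None if infeasible."""
    s = sorted(class0_scores)
    k = umbrella_rank(len(s), alpha, delta)
    return None if k is None else s[k - 1]
```

A feasible rank exists only when (1 - alpha)^n <= delta, roughly n >= 59 for alpha = delta = 0.05; this is precisely the minimum class-0 sample size requirement that the parametric LDA-based thresholding in this work removes.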
Date / Time: Friday, February 28, 2020 - 16:00
Venue: McGill University, Burnside Hall , 805 Sherbrooke Street West, room 1104
Friday, February 21, 2020
Arithmetic Theta Series
I will recount a family history of theta series through several generations. Theta series for positive definite integral quadratic forms provide some of the most classical examples of elliptic
modular forms and their Siegel modular variants. Analogous series were defined by Siegel and Maass for lattices with indefinite quadratic forms say with signature (p,q). These series are no longer
holomorphic and depend on an additional variable in the Grassmannian of negative q-planes, i.e., the symmetric space for the orthogonal group O(p,q). Motivated by work of Hirzebruch and Zagier on the
generating series for curves on Hilbert modular surfaces, Millson and I constructed a theory of theta series valued in the cohomology of certain locally symmetric spaces -- geometric theta series.
More recently, a theory of arithmetic theta series has been emerging, theta series valued in the Chow groups or arithmetic Chow groups of the integral models of certain Shimura varieties.
Date / Time: Friday, February 21, 2020 - 16:00
Venue: McGill University, Burnside Hall , 805 Sherbrooke Street West, room 1104
Friday, February 7, 2020
Complex multiplication - old and new
The theory of complex multiplication is more than a century old; its origins date back to Klein, Hilbert, Kummer, Weber, Deuring and many others. It has been instrumental in the development of class
field theory and algebraic number theory. Yet, more than a century later we find new theorems that are truly surprising. I will start with this historical perspective and try to position some of
these new developments in the light of the André-Oort conjecture - a conjecture in the area of Shimura varieties that was recently resolved by Tsimerman, building on ideas of Edixhoven, Pila, Wilkie
and Zannier. The resolution rests on the averaged Colmez conjecture, a conjecture that addresses the arithmetic complexity of abelian varieties with complex multiplication, which was proved by
Andreatta-Howard-Madapusi Pera and the speaker, and, independently, by Yuan-Zhang.
Date / Time: Friday, February 7, 2020 - 16:00
Venue: McGill University, Burnside Hall , 805 Sherbrooke Street West, room 1104
Friday, January 31, 2020
Longitudinal functional regression: tests of significance
We consider longitudinal functional regression, where, for each subject, the response consists of multiple curves observed at different time visits. We discuss tests of significance in two general
settings. First, when there are no additional covariates, we develop a hypothesis testing methodology for formally assessing that the mean function does not vary over time. Second, in the presence of
other covariates, we propose a testing procedure to determine the significance of the covariate's time-varying effect formally. The methods account for the complex dependence structure of the
response and are computationally efficient. Numerical studies confirm that the testing approaches have the correct size and superior power relative to available competitors. We illustrate
the methods on a real data application.
Date / Time: Friday, January 31, 2020 - 16:00
Venue: HEC Montréal, 3000, chemin de la Côte-Sainte-Catherine, room Béton Grilli
Friday, January 24, 2020
Wave propagation and diffraction by an obstacle: results using microlocal analysis
Date / Time: Friday, January 24, 2020 - 16:00
Venue: McGill University, Burnside Hall , 805 Sherbrooke Street West, room 1104
Friday, January 17, 2020
Learning in Games
Selfish behavior can often lead to suboptimal outcomes for all participants, a phenomenon illustrated by many classical examples in game theory. Over the last decade we have developed a good
understanding of how to quantify the impact of strategic user behavior on overall performance in many games (including traffic routing as well as online auctions). In this talk we will focus on
games where players use a form of learning that helps them adapt to the environment, and consider two closely related questions: What are broad classes of learning behaviors that guarantee high
social welfare in games, and are these results robust to situations where the game or the population of players is dynamically changing?
Date / Time: Friday, January 17, 2020 - 2:00 pm
Venue: CRM, Université de Montréal, André-Aisenstadt Building, room 6214-6254
Friday, January 10, 2020
The Seminar on Undergraduate Mathematics in Montreal (SUMM) is organized by undergraduate students from Montreal universities. The main objective is to create an environment facilitating exchange of
ideas and interests as well as allowing students to network. For the eleventh edition, we aim to unite the undergraduate mathematical community in Montreal, Quebec, and their surroundings.
SUMM is for undergraduate students in mathematics or related domains. This year, the conference will be held on January 10, 11, and 12 at UQAM.
The weekend will start with a wine and cheese on Friday evening and will be followed by two days of presentations given by undergraduate students and six invited professors. The presentations
cover a broad range of subjects, from mathematical physics to the applications of artificial intelligence, as well as the history and philosophy of mathematics.
During the SUMM, students can give a presentation, participate in a poster contest or simply attend presentations from their peers. It's an occasion to share the passion for mathematics in a
stimulating environment, while networking with other passionate students over the weekend whether it is between presentations or during breakfast, lunch, or dinner and enjoying, for example, a
delicious chicken panini.
Hope to see you there!
Friday, November 29, 2019
Shuffling and Group Representations
Picture n cards, numbered 1,2,...,n face down, in order, in a row on the table. Each time, your left hand picks a random card, your right hand picks a random card and the two cards are transposed. It
is clear that 'after a while' the cards get all mixed up. How long does this take? In joint work with Mehrdad Shahshahani we analyzed this problem using the character theory of the symmetric group.
The methods work for general measures on general compact groups. They mix probability, analysis, combinatorics and group theory (we need real formulas for the representations). I will try to explain
all of this (along with some motivation for studying such problems) 'in English'. The answer, when n=52, is 'about 400'.
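The walk itself is easy to simulate: at each step the two hands pick independent uniform positions (possibly the same one, in which case nothing happens) and the cards there are swapped. A quick illustrative experiment tracks the average number of fixed points as a rough proxy for mixedness, since a uniformly random permutation of 52 cards has one fixed point on average; this is only a sanity check, not the character-theoretic analysis described in the abstract.

```python
import random

def random_transposition_walk(n, steps, rng):
    """Run the random-transposition shuffle: the 'left hand' and 'right
    hand' each pick a uniformly random position, and the two cards there
    are swapped (a no-op when the positions coincide)."""
    deck = list(range(n))
    for _ in range(steps):
        i, j = rng.randrange(n), rng.randrange(n)
        deck[i], deck[j] = deck[j], deck[i]
    return deck

rng = random.Random(0)
trials = 2000
fixed = sum(
    sum(1 for pos, card in enumerate(random_transposition_walk(52, 400, rng))
        if pos == card)
    for _ in range(trials)
) / trials
print(f"average fixed points after 400 steps: {fixed:.2f}")  # close to 1
```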
Date / Time: Friday, November 29, 2019 - 16:00
Venue: McGill University, Burnside Hall , 805 Sherbrooke Street West, room 1104
Friday, November 22, 2019
Formulation and solution of stochastic inverse problems for science and engineering models
The stochastic inverse problem of determining probability structures on input parameters for a physics model corresponding to a given probability structure on the output of the model forms the core
of scientific inference and engineering design. We describe a formulation and solution method for stochastic inverse problems that is based on functional analysis, differential geometry, and
probability/measure theory. This approach yields a computationally tractable problem while avoiding alterations of the model like regularization and ad hoc assumptions about the probability
structures. We present several examples, including a high-dimensional application to determination of parameter fields in storm surge models. We also describe work aimed at defining a notion of
condition for stochastic inverse problems and tackling the related problem of designing sets of optimal observable quantities.
Date / Time: Friday, November 22, 2019 - 16:00
Venue: UQAM, Pavillon Président-Kennedy - 201, avenue du Président-Kennedy, room PK-5115
Friday, November 15, 2019
The role of random models in compressive sensing and matrix completion
Random models lead to a precise and comprehensive theory of compressive sensing and matrix completion. The number of random linear measurements needed to recover a sparse signal, a low-rank
matrix, or, more generally, a structured signal is now well understood. Indeed, this boils down to a question in random matrix theory: How well conditioned is a random matrix restricted to a
fixed subset of R^n? We discuss recent work addressing this question in the sub-Gaussian case. Nevertheless, a practitioner with a fixed data set will wonder: Can they apply theory based on
randomness? Is there any hope to get the same guarantees? We discuss these questions in compressive sensing and matrix completion, which, surprisingly, seem to have divergent answers.
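The random-matrix question quoted in the abstract can be probed numerically. The sketch below is my own illustration (not code from the talk; the dimensions m, n, k are arbitrary choices): it draws a Gaussian measurement matrix, restricts it to a random coordinate subset, and inspects the singular values of the restriction.

```python
import numpy as np

# Illustrative experiment: how well conditioned is a random Gaussian
# matrix restricted to a small coordinate subset?
rng = np.random.default_rng(0)
m, n, k = 80, 200, 5  # m measurements, ambient dimension n, sparsity k

# Column-normalized Gaussian measurement matrix.
A = rng.standard_normal((m, n)) / np.sqrt(m)

# Restrict A to a random k-subset of columns and compute its singular values.
support = rng.choice(n, size=k, replace=False)
s = np.linalg.svd(A[:, support], compute_uv=False)  # descending order
cond = s[0] / s[-1]
print(f"singular values in [{s[-1]:.2f}, {s[0]:.2f}], condition number {cond:.2f}")
```

When m is comfortably larger than k, the singular values cluster near 1, which is the kind of restricted well-conditioning that underlies sparse recovery guarantees.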
Date / Time: Friday, November 15, 2019 - 16:00
Venue: CRM, Université de Montréal, André-Aisenstadt Building, room 135
Friday, November 8, 2019
Symmetries in topological quantum field theories
In this talk I will describe how to characterize symmetries of topological field theories and give a complete classification of symmetries in Abelian Topological Field Theories, uncovering a plethora
of quantum symmetries in these theories and an intriguing connection to number theory.
Date / Time: Friday, November 8, 2019 - 16:00
Venue: CRM, Université de Montréal, André-Aisenstadt Building, room 1355
Friday, November 1, 2019
General Bayesian modeling
The work is motivated by the inflexibility of Bayesian modeling, in that only parameters of probability models are required to be connected with data. The idea is to generalize this by allowing
arbitrary unknowns to be connected with data via loss functions. An updating process is then detailed which can be viewed as arising in at least a couple of ways, one being purely axiomatically
driven. The further exploration of replacing probability-model-based approaches to inference with loss functions is ongoing. Joint work with Chris Holmes, Pier Giovanni Bissiri and Simon Lyddon.
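In the loss-based updating the abstract alludes to, the update takes the form posterior ∝ prior × exp(-w × cumulative loss), which recovers standard Bayes when the loss is a negative log-likelihood and w = 1. The following is my own minimal numerical sketch of such an update on a parameter grid; the Gaussian prior, squared-error loss, and learning rate w are all arbitrary illustrative choices, not the authors' setup.

```python
import numpy as np

# A "general Bayesian" update: connect an unknown theta to data through
# a loss function rather than a likelihood.
rng = np.random.default_rng(1)
data = rng.normal(3.0, 1.0, size=20)

theta = np.linspace(-5, 10, 2001)               # grid over the unknown
prior = np.exp(-0.5 * theta**2)                 # N(0,1) prior, unnormalized
loss = ((data[:, None] - theta[None, :])**2).sum(axis=0)  # squared-error loss
w = 0.5                                         # learning rate (a modelling choice)

# Gibbs-posterior update; subtracting loss.min() avoids numerical underflow.
post = prior * np.exp(-w * (loss - loss.min()))
post /= post.sum()

post_mean = (theta * post).sum()
print(f"general-Bayes posterior mean: {post_mean:.3f}")
```

With this quadratic loss the update shrinks the sample mean toward the prior mean, exactly as an ordinary conjugate Gaussian posterior would; the point of the framework is that the same recipe applies to losses that correspond to no probability model at all.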
Date / Time: Friday, November 1, 2019 - 16:00
Venue: McGill University, Burnside Hall, 805 Sherbrooke Street West, room 1104
Friday, October 25, 2019
Coincidences in homological densities
For certain natural sequences of topological spaces, the kth homology group stabilizes once you go far enough out in the sequence of spaces. This phenomenon is called homological stability. Two
classical examples of homological stability are the configuration space of n unordered distinct points in the plane, studied in the 60's by Arnold, and the space of (based) algebraic maps from CP^1
to CP^1, studied by Segal in the 70's. It turns out that the stable homology is the same in these two examples, and in this talk we explain that this is just the tip of an iceberg: a subtle but
precise relationship between the stable homology of different sequences of spaces. To explain this relationship, which we discovered through an analogy to asymptotic counts in number theory, we
introduce a new notion of homological density. This talk is on joint work with Benson Farb and Jesse Wolfson.
Date / Time: Friday, October 25, 2019 - 16:00
Venue: McGill University, Burnside Hall, 805 Sherbrooke Street West, room 1104
Friday, October 18, 2019
o-minimal GAGA and applications to Hodge theory
(Joint with B. Bakker and Y. Brunebarbe) One very fruitful way of studying complex algebraic varieties is by forgetting the underlying algebraic structure, and just thinking of them as complex analytic
spaces. To this end, it is a natural and fruitful question to ask how much the complex analytic structure remembers. One very prominent result is Chow's theorem, stating that any closed analytic
subspace of projective space is in fact algebraic. One notable consequence of this result is that a compact complex analytic space admits at most one algebraic structure, a result which is false in
the non-compact case. This was generalized and extended by Serre in his famous GAGA paper using the language of cohomology. We explain how we can extend Chow's theorem and in fact all of GAGA to
the non-compact case by working with complex analytic structures that are "tame" in the precise sense defined by o-minimality. This leads to some very general "algebraization" theorems, and we give
applications to Hodge theory.
Date / Time: Friday, October 18, 2019 - 16:00
Venue: McGill University, Burnside Hall, 805 Sherbrooke Street West, room 1104
Friday, October 11, 2019
Scoring positive semidefinite cutting planes for quadratic optimization via trained neural networks
Semidefinite programming relaxations complement polyhedral relaxations for quadratic optimization, but global optimization solvers built on polyhedral relaxations cannot fully exploit this
advantage. We develop linear outer-approximations of semidefinite constraints that can be effectively integrated into global solvers for nonconvex quadratic optimization. The difference from
previous work is that our proposed cuts are (i) sparser with respect to the number of nonzeros in the row and (ii) explicitly selected to improve the objective. A neural network estimator is key to
our cut selection strategy: ranking each cut based on objective improvement involves solving a semidefinite optimization problem, but this is an expensive proposition at each Branch&Cut node. The
neural network estimator, trained before any particular instance is solved, takes the most time-consuming computation offline by predicting the objective improvement for any cut.
Date / Time: Friday, October 11, 2019 - 16:00
Venue: McGill University, Burnside Hall, 805 Sherbrooke Street West, room 1104
Friday, October 4, 2019
The tail, the tile, the tie-break, and their role in dependence models
Modeling the dependence between random variables is ubiquitous in statistics. When it comes to rare, high-impact events such as severe storms, floods, or heat waves, the question is of great
importance for risk management and poses theoretical challenges. A highly flexible and promising approach relies on extreme-value theory, copula modeling, and rank-based inference. I will present
three recent advances in this area. We will first consider how to account for dependence in intermediate regimes, when asymptotic extreme-value models are not suitable. We will then see what to do
when the number of variables is large, and how a hierarchical model structure can be learned from large rank-correlation matrices. Finally, I will not resist the urge to introduce you to the complex
world of rank-based inference for discrete or mixed data.
Date / Time: Friday, October 4, 2019 - 16:00
Venue: CRM, Université de Montréal, Pavillon André-Aisenstadt, room 1355
Thursday, September 26, 2019
From Monge optimal transports to optimal Skorokhod embeddings
The optimal transportation problem, which originated in the work of Gaspard Monge in 1781, provides a fundamental and quantitative way to measure the distance between probability distributions. It has
led to many successful applications in PDEs, Geometry, Statistics and Probability Theory. Recently, and motivated by problems in Financial Mathematics, variations on this problem were introduced by
requiring the transport plans to abide by certain "fairness rules," such as following martingale paths. One then specifies a stochastic state process and a costing procedure, and minimizes the
expected cost over stopping times with a given state distribution. Recent work has uncovered deep connections between this type of constrained optimal transportation problems, the celebrated
Skorokhod embeddings of probability distributions in Brownian motion, and Hamilton-Jacobi variational inequalities.
Date / Time: Thursday, September 26, 2019 - 16:00
Venue: CRM, Université de Montréal, Pav. Roger-Gaudry, 2900, boul. Édouard-Montpetit, room M-415
Friday, September 20, 2019
Date / Time: Friday, September 20, 2019 - 16:00
Venue: McGill University, Burnside Hall, 805 Sherbrooke Street West, room 1104
Friday, September 13, 2019
Multiple zeta values in deformation quantization
The subject of "deformation quantization" originated as a mathematical abstraction of the passage from classical to quantum mechanics: starting from a classical phase space (i.e. a Poisson manifold),
we deform the ordinary multiplication of functions to produce a noncommutative ring, which serves as the algebra of quantum observables. Over the years, the theory has evolved from its physical
origins to take on a mathematical life of its own, with rich connections to representation theory, topology, graph theory, number theory and more. I will give an introduction to the subject and
explain how the quantization process is inextricably linked, via a formula of Kontsevich, to special values of the Riemann zeta function, and their generalizations known as multiple zeta values.
Date / Time: Friday, September 13, 2019 - 16:00
Venue: McGill University, Burnside Hall, 805 Sherbrooke Street West, room 1104
Friday, August 30, 2019
Khovanov homology, 3-manifolds, and 4-manifolds
Khovanov homology is an invariant of knots in R^3. A major open problem is to extend its definition to knots in other three-manifolds, and to understand its relation to surfaces in 4-manifolds. I
will discuss some partial progress in these directions, from different perspectives (gauge theory, representation theory, sheaf theory). In the process I will also review some of the topological
applications of Khovanov homology.
Date / Time: Friday, August 30, 2019 - 16:00
Venue: UQAM, Pavillon Sherbrooke, 201, rue Sherbrooke West, room SH-3620
Tuesday, August 27, 2019
August 27-30, 2019
This discovery school is based around recent developments in low-dimensional topology, centred on the work of Ciprian Manolescu, who will be in residence as an Aisenstadt chair as part of the CRM's
50th anniversary program in low-dimensional topology. The school will be focused on new developments in Heegaard-Floer homology and gauge theory and the developments growing out of Manolescu's
celebrated disproof of the triangulation conjecture.
Monday, June 10, 2019
June 10-14, 2019
The analysis of dynamical (or time-dependent) stochastic processes and their asymptotic behaviour is fundamental in both discrete and continuous probability. In some cases, the asymptotic behavior of
the stochastic process exhibits interesting dynamics on its own. This can be seen, for example, in the context of front propagation arising asymptotically in branching Brownian motion. In other
cases, interesting dynamical behaviour emerges from stochastic processes after rescaling. Examples include the study of rescaled random walks in random environments (which in some settings converge
almost surely to a Brownian motion, and in others to jump processes), or the study of two-dimensional lattice models in statistical mechanics (many of which converge, provably or conjecturally, to
Schramm-Loewner Evolutions). The goal of this program is to expose graduate and advanced undergraduate students to a variety of such topics of current interest within the study of dynamical
stochastic processes.
Friday, May 17, 2019
The 22nd edition of the Colloque panquébécois de l'Institut des sciences mathématiques (ISM) will be held in Montreal from May 17th to May 19th, 2019. The goal of this annual conference is to bring
together graduate students in mathematics from the universities of Quebec for a weekend. This year, the conference will be held at Université de Montréal.
The participants are invited to give a 20-minute talk on a mathematical subject of their interest. In addition, four plenary talks will be given by professors. The conference will also feature a
talk by the recipient of the Carl Herz prize, awarded by the ISM.
The registration fee for the event is $20. This includes a wine and cheese activity on Friday night, the Saturday evening supper, as well as breakfasts, lunches and coffee breaks throughout the
weekend. Moreover, the first participants to register for the event will receive a promotional ISM mug!
If you have any questions, don't hesitate to contact us.
Thursday, May 16, 2019
Introduction to birational classification theory in dimension three and higher
One of the main themes of algebraic geometry is to classify algebraic varieties and to study various geometric properties of each of the interesting classes. Classical theories of curves and surfaces
give a beautiful framework of classification theory. Recent developments provide more details in the case of dimension three. We are going to introduce the three-dimensional story and share some
expectations for even higher dimensions.
Date / Time: Thursday, May 16, 2019 - 16:00
Venue: UQAM, Pavillon Président-Kennedy, 201, ave du Président-Kennedy, room PK-5115
Saturday, May 11, 2019
Have you always dreamed of inventing your own magic tricks? Would you like to know how math can help? The Institut des sciences mathématiques invites you to see magic tricks using cards, knots and
mathematics, but definitely not your intuition... it may trick you!
Saturday, May 11, 12:00 to 2:00 p.m.
UQAM, Président-Kennedy Building
201, ave du Président-Kennedy
Room PK-1630, Montréal
Friday, May 10, 2019
Quantum Jacobi forms and applications
Quantum modular forms were defined in 2010 by Zagier; they are somewhat analogous to ordinary modular forms, but they are defined on the rational numbers as opposed to the upper half complex plane,
and have modified transformation properties. In 2016, Bringmann and the author defined the notion of a quantum Jacobi form, naturally marrying the concept of a quantum modular form with that of a
Jacobi form (the theory of which was developed by Eichler and Zagier in the 1980s). We will discuss these intertwined topics, emphasizing recent developments and applications. In particular, we will
discuss applications to combinatorics, topology (torus knots), and representation theory (VOAs).
Date / Time: Friday, May 10, 2019 - 16:00
Venue: McGill University, Burnside Hall, 805 Sherbrooke Street West, room 1104
Friday, May 3, 2019
The stochastic heat equation and KPZ in dimensions three and higher
The stochastic heat equation and the KPZ equation appear as the macroscopic limits for a large class of probabilistic models, and the study of KPZ, in particular, led to many fascinating developments
in probability over the last decade or so, from the regularity structures to integrable probability. We will discuss a small group of recent results on these equations in simple settings, of the PDE
flavour, that fall in line with what one may call naive expectations by an applied mathematician.
Date / Time: Friday, May 3, 2019 - 16:00
Venue: McGill University, Burnside Hall, 805 Sherbrooke Street West, room 1104
Saturday, April 27, 2019
The ISM will host a half day workshop (in English) for girls in CEGEP who are interested in math and science. It will consist of hands-on talks, a panel discussion, and an inspirational keynote
address, all given by women studying or working in math!
The purpose of the event is to encourage young women to continue their study of math and science. The event also aims to help young women form connections that will aid in their transition from CEGEP
to university.
The event is open to girls in CEGEP, their parents and teachers, and is free! Lunch, snacks and drinks will be provided! However, we do ask that potential participants register (on the registration
page) by Wednesday, April 24, 2019.
Friday, April 26, 2019
Distinguishing finitely presented groups by their finite quotients
If G is a finitely generated group, let C(G) denote the set of finite quotients of G. This talk will survey work on the question of the extent to which C(G) determines G up to isomorphism, culminating in
a discussion of examples of Fuchsian and Kleinian groups that are determined by C(G) (amongst finitely generated residually finite groups).
Date / Time: Friday, April 26, 2019 - 16:00
Venue: McGill University, Burnside Hall, 805 Sherbrooke Street West, room 1104
Friday, April 12, 2019
Linking in torus bundles and Hecke L functions
Torus bundles over the circle are among the simplest and cutest examples of 3-dimensional manifolds. After presenting some of these examples, using in particular animations created by Jos Leys, I
will consider periodic orbits in these fiber bundles over the circle. We will see that their linking numbers (rational numbers by definition) can be computed as certain special values
of Hecke L-functions. Properly generalized, this viewpoint makes it possible to give a new topological proof of now-classical rationality and integrality theorems of Klingen-Siegel and Deligne-Ribet. It
also leads to interesting new "arithmetic lifts" that I will briefly explain. All this is extracted from ongoing joint work with Pierre Charollois, Luis Garcia and Akshay Venkatesh.
Date / Time: Friday, April 12, 2019 - 16:00
Venue: McGill University, Burnside Hall, 805 Sherbrooke Street West, room 1104
Friday, March 29, 2019
Principal Bundles in Diophantine Geometry
Principal bundles and their moduli have been important in various aspects of physics and geometry for many decades. It is perhaps not so well-known that a substantial portion of the original
motivation for studying them came from number theory, namely the study of Diophantine equations. I will describe a bit of this history and some recent developments.
Date / Time: Friday, March 29, 2019 - 16:00
Venue: McGill University, Burnside Hall, 805 Sherbrooke Street West, room 1104
Friday, March 22, 2019
Flexibility in contact and symplectic geometry
We discuss a number of h-principle phenomena which were recently discovered in the field of contact and symplectic geometry. In general, an h-principle is a method for constructing global
solutions to underdetermined PDEs on manifolds by systematically localizing boundary conditions. In symplectic and contact geometry, these strategies are typically well suited for general
constructions and partial classifications. Some of the results we discuss are the characterization of smooth manifolds admitting contact structures, high-dimensional overtwistedness, the symplectic
classification of flexible Stein manifolds, and the construction of exotic Lagrangians in C^n.
Date / Time: Friday, March 22, 2019 - 16:00
Venue: UQAM, Pavillon Président-Kennedy, 201, ave du Président-Kennedy, room PK-5115
Friday, March 15, 2019
The Sommet de la statistique à Montréal (SÉSÀM) 2019 is an event that will bring together statistics students from Quebec universities for a day of activities and seminars.
With plenary lectures, student seminars, and a lightning-talk competition, SÉSÀM will offer a privileged opportunity for exchange and discussion among students and with
renowned researchers.
A lunch and an evening cocktail will let you mingle and expand your network, all in a warm and relaxed atmosphere.
March 15, 2019: a date not to be missed!
Friday, March 15, 2019
Persistent homology as an invariant, rather than as an approximation
Persistent homology is a very simple idea that was initially introduced as a way of understanding the underlying structure of an object from, perhaps noisy, samples of the object, and has been used
as a tool in biology, material sciences, mapping and elsewhere. I will try to explain some of this, but perhaps also some more mathematical applications within geometric group theory. Then I'd like
to pivot and study the part that traditionally has been thrown away, and show that this piece is relevant to approximation theory (a la Chebyshev), closed geodesics (a la Gromov), and to problems of
quantitative topology (joint work with Ferry, Chambers, Dotter, and Manin).
Date / Time: Friday, March 15, 2019 - 16:00
Venue: McGill University, Burnside Hall, 805 Sherbrooke Street West, room 1104
Friday, February 22, 2019
Quantum Modularity in Topology and Physics
I will discuss a set of inter-related phenomena in topology, physics, and number theory. A natural question in topology is the construction of homological invariants of 3-manifolds. These turn out to
be related to certain special 3-dimensional quantum field theories in physics. Both of these scenarios exhibit a fascinating number-theoretic phenomenon: quantum modularity. Quantum modular forms,
introduced by Zagier, are functions defined only at rational numbers, and in the most general cases are neither analytic nor modular. It is still an open question to develop a general theory which
encompasses their behavior. I will overview these relations and discuss recent advances which may shed light on some of these questions.
Date / Time: Friday, February 22, 2019 - 16:00
Venue: McGill University, Burnside Hall, 805 Sherbrooke Street West, room 1104
Friday, February 15, 2019
Discrete subgroups of Lie groups and geometric structure
Discrete subgroups of Lie groups play a fundamental role in several areas of mathematics. Discrete subgroups of SL(2,R) are well understood, and classified by the geometry of the corresponding
hyperbolic surfaces. On the other hand, discrete subgroups of SL(n,R) for n>2, beyond lattices, remain quite mysterious. While lattices in this setting are rigid, there also exist more flexible
"thinner" discrete subgroups, which may have large and interesting deformation spaces (some of them with topological and geometric analogies to the Teichmüller space of a surface, giving rise to
so-called "higher Teichmüller theory"). We will survey recent progress in constructing and understanding such discrete subgroups from a geometric and dynamical viewpoint.
Date / Time: Friday, February 15, 2019 - 16:00
Venue: McGill University, Burnside Hall, 805 Sherbrooke Street West, room 1104
Friday, February 8, 2019
Periodic orbits of Hamiltonian systems: the Conley conjecture and beyond
One distinguishing feature of Hamiltonian dynamical systems is that such systems, with very few exceptions, tend to have numerous periodic orbits and these orbits carry a lot of information about the
dynamics of the system. In 1984 Conley conjectured that a Hamiltonian diffeomorphism (i.e., the time-one map of a Hamiltonian flow) of a torus has infinitely many periodic points. This conjecture was
proved by Hingston some twenty years later, in 2004. Similar results for Hamiltonian diffeomorphisms of surfaces of positive genus were also established by Franks and Handel. Of course, one can
expect the Conley conjecture to hold for a much broader class of closed symplectic manifolds and this is indeed the case as has been proved by Gurel, Hein and the speaker. However, the conjecture is
known to fail for some, even very simple, phase spaces such as the sphere. These spaces admit Hamiltonian diffeomorphisms with finitely many periodic orbits -- the so-called pseudo-rotations -- which
are of particular interest in dynamics. In this talk, mainly based on joint work with Gurel, we will discuss the role of periodic orbits in Hamiltonian dynamics and the methods used to prove their
existence, and examine the situations where the Conley conjecture does not hold.
Date / Time: Friday, February 8, 2019 - 16:00
Venue: UQAM, Pavillon Président-Kennedy, 201, ave du Président-Kennedy, room PK-5115
Friday, February 1, 2019
How does the brain work?
We will go over the basic challenges for understanding the human brain, the role of mathematics, and three examples: one in theoretical neuroscience and two in neuromedicine.
Date / Time: Friday, February 1, 2019 - 16:00
Venue: McGill University, Burnside Hall, 805 Sherbrooke Street West, room 1104
Friday, January 25, 2019
CRM Nirenberg Lecture in Geometric Analysis: Stochastic diffusive behavior at Kirkwood gaps
One of the well-known indications of instability in the Solar system is the presence of Kirkwood gaps in the asteroid belt. The gaps correspond to resonances between the asteroids' periods and the
period of Jupiter. The most famous ones are the period ratios 3:1, 5:2, 7:3. In the 1980s, J. Wisdom and, independently, A. Neishtadt discovered one mechanism of creation for the 3:1 Kirkwood gap. We
propose another mechanism of instability, based on an a priori chaotic underlying dynamical structure. As an indication of chaos at the Kirkwood gaps, we show that the eccentricity of asteroids
behaves like a stochastic diffusion process. Along with the famous KAM theory, this shows a mixed behavior at the Kirkwood gaps: regular and stochastic. This is joint work with M. Guardia, P. Martin
and P.
Date / Time: Friday, January 25, 2019 - 16:00
Venue: CRM, Université de Montréal, Pavillon André-Aisenstadt, room 6254
Friday, January 18, 2019
The boundary of the Kähler cone
On any compact Kähler manifold the space of cohomology classes of all possible Kähler forms is an open convex cone inside a finite-dimensional vector space. I will discuss some recent advances in
understanding the geometric and analytic properties of the classes on the boundary of this cone, and describe several applications.
Date / Time: Friday, January 18, 2019 - 16:00
Venue: McGill University, Burnside Hall, 805 Sherbrooke Street West, room 1104
Friday, January 11, 2019
Seminars in Undergraduate Mathematics in Montreal (SUMM) is an annual event organized by undergraduate mathematics students from Montreal's four universities.
The 2019 edition of SUMM will be held at the Université de Montréal on January 11, 12, and 13.
Friday, January 11, 2019
The mathematics of neutron transport
We discuss the evolving mathematical view of the Neutron Transport Equation (NTE), which describes the flux of neutrons through inhomogeneous fissile materials. Neutron transport theory emerged in
the rigorous mathematical literature in the mid-1950s. Its treatment as an integro-differential equation eventually settled in the applied mathematics literature through c_0-semigroup
theory, thanks to the work of Robert Dautray, Jacques-Louis Lions and collaborators. This paved the way for its spectral analysis, which has played an important role in the design of nuclear reactors
and nuclear medical equipment. We also look at the natural probabilistic approach to the NTE, which has largely been left behind. Connections with methods of branching particle systems,
quasi-stationarity for Markov processes and stochastic analysis all lead to new ways of characterising solutions and the spectral behaviour of the NTE. In particular, this in turn leads to the
suggestion of completely new Monte-Carlo algorithms with genuine industrial impact.
Date / Time: Friday, January 11, 2019 - 16:00
Venue: McGill University, Burnside Hall, 805 Sherbrooke Street West, room 1104
Friday, December 7, 2018
Stability problems in general relativity
There exist several remarkable explicit solutions of Einstein's field equations of General Relativity. A fundamental problem (with implications even for experimental science) is to determine their
properties upon perturbation of their initial conditions. I will describe two such solutions: Minkowski spacetime, which is a model for regions of the universe without matter or energy content; and
the Kerr--de Sitter family of spacetimes describing (rotating) black holes. In recent work, in parts joint with A. Vasy, we prove global existence and obtain a precise asymptotic description of
perturbations of these spacetimes. I will explain these results and indicate the role played by modern microlocal and spectral theoretic techniques in our proofs.
Date / Time: Friday, December 7, 2018 - 16:00
Venue: UQAM, Pavillon Président-Kennedy, 201, ave du Président-Kennedy, room PK-5115
Friday, November 30, 2018
Completeness of the isomorphism problem for separable C*-algebras
In logic and computer science one often studies the complexity of decision problems. In mathematical logic this leads to the program of study of relative complexity of isomorphism problems and
determining various complexity classes. Broadly speaking, a problem p in a class C is complete in C if any other problem in C reduces to p. The isomorphism problem for separable C*-algebras has been
studied since the 1960's and evolved into the Elliott program that classifies C*-algebras via their K-theoretic invariants. During the talk I will discuss the complexity of the isomorphism problem
for separable C*-algebras and its completeness in the class of orbit equivalence relations.
Date / Time: Friday, November 30, 2018 - 16:00
Venue: McGill University, Burnside Hall, 805 Sherbrooke Street West, room 1104
Friday, November 16, 2018
Sharp arithmetic transitions for 1D quasiperiodic operators
A very captivating question in solid state physics is to determine/understand the hierarchical structure of spectral features of operators describing 2D Bloch electrons in perpendicular magnetic
fields, as related to the continued fraction expansion of the magnetic flux. In particular, the hierarchical behavior of the eigenfunctions of the almost Mathieu operators, despite significant
numerical studies and even a discovery of Bethe Ansatz solutions, has remained an important open challenge even at the physics level. I will present a complete solution of this problem in the
exponential sense throughout the entire localization regime. Namely, I will describe the continued fraction driven hierarchy of local maxima, and a universal (also continued fraction expansion
dependent) function that determines local behavior of all eigenfunctions around each maximum, thus giving a complete and precise description of the hierarchical structure. In the regime of
Diophantine frequencies and phase resonances there is another universal function that governs the behavior around the local maxima, and a reflective-hierarchical structure of those, phenomena not
even described in the physics literature. These results lead also to the proof of sharp arithmetic transitions between pure point and singular continuous spectrum, in both frequency and phase, as
conjectured since 1994. This part of the talk is based on the papers joint with W. Liu. Within the singular continuous regime, it is natural to look for further, dimensional transitions. I will
present a sharp arithmetic transition result in this regard that holds for the entire class of analytic quasiperiodic potentials, based on the joint work with S. Zhang.
Date / Time: Friday, November 16, 2018 - 16:00
Venue: CRM, Université de Montréal, Pavillon André-Aisenstadt, room 1140
Friday, November 9, 2018
Period mappings and Diophantine equations
I will give some friendly examples introducing the period mapping. This is an analytic mapping which controls many aspects of how algebraic varieties change in families. After that I will explain
joint work with Brian Lawrence which shows that one can exploit transcendence properties of the period mapping to prove results about Diophantine equations. For example we give another proof of
the Mordell conjecture (originally proved by Faltings): there are only finitely many rational points on an algebraic curve over Q whose genus is at least 2.
Date / Time: Friday, November 9, 2018 - 16:00
Venue: McGill University, Burnside Hall, 805 Sherbrooke St. West. ATTENTION - ROOM 1B45 - ATTENTION
Friday, November 2, 2018
The complexity of detecting cliques and cycles in random graphs
A strong form of the P ≠ NP conjecture holds that no algorithm faster than n^O(k) solves the k-clique problem with high probability when the input is an Erdős–Rényi random graph with an appropriate
edge density. Toward this conjecture, I will describe a line of work lower-bounding the average-case complexity of k-clique (and other subgraph isomorphism problems) in weak models of computation:
namely, restricted classes of boolean circuits and formulas. Along the way I will discuss some of the history and current frontiers in Circuit Complexity.
Joint work with Ken-ichi Kawarabayashi, Yuan Li and Alexander Razborov.
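As a toy illustration of the n^O(k) brute-force baseline the abstract alludes to (our own sketch, not the authors' code: the function and graph names are ours), checking every k-subset of vertices takes roughly C(n, k) · C(k, 2) edge lookups:

```python
from itertools import combinations

def has_k_clique(adj, k):
    """Brute-force k-clique search: try every k-subset of vertices.
    adj is a set of frozenset edges; runtime is n^O(k)."""
    nodes = sorted({v for e in adj for v in e})
    return any(
        all(frozenset((u, v)) in adj for u, v in combinations(group, 2))
        for group in combinations(nodes, k)
    )

# Complete graph K4 contains a 4-clique; the 5-cycle C5 has no triangle.
k4 = {frozenset((i, j)) for i, j in combinations(range(4), 2)}
c5 = {frozenset((i, (i + 1) % 5)) for i in range(5)}
```

The lower bounds discussed in the talk concern exactly how far weak circuit models can improve on this exhaustive search on average.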
Date / Time: Friday, November 2, 2018 - 16:00
Venue: CRM, Université de Montréal, Pavillon André-Aisenstadt, room 1355
Friday, October 26, 2018
A generalized detailed balance relation
The transition probabilities of reactions J -> K and K -> J in a thermal bath are related because of the time reversal symmetry of fundamental physical laws. The relation is known as detailed balance
relation. We study the problems that arise in obtaining a rigorous proof of detailed balance, using deterministic rather than Markovian dynamics. J. England's biological applications of detailed
balance are briefly considered.
Date / Time: Friday, October 26, 2018 - 16:00
Venue: CRM, Université de Montréal, Pavillon André-Aisenstadt, room 6254
Friday, October 19, 2018
Vacuum Energy of the Universe and nontrivial topological sectors in Quantum Field Theory
I discuss a new scenario for early cosmology when the inflationary de Sitter phase emerges dynamically. This genuine quantum effect occurs as a result of dynamics of the topologically nontrivial
sectors in a strongly coupled non-abelian gauge theory in an expanding universe. I argue that the key element for this idea to work is the presence of nontrivial holonomy in strongly coupled gauge
theories. The effect is global in nature, non-analytical in coupling constant, and cannot be formulated in terms of a gradient expansion in an effective local field theory.
I explain the basic ideas of this framework using a simplified 2D quantum field theory where precise computations can be carried out in a theoretically controllable way. I move on to generalize the
computations to 4D non-abelian gauge field theories. The last (and most important for cosmological applications) part of my talk is based on recent paper [arXiv:1709.09671].
Date / Time: Friday, October 19, 2018 - 16:00
Venue: CRM, Université de Montréal, Pavillon André-Aisenstadt, room 1140
Friday, October 12, 2018
Robust estimation in the presence of influential units for skewed finite and infinite populations
Many variables encountered in practice (e.g., economic variables) have skewed distributions. The latter provide a conducive ground for the presence of influential observations, which are those that
have a drastic impact on the estimates if they were to be excluded from the sample. We examine the problem of influential observations in a classical statistical setting as well as in a finite
population setting that includes two main frameworks: the design-based framework and the model-based framework. Within each setting, classical estimators may be highly unstable in the presence of
influential units. We propose a robust estimator of the population mean based on the concept of conditional bias of a unit, which is a measure of influence. The idea is to reduce the impact of the
sample units that have a large conditional bias. The proposed estimator depends on a cut-off value. We suggest selecting the cut-off value that minimizes the maximum absolute estimated conditional
bias with respect to the robust estimator. The properties of the proposed estimator will be discussed. Finally, the results of a simulation study comparing the performance of several estimators in
terms of bias and mean square error will be presented.
Date / Time: Friday, October 12, 2018 - 16:00
Venue: CRM, Université de Montréal, Pavillon André-Aisenstadt, room 6254
Friday, October 5, 2018
Counting lattice walks confined to cones
The study of lattice walks confined to cones is a very lively topic in combinatorics and in probability theory, which has witnessed rich developments in the past 20 years. In a typical problem, one
is given a finite set of allowed steps S in Z^d, and a cone C in R^d. Clearly, there are |S|^n walks of length n that start from the origin and take their steps in S. But how many of them remain in
the cone C?
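For the simplest quadrant case, with steps {N, S, E, W} and C the first quadrant of the plane, the count can be computed by a short dynamic program (our illustration, not from the talk; the function name is ours):

```python
def quadrant_walks(n, steps=((1, 0), (-1, 0), (0, 1), (0, -1))):
    """Count length-n walks from the origin taking the given steps,
    confined to the first quadrant (x >= 0 and y >= 0)."""
    counts = {(0, 0): 1}          # walks of length 0
    for _ in range(n):
        new = {}
        for (x, y), c in counts.items():
            for dx, dy in steps:
                nx, ny = x + dx, y + dy
                if nx >= 0 and ny >= 0:          # stay in the cone
                    new[(nx, ny)] = new.get((nx, ny), 0) + c
        counts = new
    return sum(counts.values())

print([quadrant_walks(n) for n in range(6)])  # [1, 2, 6, 18, 60, 200]
```

The resulting sequence 1, 2, 6, 18, 60, 200, ... is one of the classical quadrant sequences whose generating series the talk's classification addresses.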
One of the motivations for studying such questions is that lattice walks are ubiquitous in various mathematical fields, where they encode important classes of objects: in discrete mathematics
(permutations, trees, words...), in statistical physics (polymers...), in probability theory (urns, branching processes, systems of queues), among other fields.
The systematic study of these counting problems started about 20 years ago. Beforehand, only sporadic cases had been solved, with the exception of walks with small steps confined to a Weyl chamber,
for which a general reflection principle had been developed. Since then, several approaches have been combined to understand how the choice of the steps and of the cone influence the nature of the
counting sequence a(n), or of the associated series A(t)=\sum a(n) t^n. For instance, if C is the first quadrant of the plane and S only consists of "small" steps, it is now understood when A(t)
is rational, algebraic, or when it satisfies a linear, or a non-linear, differential equation. Even in this simple case, the classification involves tools coming from an attractive variety of fields:
algebra on formal power series, complex analysis, computer algebra, differential Galois theory, to cite just a few. And much remains to be done, for other cones and sets of steps.
This talk will survey these recent developments, and conclude a series of talks by the author, in the framework of the Aisenstadt chair.
Date / Time: Friday, October 5, 2018 - 16:00
Venue: CRM, Université de Montréal, Pavillon André-Aisenstadt, room 6254
Friday, September 28, 2018
A delay differential equation with a solution whose shortened segments are dense
Simple-looking autonomous delay differential equations x'(t)=f(x(t-r)) with a real function f and single time lag r>0 can generate complicated (chaotic) solution behaviour, depending on the shape of
f. The same could be shown for equations with a variable, state-dependent delay r = d(x[t]), even for the linear case f(\xi) = -\alpha\,\xi with \alpha > 0. Here the argument x[t] of the delay functional d
is the history of the solution x between t-r and t, defined as the function x[t]: [-r,0] \to \mathbb{R} given by x[t](s) = x(t+s). So the delay alone may be responsible for complicated solution behaviour.
In both cases the complicated behaviour which could be established occurs in a thin dust-like invariant subset of the infinite-dimensional Banach space or manifold of functions [-r,0] \to \mathbb{R} on
which the delay equation defines a nice semiflow. The lecture presents a result which grew out of an attempt to obtain complicated motion on a larger set with non-empty interior, as certain numerical
experiments seem to suggest. For some r > 1 we construct a delay functional d: Y \to (0,r), Y an infinite-dimensional subset of the space C^1([-r,0],\mathbb{R}), so that the equation x'(t) = -\alpha\,x(t - d(x[t]))
has a solution whose shortened segments x[t]|_{[-1,0]}, t \ge 0, are dense in the space C^1([-1,0],\mathbb{R}). This implies a new kind of complicated behaviour of the flowline
[0,\infty) \ni t \mapsto x[t] \in C^1([-r,0],\mathbb{R}). Reference: H. O. Walther, A delay differential equation with a solution whose shortened segments are dense. J. Dynamics Dif. Eqs., to appear.
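Readers can experiment with the constant-delay linear case numerically. The sketch below (our illustration, not from the paper; function and parameter names are ours) integrates x'(t) = -α x(t - r) by forward Euler; for α = r = 1 and history x ≡ 1, the method of steps gives x(t) = 1 - t on [0,1] and hence exactly x(2) = -1/2:

```python
def dde_euler(alpha=1.0, r=1.0, T=2.0, dt=1e-3, history=lambda t: 1.0):
    """Forward-Euler integration of x'(t) = -alpha * x(t - r) with a
    constant delay r and prescribed history on [-r, 0]; returns x(T)."""
    lag = round(r / dt)            # grid points per delay interval
    n = round(T / dt)
    # grid values on [-r, T]; index i corresponds to t = -r + i*dt
    xs = [history(-r + i * dt) for i in range(lag + 1)]
    for _ in range(n):
        delayed = xs[len(xs) - 1 - lag]        # x(t - r) on the grid
        xs.append(xs[-1] + dt * (-alpha * delayed))
    return xs[-1]

print(dde_euler())  # close to the exact value x(2) = -0.5
```

The complicated behaviour in the talk requires the state-dependent delay d(x[t]); this constant-delay toy only shows the basic numerical setup.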
Date / Time: Friday, September 28, 2018 - 16:00
Venue: McGill University, Burnside Hall , 805 W., rue Sherbrooke, room 1104
Friday, September 21, 2018
Algebraic structures for topological summaries of data
This talk introduces an algebraic framework to encode, compute, and analyze topological summaries of data. The main motivating problem, from evolutionary biology, involves statistics on a dataset
comprising images of fruit fly wing veins, which amount to embedded planar graphs with varying combinatorics. Additional motivation comes from statistics more generally, the goal being to summarize
unknown probability distributions from samples. The algebraic structures for topological summaries take their cue from graded polynomial rings and their modules, but the theory is complicated by the
passage from integer exponent vectors to real exponent vectors. The key to making the structures practical for data science applications is a finiteness condition that encodes topological tameness --
which occurs in all modules arising from data -- robustly, in equivalent combinatorial and homological algebraic ways. Out of the tameness condition surprisingly falls much of ordinary commutative
algebra, including syzygy theorems and primary decomposition.
Date / Time: Friday, September 21, 2018 - 16:00
Venue: UQAM, Pavillon Président-Kennedy, 201, ave du Président-Kennedy, room PK-5115
Friday, September 14, 2018
Systems of points with Coulomb interactions
Large ensembles of points with Coulomb interactions arise in various settings of condensed matter physics, classical and quantum mechanics, statistical mechanics, random matrices and even
approximation theory, and they give rise to a variety of questions pertaining to analysis, Partial Differential Equations and probability. We will first review these motivations, then present the
"mean-field" derivation of effective models and equations describing the system at the macroscopic scale. We then explain how to analyze the next order behavior, giving information on the
configurations at the microscopic level and connecting with crystallization questions, and finish with the description of the effect of temperature.
Date / Time: Friday, September 14, 2018 - 16:00
Venue: CRM, Université de Montréal, Pavillon André-Aisenstadt, room 6254
Friday, September 7, 2018
Mathematical challenges in constructing quantum field theory models
This talk is an overview of algebraic quantum field theory (AQFT) and its perturbative generalization: pAQFT. Both are axiomatic systems meant to provide foundations for quantum field theory (the
theory underlying particle physics). I will explain the current status of constructing physically relevant models in both approaches and present future perspectives. The most recent results
include applications of pAQFT in Yang-Mills theories and effective quantum gravity, as well as some progress in understanding how to go beyond the perturbation theory.
Date / Time: Friday, September 7, 2018 - 16:00
Venue: CRM, Université de Montréal, Pavillon André-Aisenstadt, room 6254
ISM students to animate a workshop at the Eureka Festival
Sunday, June 10, 2018
Le carréousel du géomètre
On the theme of movement, the ISM invites you to participate in animated activities for all ages. A hodge podge of mathematical curiosities will debunk your preconceived notions about all things
that roll. Square wheels, carefully constructed manholes, and drills to make square holes are just a few of the objects with which you'll be able to experiment.
Meet us at the Eureka Festival in Montreal's Old Port
Sunday, June 10, 2018
4:00 pm to 6:00 pm
Conceived and animated by: Alexis Langlois-Rémillard, Olivier Binette and Pierre-Alexandre Mailhot
Monday, May 28, 2018
The summer school will be held at McGill University in Montreal, May 28-June 01, 2018. This school immediately precedes the Statistical Society of Canada Annual Meeting, also to be held at McGill
University, which will in turn be followed by a month-long program at the Centre de recherche mathematique on “Causal inference in the presence of dependence and network structure: modelling
strategies and model selection”.
Friday, May 25, 2018
The Université de Sherbrooke’s department of mathematics invites you to be a part of the 21st edition of the colloque panquébécois des étudiants et étudiantes de l’ISM which will be held from may
25th to may 27th this year.
This year, four researchers will give a plenary talk to present a part of their research. Louigi Addario-Berry from the Université McGill, Karim Oualkacha from the Université du Québec à Montréal,
Hugo Chapdelaine from the Université Laval and Vasilisa Schramchenko from the Université de Sherbrooke will share the auditorium throughout the weekend.
We also invite you to share your own research as part of one of the 20 minutes student’s talks.
A beautiful city, a beautiful campus, great food and amazing people are on schedule for this weekend !
You can find the registration form, the event’s details and how to join us on our website: https://www.ism21colloque.com.
Saturday, May 12, 2018
Have you always dreamed of inventing your own magic tricks? Would you like to know how math can help? The Institut des sciences mathématiques invites you to see magic tricks using cards, knots and
mathematics, but definitely not your intuition... it may trick you!
Saturday, May 12, 11:00 am to 2:00 pm
UQAM, Président-Kennedy Building
201, ave du Président-Kennedy
Room PK-1630, Montréal
Friday, May 4, 2018
Klein-Gordon-Maxwell-Proca systems in the Riemannian setting
We intend to give a general talk about Klein-Gordon-Maxwell-Proca systems which we aim to be accessible to a broad audience. We will insist on the Proca contribution and then discuss the kind of
results one can prove in the electro-magneto static case of the equations.
Date / Time: Friday, May 4, 2018 - 16:00
Venue: UQAM, Pavillon Président-Kennedy, 201, ave du Président-Kennedy, room PK-5115
Friday, April 13, 2018
Local-global principles in number theory
One of the classical tools of number theory is the so-called local-global principle, or Hasse principle, going back to Hasse's work in the 1920's. His first results concern quadratic forms, and norms
of number fields. Over the years, many positive and negative results were proved, and there is now a huge number of results in this topic.
This talk will present some old and new results, in particular in the continuation of Hasse's cyclic norm theorem. These have been obtained jointly with Parimala and Tingyu Lee.
Date / Time: Friday, April 13, 2018 - 16:00
Venue: UQAM, Pavillon Président-Kennedy, 201, ave du Président-Kennedy, room PK-5115
Thursday, March 1, 2018
p-Adic Variation in the Theory of Automorphic Forms
This will be an expository lecture intended to illustrate through examples the theme of p-adic variation in the classical theory of modular forms. Classically, modular forms are complex analytic
objects, but because their Fourier coefficients are typically integral, it is possible to do elementary arithmetic with them. Early examples arose already in the work of Ramanujan. Today one knows
that modular forms encode deep arithmetic information about elliptic curves and Galois representations. The main goal of the lecture will be to motivate a beautiful theorem of Robert Coleman and
Barry Mazur, who constructed the so-called Eigenvariety, which leads to a geometric approach to varying modular forms, their associated Galois representations, as well as their L-functions, in p-adic
analytic families. We will briefly discuss important applications to Number Theory and Iwasawa Theory.
Date / Time: Thursday, March 1, 2018 - 15:30
Venue: Université Laval, room 3840, Alexandre-Vachon Building
Friday, February 23, 2018
Cluster theory of the coherent Satake category
The affine Grassmannian, though a somewhat esoteric looking object at first sight, is a fundamental algebro-geometric construction lying at the heart of a series of ideas connecting number theory
(and the Langlands program) to geometric representation theory, low dimensional topology and mathematical physics.
Historically it is popular to study the category of constructible perverse sheaves on the affine Grassmannian. This leads to the *constructible* Satake category and the celebrated (geometric) Satake equivalence.
More recently it has become apparent that it makes sense to also study the category of perverse *coherent* sheaves (the coherent Satake category). Motivated by certain ideas in mathematical physics,
this category is conjecturally governed by a cluster algebra structure.
We will illustrate the geometry of the affine Grassmannian in an elementary way, discuss what we mean by a cluster algebra structure and then describe a solution to this conjecture in the case of
general linear groups.
Date / Time: Friday, February 23, 2018 - 16:00
Venue: UQAM, Pavillon Président-Kennedy, 201, ave du Président-Kennedy, room PK-5115
Friday, February 16, 2018
Quantum n-body problem: generalized Euler coordinates (from J-L Lagrange to Figure Eight by Moore and Ter-Martirosyan, then and today)
The potential of the n-body problem, both classical and quantum, depends only on the relative (mutual) distances between bodies. By generalized Euler coordinates we mean relative distances and
angles. Their advantage over Jacobi coordinates is emphasized.
The NEW IDEA is to study trajectories in classical systems, and eigenstates in quantum systems, which depend on relative distances ALONE.
We show how this study is equivalent to the study of
(i) the motion of a particle (quantum or classical) in curved space of dimension n(n-1)/2
or the study of
(ii) the (quantum or classical) Euler-Arnold sl(n(n-1)/2, R) algebra top.
The curved space of (i) has a number of remarkable properties. In the 3-body case the de-quantization of the quantum Hamiltonian leads to a classical Hamiltonian which solves a ~250-year-old problem
posed by Lagrange on 3-body planar motion.
Date: Friday, February 16, 2018
Time: 4:00 p.m.
Place: UdeM, Pavillon André-Aisenstadt, salle 6254
Friday, February 16, 2018
The Law of Large Populations: The return of the long-ignored N and how it can affect our 2020 vision
For over a century now, we statisticians have successfully convinced ourselves and almost everyone else, that in statistical inference the size of the population N can be ignored, especially when it
is large. Instead, we focused on the size of the sample, n, the key driving force for both the Law of Large Numbers and the Central Limit Theorem. We were thus taught that the statistical error
(standard error) goes down with n typically at the rate of 1/√n. However, all these rely on the presumption that our data have perfect quality, in the sense of being equivalent to a probabilistic
sample. A largely overlooked statistical identity, a potential counterpart to the Euler identity in mathematics, reveals a Law of Large Populations (LLP), a law that we should be all afraid of. That
is, once we lose control over data quality, the systematic error (bias) in the usual estimators, relative to the benchmarking standard error from simple random sampling, goes up with N at the rate of
√N. The coefficient in front of √N can be viewed as a data defect index, which is the simple Pearson correlation between the reporting/recording indicator and the value reported/recorded. Because
of the multiplier √N, a seemingly tiny correlation, say, 0.005, can have a detrimental effect on the quality of inference. Without an understanding of this LLP, “big data” can do more harm than good
because of the drastically inflated precision assessment and hence a gross overconfidence, setting us up to be caught by surprise when the reality unfolds, as we all experienced during the 2016 US
presidential election. Data from Cooperative Congressional Election Study (CCES, conducted by Stephen Ansolabehere, Douglas River and others, and analyzed by Shiro Kuriwaki), are used to estimate
the data defect index for the 2016 US election, with the aim to gain a clearer vision for the 2020 election and beyond.
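The identity behind the talk is exact algebra, so it can be verified directly. The sketch below (our illustration, not the speaker's code; the function name is ours) checks that for any population y and response indicator R with f = n/N, the error of the naive sample mean equals ρ_{R,Y} · √((1-f)/f) · σ_Y:

```python
import math

def error_decomposition(y, r):
    """Return (sample mean - population mean, rho * sqrt((1-f)/f) * sigma_Y)
    for population values y and 0/1 response indicators r; the two agree
    exactly whenever 0 < n < N and y is not constant."""
    N = len(y)
    n = sum(r)
    f = n / N
    ybar = sum(y) / N
    ybar_sample = sum(yi for yi, ri in zip(y, r) if ri) / n
    sigma_y = math.sqrt(sum((yi - ybar) ** 2 for yi in y) / N)
    cov = sum((yi - ybar) * (ri - f) for yi, ri in zip(y, r)) / N
    rho = cov / (math.sqrt(f * (1 - f)) * sigma_y)   # data defect correlation
    return ybar_sample - ybar, rho * math.sqrt((1 - f) / f) * sigma_y

# any population and any nontrivial response pattern satisfy the identity
lhs, rhs = error_decomposition([3, 1, 4, 1, 5, 9, 2, 6], [1, 0, 1, 1, 0, 1, 0, 0])
```

Holding ρ fixed while N grows shows the √N blow-up relative to the simple-random-sampling standard error, which is the talk's central warning.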
Date: Friday, February 16, 2018
Time: 3:30 p.m.
Place: McGill University, OTTO MAASS 217
Friday, February 9, 2018
Persistence modules in symplectic topology
In order to resolve Vladimir Arnol'd's famous conjecture from the 1960's, giving lower bounds on the number of fixed points of Hamiltonian diffeomorphisms of a symplectic manifold, Andreas Floer has
associated in the late 1980's a homology theory to the Hamiltonian action functional on the loop space of the manifold. It was known for a long time that this homology theory can be filtered by the
values of the action functional, yielding information about metric invariants in symplectic topology (Hofer's metric, for example). We discuss a recent marriage between the filtered version of Floer
theory and persistent homology, a new field of mathematics that has its origins in data analysis, providing examples of new ensuing results.
Date / Time: Friday, February 9, 2018 - 16:00
Venue: UQAM, Pavillon Président-Kennedy, 201, ave du Président-Kennedy, room PK-5115
Friday, January 12, 2018
What is quantum chaos?
Where do eigenfunctions of the Laplacian concentrate as eigenvalues go to infinity? Do they equidistribute or do they concentrate in an uneven way? It turns out that the answer depends on the nature
of the geodesic flow. I will discuss various results in the case when the flow is chaotic: the Quantum Ergodicity theorem of Shnirelman, Colin de Verdière, and Zelditch, the Quantum Unique Ergodicity
conjecture of Rudnick-Sarnak, the progress on it by Lindenstrauss and Soundararajan, and the entropy bounds of Anantharaman-Nonnenmacher. I will conclude with a recent lower bound on the mass of
eigenfunctions obtained with Jin. It relies on a new tool called "fractal uncertainty principle" developed in the works with Bourgain and Zahl.
Date: Friday, January 12, 2018
Time: 4:00 p.m.
Place: UdeM, Pavillon André-Aisenstadt, salle 6254
Friday, January 12, 2018
Seminars in Undergraduate Mathematics in Montreal (SUMM) is an annual event organized by undergraduate mathematics students from Montreal’s four universities.
Each year, the SUMM committee organizes a bilingual colloquium for undergraduate mathematics students. Our main goal is to bring together students from the universities of Montreal and across Canada,
creating a dynamic and stimulating undergraduate mathematical community where they can share ideas and interests.
During the three days of the conference, students are invited to share their interest in an area of mathematics or statistics in a 25, 35 or 45 minute talk. Moreover, four keynote speakers — one
professor from each organizing university (Concordia University, McGill University, Université de Montréal and Université du Québec à Montréal), representing different areas of research — are invited
to give a 50 minute presentation.
For its ninth edition, SUMM will be held at Concordia University on January 12, 13 and 14. If you are interested in presenting a talk, please indicate so in your registration form. Those giving talks
will receive a discount on their ticket price.
Thursday, December 14, 2017
The new world of infinite random geometric graphs
The infinite random or Rado graph R has been of interest to graph theorists, probabilists, and logicians for the last half-century. The graph R has many peculiar properties, such as its categoricity:
R is the unique countable graph satisfying certain adjacency properties. Erdős and Rényi proved in 1963 that a countably infinite binomial random graph is isomorphic to R.
Random graph processes giving unique limits are, however, rare. Recent joint work with Jeannette Janssen proved the existence of a family of random geometric graphs with unique limits. These graphs
arise in the normed space $\ell^n_\infty$ , which consists of $\mathbb{R}^n$ equipped with the $L_\infty$-norm. Balister, Bollobás, Gunderson, Leader, and Walters used tools from functional analysis
to show that these unique limit graphs are deeply tied to the $L_\infty$-norm. Precisely, a random geometric graph on any normed, finite-dimensional space not isometric to $\ell^n_\infty$ gives
non-isomorphic limits with probability 1.
With Janssen and Anthony Quas, we have discovered unique limits in infinite dimensional settings including sequence spaces and spaces of continuous functions. We survey these newly discovered
infinite random geometric graphs and their properties.
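An explicit presentation of R uses the BIT predicate (i ~ j iff the smaller index appears as a set bit of the larger). The sketch below (our illustration, not from the talk; function names are ours) exhibits the extension property that characterizes R among countable graphs:

```python
def rado_adjacent(i, j):
    """Rado graph on the naturals via the BIT predicate:
    i ~ j iff bit min(i,j) of max(i,j) equals 1."""
    lo, hi = sorted((i, j))
    return (hi >> lo) & 1 == 1

def extension_witness(A, B):
    """Return a vertex adjacent to everything in A and nothing in B,
    for disjoint finite sets of naturals A and B: set exactly the bits
    in A, plus one high bit to make the witness larger than all of B."""
    m = max(A | B) + 1
    return (1 << m) + sum(1 << a for a in A)

A, B = {0, 2, 5}, {1, 3}
z = extension_witness(A, B)
```

Repeating this extension argument back and forth is exactly how one proves that any two countable graphs with the property are isomorphic, hence the categoricity mentioned above.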
Date: Thursday, December 14, 2017
Time: 3:30 p.m.
Place: Université Laval, Pavillon Vachon, room 2830
Friday, December 8, 2017
Primes with missing digits
Many famous open questions about primes can be interpreted as questions about the digits of primes in a given base. We will talk about recent work showing there are infinitely many primes with no 7
in their decimal expansion. (And similarly with 7 replaced by any other digit.) This shows the existence of primes in a 'thin' set of numbers (sets which contain at most X^{1-c} elements less than X),
which is typically very difficult.
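The statement is easy to explore numerically for small ranges (our sketch, not code from the work discussed; function names are ours). Among the 25 primes below 100, nine contain the digit 7, leaving 16:

```python
def primes_below(x):
    """Simple sieve of Eratosthenes returning all primes < x."""
    sieve = [True] * x
    sieve[:2] = [False, False]
    for p in range(2, int(x ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p::p] = [False] * len(sieve[p * p::p])
    return [i for i, is_p in enumerate(sieve) if is_p]

def primes_missing_digit(x, digit='7'):
    """Primes below x whose decimal expansion avoids the given digit."""
    return [p for p in primes_below(x) if digit not in str(p)]

print(len(primes_below(100)), len(primes_missing_digit(100)))  # 25 16
```

The thinness is visible in the density: integers below X avoiding one digit number about X^{log 9 / log 10} ≈ X^{0.954}, so the theorem finds primes in a genuinely sparse set.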
Date / Time: Friday, December 8, 2017 - 16:00
Venue: UQAM, Pavillon Président-Kennedy, 201, ave du Président-Kennedy, room PK-5115
Friday, November 24, 2017
150 years (and more) of data analysis in Canada
As Canada celebrates its 150th anniversary, it may be good to reflect on the past and future of data analysis and statistics in this country. In this talk, I will review the Victorian Statistics
Movement and its effect in Canada, data analysis by a Montréal physician in the 1850s, a controversy over data analysis in the 1850s and 60s centred in Montréal, John A. Macdonald’s use of
statistics, the Canadian insurance industry and the use of statistics, the beginning of mathematical statistics in Canada, the Fisherian revolution, the influence of Fisher, Neyman and Pearson, the
computer revolution, and the emergence of data science.
Date: Friday, November 24, 2017
Time: 3:30 p.m.
Place: McGill, Leacock Building, room LEA 232
Friday, November 24, 2017
Complex analysis and 2D statistical physics
Over the last decades, there was much progress in understanding 2D lattice models of critical phenomena. It started with several theories, developed by physicists. Most notably, Conformal Field
Theory led to spectacular predictions for 2D lattice models: e.g., critical percolation cluster a.s. has Hausdorff dimension 91/48, while the number of self-avoiding length N walks on the hexagonal
lattice grows like (\sqrt{2+\sqrt{2}})^N N^{11/32}. While the algebraic framework of CFT is rather solid, rigorous arguments relating it to lattice models were lacking. More recently, mathematical
approaches were developed, allowing not only for rigorous proofs of many such results, but also for new physical intuition. We will discuss some of the applications of complex analysis to the study
of 2D lattice models.
Date: Friday, November 24, 2017
Time: 4:00 p.m.
Place: UdeM, Pavillon André-Aisenstadt, salle 6254
Friday, November 17, 2017
Recent progress on De Giorgi Conjecture
Classifying solutions to nonlinear partial differential equations is a fundamental line of research in PDEs. In this talk, I will report recent progress made in classifying some elementary PDEs, starting with
the De Giorgi Conjecture (1978). I will discuss the classification of global minimizers and finite Morse index solutions, relation with minimal surfaces and Toda integrable systems, as well as recent
exciting developments in fractional De Giorgi Conjecture.
Date / Time: Friday, November 17, 2017 - 16:00
Venue: UQAM, Pavillon Président-Kennedy, 201, ave du Président-Kennedy, room PK-5115
Friday, October 27, 2017
Beneath the Surface: Geometry Processing at the Intrinsic/Extrinsic Interface
Algorithms for analyzing 3D surfaces find application in diverse fields from computer animation to medical imaging, manufacturing, and robotics. Reflecting a bias dating back to the early development
of differential geometry, a disproportionate fraction of these algorithms focuses on discovering intrinsic shape properties, or those measurable along a surface without considering the surrounding
space. This talk will summarize techniques to overcome this bias by developing a geometry processing pipeline that treats intrinsic and extrinsic geometry democratically. We describe
theoretically-justified, stable algorithms that can characterize extrinsic shape from surface representations.
In particular, we will show two strategies for computational extrinsic geometry. In our first approach, we will show how the discrete Laplace-Beltrami operator of a triangulated surface accompanied
with the same operator for its offset determines the surface embedding up to rigid motion. In the second, we will treat a surface as the boundary of a volume rather than as a thin shell, using the
Steklov (Dirichlet-to-Neumann) eigenproblem as the basis for developing volumetric spectral shape analysis algorithms without discretizing the interior.
Date: Friday, October 27, 2017
Time: 4:00 p.m.
Place: UdeM, Pavillon André-Aisenstadt, salle 6254
Friday, October 13, 2017
Supercritical Wave Equations
I will review the problem of global existence for dispersive equations, in particular supercritical equations. These equations, which play a fundamental role in science, have been, and remain, a major
challenge in the field of Partial Differential Equations. They come in various forms, derived from Geometry, General Relativity, Fluid Dynamics, Field Theory. I present a new approach to classify the
asymptotic behavior of wave equations, supercritical and others, and construct global solutions with large initial data. I will then describe current extensions to Nonlinear Schroedinger Equations.
Date: Friday, October 13, 2017
Time: 4:00 p.m.
Place: UdeM, Pavillon André-Aisenstadt, room 6254
Friday, September 29, 2017
The first field
The “first field” is obtained by making the entries in its addition and multiplication tables be the smallest possibilities. It is really an interesting field that contains the integers, but with new
addition and multiplication tables. For example, 2 x 2 = 3, 5 x 7 = 13, ... It extends to the infinite ordinals and the first infinite ordinal is the cube root of 2!
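This "first field" is Conway's field of nimbers: nim-addition is bitwise XOR, and nim-multiplication is defined recursively as a mex (minimum excluded value). A small memoized sketch (our illustration; function names are ours) reproduces the products quoted above:

```python
from functools import lru_cache

def mex(s):
    """Minimum excluded non-negative integer of a set."""
    m = 0
    while m in s:
        m += 1
    return m

@lru_cache(maxsize=None)
def nim_mul(a, b):
    """Conway's nim-multiplication: a*b is the mex of
    a'*b ^ a*b' ^ a'*b' over all a' < a, b' < b
    (^ is nim-addition, i.e. bitwise XOR)."""
    return mex({nim_mul(x, b) ^ nim_mul(a, y) ^ nim_mul(x, y)
                for x in range(a) for y in range(b)})

print(nim_mul(2, 2), nim_mul(5, 7))  # 3 13
```

The naive recursion is slow for large arguments (faster rules exist via Fermat 2-powers), but it suffices to check small entries of the "smallest possible" multiplication table.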
Date: Friday, September 29, 2017
Time: 4:00 p.m.
Place: UdeM, Pavillon André-Aisenstadt, salle 1140
Friday, September 15, 2017
Isometric embedding and quasi-local type inequality
In this talk, we will first review the classic Weyl's embedding problem and its application in quasi-local mass. We will then discuss some recent progress on Weyl's embedding problem in general
Riemannian manifold. Assuming isometric embedding into Schwarzschild manifold, we will further establish a quasi-local type inequality. This talk is based on works joint with Pengfei Guan and Pengzi
Date / Time: Friday, September 15, 2017 - 16:00
Venue: UQAM, Pavillon Président-Kennedy, 201, ave du Président-Kennedy, room PK-5115
Canadian Undergraduate Mathematics Conference
Wednesday, July 19, 2017
The CUMC is an academic conference aimed at undergraduate students studying mathematics or mathematical fields. This year's event will take place in Montréal, from the 19th to the 23rd of July, in
collaboration between Université de Montréal, Concordia University, Université du Québec à Montréal and McGill University.
Montreal Math Camp
Monday, June 26, 2017
Concordia’s Math Camp provides an enriching experience for students who have shown an interest or an aptitude for mathematics. Led by a bilingual instructor, experienced in outreach activities and
international mathematical contests, and staffed by graduate students in mathematics or mathematics education from the Montreal area universities, the camp will challenge students aged 10-15 to
develop their problem-solving skills while having fun.
The students will participate in problem solving sessions, and experience math through games, projects, experiments and other fun activities. The camp will also highlight math in everyday life from
practical fields to the arts. Finally, it is a chance for kids interested in mathematics to meet other kids who share the same interest and thus develop new friendships.
The cost of the camp is $225/week (M-F: 9am to 4pm) with an extra $35/week for extended care (M-F: 8am-9am and 4pm-5pm). The camp is held in the Mathematics and Statistics Department of Concordia
University, 9th floor of the Library Building in SGW campus:
A Montreal Math Circle Activity, the camp is made possible by Concordia University in collaboration with the Institut des sciences mathématiques (ISM) and the support of Canadian Mathematical Society
(CMS) and NSERC Promoscience.
Mathematical activities are chosen among:
- appropriate level problem solving;
- challenge problems, such as open problems and discovery-type activities;
- exploratory activities with math themes such as polygonal numbers, math in paintings, Escher logic and tessellations, or a math walk around the department;
- recreational mathematics, with outreach activities and puzzles, and stories or movies about mathematicians and mathematics.
We will aim to find at least one daily activity where the students will get to move around and spend some physical energy.
Any general enquiries may be addressed to montrealmathclub@gmail.com. Please write June camp in the subject line.
To register, please complete the form.
The Mathematics of Magic!
Saturday, May 13, 2017
Have you always dreamed of inventing your own magic tricks? Would you like to know how math can help? The Institut des sciences mathématiques invites you to see magic tricks using cards, knots and
mathematics, but definitely not your intuition... it may trick you!
Saturday, May 13, 10:00 am to 2:00 pm
UQAM, Président-Kennedy Building
201, ave du Président-Kennedy
Room PK-R650, Montréal
Friday, May 12, 2017
The purpose of this annual conference is to bring together for a weekend graduate students from the province of Quebec enrolled in mathematical sciences programs. This year, the symposium will be
held at the Université du Québec à Trois-Rivières from May 12-14, 2017. Everyone is invited to register and to present their work on a topic they are passionate about.
Friday, May 5, 2017
From the geometry of numbers to Arakelov geometry
Arakelov geometry is a modern formalism that extends in various directions the geometry of numbers founded by Minkowski in the nineteenth century. The objects of study are arithmetic varieties,
namely complex varieties that can be defined by polynomial equations with integer coefficients. The theory exploits the interplay between algebraic geometry, number theory, complex analysis and
differential geometry. Recently, the formalism found beautiful and important applications to the so-called Kudla programme and the Colmez conjecture. In the talk, I will first introduce elementary
facts in Minkowski's geometry of numbers. This will provide a motivation for the sequel, where I will give my own view of Arakelov geometry, by focusing on toy (but non-trivial) examples of one of
the central theorems in the theory, the arithmetic Riemann-Roch theorem mainly due to Bismut, Gillet and Soulé, and generalizations. I hope there will be ingredients to satisfy different tastes, for
instance modular forms (arithmetic aspect), analytic torsion (analytic aspect) and Selberg zeta functions (arithmetic, analytic and dynamic aspects).
Date / Time: Friday, May 5, 2017 - 16:00
Venue: UQAM, Pavillon Président-Kennedy, 201, ave du Président-Kennedy, room PK-5115
Friday, April 21, 2017
Introduction to the Energy Identity for Yang-Mills
In this talk we give an introduction to the analysis of the Yang-Mills equation in higher dimensions. In particular, when studying sequences of solutions we will study the manner in which blow up
can occur, and how this blow up may be understood through the classical notions of the defect measure and bubbles. The energy identity is an explicit conjectural relationship, known to be true in
dimension four, relating the energy density of the defect measure at a point to the bubbles which occur at that point, and we will give a brief overview of the recent proof of this result for general
stationary Yang-Mills in higher dimensions. The work is joint with Daniele Valtorta.
Date / Time: Friday, April 21, 2017 - 16:00
Venue: UQAM, Pavillon Président-Kennedy, 201, ave du Président-Kennedy, room PK-5115
Friday, April 21, 2017
Date / Time: Friday, April 21, 2017 - 15:30
Venue : McGill University, Burnside Hall, 805 Sherbrooke Ouest, room 1205
Friday, April 7, 2017
Kahler-Einstein metrics
Kahler-Einstein metrics are of fundamental importance in Kahler geometry, with connections to algebraic geometry, geometric analysis, string theory amongst other fields. Their study has received a
great deal of attention recently, culminating in the solution of the Yau-Tian-Donaldson conjecture, characterizing which complex manifolds admit Kahler-Einstein metrics. I will give an overview of
the field, including some recent developments.
Date / Time: Friday, April 7, 2017 - 16:00
Venue: UQAM, Pavillon Président-Kennedy, 201, ave du Président-Kennedy, room PK-5115
Thursday, April 6, 2017
Instrumental variable regression with survival outcomes
Instrumental variable (IV) methods are popular in non-experimental studies to estimate the causal effects of medical interventions or exposures. These approaches allow for the consistent estimation
of such effects even if important confounding factors are unobserved. Despite the increasing use of these methods, there have been few extensions of IV methods to censored data regression problems.
We discuss challenges in applying IV structural equational modelling techniques to the proportional hazards model and suggest alternative modelling frameworks. We demonstrate the utility of the
accelerated lifetime and additive hazards models for IV analyses with censored data. Assuming linear structural equation models for either the event time or the hazard function, we propose
closed-form, two-stage estimators for the causal effect in the structural models for the failure time outcomes. The asymptotic properties of the estimators are derived and the resulting inferences
are shown to perform well in simulation studies and in an application to a data set on the effectiveness of a novel chemotherapeutic agent for colon cancer.
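The two-stage idea can be sketched in a toy linear model. The snippet below ignores censoring entirely and uses a made-up data-generating process (instrument Z, unobserved confounder U), so it only illustrates why an instrument removes confounding bias, not the talk's actual estimators:

```python
import numpy as np

rng = np.random.default_rng(0)
n, beta = 20_000, 1.0
Z = rng.normal(size=n)                            # instrument
U = rng.normal(size=n)                            # unobserved confounder
X = Z + U + 0.5 * rng.normal(size=n)              # exposure, confounded by U
logT = beta * X + U + 0.5 * rng.normal(size=n)    # (uncensored) log failure time

# the naive OLS slope is biased because X and U are correlated
b_ols = np.cov(X, logT)[0, 1] / X.var()

# stage 1: project X onto Z; stage 2: regress the outcome on the projection
Xhat = Z * (np.cov(Z, X)[0, 1] / Z.var())
b_2sls = np.cov(Xhat, logT)[0, 1] / Xhat.var()
```

Here `b_ols` overshoots the true effect beta = 1, while the two-stage estimate is consistent for it.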
Date / Time: Thursday, April 6, 2017 - 15:30
Venue: Laval University, Pavillon Vachon, room 3840
Friday, March 31, 2017
PDEs on non-smooth domains
Abstract: In these lecture we will discuss the relationship between the boundary regularity of the solutions to elliptic second order divergence form partial differential equations and the geometry
of the boundary of the domain where they are defined. While in the smooth setting tools from classical PDEs are used to address this question, in the non-smooth setting techniques from harmonic
analysis and geometric measure theory are needed to tackle the problem. The goal is to present an overview of the recent developments in this very active area of research.
Date / Time: Friday, March 31, 2017 - 16:00
Venue: UQAM, Pavillon Président-Kennedy, 201, ave du Président-Kennedy, room PK-5115
Friday, March 17, 2017
Inference in Dynamical Systems
We consider the asymptotic consistency of maximum likelihood parameter estimation for dynamical systems observed with noise. Under suitable conditions on the dynamical systems and the observations,
we show that maximum likelihood parameter estimation is consistent. Furthermore, we show how some well-studied properties of dynamical systems imply the general statistical properties related to
maximum likelihood estimation. Finally, we exhibit classical families of dynamical systems for which maximum likelihood estimation is consistent. Examples include shifts of finite type with Gibbs
measures and Axiom A attractors with SRB measures. We also relate Bayesian inference to the thermodynamic formalism in tracking dynamical systems.
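As a toy version of consistent maximum likelihood estimation for a dynamical system, the sketch below fits the parameter of a logistic map with small Gaussian dynamical noise. This is my own simplification (the Gaussian one-step likelihood then has a closed-form maximizer); the talk's setting of observation noise is harder:

```python
import numpy as np

rng = np.random.default_rng(1)
a_true, n, sigma = 3.7, 2000, 0.02

# logistic map with small Gaussian dynamical noise, clipped to stay in [0, 1]
x = np.empty(n)
x[0] = 0.4
for t in range(n - 1):
    x[t + 1] = np.clip(a_true * x[t] * (1 - x[t]) + sigma * rng.normal(), 0.0, 1.0)

# with Gaussian one-step transitions, the MLE of `a` is a least-squares ratio
f = x[:-1] * (1 - x[:-1])                 # regressor f(x_t) = x_t (1 - x_t)
a_hat = float(np.dot(f, x[1:]) / np.dot(f, f))
```

With 2000 observations the estimate lands very close to the true parameter 3.7.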
Date / Time : Friday, March 17, 2017 - 15:30
Venue : McGill University, Burnside Hall, 805 Sherbrooke Ouest, room 1205
Friday, March 10, 2017
Probabilistic aspects of minimum spanning trees
One of the most dynamic areas of probability theory is the study of the behaviour of discrete optimization problems on random inputs. My talk will focus on the probabilistic analysis of one of the
first and foundational combinatorial optimization problems: the minimum spanning tree problem. The structure of a random minimum spanning tree (MST) of a graph G turns out to be intimately linked to
the behaviour of critical and near-critical percolation on G. I will describe this connection, and present some results on the structure, scaling limits, and volume growth of random MSTs. It turns
out that, on high-dimensional graphs, random minimum spanning trees are expected to be three-dimensional when viewed intrinsically, and six-dimensional when viewed as embedded objects.
Based on joint works with Nicolas Broutin, Christina Goldschmidt, Simon Griffiths, Ross Kang, Gregory Miermont, Bruce Reed, Sanchayan Sen.
Date / Time : Friday, March 10, 2017 - 4:00 PM
Venue : CRM, Université de Montréal, Pavillon André-Aisenstadt, 2920 Chemin de la Tour, room 6254
Friday, February 24, 2017
Spreading phenomena in integrodifference equations with overcompensatory growth function
The globally observed phenomenon of the spread of invasive biological species with all its sometimes detrimental effects on native ecosystems has spurred intense mathematical research and modelling
efforts into corresponding phenomena of spreading speeds and travelling waves. The standard modelling framework for such processes is based on reaction-diffusion equations, but several aspects of an
invasion can only be appropriately described by discrete-time analogues, called integrodifference equations. The theory of spreading speeds and travelling waves in such integrodifference equations
is well established for the "mono-stable" case, i.e. when the non-spatial dynamics show a globally stable positive steady state. When the positive state of the non-spatial dynamics is not stable, as
is the case with the famous discrete logistic equation, it is unclear how the corresponding spatial spread profile evolves and at what speed. Previous simulations seemed to reveal a travelling
profile in the form of a two-cycle, with or without spatial oscillations. The existence of a travelling wave solution has been proven, but its shape and stability remain unclear. In this talk, I will
show simulations that suggest that there are several travelling profiles at different speeds. I will establish corresponding generalizations of the concept of a spreading speed and prove the
existence of such speeds and travelling waves in the second-iterate operator. I conjecture that rather than a travelling two-cycle for the next-generation operator, one observes a pair of stacked
fronts for the second-iterate operator. I will relate the observations to the phenomenon of dynamic stabilization.
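For contrast with the overcompensatory case discussed in the talk, here is a minimal simulation of a monostable integrodifference equation, n_{t+1} = k * g(n_t), with Beverton-Holt growth and a Gaussian dispersal kernel (my own toy setup, not the speaker's model); the invasion front advances at a roughly constant speed:

```python
import numpy as np

dx = 0.2
x = np.arange(-80.0, 80.0 + dx, dx)
kern = np.exp(-x ** 2 / 2) / np.sqrt(2 * np.pi)   # Gaussian dispersal kernel

def step(n, R=2.0):
    g = R * n / (1 + n)                            # Beverton-Holt growth (monostable)
    return dx * np.convolve(g, kern, mode="same")  # n_{t+1}(x) = int k(x-y) g(n_t(y)) dy

n = np.where(np.abs(x) < 1.0, 1.0, 0.0)            # compact initial patch
front = []
for _ in range(40):
    n = step(n)
    front.append(x[np.nonzero(n > 0.5)[0].max()])  # rightmost point with n > 0.5
```

Plotting `front` against generation number shows the expected linear spread; in the overcompensatory regime the profile behind the front would instead oscillate.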
Date / Time : Friday, February 24, 2017 - 4:00 PM
Venue : CRM, Université de Montréal, Pavillon André-Aisenstadt, 2920 Chemin de la Tour, room 6254
Friday, February 10, 2017
Knot concordance
I will introduce the knot concordance group, give a survey of our current understanding of it and discuss some relationships with the topology of 4-manifolds.
Date / Time: Friday, February 10, 2017 - 4:00 PM
Venue: UQAM, Président-Kennedy Building, 201, ave du Président-Kennedy, room PK-5115
Friday, January 20, 2017
The Birch-Swinnerton-Dyer Conjecture and counting elliptic curves of ranks 0 and 1
This colloquium talk will begin with an introduction to the Birch--Swinnerton-Dyer conjecture for elliptic curves -- just curves defined by the equations y^2=x^3+Ax+B -- and then describe recent
advances that allow us to prove that lots of elliptic curves have rank zero or one.
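The objects in the conjecture are concrete: reducing a curve mod p and counting points gives the numbers a_p = p + 1 - #E(F_p) that build its L-function. A brute-force count (my illustration, fine for small p):

```python
def count_points(A, B, p):
    """Brute-force count of points on y^2 = x^3 + A x + B over F_p,
    including the point at infinity."""
    roots = {}                      # square value -> its square roots mod p
    for y in range(p):
        roots.setdefault(y * y % p, []).append(y)
    total = 1                       # the point at infinity
    for x in range(p):
        total += len(roots.get((x ** 3 + A * x + B) % p, []))
    return total

# E: y^2 = x^3 - x over F_5 has 8 points, so a_5 = 5 + 1 - 8 = -2
print(count_points(-1, 0, 5))   # 8
```

The counts always obey the Hasse bound |#E(F_p) - (p + 1)| <= 2*sqrt(p), which is easy to spot-check numerically.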
Date / Time: Friday, January 20, 2017 - 4:00 PM
Venue: UQAM, Président-Kennedy Building, 201, ave du Président-Kennedy, room PK-5115
SUMM 2017
Friday, January 13, 2017
The Seminars in Undergraduate Mathematics in Montreal (SUMM) is an annual event organized by students who are currently enrolled in an undergraduate mathematics program at one of the four Montreal universities.
The 2017 edition of the SUMM will be held at McGill University on January 13, 14 and 15.
Friday, December 2, 2016
Partial differential equations of mixed elliptic-hyperbolic type in mechanics and geometry
As is well known, two of the basic types of linear partial differential equations (PDEs) are hyperbolic and elliptic PDEs, following the classification first proposed by Jacques Hadamard in the 1920s, and the linear theory for each of these types is well established. On the other hand, many nonlinear PDEs arising in mechanics, geometry, and other areas
naturally are of mixed elliptic-hyperbolic type. The solution of some longstanding fundamental problems in these areas requires a deep understanding of such nonlinear PDEs of mixed type.
Important examples include shock reflection-diffraction problems in fluid mechanics (the Euler equations) and isometric embedding problems in differential geometry (the Gauss-Codazzi-Ricci
equations), among many others. In this talk we will present natural connections of nonlinear PDEs of mixed elliptic-hyperbolic type with these longstanding problems and will then discuss some recent
developments in the analysis of these nonlinear PDEs through the examples with emphasis on developing and identifying mathematical approaches, ideas, and techniques for dealing with the mixed-type
problems. Further trends, perspectives, and open problems in this direction will also be addressed.
Date / Time: Friday, December 2, 2016 - 4:00 PM
Venue : UQAM, Président-Kennedy Building, 201, ave du Président-Kennedy, room PK-5115
Thursday, December 1, 2016
High-dimensional changepoint estimation via sparse projection
Changepoints are a very common feature of Big Data that arrive in the form of a data stream. We study high-dimensional time series in which, at certain time points, the mean structure changes in a
sparse subset of the coordinates. The challenge is to borrow strength across the coordinates in order to detect smaller changes than could be observed in any individual component series. We propose a
two-stage procedure called 'inspect' for estimation of the changepoints: first, we argue that a good projection direction can be obtained as the leading left singular vector of the matrix that solves
a convex optimisation problem derived from the CUSUM transformation of the time series. We then apply an existing univariate changepoint detection algorithm to the projected series. Our theory
provides strong guarantees on both the number of estimated changepoints and the rates of convergence of their locations, and our numerical studies validate its highly competitive empirical
performance for a wide range of data generating mechanisms.
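A sketch of the projection step, using the standard CUSUM transform and synthetic data (this is my paraphrase of the idea, not the authors' 'inspect' implementation):

```python
import numpy as np

def cusum_transform(X):
    """CUSUM transform of a p x n matrix: column t contrasts the mean of
    the first t observations against the mean of the remaining n - t,
    scaled so null entries are approximately standard normal."""
    p, n = X.shape
    c = np.cumsum(X, axis=1)
    T = np.empty((p, n - 1))
    for t in range(1, n):
        scale = np.sqrt(t * (n - t) / n)
        T[:, t - 1] = scale * ((c[:, -1] - c[:, t - 1]) / (n - t) - c[:, t - 1] / t)
    return T

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 200))
X[:3, 100:] += 1.5                    # sparse mean shift in 3 of 20 coordinates

# projection direction = leading left singular vector of the CUSUM matrix
v = np.linalg.svd(cusum_transform(X))[0][:, 0]
proj = v @ X                          # univariate series for changepoint detection
```

The leading singular vector loads almost entirely on the three shifted coordinates, which is exactly the "borrowed strength" the abstract describes; a univariate changepoint detector is then run on `proj`.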
Date / Time : Thursday, December 1, 2016 - 15:30
Venue : Room 1205, Burnside Hall, 805 Sherbrooke West
Friday, November 25, 2016
Around the Möbius function
The Moebius function plays a central role in number theory; both the prime number theorem and the Riemann Hypothesis are naturally formulated in terms of the amount of cancellations one gets when
summing the Moebius function. In recent joint work with K. Matomaki the speaker showed that the sum of the Moebius function exhibits cancellations in "almost all intervals" of increasing length. This
goes beyond what was previously known conditionally on the Riemann Hypothesis. The result holds in fact in greater generality. Exploiting this generality one can show that between a fixed number of
consecutive squares there is always an integer composed of only "small" prime factors. This is related to the running time of Lenstra's factoring algorithm. I will also discuss some further
developments: the work of Tao on Chowla's conjecture concerning correlations between consecutive values of the Moebius function, and his application of this result to the resolution of the Erdos discrepancy problem.
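The cancellation in question is easy to see numerically: sieve the Moebius function and sum it. A short sketch (mine, for illustration):

```python
def mobius_upto(n):
    """Sieve the Moebius function mu(1), ..., mu(n)."""
    mu = [1] * (n + 1)
    is_comp = [False] * (n + 1)
    for p in range(2, n + 1):
        if not is_comp[p]:                        # p is prime
            for m in range(2 * p, n + 1, p):
                is_comp[m] = True
            for m in range(p, n + 1, p):          # one factor of p flips the sign
                mu[m] *= -1
            for m in range(p * p, n + 1, p * p):  # squarefull numbers get 0
                mu[m] = 0
    return mu

mu = mobius_upto(10 ** 5)
M = sum(mu[1:])          # Mertens sum M(10^5)
```

Although roughly 6/pi^2 of the 100,000 summands are +-1, the Mertens sum `M` has absolute value in the double digits: massive cancellation, of the kind the prime number theorem and the Riemann Hypothesis quantify.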
Date / Time: Friday, November 25, 2016 - 4:00 PM
Venue: UQAM, Président-Kennedy Building, 201, ave du Président-Kennedy, room PK-5115
Friday, November 4, 2016
The nonlinear stability of Minkowski space for self-gravitating massive fields
I will review results on the global evolution of self-gravitating massive matter in the context of Einstein's theory as well as the f(R)-theory of gravity. In collaboration with Yue Ma (Xian), I have
investigated the global existence problem for the Einstein equations coupled with a Klein-Gordon equation describing the evolution of a massive scalar field. Our main theorem establishes the global
nonlinear stability of Minkowski spacetime upon small perturbations of the metric and the matter field. Recall that the fully geometric proof by Christodoulou and Klainerman in 1993, as well as the
proof in wave gauge by Lindblad and Rodnianski in 2010, both apply to vacuum spacetimes and massless fields only. Our new technique of proof, which we refer to as the Hyperboloidal Foliation Method,
does not use Minkowski's scaling field and is based on a foliation of the spacetime by asymptotically hyperboloidal spacelike hypersurfaces, on sharp estimates for wave and Klein-Gordon equations,
and on an analysis of the quasi-null hyperboloidal structure (as we call it) of the Einstein equations in wave gauge.
Date / Time: Friday, November 4, 2016 - 4:00 PM
Venue: UQAM, Président-Kennedy Building, 201, ave du Président-Kennedy, room PK-5115
Friday, October 28, 2016
Efficient tests of covariate effects in two-phase failure time studies
Two-phase studies are frequently used when observations on certain variables are expensive or difficult to obtain. One such situation is when a cohort exists for which certain variables have been
measured (phase 1 data); then, a sub-sample of individuals is selected, and additional data are collected on them (phase 2). Efficiency for tests and estimators can be increased by basing the
selection of phase 2 individuals on data collected at phase 1. For example, in large cohorts, expensive genomic measurements are often collected at phase 2, with oversampling of persons with
“extreme” phenotypic responses. A second example is case-cohort or nested case-control studies involving times to rare events, where phase 2 oversamples persons who have experienced the event by a
certain time. In this talk I will describe two-phase studies on failure times and present efficient methods for testing covariate effects. Some extensions to more complex outcomes and areas needing
further development will be discussed.
Date: Friday, October 28, 2016
Time: 3:30 p.m. - 4:30 p.m.
Place: Room 1205, Burnside Hall, 805 Sherbrooke West
Friday, October 21, 2016
Integrable probability and the KPZ universality class
I will explain how certain integrable structures give rise to meaningful probabilistic systems and methods to analyze them. Asymptotics reveal universal phenomena, such as the Kardar-Parisi-Zhang
universality class. No prior knowledge will be assumed.
Date / Time: Friday, October 21, 2016 - 4:00 PM
Venue : CRM, André-Aisenstadt Building, 2920 chemin de la tour, room 6254
Friday, October 14, 2016
Rigorously verified computing for infinite dimensional nonlinear dynamics: a functional analytic approach
Studying and proving existence of solutions of nonlinear dynamical systems using standard analytic techniques is a challenging problem. In particular, this problem is even more challenging for
partial differential equations, variational problems or functional delay equations, which are naturally defined on infinite dimensional function spaces. The goal of this talk is to present rigorous
numerical techniques relying on functional analytic and topological tools to prove existence of steady states, time periodic solutions, traveling waves and connecting orbits for the above mentioned
dynamical systems. We will spend some time identifying difficulties of the proposed approach as well as future directions of research.
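A minimal example of the flavour of such computer-assisted proofs is the interval Newton operator: if N(X) lands strictly inside X, then X contains exactly one zero of f. The toy code below (my illustration) uses naive interval arithmetic without directed rounding, which a genuine proof would need:

```python
from dataclasses import dataclass

@dataclass
class Interval:
    lo: float
    hi: float
    def __sub__(self, other):
        return Interval(self.lo - other.hi, self.hi - other.lo)
    def __truediv__(self, other):        # assumes 0 is not in `other`
        q = [self.lo / other.lo, self.lo / other.hi,
             self.hi / other.lo, self.hi / other.hi]
        return Interval(min(q), max(q))

def newton_step(X, f, dF):
    """Interval Newton operator N(X) = m - f(m) / F'(X), m = midpoint of X.
    If N(X) lies strictly inside X, then f has exactly one zero in X."""
    m = 0.5 * (X.lo + X.hi)
    return Interval(m, m) - Interval(f(m), f(m)) / dF(X)

# "prove" that x^2 - 2 has a unique zero in X = [1.3, 1.6]
X = Interval(1.3, 1.6)
N = newton_step(X, f=lambda x: x * x - 2.0,
                dF=lambda I: Interval(2 * I.lo, 2 * I.hi))  # F'(X) = 2X
contracted = X.lo < N.lo and N.hi < X.hi
```

The same contraction-mapping logic, lifted to infinite dimensional function spaces with rigorous truncation-error bounds, is what powers the existence proofs described in the abstract.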
Date / Time: Friday, October 14, 2016 - 4:00 PM
Venue : CRM, André-Aisenstadt Building, 2920 chemin de la tour, room 6254
Friday, September 30, 2016
Notions of simplicity in low-dimensions
Various auxiliary structures arise naturally in low-dimensions. I will discuss three of these: left-orders on the fundamental group, taut foliations on three-manifolds, and non-trivial Floer
homological invariants. Perhaps surprisingly, for (closed, connected, orientable, irreducible) three-manifolds, it has been conjectured that the existence of any one of these structures implies the
others. I will describe what is currently known about this conjectural relationship, as well as some of the machinery — particularly in Heegaard Floer theory — that has been developed in pursuit of
the conjecture.
Date / Time: Friday, September 30, 2016 - 4:00 PM
Venue: UQAM, Président-Kennedy Building, 201, ave du Président-Kennedy, room PK-5115
Friday, September 16, 2016
Statistical Inference for fractional diffusion processes
There are some time series which exhibit long-range dependence, as noticed by Hurst in his investigations of water levels along the Nile river. Long-range dependence is connected with the concept of
self-similarity in that increments of a self-similar process with stationary increments exhibit long-range dependence under some conditions. Fractional Brownian motion is an example of such a
process. We discuss statistical inference for stochastic processes modeled by stochastic differential equations driven by a fractional Brownian motion. These processes are termed fractional
diffusion processes. Since fractional Brownian motion is not a semimartingale, it is not possible to extend the notion of a stochastic integral with respect to a fractional Brownian motion following
the ideas of Ito integration. There are other methods of extending integration with respect to a fractional Brownian motion. Suppose a complete path of a fractional diffusion process is observed over
a finite time interval. We will present some results on inference problems for such processes.
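Since fractional Brownian motion is a Gaussian process with an explicit covariance, sample paths on a grid can be drawn from a Cholesky factorization. A sketch (my illustration) that also checks the self-similar variance E[B_T^2] = T^{2H}:

```python
import numpy as np

def fbm_paths(n, H, n_paths, T=1.0, seed=0):
    """Sample fractional Brownian motion on an n-point grid via Cholesky of
    the exact covariance C(s, t) = (s^{2H} + t^{2H} - |s - t|^{2H}) / 2."""
    t = np.linspace(T / n, T, n)
    C = 0.5 * (t[:, None] ** (2 * H) + t[None, :] ** (2 * H)
               - np.abs(t[:, None] - t[None, :]) ** (2 * H))
    L = np.linalg.cholesky(C + 1e-10 * np.eye(n))   # tiny jitter for stability
    z = np.random.default_rng(seed).normal(size=(n, n_paths))
    return t, L @ z

t, B = fbm_paths(n=100, H=0.75, n_paths=4000)
var_T = B[-1].var()     # should be close to T^{2H} = 1
```

For H = 1/2 the covariance reduces to min(s, t) and the construction gives ordinary Brownian motion; H > 1/2 produces the positively correlated increments behind long-range dependence.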
Date: Friday, September 16, 2016
Time: 4:00 p.m.
Place: Concordia University, Library Building, 1400 de Maisonneuve O., room LB-921.04
Friday, September 16, 2016
Cubature, approximation, and isotropy in the hypercube
The hypercube is the standard domain for computation in higher dimensions. We describe two respects in which the anisotropy of this domain has practical consequences. The first is a matter well known
to experts (and to Chebfun users): the importance of axis-alignment in low-rank compression of multivariate functions.
Rotating a function by a few degrees in two or more dimensions may change its numerical rank completely. The second is new. The standard notion of degree of a multivariate polynomial, total degree,
is isotropic – invariant under rotation.
The hypercube, however, is highly anisotropic. We present a theorem showing that as a consequence, the convergence rate of multivariate polynomial approximations in a hypercube is determined not by
the total degree but by the Euclidean degree, defined in terms of not the 1-norm but the 2-norm of the exponent vector k of a monomial x[1]^k[1]... x[s]^k[s]. The consequences, which relate to
established ideas of cubature and approximation going back to James Clerk Maxwell, are exponentially pronounced as the dimension of the hypercube increases. The talk will include numerical examples.
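The two notions of degree are simply different norms of the exponent vector, and they diverge quickly as the dimension grows (a sketch):

```python
import math

def total_degree(k):      # 1-norm of the exponent vector
    return sum(k)

def euclidean_degree(k):  # 2-norm of the exponent vector
    return math.sqrt(sum(e * e for e in k))

# the monomial x1^2 * x2^2 * ... * x9^2 in 9 dimensions:
# total degree grows like 2s, Euclidean degree only like 2*sqrt(s)
k = (2,) * 9
print(total_degree(k), euclidean_degree(k))   # 18 6.0
```

This gap is exactly why, on the anisotropic hypercube, approximation rates track the Euclidean degree rather than the total degree.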
Date: Friday, September 16, 2016
Time: 4:00 p.m.
Place: UQAM, Pavillon Président-Kennedy, 201, ave du Président-Kennedy, room PK-5115
Spectral Theory and Applications: Summer School at Laval University
Monday, July 4, 2016
The goal of the 2016 CRM Summer School in Quebec City is to prepare students for research involving spectral theory. The school will give an overview of a selection of topics from spectral theory,
interpreted in a broad sense. It will cover topics from pure and applied mathematics, each of which will be presented in a 5-hour mini-course by a leading expert. These lectures will be complemented
by supervised computer labs and exercise sessions. At the end of the school, invited speakers will give specialized talks. This rich subject intertwines several sub-disciplines of mathematics, and it
will be especially beneficial to students. The subject is also very timely, as spectral theory is witnessing major progress both in its mathematical sub-disciplines and in its applications to
technology and science in general.
The school is intended for advanced undergraduate and beginning graduate students. As such, the prerequisites will be kept to a minimum, and review material will be provided a few weeks before the school.
Mathfest at Concordia University
Sunday, June 12, 2016
Where: John Molson Building, Concordia University, room MB 3.430, 1450 rue Guy
When: June 12, 10:00 AM - 12:00 noon
The event is organized by Concordia's Department of Mathematics and Statistics and by the ISM.
Friday, May 20, 2016
Complexity of functions of many variables: from statistical physics to "deep learning" algorithms
Date / Time: Friday, May 20, 2016 - 4:00 PM
Venue: CRM, UdeM, Pav. André-Aisenstadt, 2920, ch. de la Tour, room 6214
Friday, May 13, 2016
The goal of this annual conference is to bring together Quebec graduate students in the mathematical sciences for a weekend. This year, the conference will be held at UQAM from May 13-15, 2016.
Everyone is invited to attend or to present their work on a subject they are interested in.
The 20-minute talks given by graduate students are a great chance to learn about the work of your colleagues. They will be grouped in thematic sessions that will cover a variety of subjects such as
Combinatorics, Financial Mathematics, Mathematical Physics, etc. In addition, four plenary talks will be given by well-known researchers. Presentations in either French or English are welcome. Since
the ISM is celebrating its 25th anniversary this year, all the plenary talks at the conference will be given by former ISM students.
The conference will be launched with a social activity allowing everyone to get to know each other.
To register, click here.
We look forward to seeing you this spring!
Friday, April 15, 2016
Elliptic PDEs in two dimensions
I will give a short survey of the several approaches to the regularity theory of elliptic equations in two dimensions. In particular I will focus on some old ideas of Bernstein and their application
to the infinity Laplace equation and to the Bellman equation in two dimensions.
Date / Time : Friday, April 15, 2016 - 16:00
Venue : UQAM, Pavillon Président-Kennedy, 201, ave du Président-Kennedy, room PK-5115
Thursday, April 14, 2016
Statistical Estimation Problems in Meta-Analysis
The principal statistical estimation problem in meta-analysis is to obtain a reliable confidence interval for the treatment effect. Several possible approaches and settings are described. In
particular a Bayesian model with non-informative priors and the default data-dependent priors is discussed along with relevant optimization issues.
Date: Thursday, April 14, 2016
Time: 4:30 p.m.
Venue: Université de Sherbrooke, 2500, boul. de l'Université, room D3-2041
Thursday, April 14, 2016
The statistical price for computational efficiency
With the explosion of the size of data, computation has become an integral part of statistics. Ad hoc remedies such as employing convex relaxations, or manipulating sufficient statistics, have been
successful to derive efficient procedures with provably optimal statistical guarantees. Unfortunately, computational efficiency sometimes comes at an inevitable statistical cost. Therefore, one needs
to redefine optimality among computationally efficient procedures. Using tools from information theory and computational complexity, we quantify this cost in the context of two models: (i) the
multi-armed bandit problem, and (ii) sparse principal component analysis [Based on joint work with Q. Berthet, S. Chassang, V. Perchet and E. Snowberg]
Date / Time: Thursday, April 14, 2016 - 15:30
Venue : Laval University, Pavillon Adrien-Pouliot, room 2840
Friday, April 8, 2016
The dimer model: universality and conformal invariance
The dimer model on a finite bipartite planar graph is a uniformly chosen set of edges which cover every vertex exactly once. It is a classical model of statistical mechanics, going back to work of
Kasteleyn and Temperley/Fisher in the 1960s who computed its partition function.
After giving an overview, I will discuss some recent joint work with Benoit Laslier and Gourab Ray, where we prove in a variety of situations that when the mesh size tends to 0 the fluctuations are
described by a universal and conformally invariant limit known as the Gaussian free field.
A key novelty in our approach is that the exact solvability of the model plays only a minor role. Instead, we rely on a connection to imaginary geometry, where Schramm-Loewner Evolution curves are
viewed as flow lines of an underlying Gaussian free field.
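On small grids the dimer partition function can be checked by brute force; the memoized row-by-row scan below (my illustration, not Kasteleyn's determinant method) reproduces the classical counts:

```python
from functools import lru_cache

def dimer_count(rows, cols):
    """Count dimer covers (perfect matchings) of a rows x cols grid graph.

    Cells are scanned in row-major order; bit j of `mask` records whether
    cell i + j is already covered by a vertical dimer placed earlier."""
    if rows * cols % 2:
        return 0                                      # odd number of cells

    @lru_cache(maxsize=None)
    def go(i, mask):
        if i == rows * cols:
            return 1 if mask == 0 else 0
        if mask & 1:                                  # cell i already covered
            return go(i + 1, mask >> 1)
        r, c = divmod(i, cols)
        total = 0
        if c + 1 < cols and not (mask & 2):           # place a horizontal dimer
            total += go(i + 2, mask >> 2)
        if r + 1 < rows:                              # place a vertical dimer
            total += go(i + 1, (mask >> 1) | (1 << (cols - 1)))
        return total

    return go(0, 0)

print(dimer_count(4, 4))   # 36
```

A 2 x n strip yields the Fibonacci numbers, and the 4 x 4 grid has the well-known 36 covers; Kasteleyn's theorem recovers these counts as Pfaffians/determinants instead of enumeration.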
Date / Time : Friday, April 8, 2016 - 16:00
Venue : CRM, UdeM, Pav. André-Aisenstadt, 2920, ch. de la Tour, room 6214
Friday, April 1, 2016
Needles, Bushes, Hairbrushes and Polynomials
Pretend that your car is a unit line segment. How do you perform a three point turn using an infinitesimally small area on the road? It turns out that this seemingly impossible driving stunt is
related to the fundamental theorem of calculus, as well as all the objects in the title of this talk! We will explore these connections and see how they have been useful in many problems.
Date / Time : Friday, April 1, 2016 - 16:00
Lieu/Venue : UQAM, Pavillon Président-Kennedy, 201, ave du Président-Kennedy, room PK-5115
Friday, March 18, 2016
Harry Potter's Cloak via Transformation Optics
Can we make objects invisible? This has been a subject of human fascination for millennia in Greek mythology, movies, science fiction, etc., including the legend of Perseus versus Medusa and the more
recent Star Trek and Harry Potter. In the last decade or so, there have been several scientific proposals to achieve invisibility. We will introduce some of these in a non-technical fashion,
concentrating on the so-called "transformation optics" that has received the most attention in the scientific literature.
Date / Time : Friday, March 18, 2016 - 16:00
Venue : CRM, UdeM, Pav. André-Aisenstadt, 2920, ch. de la Tour, room 6214
Thursday, March 17, 2016
Quantum Chromatic Numbers and the conjectures of Connes and Tsirelson
It is possible to characterize the chromatic number of a graph in terms of a game: it is the smallest number of colours for which a winning strategy for a certain graph colouring game exists using
classical random variables. If one allows the players to use quantum experiments to generate their random outcomes, then for many graphs this game can be won with far fewer colours. This leads to the
definition of the quantum chromatic number of a graph. However, there are several mathematical models for the set of probability densities generated by quantum experiments and whether or not these
models agree depends on deep conjectures of Connes and Tsirelson. Thus, there are potentially several "different" quantum chromatic numbers and computing them for various graphs gives us a
combinatorial means to test these conjectures. In this talk I will present these ideas and some of the results in this area. I will only assume that the audience is familiar with the basics of
Hilbert space theory and assume no background in quantum theory.
Date / Time : Thursday, March 17, 2016 - 15:30
Venue : Laval University, Pavillon Alexandre Vachon, room VCH-2830
Thursday, March 10, 2016
Ridges and valleys in the high excursion sets of Gaussian random fields
It is well known that normal random variables do not like taking large values. Therefore, a continuous Gaussian random field on a compact set does not like exceeding a large level. If it does
exceed a large level at some point, it tends to go back below the level a short distance away from that point. One, therefore, does not expect the excursion set above a high level for such a field to
possess any interesting structure. Nonetheless, if we want to know how likely two points in such an excursion set are to be connected by a path ("a ridge") within the excursion set, how do we figure
that out? If we know that a ridge in the excursion set exists (e.g. the field is above a high level on the surface of a sphere), how likely is it that there is also a valley (e.g. the field dropping
below a fraction of the level somewhere inside that sphere)?
We use the large deviation approach. Some surprising results (and pictures) are obtained.
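The premise that "normal random variables do not like taking large values" can be quantified with the Gaussian tail. This is only a one-dimensional illustration of how rare high excursions are, not the large-deviation analysis of the talk.

```python
import math, random

def normal_tail(u):
    """Exact P(Z > u) for a standard normal, via the complementary error function."""
    return 0.5 * math.erfc(u / math.sqrt(2))

# Monte Carlo check of how rarely the level u = 3 is exceeded
random.seed(0)
u, n = 3.0, 200_000
estimate = sum(random.gauss(0.0, 1.0) > u for _ in range(n)) / n

print(normal_tail(u))   # about 0.00135
print(estimate)
```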
Date / Time : Thursday, March 10, 2016 - 15:30
Venue : McGill University, Burnside Hall, room TBA
Friday, February 26, 2016
The fundamental theorem of algebra, complex analysis and ... astrophysics
The Fundamental Theorem of Algebra, first rigorously proved by Gauss, states that each complex polynomial of degree $n$ has precisely $n$ complex roots. In recent years various extensions of this
celebrated result have been considered. We shall discuss the extension of the FTA to harmonic polynomials of degree $n$. In particular, the theorem of D. Khavinson and G. Swiatek shows that the
harmonic polynomial $\bar{z}-p(z)$, $\deg p = n > 1$, has at most $3n-2$ zeros, as was conjectured in the early 90's by T. Sheil-Small and A. Wilmshurst. L. Geyer was able to show that the result is
sharp for all $n$. G. Neumann and D. Khavinson proved that the maximal number of zeros of the rational harmonic function $\bar{z}-r(z)$, $\deg r = n > 1$, is $5n-5$. It turned out that this result
confirmed several consecutive conjectures made by astrophysicists S. Mao, A. Petters, H. Witt and, in its final form, the conjecture of S. H. Rhie, all dealing with the estimate of the maximal number
of images of a star whose light is deflected by $n$ co-planar masses. The first non-trivial case of one mass was already investigated by A. Einstein around 1912. We shall also discuss the problem of
gravitational lensing of a point source of light, e.g., a star, by an elliptic galaxy, more precisely the problem of the maximal number of images that one can observe. Under some more or less
"natural" assumptions on the mass distribution within the galaxy one can prove (A. Eremenko and W. Bergweiler, 2010; also Khavinson and E. Lundberg, 2010) that the number of visible images can never
be more than four in some cases and six in others. Interestingly, the former situation can actually occur and has been observed by astronomers. Still, there are many more open questions than there
are answers.
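For the smallest nontrivial case p(z) = z^2 (n = 2), the Khavinson-Swiatek bound 3n - 2 = 4 is attained, and the four zeros of \bar{z} - z^2 can be located numerically. The sketch below treats the equation as a map from R^2 to R^2 and runs Newton's method from a grid of starting points; it is an illustration, not part of the proof.

```python
def newton(z, iters=60):
    """Newton's method for conj(z) - z^2 = 0, viewed as a system on R^2."""
    for _ in range(iters):
        x, y = z.real, z.imag
        u = x - (x * x - y * y)          # real part of conj(z) - z^2
        v = -y - 2.0 * x * y             # imaginary part
        a, b = 1.0 - 2.0 * x, 2.0 * y    # Jacobian row for u
        c, d = -2.0 * y, -1.0 - 2.0 * x  # Jacobian row for v
        det = a * d - b * c
        if abs(det) < 1e-12 or det != det:
            return None
        z = complex(x - (d * u - b * v) / det, y - (a * v - c * u) / det)
    x, y = z.real, z.imag
    if abs(x - (x * x - y * y)) < 1e-9 and abs(-y - 2.0 * x * y) < 1e-9:
        return z
    return None

roots = []
for i in range(-8, 9):
    for j in range(-8, 9):
        r = newton(complex(i / 4.0, j / 4.0))
        if r is not None and all(abs(r - s) > 1e-6 for s in roots):
            roots.append(r)
print(len(roots))  # 4 zeros: 0 and the three cube roots of unity
```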
Time : Friday, February 26, 2016 - 16:00
Venue : CRM, UdeM, Pav. André-Aisenstadt, 2920, ch. de la Tour, room 6214
Friday, February 12, 2016
Date / Time : February 12, 2016 - 16:00
Venue : CRM, UdeM, Pav. André-Aisenstadt, 2920, ch. de la Tour, room 6214
Book Signing Event: Mathematics without Apologies, Michael Harris
Thursday, February 11, 2016
Michael Harris (Université Paris-Diderot, Columbia University)
An unapologetic guided tour of the mathematical life
** You may purchase the book on site for $25 (cash only) **
What do pure mathematicians do, and why do they do it? Looking beyond the conventional answers—for the sake of truth, beauty, and practical applications—this book offers an eclectic panorama of the
lives and values and hopes and fears of mathematicians in the twenty-first century, assembling material from a startlingly diverse assortment of scholarly, journalistic, and pop culture sources.
Drawing on his personal experiences and obsessions as well as the thoughts and opinions of mathematicians from Archimedes and Omar Khayyám to such contemporary giants as Alexander Grothendieck and
Robert Langlands, Michael Harris reveals the charisma and romance of mathematics as well as its darker side. In this portrait of mathematics as a community united around a set of common intellectual,
ethical, and existential challenges, he touches on a wide variety of questions, such as: Are mathematicians to blame for the 2008 financial crisis? How can we talk about the ideas we were born too
soon to understand? And how should you react if you are asked to explain number theory at a dinner party?
Disarmingly candid, relentlessly intelligent, and richly entertaining, Mathematics without Apologies takes readers on an unapologetic guided tour of the mathematical life, from the philosophy and
sociology of mathematics to its reflections in film and popular music, with detours through the mathematical and mystical traditions of Russia, India, medieval Islam, the Bronx, and beyond.
Michael Harris is professor of mathematics at the Université Paris Diderot and Columbia University. He is the author or coauthor of more than seventy mathematical books and articles, and has received
a number of prizes, including the Clay Research Award, which he shared in 2007 with Richard Taylor.
DATE :
Thursday, February 11, 2016
TIME :
4:00 p.m.
PLACE :
Concordia University, Library Building, 9th floor, Salle/Room LB 921-04
1400 De Maisonneuve West
Friday, February 5, 2016
Chain reactions
To every action, there is an equal and opposite reaction. However, there turn out to exist in nature situations where the reaction seems to be neither equal in magnitude nor opposite in direction to
the action. We will see a series of table-top demos and experimental movies, apparently in more and more serious violation of Newton's third law, and give a full analysis of what is happening,
discovering in the end that these phenomena are in a sense generic. The keys are shock, singular material properties, and a supply of "critical geometry".
Date / Time : Friday, February 5, 2016 - 16:00
Venue : UQAM, Pavillon Président-Kennedy, 201, ave du Président-Kennedy, room PK-5115
Friday, January 29, 2016
Stability and instability for nonlinear elliptic PDE with slight variations to the data
We will consider the question of stability of solutions to nonlinear elliptic PDE when slightly varying the data. We will take as a model the Standing Wave Equation for critical nonlinear Schrödinger
and Klein-Gordon Equations on a closed manifold, and we will look at variations to the potential functions in these equations. A number of results have been obtained on this question in the last two
decades, and we now have an accurate picture of the stability and instability of solutions to these equations. I will give an overview of these results and explain why certain types of unstable
solutions can exist for some potential functions or in some geometries, and not others.
Date / Time : Friday, January 29, 2016 - 16:00
Venue : CRM, UdeM, Pav. André-Aisenstadt, 2920, ch. de la Tour, room 6214
Friday, January 22, 2016
Big data & mixed-integer (non linear) programming
In this talk I review a couple of applications on Big Data that I personally like and I try to explain my point of view as a Mathematical Optimizer -- especially concerned with discrete (integer)
decisions -- on the subject. I advocate a tight integration of Data Mining, Machine Learning and Mathematical Optimization (among others) to deal with the challenges of decision-making in Data
Science. Those challenges are the core of the mission of the Canada Excellence Research Chair in "Data Science for Real-time Decision Making" that I hold.
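As a miniature of the discrete (integer) decisions mentioned above, here is a 0/1 knapsack solved by brute-force enumeration; the instance data is made up, and real mixed-integer solvers rely on branch-and-bound and cutting planes rather than enumeration.

```python
from itertools import product

values  = [10, 13, 7, 8]   # profit of each item (hypothetical data)
weights = [3, 4, 2, 3]     # resource use of each item
cap = 7                    # resource budget

# enumerate all 0/1 decision vectors and keep the best feasible one
best = max(
    (sum(v * x for v, x in zip(values, sel)), sel)
    for sel in product((0, 1), repeat=4)
    if sum(w * x for w, x in zip(weights, sel)) <= cap
)
print(best)  # (23, (1, 1, 0, 0)): take items 0 and 1
```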
Date / time: Friday, January 22, 2016 - 4:00 pm
Venue: UQAM, President-Kennedy Building, 201, ave du Président-Kennedy, room PK-5115
Friday, January 15, 2016
Maximum of strongly correlated random variables
One of the main goals of probability theory is to find "universal laws". This is well illustrated by the Law of Large Numbers and the Central Limit Theorem, dating back to the 18th century, which show
convergence of the sum of random variables with minimal assumptions on their distributions. Much of current research in probability is concerned with finding universal laws for the maximum of random
variables. One universality class of interest (in mathematics and in physics) consists of stochastic processes whose correlations decay logarithmically with the distance. In this talk, we will survey
recent results on the subject and their connection to problems in mathematics such as the maxima of the Riemann zeta function on the critical line and of the characteristic polynomial of random
matrices.
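A toy version of the question, for independent (rather than logarithmically correlated) variables: the maximum of n standard Gaussians grows like sqrt(2 log n). The simulation below only checks this first-order growth.

```python
import math, random
random.seed(1)

def mean_max_gaussians(n, trials=100):
    """Average, over trials, of the maximum of n independent N(0,1) draws."""
    return sum(max(random.gauss(0, 1) for _ in range(n)) for _ in range(trials)) / trials

n = 10_000
observed = mean_max_gaussians(n)
predicted = math.sqrt(2 * math.log(n))  # classical first-order growth rate
print(observed, predicted)
```

The ratio approaches 1 only slowly: there is a well-known negative second-order correction, and for log-correlated fields that correction itself changes, which is part of what the universality results describe.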
** The talk will be given in French with English slides. **
Coffee will be served before the conference and a reception will follow at Salon Maurice-L’Abbé (Room 6245).
Date and time: Friday, January 15, 2016, 16:00 - 17:00
Venue: Room 6254, Centre de recherches mathématiques, Pavillon André-Aisenstadt, 2920, chemin de la Tour
Friday, January 8, 2016
The Seminars in Undergraduate Mathematics in Montreal (SUMM) is an annual event organized by students currently enrolled in an undergraduate mathematics program at one of the four Montreal
universities. The 2016 edition of SUMM will be held at Université du Québec à Montréal (UQÀM) on January 8-10.
Keynote Speakers :
– Dimiter Dryanov, Department of Mathematics and Statistics, Concordia University.
– Marlène Frigon, Département de Mathématiques et de Statistique, Université de Montréal.
– Christian Genest, Department of Mathematics and Statistics, McGill University.
– Franco Saliola, Département de Mathématiques, Université du Québec à Montréal.
For more information, please contact us.
Thursday, December 10, 2015
Causal discovery with confidence using invariance principles
What is interesting about causal inference? One of the most compelling aspects is that any prediction under a causal model is valid in environments that are possibly very different to the environment
used for inference. For example, variables can be actively changed and predictions will still be valid and useful. This invariance is very useful but still leaves open the difficult question of
inference. We propose to turn this invariance principle around and exploit the invariance for inference. If we observe a system in different environments (or under different but possibly not well
specified interventions) we can identify all models that are invariant. We know that any causal model has to be in this subset of invariant models. This allows causal inference with valid confidence
intervals. We propose different estimators, depending on the nature of the interventions and depending on whether hidden variables and feedbacks are present. Some empirical examples demonstrate the
power and possible pitfalls of this approach.
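The invariance idea can be sketched on a made-up linear example: the regression of Y on its true cause X1 has the same slope in every environment, while the regression on an effect X2 does not. The data-generating mechanism and two-environment setup below are illustrative assumptions, not the speaker's actual estimators.

```python
import random
random.seed(0)

def simulate(scale, n=4000):
    # Y <- 2*X1 + noise is the invariant causal mechanism;
    # X2 <- Y + noise is an effect of Y, so regressing Y on X2 is not invariant.
    x1s, x2s, ys = [], [], []
    for _ in range(n):
        x1 = random.gauss(0.0, scale)          # cause; its distribution varies by environment
        y = 2.0 * x1 + random.gauss(0.0, 1.0)  # same mechanism in every environment
        x2 = y + random.gauss(0.0, 1.0)        # effect of Y
        x1s.append(x1); x2s.append(x2); ys.append(y)
    return x1s, x2s, ys

def slope(x, y):
    mx, my = sum(x) / len(x), sum(y) / len(y)
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / sum((a - mx) ** 2 for a in x)

envs = [simulate(1.0), simulate(3.0)]   # two environments with different interventions on X1
s1 = [slope(e[0], e[2]) for e in envs]  # Y ~ X1: about 2 in both environments (invariant)
s2 = [slope(e[1], e[2]) for e in envs]  # Y ~ X2: changes across environments
print(s1, s2)
```

Only the causal predictor set yields residuals (here, slopes) that are stable across environments, which is exactly the property turned into a test in the talk.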
Date / Time : Thursday, December 10, 2015 - 3:30 PM
Venue : UdeM, Pav. Roger-Gaudry, room S-116
Friday, December 4, 2015
The Canadian Mathematical Society (CMS) invites the mathematical community to the 2015 CMS Winter Meeting in Montreal, Quebec, from December 4-7. All meeting activities are taking place at the Hyatt
Regency Montreal (1255 Jeanne-Mance, Montreal, Quebec, Canada, H5B 1E5).
Friday, November 27, 2015
Measuring irregularities in data : Can fractals help to classify Van Gogh paintings?
Benoît Mandelbrot defined fractal geometry as the geometry of irregular sets; he and his followers successfully used the mathematical concepts of fractional dimensions to quantify this irregularity
and thus popularized new classification tools among scientists working in many disciplines. Recently, these ideas have proved very fruitful in multifractal analysis, which deals with the analysis of
irregular functions. We will show how the seminal ideas introduced in fractal geometry have been diverted in order to supply new classification tools for signals and images, and we will present a
selected choice of applications, including: model classification in the context of fully developed turbulence and the diagnosis of heart-beat failure; modeling of internet flow; and stylometry
tools helping art historians to differentiate between the paintings of several masters.
Date / Time : Friday, November 27, 2015 - 16:00
Venue : CRM, UdeM, Pav. André-Aisenstadt, 2920, ch. de la Tour, room 6214
Thursday, November 26, 2015
Inference regarding within-family association in disease onset times under biased sampling schemes
In preliminary studies of the genetic basis for chronic conditions, interest routinely lies in the within-family dependence in disease status. When probands are selected from disease registries
and their respective families are recruited, a variety of ascertainment bias-corrected methods of inference are available which are typically based on models for correlated binary data. This
approach ignores the ages of family members at the time of assessment. We consider copula-based models for assessing the within-family dependence in the disease onset time and disease
progression, based on right-censored and current status observation of the non-probands. Inferences based on likelihood, composite likelihood and estimating functions are each discussed and
compared in terms of asymptotic and empirical relative efficiency. This is joint work with Yujie Zhong.
Date / Time : Thursday, November 26, 2015 - 3:30 PM
Venue : McGill, Burnside Hall, room 306
Friday, November 20, 2015
On the study of singularities in mathematical models of liquid crystals
The analysis of mathematical models for liquid crystals poses many challenges, given their proximity to the study of singularities in harmonic maps. In this colloquium, I will present mathematical
models used in the study of liquid crystals, the connection with classical results for harmonic maps, as well as the new methods used to study singularities in the Landau-de Gennes model. This model
allows a greater variety of singularities than the Oseen-Frank model based on sphere-valued harmonic maps. (The talk will be delivered in French with English slides.)
Date / Time : Friday, November 20, 2015 - 4:00 PM
Venue : UQAM, Sherbrooke Building, Room SH-2420
Friday, November 13, 2015
Random walks in random environments
The goal of this talk is to present some recent developments in the field of random walks in random environments. We chose to do this by presenting a specific model, known as biased random walk on
Galton-Watson trees, which is intuitively easy to understand but gives rise to many interesting and challenging questions. We will then explain why this model is actually representative of a whole
class of models which exhibit universal limiting behaviours.
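A toy stand-in for the biased walks in the talk: a walk on the half-line {0, 1, 2, ...} that steps away from the origin with probability beta/(beta+1). Its asymptotic speed is (beta-1)/(beta+1), which the simulation below estimates; on Galton-Watson trees the random environment makes the analogous question far subtler.

```python
import random
random.seed(0)

def biased_walk_speed(beta, steps=100_000):
    p = beta / (beta + 1.0)  # probability of stepping away from the origin
    x = 0
    for _ in range(steps):
        if x == 0 or random.random() < p:
            x += 1  # forced step right at the origin, biased step otherwise
        else:
            x -= 1
    return x / steps

speed = biased_walk_speed(2.0)
print(speed)  # close to (beta - 1) / (beta + 1) = 1/3
```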
Date / Time : Friday, November 13, 2015 - 16:00
Venue : CRM, UdeM, Pav. André-Aisenstadt, 2920, ch. de la Tour, salle 6214
Friday, November 6, 2015
Walls in random groups
I will give an overview of Gromov's density model for random groups. These groups are hyperbolic and for large densities are exotic enough to have Kazhdan's property (T). I will focus on small
densities and explain the techniques of Ollivier and Wise, and Mackay and myself to tame these groups by finding "walls" and hence an action on a CAT(0) cube complex.
Date: Friday, November 6, 2015
Time: 4:00 PM
Venue: UQAM, Pavillon Sherbrooke, Room SH-2420
The talk will be followed by a wine and cheese reception.
Friday, October 30, 2015
A knockoff filter for controlling the false discovery rate
The big data era has created a new scientific paradigm: collect data first, ask questions later. Imagine that we observe a response variable together with a large number of potential explanatory
variables, and would like to be able to discover which variables are truly associated with the response. At the same time, we need to know that the false discovery rate (FDR)---the expected fraction
of false discoveries among all discoveries---is not too high, in order to assure the scientist that most of the discoveries are indeed true and replicable. We introduce the knockoff filter, a new
variable selection procedure controlling the FDR in the statistical linear model whenever there are at least as many observations as variables. This method works by constructing fake variables,
knockoffs, which can then be used as controls for the true variables; the method achieves exact FDR control in finite sample settings no matter the design or covariates, the number of variables in
the model, and the amplitudes of the unknown regression coefficients, and does not require any knowledge of the noise level. This is joint work with Rina Foygel Barber.
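For comparison, the classical Benjamini-Hochberg step-up procedure (not the knockoff filter itself) targets the same FDR quantity from a list of p-values; the p-values below are made up.

```python
def benjamini_hochberg(pvals, q=0.1):
    """Indices of discoveries under the BH step-up rule at FDR level q."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    k = 0
    for rank, i in enumerate(order, start=1):
        # largest rank whose sorted p-value clears the step-up threshold
        if pvals[i] <= rank * q / m:
            k = rank
    return sorted(order[:k])

print(benjamini_hochberg([0.001, 0.008, 0.039, 0.041, 0.2, 0.5, 0.9], q=0.1))  # [0, 1, 2, 3]
```

Unlike BH, which works from p-values, the knockoff filter operates directly on the design matrix by constructing the fake control variables described above.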
Date / Time : Friday, October 30, 2015 - 16:00
Venue : CRM, UdeM, Pav. André-Aisenstadt, 2920, ch. de la Tour, room 1360
Friday, October 23, 2015
Weighted Hurwitz Numbers: Classical and Quantum
The study of Hurwitz numbers, which enumerate branched coverings of the Riemann sphere, is classical, going back to the pioneering work of Hurwitz in the 1880’s. There is an equivalent combinatorial
problem, related by monodromy, that was developed by Frobenius in his pioneering work on character theory, consisting of the enumeration of factorizations of elements of the symmetric group. In 2000,
Okounkov and Pandharipande began their program relating Hurwitz numbers to other combinatorial/topological invariants associated to Riemann surfaces, such as Gromov-Witten and Donaldson-Thomas
invariants. This has since been further developed by others to include, e.g., Hodge invariants and relations to knot invariants. A key result of Okounkov and Pandharipande was to express the
generating functions for special classes of Hurwitz numbers, e.g., including only simple branching, plus one, or two other branch points, as special types of Tau functions of integrable hierarchies
such as Sato's KP hierarchy and Takasaki-Takebe’s 2D Toda lattice hierarchy, together with associated semi-infinite wedge product representations. The differential/algebraic equations satisfied by
such generating functions provide a new perspective, implying deep interrelations between these various types of enumerative invariants. In more recent work, these ideas have been extended to include
generating functions for a very wide class of branched coverings, with suitable combinatorial interpretations, including a broad class of weighted enumerations that select amongst infinite parametric
families of weights. These make use not only of the six standard bases for the ring of symmetric functions, such as the Schur functions and the monomial symmetric functions, but also their "quantum"
deformations, involving the pair of deformation parameters (q,t) appearing in the theory of Macdonald polynomials. The general theory of weighted Hurwitz numbers, together with various applications
and examples coming from random matrix theory and enumerative geometry, will be explained in a simple, unified way, based on special elements of, and bases for, the center of the symmetric group
algebra, and the characteristic map to the ring of symmetric polynomials. The simplest quantum case provides a relation between special weighted enumerations of branched coverings and the statistical
mechanics of Bose-Einstein gases. Various other specializations, to such bases as Hall-Littlewood, Jack, q-Whittaker and dual q-Whittaker polynomials, as well as certain special classical weightings, have further
applications, in physics, geometry, group theory and combinatorics.
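The combinatorial problem mentioned above, enumerating factorizations in the symmetric group, can be tasted in miniature. By a classical formula of Dénes, a fixed n-cycle has exactly n^(n-2) factorizations into n-1 transpositions; the brute-force check below verifies this for n = 4 and is of course unrelated to the tau-function machinery of the talk.

```python
from itertools import product

def compose(p, q):
    """(p ∘ q)(i) = p(q(i)): apply q first, then p."""
    return tuple(p[q[i]] for i in range(len(p)))

n = 4
cycle = (1, 2, 3, 0)  # the n-cycle 0 -> 1 -> 2 -> 3 -> 0

# all transpositions of {0, ..., n-1}, as permutation tuples
transpositions = []
for i in range(n):
    for j in range(i + 1, n):
        t = list(range(n))
        t[i], t[j] = t[j], t[i]
        transpositions.append(tuple(t))

count = sum(
    1
    for t0, t1, t2 in product(transpositions, repeat=n - 1)
    if compose(compose(t0, t1), t2) == cycle
)
print(count)  # 16 = 4^(4-2), as predicted by Dénes' formula
```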
Date / Time : Friday, October 23, 2015 - 16:00
Venue : UQAM - Sherbrooke Building, Room SH-2420 (one floor below the normal colloquium room)
A wine and cheese reception will follow the talk.
Friday, October 16, 2015
Holomorphic functions, convexity and transversality
Morse theory is a powerful tool to study the topology of real manifolds. After recalling its basic features, we will discuss the existence, on complex manifolds, of holomorphic functions giving
similar information on the topology. More specifically, we will review the notions of pseudoconvexity and of Stein manifold so as to gradually explain the significance of a recent result, jointly
obtained with John Pardon, which shows that any Stein domain can be presented as a Lefschetz fibration over the disk. The talk will be aimed at a general mathematical audience.
Date / Time : Friday, October 16, 2015 - 16:00
Venue : CRM, UdeM, Pav. André-Aisenstadt, 2920, ch. de la Tour, room 6254
Friday, October 9, 2015
Coxeter Groups and Quiver Representations
It has been understood since almost the beginning of the development of quiver representations, in the 1970s, that there are important connections between Coxeter groups and quiver representations.
Nonetheless, further relations continue to appear. I will touch on the classical connections and some of the more recent ones, including the example of the parallel elaboration of the closely related
concepts of exceptional sequences of representations and factorizations of Coxeter elements.
Date / Time : Friday, October 9, 2015 - 16:00
Venue : UQAM - Sherbrooke Building, Room SH-2420 (one floor below the normal colloquium room)
A wine and cheese reception will follow the talk.
Quebec Mathematical Sciences Colloquium - Dmitri Vassiliev (University College London)
Friday, September 25, 2015
Analysis of first order systems of PDEs on manifolds without boundary
In layman's terms a typical problem in this subject area is formulated as follows. Suppose that our universe has finite size but does not have a boundary. An example of such a situation would be a
universe in the shape of a 3-dimensional sphere embedded in 4-dimensional Euclidean space. And imagine now that there is only one particle living in this universe, say, a massless neutrino. Then one
can address a number of mathematical questions. How does the neutrino field (solution of the massless Dirac equation) propagate as a function of time? What are the eigenvalues (stationary energy
levels) of the particle? Are there nontrivial (i.e. without obvious symmetries) special cases when the eigenvalues can be evaluated explicitly? What is the difference between the neutrino (positive
energy) and the antineutrino (negative energy)? What is the nature of spin? Why do neutrinos propagate with the speed of light? Why are neutrinos and photons (solutions of the Maxwell system) so
different and, yet, so similar? The speaker will approach the study of first order systems of PDEs from the perspective of a spectral theorist using techniques of microlocal analysis and without
involving geometry or physics. However, a fascinating feature of the subject is that this purely analytic approach inevitably leads to differential geometric constructions with a strong theoretical
physics flavour.
References:
[1] See items 98-101, 103 and 104 on my publications page: http://www.homepages.ucl.ac.uk/~ucahdva/publicat/publicat.html
[2] Futurama TV series, "Mars University" episode (1999):
Fry: Hey, professor. What are you teaching this semester?
Professor Hubert Farnsworth: Same thing I teach every semester. The Mathematics of Quantum Neutrino Fields. I made up the title so that no student would dare take it.
Date / Time : Friday, September 25, 2015 - 16:00
Venue : UQAM - Sherbrooke Building, Room SH-2420 (one floor below the normal colloquium room)
A wine and cheese reception will follow the talk.
Monday, June 15, 2015
McGill University will host the CRM-PIMS probability summer school from June 15-July 11, 2015.
There will be two main courses, given by Alice Guionnet and Remco van der Hofstad, as well as mini-courses by Louigi Addario-Berry, Shankar Bhamidi and Jonathan Mattingly.
For more details, see: http://problab.ca/ssprob2015/index.php
Monday, June 15, 2015
The 2015 Séminaire de Mathématiques Supérieures will feature about a dozen minicourses on geometry of eigenvalues, geometry of eigenfunctions, spectral theory on manifolds with singularities, and
computational spectral theory. There have been a number of remarkable recent developments in these closely related fields. The goal of the summer school is to shed light on different facets of modern
spectral theory and to provide a unique opportunity for graduate students and young researchers to get a "big picture" of this rapidly evolving area of mathematics. The lectures will be given by the
leading experts in the subject. The minicourses will be complemented by guided exercises sessions, as well as by several invited talks by the junior participants who have already made important
contributions to the field. A particularly novel aspect of the school is the emphasis on the interactions between spectral geometry and computational spectral theory. We do not assume that the
students are familiar with computational methods, and therefore we intend to provide tutorials where the participants will learn to develop and implement algorithms for numerical analysis of
eigenvalue problems.
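As a first taste of computational spectral theory, consider the standard second-difference discretization of -u'' on (0, 1) with Dirichlet boundary conditions, whose eigenvalues are known in closed form and converge to pi^2 k^2. This is a minimal sketch, not part of the school's material.

```python
import math

def discrete_dirichlet_eig(n, k):
    """k-th eigenvalue of the n x n matrix (1/h^2) tridiag(-1, 2, -1), h = 1/(n+1)."""
    h = 1.0 / (n + 1)
    return (2.0 / h ** 2) * (1.0 - math.cos(k * math.pi * h))

exact = math.pi ** 2  # first Dirichlet eigenvalue of -u'' on (0, 1)
for n in (10, 100, 1000):
    print(n, discrete_dirichlet_eig(n, 1))  # converges to pi^2 ≈ 9.8696 as n grows
```

The discretization error decays like h^2, one of the basic facts such numerical tutorials typically start from.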
Friday, May 15, 2015
The 18th edition of the ISM Student Conference will be held at HEC Montreal from May 15 to 17, 2015. You can present your work (deadline for abstract submission: April 15, 2015) or simply attend the
presentations and enjoy the networking activities. The keynote speakers are: Nantel Bergeron (York University), Matt Davison (University of Western Ontario), Stephen Fienberg (Carnegie Mellon), and
the 2015 Carl Herz Prize winner. For more information or to register, please visit the conference web site: http://www.crm.umontreal.ca/2015/ISM2015/index_e.php.
Friday, May 8, 2015
This year, we are celebrating the International Year of Light by welcoming John Dudley, instigator of the international year. You are invited to a half-day of activities where you will discover the
role of light in our civilization and how mathematics allows us to study it. In French. Free registration. Further information.
Quebec Mathematical Sciences Colloquium - Stephen S. Kudla (University of Toronto)
Thursday, April 9, 2015
Modular generating series and arithmetic geometry
I will survey the development of the theory of theta series and describe some recent advances/work in progress on arithmetic theta series. The construction and modularity of theta series as counting
functions for lattice points for positive definite quadratic forms is a beautiful piece of classical mathematics with its origins in the mid 19th century. Siegel initiated the study of the analogue
for indefinite quadratic forms. Millson and I introduced a geometric variant in which the theta series give rise to modular generating series for the cohomology classes of "special" algebraic cycles
on locally symmetric varieties. These results motivate the definition of analogous generating series for the classes of such special cycles in the Chow groups and for the classes in the arithmetic
Chow groups of their integral extensions. The modularity of such series is a difficult problem. I will discuss various cases in which recent progress has been made and some of the difficulties
that remain.
Date / Time : Thursday, April 9, 2015 - 4:00 PM
Venue : CRM, UdeM, Pav. André-Aisenstadt, 2920, ch. de la Tour, room 6254
Quebec Mathematical Sciences Colloquium - Konstantin Mischaikow (Rutgers)
Thursday, April 2, 2015
A combinatorial approach to dynamics applied to switching networks
Models of multiscale systems, such as those encountered in systems biology, are often characterized by heuristic nonlinearities and poorly defined parameters. Furthermore, it is typically not
possible to obtain precise experimental data for these systems. Nevertheless, verification of the models requires the ability to obtain meaningful dynamical structures that can be compared
quantitatively with the experimental data. With this in mind we present a purely combinatorial approach to modeling dynamics. We will discuss this approach in the context of switching networks.
Date / Time: Thursday, April 2, 2015 - 4:30 PM
Venue: Université de Sherbrooke
Quebec Mathematical Sciences Colloquium - William Minicozzi (MIT)
Thursday, April 2, 2015
Uniqueness of blowups and Lojasiewicz inequalities
The mean curvature flow (MCF) of any closed hypersurface becomes singular in finite time. Once one knows that singularities occur, one naturally wonders what the singularities are like. For minimal
varieties the first answer, by Federer-Fleming in 1959, is that they weakly resemble cones. For MCF, by the combined work of Huisken, Ilmanen, and White, singularities weakly resemble shrinkers.
Unfortunately, the simple proofs leave open the possibility that a minimal variety or a MCF looked at under a microscope will resemble one blowup, but under higher magnification, it might (as far as
anyone knows) resemble a completely different blowup. Whether this ever happens is perhaps the most fundamental question about singularities. We will discuss the proof of this long-standing open
question for MCF at all generic singularities and for mean convex MCF at all singularities. This is joint work with Toby Colding.
Date / Time :Thursday, April 2, 2015 - 4:00 PM
Venue : McGill University, Burnside Hall, 805 rue Sherbrooke O., Montréal, room 920
Thursday, March 26, 2015
Left-orderings of groups and the topology of 3-manifolds
Many decades of work culminating in Perelman's proof of Thurston's geometrisation conjecture showed that a closed, connected, orientable, prime 3-dimensional manifold W is essentially determined by
its fundamental group $\pi_1(W)$. This group consists of classes of based loops in W and its multiplication corresponds to their concatenation. An important problem is to describe the topological and
geometric properties of W in terms of $\pi_1(W)$. For instance, geometrisation implies that W admits a hyperbolic structure if and only if $\pi_1(W)$ is infinite, freely indecomposable, and contains no Z ⊕
Z subgroups. In this talk I will describe recent work which has determined a surprisingly strong correlation between the existence of a left-order on $\pi_1(W)$ (a total order invariant under left
multiplication) and the following two measures of largeness for W:
a) the existence of a co-oriented taut foliation on W - a special type of partition of W into surfaces which fit together locally like a deck of cards.
b) the condition that W not be an L-space - an analytically defined condition representing the non-triviality of its Heegaard-Floer homology.
I will introduce each of these notions, describe the results which connect them, and state a number of open problems and conjectures concerning their precise relationship.
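As a toy instance of the definition (a total order invariant under left multiplication), the lexicographic order makes Z^2 a left-orderable group; since Z^2 is abelian, left and right invariance coincide. The check below verifies invariance on random samples and is of course far from the 3-manifold machinery of the talk.

```python
import random
random.seed(1)

def lex_less(a, b):
    return a < b  # Python tuple comparison is exactly the lexicographic order

pts = [(random.randint(-5, 5), random.randint(-5, 5)) for _ in range(40)]
# a < b must hold if and only if g + a < g + b, for every group element g
ok = all(
    lex_less((g[0] + a[0], g[1] + a[1]), (g[0] + b[0], g[1] + b[1])) == lex_less(a, b)
    for g in pts for a in pts for b in pts if a != b
)
print(ok)  # True: the order survives translation by any group element
```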
Date / Time: Thursday, March 26, 2015 - 4:00 PM
Venue: McGill University, Burnside Hall, 805 rue Sherbrooke O., Montréal, room 920
Quebec Mathematical Sciences Colloquium - Alexei Borodin (MIT)
Thursday, March 19, 2015
Integrable probability
The goal of the talk is to survey the emerging field of integrable probability, whose goal is to identify and analyze exactly solvable probabilistic models. The models and results are often easy to
describe, yet difficult to find, and they carry essential information about broad universality classes of stochastic processes.
Date / Time: Thursday, March 19, 2015 - 4:00 PM
Venue: McGill University, Burnside Hall, 805 rue Sherbrooke O., Montréal, room 920
Quebec Mathematical Sciences Colloquium - Pierre Colmez (CNRS & Paris VI Jussieu)
Thursday, March 12, 2015
The upper half-planes
The upper half-planes (complex and p-adic) are very elementary objects, but they have a surprisingly rich structure that I will explore in the talk.
Date / Time: Thursday, March 12, 2015 - 4:00 PM
Venue: CRM, UdeM, Pav. André-Aisenstadt, 2920, ch. de la Tour, room 1360
Quebec Mathematical Sciences Colloquium - Sophie Morel (Princeton University)
Thursday, March 5, 2015
We will discuss periods, in particular the periods conjecture of Kontsevich and Zagier and the relationship between formal periods and Nori motives.
Date / Time : Thursday, March 5, 2015 - 4:00 PM
Venue : McGill University, Burnside Hall, 805 rue Sherbrooke O., Montréal, room 920
Quebec Mathematical Sciences Colloquium - Alistair Savage (University of Ottawa)
Thursday, February 26, 2015
Categorification in representation theory
This will be an expository talk concerning the idea of categorification and its role in representation theory. We will begin with some very simple yet beautiful observations about how various ideas
from basic algebra (monoids, groups, rings, representations etc.) can be reformulated in the language of category theory. We will then explain how this viewpoint leads to new ideas such as the
"categorification" of the above-mentioned algebraic objects. We will conclude with a brief synopsis of some current active areas of research involving the categorification of quantum groups. One of
the goals of this idea is to produce four-dimensional topological quantum field theories. Very little background knowledge will be assumed.
Date / Time: Thursday, February 26, 2015 - 4:00 PM
Venue: McGill University, Burnside Hall, 805 rue Sherbrooke O., Montréal, room 920
Quebec Mathematical Sciences Colloquium - Francis Brown (IHES)
Thursday, February 19, 2015
Irrationality proofs, moduli spaces and dinner parties
After introducing an elementary criterion for a real number to be irrational, I will discuss Apery’s famous result proving the irrationality of zeta(3). Then I will give an overview of subsequent
results in this field, and finally propose a simple geometric interpretation based on a classical dinner party game.
Date / Time : Thursday, February 19, 2015 - 4:00 PM
Venue : CRM, UdeM, Pav. André-Aisenstadt, 2920, ch. de la Tour, room 6214
Quebec Mathematical Sciences Colloquium - Laure Saint-Raymond (École normale supérieure, Paris)
Thursday, February 12, 2015
The role of boundary layers in the global ocean circulation
Understanding the mechanisms that govern ocean circulation is a challenge for geophysicists, but also for mathematicians, who must develop new analytical tools for these complex models (which involve, in particular, a very large number of time and space scales). A particularly important mechanism for circulation at the planetary scale is the boundary-layer phenomenon, which accounts for part of the energy exchanges. Using a highly simplified model, we will show that it explains, in particular, the intensification of western boundary currents. We will then discuss the mathematical difficulties involved in taking the geometry into account. Note: the talk will be given in English with slides in French.
Date / Time: Thursday, February 12, 2015 - 4:00 PM
Venue: McGill University, Burnside Hall, 805 Sherbrooke Street West, Montreal, room 920
Quebec Mathematical Sciences Colloquium - Octav Cornea (Université de Montréal)
Thursday, February 5, 2015
Cobordism and Lagrangian topology
This talk aims to discuss how two different basic organizing principles in topology come together in the study of Lagrangian submanifolds. The first principle is cobordism and it emerged in topology
in the 1950’s, mainly starting with the work of Thom. It was introduced in Lagrangian topology by Arnold in the 1970’s. The second principle is to reconstruct a subspace of a given space from a
family of "slices", each one obtained by intersecting the subspace with a member of a preferred class of special "test" subspaces. For instance, a subspace of 3d euclidean space can be described as
the union of all its intersections with horizontal planes. The key issue from this point of view is, of course, how to assemble all the slices together. The perspective that is central for my talk
originates in the work of Gromov and Floer in the 1980’s: if the ambient space is a symplectic manifold M, and if the subspace to be described is a Lagrangian submanifold, then, surprisingly, the
"glue" that puts the slices together in an efficient algebraic fashion is a reflection of the combinatorial properties of J-holomorphic curves in M. This point of view has been pursued actively since
then by many researchers such as Hofer, Fukaya, and Seidel, leading to a structure called the Fukaya category. Through recent work of Paul Biran and myself, cobordism and the Fukaya category turn out to
be intimately related and at the end of the talk I intend to give an idea about this relation.
Date / Time : Thursday, February 5, 2015 - 4:00 PM
Venue : McGill University, Burnside Hall, 805 rue Sherbrooke O., Montréal, room 920
Quebec Mathematical Sciences Colloquium - Thomas Ransford (Université Laval)
Thursday, January 29, 2015
Spectra and pseudospectra
Eigenvalues are among the most useful notions in mathematics: they allow matrices to be diagonalized, they describe asymptotics and stability, and they give a matrix its personality. However, when the matrix in question is not normal, eigenvalue analysis gives only very partial information, and can even mislead us. This talk is intended as an introduction to the theory of pseudospectra, a refinement of standard spectral theory that has proved useful in applications involving non-normal matrices. I will focus mainly on the following question: to what extent do the pseudospectra of a matrix determine the behavior of the matrix?
Date / Time: Thursday, January 29, 2015 - 4:00 PM
Venue: McGill University, Burnside Hall, 805 Sherbrooke Street West, Montréal, room 920
Quebec Mathematical Sciences Colloquium - Hansjoerg Albrecher (HEC, Lausanne)
Thursday, January 22, 2015
On the usefulness of mathematics for insurance risk theory - and vice versa
This talk is on applications of various branches of mathematics in the field of risk theory, a branch of actuarial mathematics dealing with the analysis of the surplus process of a portfolio of
insurance contracts over time. At the same time such practical problems frequently trigger mathematical research questions, in some cases leading to remarkable identities and connections. Next to the
close interactions with probability and statistics, examples will include the branches of real and complex analysis, algebra, symbolic computation, number theory and discrete mathematics.
Date / Time: Thursday, January 22, 2015 - 4:00 PM
Venue: McGill University, Burnside Hall, 805 Sherbrooke Street West, Montréal, room 920
Quebec Mathematical Sciences Colloquium - Fang Yao (University of Toronto)
Thursday, January 15, 2015
Functional data analysis and related topics
Functional data analysis (FDA) has received substantial attention, with applications arising from various disciplines, such as engineering, public health, finance, etc. In general, the FDA approaches
focus on nonparametric underlying models that assume the data are observed from realizations of stochastic processes satisfying some regularity conditions, e.g., smoothness constraints. The
estimation and inference procedures usually do not depend on merely a finite number of parameters, which contrasts with parametric models, and exploit techniques, such as smoothing methods and
dimension reduction, that allow data to speak for themselves. In this talk, I will give an overview of FDA methods and related topics developed in recent years.
Date / Time : Thursday, January 15, 2015 - 4:00 PM
Venue : CRM, UdeM, Pav. André-Aisenstadt, 2920, ch. de la Tour, room 1360
Quebec Mathematical Sciences Colloquium - François Bergeron (UQAM)
Thursday, December 4, 2014
Algebraic combinatorics and finite reflection groups
The lecture will be delivered in French, with English slides, so that anyone may enjoy it. Recent years have seen an explosion of activity at the frontier between algebraic combinatorics, representation theory, and algebraic geometry, with captivating links to knot theory and mathematical physics. Keeping a broad audience in mind, we will sketch how this interaction has been very fruitful and has raised intriguing new questions in the various fields involved. We will try to convey the flavor of the results obtained, the techniques used, the large number of open questions, and why they are of interest. On the combinatorial side, this fascinating exchange between combinatorics and algebra involves generalizations of "Dyck paths" to the rectangular setting. It has been well known since Euler that, in the square case, these paths are counted by the Catalan numbers. Moreover, parking functions are intimately related to these paths. On the algebraic side appear the bigraded S[n]-modules of diagonal harmonic polynomials of the symmetric group S[n]. It has been conjectured that a suitable enumeration of the parking functions associated with certain families of Dyck paths yields an explicit combinatorial formula for the bigraded character of these modules. This conjecture, known as the "shuffle" conjecture, has recently been greatly extended to cover all rectangular cases. All of this involves operators on Macdonald polynomials, the elliptic Hall algebra, double affine Hecke algebras (DAHA), the Hilbert scheme of points in the plane, and more.
Date / Time: Thursday, December 4, 2014 - 4:00 PM
Venue: CRM, UdeM, Pav. André-Aisenstadt, 2920, ch. de la Tour, room 6214
Quebec Mathematical Sciences Colloquium - Nilima Nigam (Simon Fraser University)
Thursday, November 27, 2014
On the well-posedness of the 2D stochastic Allen-Cahn equation
Non-linear parabolic PDE arise in many physical and biological settings; we often need to incorporate the effects of additive white noise. The resultant stochastic partial differential equations are
well-understood in 1D. In higher spatial dimensions, there is an interesting dichotomy: such models are popular in application, while mathematicians assume these models to be ill-posed. We
investigate the specific case of the two dimensional Allen-Cahn equation driven by additive white noise. Without noise, the Allen-Cahn equation is 'pattern-forming'. Does the presence of noise
affect this behaviour? The precise notion of a weak solution to this equation is unclear. Instead, we regularize the noise and introduce a family of approximations. We discuss the continuum limit of
these approximations and show that it exhibits divergent behavior. Our results show that a series of published numerical studies are somewhat problematic: shrinking the mesh size in these simulations
does not lead to the recovery of a physically meaningful limit. This is joint work with Marc Ryser and Paul Tupper.
Date / Time : Thursday, November 27, 2014 - 3:30 PM
Venue : Laval University, Alexandre Vachon Building, room 2830
Quebec Mathematical Sciences Colloquium - Martin Wainwright (University of California, Berkeley)
Thursday, November 20, 2014
High-dimensional phenomena in mathematical statistics and convex analysis
Statistical models in which the ambient dimension is of the same order or larger than the sample size arise frequently in different areas of science and engineering. Although high-dimensional models
of this type date back to the work of Kolmogorov, they have been the subject of intensive study over the past decade, and have interesting connections to many branches of mathematics (including
concentration of measure, random matrix theory, convex geometry, and information theory). In this talk, we provide a broad overview of the general area, including vignettes on phase transitions in
high-dimensional graph recovery, and randomized approximations of convex programs.
Date / Time : Thursday, November 20, 2014 - 4:00 PM
Venue : CRM, UdeM, Pav. André-Aisenstadt, 2920, ch. de la Tour, room 6214
Quebec Mathematical Sciences Colloquium - Kartik Prasanna (University of Michigan, Ann Arbor)
Thursday, November 13, 2014
Recent advances in the arithmetic of elliptic curves
In the past few years there have been several spectacular advances in understanding the arithmetic of elliptic curves including results about ranks on average and on the conjecture of Birch and
Swinnerton-Dyer. I will give an introduction to the main problems of interest and survey some of these developments. This talk will be addressed to a general mathematical audience.
Date / Time : Thursday, November 13, 2014 - 4:00 PM
Venue : CRM, UdeM, Pav. André-Aisenstadt, 2920, ch. de la Tour, room 6214
Quebec Mathematical Sciences Colloquium - Dani Wise (McGill University)
Thursday, November 6, 2014
The cubical route to understanding groups
Cube complexes have come to play an increasingly central role within geometric group theory, as their connection to right-angled Artin groups provides a powerful combinatorial bridge between geometry
and algebra. This talk will primarily aim to introduce nonpositively curved cube complexes, and then describe some of the developments that have recently culminated in the resolution of the virtual
Haken conjecture for 3-manifolds, and simultaneously dramatically extended our understanding of many infinite groups.
Date / Time : Thursday, November 6, 2014 - 4:00 PM
Venue : CRM, UdeM, Pav. André-Aisenstadt, 2920, ch. de la Tour, room 6214
Quebec Mathematical Sciences Colloquium - Georgia Benkart (University of Wisconsin-Madison)
Thursday, October 30, 2014
A pedestrian approach to group representations
Determining the number of walks of n steps from vertex A to vertex B on a graph often involves clever combinatorics or tedious treading. But if the graph is the representation graph of a group,
representation theory can facilitate the counting and provide much insight. This talk will focus on connections between Schur-Weyl duality and walking on representation graphs. Examples of special
interest are the simply-laced affine Dynkin diagrams, which are the representation graphs of the finite subgroups of the special unitary group SU(2) by the McKay correspondence. The duality between
the SU(2) subgroups and certain algebras enables us to count walks and solve other combinatorial problems, and to obtain connections with the Temperley-Lieb algebras of statistical mechanics, with
partitions, with Stirling numbers, and much more.
Date / Time: Thursday, October 30, 2014 - 4:00 PM
Venue: CRM, Université de Montréal, Pav. André-Aisenstadt, 2920, ch. de la Tour, room 6214
Quebec Mathematical Sciences Colloquium - Alex Kontorovich (Rutgers)
Thursday, October 9, 2014
Applications of additive combinatorics to homogeneous dynamics
We will discuss the role played by additive combinatorics in attacks on various problems in dynamics related to finer equidistribution questions beyond Duke's Theorem, particularly those posed by
McMullen and Einsiedler-Lindenstrauss-Michel-Venkatesh. This work is joint with Jean Bourgain.
Date / Time: Thursday, October 9, 2014 - 4:00 PM
Venue: CRM, UdeM, Pav. André-Aisenstadt, 2920, ch. de la Tour, room 1140
Quebec Mathematical Sciences Colloquium - Paul Bourgade (New York University)
Thursday, October 2, 2014
Universality in random matrix theory
Wigner stated the general hypothesis that the distribution of eigenvalue spacings of large complicated quantum systems is universal, in the sense that it depends only on the symmetry class of the
physical system but not on other detailed structures. The simplest case for this hypothesis concerns large but finite dimensional matrices. I will explain some historical aspects of random matrix theory, as well as recent techniques developed to prove eigenvalue and eigenvector universality for matrices with independent entries from all symmetry classes. The methods are both probabilistic (random walks and coupling) and analytic (homogenization for parabolic PDEs).
Date / Time: Thursday, October 2, 2014 - 4:00 PM
Venue: CRM, UdeM, Pav. André-Aisenstadt, 2920, ch. de la Tour, room 6214
Conference in honor of Louis-Paul Rivest
Friday, August 29, 2014
A conference in honor of Louis-Paul Rivest will be held at Université Laval on August 28 and 29 to mark his 60th birthday and his many contributions to science.
We hope that you can join us. Please register at http://www.crm.umontreal.ca/2014/Rivest14/index.php
SMS 2014 - Counting Arithmetic Objects: June 23 - July 4, 2014
Monday, June 23, 2014
Counting objects of arithmetic interest (such as quadratic forms, number fields, elliptic curves, curves of a given genus, ...) in order of increasing arithmetic complexity, is among the most
fundamental enterprises in number theory, going back (at least) to the fundamental work of Gauss on composition of binary quadratic forms and class groups of quadratic fields.
In the past decade tremendous progress has been achieved, notably through Bhargava's revolutionary program blending elegant algebraic techniques with powerful analytic ideas. It suffices to mention
the striking upper bounds on the size of Selmer groups (and therefore ranks) of elliptic curves and even Jacobians of hyperelliptic curves of higher genus, among the many other breakthroughs that
have grown out of this remarkable circle of ideas.
The 2014 Summer School will be devoted to covering these recent developments, with the objective of attracting researchers who are in the early stages of their career into this active and rapidly
developing part of number theory.
For more information, view the website.
ISM Quebec student conference
Friday, May 16, 2014
This annual conference is an occasion for Québec mathematics and statistics students to meet for one weekend. This year it will be held May 16-18, 2014. All are invited to present their current research or another subject judged worthy of interest.
Twenty-minute student presentations are an excellent way to discover a diversity of subjects and exchange ideas with fellow students. We strongly encourage participants to present in French, however,
presentations in English are always welcome. In addition to the student conferences, there will be 50-minute plenary lectures by well-known professors.
Friday sessions will finish with a wine and cheese reception so that we can all meet and spend a very pleasant weekend.
The 2014 colloque panquébécois des étudiants de l'ISM will be held at Université Laval in Québec City.
We hope to see you this spring!
Generate Digital Chirp Signals With DDS
Direct-digital synthesis (DDS) is a mature digital-signal-processing (DSP) technology that offers great flexibility and power for generating complex waveforms. One of the advanced waveforms within
the realm of DDS creation (given a dual-accumulator architecture) is chirp or linear frequency-modulation (FM) signals. In contrast to larger and more expensive arbitrary waveform generators, DDS
chirp sources can save power, size, and cost in critical designs.
The principles of DDS technology were formulated in the late 1960s. In almost the reverse of sampling theory, a DDS source produces digital samples of a sinewave by means of an accumulator and sine
lookup table; these digital samples are converted to analog waveforms via a digital-to-analog converter (DAC) and filter (Fig. 1, left). The number of digital bits in the process determines the ultimate resolution of the output waveform (Fig. 1, right).
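The accumulator-plus-lookup-table flow described above can be sketched in a few lines of Python. The bit widths below (W = 32, Q = 12, D = 10) are illustrative choices, not values taken from any particular device:

```python
import numpy as np

def dds_samples(w1, n_samples, W=32, Q=12, D=10):
    """Sketch of a basic DDS: W-bit phase accumulator + 2**Q-entry sine
    lookup table quantized to D bits (illustrative bit widths)."""
    # Precompute the sine lookup table, scaled to the D-bit DAC range.
    lut = np.round((2**(D - 1) - 1) * np.sin(2 * np.pi * np.arange(2**Q) / 2**Q))
    acc, out = 0, []
    for _ in range(n_samples):
        acc = (acc + w1) % 2**W          # accumulate modulo 2**W, i.e. modulo 2*pi
        out.append(lut[acc >> (W - Q)])  # the Q most-significant bits address the LUT
    return np.array(out)

# Tuning word for F_out = F_clock/16: w1 = 2**W / 16, giving a 16-sample period.
samples = dds_samples(w1=2**28, n_samples=32)
```

Because the tuning word here divides 2^W exactly, the output repeats every 16 samples; an arbitrary tuning word would give the fine frequency resolution discussed below.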
A variable phase ramp is achieved by means of an accumulator with W digital bits. The accumulator is a phase generator in which the 2^W states represent 2π of phase. Of the accumulator's W bits, the Q most significant bits (usually much smaller in number than W) are used to address the sine lookup table.
A simple example may serve to demonstrate the properties of the sampled data, as well as the concept of positive and negative frequencies. In this example, a bar rotates at a rate of 1 Hz and is illuminated by a flashlight (the sampler) blinking at a rate of 10 Hz (Fig. 2).
Each time the light flashes, the bar appears to rotate 36 deg. forward in a clockwise motion, although this interpretation is not conclusive. The bar can rotate at 1 Hz, or 11 Hz, or 21 Hz, or any
frequency that is 10N + 1 Hz, and be interpreted in a similar fashion under the strobe light. In addition, there is another infinite set of frequencies, given by 10N - 1 Hz and rotating counterclockwise, that yields the same results. Because they rotate counterclockwise, they are called negative frequencies, 180 deg. relative to the main set of frequencies. Generally, the sets NF[s] + F[f] and NF[s] - F[f] (where F[s] is the sampling frequency, F[f] is the fundamental frequency, and N is the multiplication factor) generate the same sampled-data response.
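The strobe example can be checked numerically. The sketch below (an illustrative aside, not from the article) samples complex phasors at a 10-Hz rate and confirms that 1 Hz, 11 Hz, 21 Hz, and -9 Hz are indistinguishable, while 9 Hz appears as the complex conjugate, i.e. the "negative frequency" partner:

```python
import numpy as np

fs = 10.0                     # strobe (sampling) rate, Hz
t = np.arange(20) / fs        # 20 sampling instants
phasor = lambda f: np.exp(2j * np.pi * f * t)   # rotating-bar model

# Frequencies of the form N*fs + f all produce identical samples:
assert np.allclose(phasor(1.0), phasor(11.0))
assert np.allclose(phasor(1.0), phasor(21.0))
assert np.allclose(phasor(1.0), phasor(-9.0))   # -9 = 10*(-1) + 1

# fs - f rotates the opposite way: the conjugate ("negative") partner.
assert np.allclose(phasor(9.0), np.conj(phasor(1.0)))
```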
The pair F[f] and F[s] - F[f] is therefore a "couple" in sampled data (Fig. 3).
Their amplitudes decrease because the DAC generates a sample-and-hold (S/H) waveform rather than a train of sampled "delta functions," so the amplitude is scaled by the S/H transfer function sin(x)/x, where x = πf/F[s] (the response therefore goes to zero at integer multiples of the sampling frequency, F[s] = F[clock] in Fig. 3).
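The sin(x)/x rolloff and its nulls at multiples of the clock can be verified in a few lines, using NumPy's normalized sinc, for which sinc(u) = sin(πu)/(πu):

```python
import numpy as np

Fs = 1.0                                   # clock rate, normalized
f = np.array([0.25, 0.5, 1.0, 2.0]) * Fs   # test frequencies
# S/H amplitude response |sin(pi*f/Fs) / (pi*f/Fs)|:
rolloff = np.abs(np.sinc(f / Fs))

assert np.allclose(rolloff[2:], 0.0)        # nulls at integer multiples of Fs
assert abs(rolloff[1] - 2 / np.pi) < 1e-9   # at Fs/2: sin(pi/2)/(pi/2) = 2/pi
```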
Negative frequencies are real physical phenomena and not just a mathematical outcome of the Fourier transform. When a signal in the vicinity of F[0] is mixed with F[0] itself (Fig. 4, left), the noise bandwidth of the lowpass filter is twice its bandwidth (BW), because sidebands on both sides of F[0] will pass into the filter. Signals above F[0] will generate a positive output, while frequencies below F[0] will generate a negative output (Fig. 4, right).
Since the electrical signal in a DDS is a vector, positive and negative phases are possible, and hence positive and negative frequencies. If F[0] + 1 Hz or F[0] - 1 Hz is mixed with F[0], and the mixer output at 1 Hz displayed on an oscilloscope, it would be impossible to tell the difference between the two outputs. What is known is that the two signals are offset by 180 deg. To identify them, two components of the vector must be displayed, hence the in-phase (I) and quadrature (Q) components.
The basic equations for a DDS, which generates sinewave outputs and changes frequency by changing the control input word, based on a W-bit accumulator and a D-bit DAC (assuming Q > D + 1), include the following:
F[out] = W[1] × (F[clock]/2^W)
where W[1] = the control input (tuning word).
For example, a 48-b accumulator DDS, running at a clock frequency of 1000 MHz, has a frequency resolution of 10^9/2^48 ~ 3.5 µHz.
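The resolution in this example is straightforward to confirm; the step is tiny (microhertz) because 2^48 is enormous:

```python
F_clock = 1e9      # 1000-MHz clock
W = 48             # accumulator width, bits
resolution = F_clock / 2**W      # smallest output-frequency step, Hz
print(f"{resolution:.3e} Hz")    # ~3.553e-06 Hz, i.e. about 3.5 uHz
```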
Spurious signals, which traditionally have been a limitation of DDS technology, are approximately given by -6D dBc, or about -72 dBc for a 12-b DAC. (This is true mainly for output frequencies below F[clock]/4; above this rate, DAC errors begin to dominate.) A DDS source's switching speed is given by approximately 3/BW, where BW is the output bandwidth of the lowpass filter.
The output frequency of a DDS is practically limited to 40 to 45 percent of the clock frequency, since the source generates both F[out] and F[clock] - F[out] and there are limitations on how to
filter F[clock] - F[out] as F[out] gets closer to F[clock]/2. This is a natural outcome of the sampling theorem that states that sampling rate must be at least two samples per cycle.
The frequency command words W[1] and 2^W - W[1] will generate the same output frequency, although the two output signals will be offset by 180 deg. For an input command of W[1], the accumulator increments through W[1], 2W[1], 3W[1], ... until it reaches 2^W, which is a complete cycle; 2^W is then subtracted from the sum (modulo 2π).
When the control word is 2^W - W[1], the accumulator will almost always exceed its full state, so the residues of 2^W - W[1], 2(2^W - W[1]), 3(2^W - W[1]), ... are 2^W - W[1], 2^W - 2W[1], 2^W - 3W[1], and so on. The absolute value of the phase increment (slope) is the same, but with the opposite phase sign.
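The mirror-image behavior of W[1] and 2^W - W[1] is easy to verify directly; a small W is used here so the numbers stay readable:

```python
W = 10
MASK = 2**W - 1

def accumulator_states(w1, n):
    """Return the first n states of a W-bit phase accumulator stepped by w1."""
    acc, states = 0, []
    for _ in range(n):
        acc = (acc + w1) & MASK
        states.append(acc)
    return states

pos = accumulator_states(3, 8)             # command W[1]
neg = accumulator_states(2**W - 3, 8)      # command 2**W - W[1]

# The two phase ramps are mirror images: each pair of states sums to 0
# modulo 2**W, i.e. the same |slope| with opposite sign (a 180-deg. offset).
assert all((a + b) % 2**W == 0 for a, b in zip(pos, neg))
```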
If a second accumulator is added in front of the first accumulator, the phase generator becomes a double accumulator, or double integrator, and has a parabolic output. In the analog domain, this translates to sin(αt^2); since the instantaneous frequency of a signal is given by the derivative of the phase, it becomes 2αt, a linear-FM or chirp signal. The situation in the digital domain is the same, and linear FM signals have significant applications, from imaging radar and test signals to optical imaging, instrumentation, and silicon yield enhancement. A chirp DDS allows accurate and repeatable start, stop, and chirp-rate frequencies to be set, something not possible with analog voltage-controlled-oscillator (VCO) designs.
The overall principles of digitally generating chirp signals are very similar to DDS sinewave signal generation, with two distinct additional fundamental equations. The sweep rate is given by:
Sweep rate = W[1] × F[clock]^2/2^W
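The double-accumulator chirper and its sweep-rate formula can be checked with a direct simulation. This is an illustrative sketch, not a model of any particular device; the 24-b/500-MHz numbers match the example discussed in the text:

```python
F_clock = 500e6          # clock rate, Hz
W = 24                   # accumulator width, bits
MASK = 2**W - 1
w1 = 1                   # chirp-rate control word

freq_acc = phase_acc = 0
freqs = []
for _ in range(1000):
    freq_acc = (freq_acc + w1) & MASK            # first accumulator ramps the tuning word
    phase_acc = (phase_acc + freq_acc) & MASK    # second accumulator -> quadratic phase
    freqs.append(freq_acc * F_clock / 2**W)      # instantaneous output frequency, Hz

# Frequency advances by w1*F_clock/2**W per clock, so the slope in Hz/s is:
slope = (freqs[-1] - freqs[0]) * F_clock / 999
assert abs(slope - w1 * F_clock**2 / 2**W) < 1e-3   # ~1.49e10 Hz/s = ~14.9 kHz/us
```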
For example, for a chirp DDS with a 24-b accumulator (a model DCP-1 from Meret Optical Systems, for example) and a 500-MHz clock rate, the minimum chirp rate, for W[1] = 1, is ~14.9 kHz/µs. Rather than negative frequencies, a chirp generator can create chirps with negative slopes. Again, the pair W[1] and 2^W - W[1] is a positive/negative-slope pair. In the above example, the command 2^24 - 1, which is FFFFFF hex, will generate a negative chirp with a rate of ~ -14.9 kHz/µs. The calculation of 2^24 - W[1] (to create a negative-slope chirp) can be done in at least two ways: subtract W[1] from 2^24, or invert all bits of W[1] and add 1. For example, for W[1] = 1, which is 000001 hex, invert all bits (FFFFFE) and add 1, for a total of FFFFFF hex. For W[1] = 15, or 00000F hex, invert all bits (FFFFF0) and add 1, for a total of FFFFF1 hex. Figure 5 shows a functional run of positive and negative chirps.
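Both ways of computing the negative-slope word can be checked in two's-complement arithmetic, masked to 24 bits to match the example above:

```python
W = 24
MASK = 2**W - 1

def negative_slope_word(w1):
    """Return 2**W - w1 in W-bit arithmetic, computed two equivalent ways."""
    subtract = (2**W - w1) & MASK      # method 1: subtract from 2**W
    invert = (~w1 + 1) & MASK          # method 2: invert all bits, then add 1
    assert subtract == invert          # the two methods always agree
    return subtract

assert negative_slope_word(0x000001) == 0xFFFFFF
assert negative_slope_word(0x00000F) == 0xFFFFF1
```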
Figure 6 provides short tables of DDS and chirp accumulator states for a 10-b accumulator with W[1] = 001 hex and its pair 3FF hex (10-b arithmetic). In the tables, W[1] is the DDS accumulator output while W[2] is the chirper (second-accumulator) output.
The phase ramps (where phase is modulo 2π, i.e., 1024 states in this case) are shown in Fig. 7 (phase in red, frequency in blue).
Quantization effects cause noise and spurious content in DDS signals. It is well known from DSP basics that the quantization signal-to-noise ratio (SNR) of a digitally generated signal is ~6D dB, corresponding to a noise level of ~-72 dBc for a 12-b DDS. This can easily be translated to phase noise, since quantization noise is uniformly distributed across ~F[clock]. Generally, the incremental DDS phase-noise floor caused by quantization errors can be approximated by -6D - 10log(F[clock]) dBc/Hz. For example, a 10-b DDS clocked at 320 MHz generates a phase-noise floor on the order of -60 - 85 = -145 dBc/Hz. Some analog noise might be added by the DAC circuitry, so a typical real number for such a DDS is closer to a still-respectable -135 dBc/Hz.
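The phase-noise approximation is a one-line calculation:

```python
import math

def dds_phase_noise_floor(D, f_clock):
    """Approximate quantization-limited DDS phase-noise floor, dBc/Hz."""
    return -6 * D - 10 * math.log10(f_clock)

# 10-b DDS at 320 MHz: -60 - 85 ~= -145 dBc/Hz
floor = dds_phase_noise_floor(D=10, f_clock=320e6)
assert abs(floor - (-145.05)) < 0.01
```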
For all practical purposes, a DDS chirp source generates very accurate and repeatable chirps, and its linearity is as good as the quantization error, which is minimal. However, the lowpass filter
that follows a DDS has phase characteristics that can affect chirp linearity. As the frequency changes, the output signal passes the lowpass filter, and as the filter's group delay deviates from
linear phase, it will affect the frequency slope accuracy. Hence, filters with linear phase characteristics are highly recommended for the best chirp linearity.
This is usually not a problem up to about 20 percent of the output bandwidth, or in cases where the chirp is generated across a small percentage bandwidth. But as the output frequency increases, the phase error and deviation from linear phase (filter dispersion) may require the use of a phase-equalization network.
Certain utility functions in a DDS are convenient for many applications (Fig. 8).
For example, the ability to shift phase (achieved by placing an adder after the phase accumulator) allows very precisely controlled phase modulation. It is also possible to set start and stop
frequencies in chirp mode, and by control of instantaneous frequency, to allow setting thresholds or trigger certain events when these frequencies are reached (see INST_FREQ in Fig. 8). The ability
to control the phase reset function of a DDS makes it possible to start the chirp at the same point in every run.
DDS and chirp DDS are mature technologies that enable fast switching, fine resolution, and excellent linearity in frequency-sweeping applications. While arbitrary waveform generation has advanced mightily in the last 15 years, allowing arbitrary signals to be stored in memory, the large size and high cost of such generators can be prohibitive for some applications compared to the small size of DDS and chirp DDS sources. The capability of a DDS source to change frequency while maintaining phase continuity, and the application of chirps for generating accurate positive and negative chirp slopes, has earned the technology a central and growing position in the signal-generation and frequency-synthesis space. Advanced CMOS DDS devices can now clock at 1 GHz (for output signals to 400 MHz), with GaAs devices offering more than double this speed.
1. Bar-Giora Goldberg, Digital Frequency Synthesis Demystified, LLH Publishing, 1999.
2. Analog Devices, Norwood, MA, data sheets for DDS product line.
3. Meret Optical Communications, San Diego, CA, data sheet for DCP-1 Chirp Synthesizer.
4. J. Tierney, C. Rader, and B. Gold, "A Digital Frequency Synthesizer," IEEE Transactions on Audio Electronics, pp. 48-57, 1971.
5. B.G. Goldberg, Seminar notes for Besser Associates, www.bessercourse.com or www.cei.com.
Comparison of viscoelastic models with a different number of parameters for transient simulations
The numerical and analytical models used for transient simulations, and hence for the pressurized pipe system diagnosis, require the definition of a rheological component related to the pipe
material. The introduction and the following widespread use of polymeric material pipes, characterized by a viscoelastic behavior, increased the complexity and the number of parameters involved in
this component with respect to metallic materials. Furthermore, since tests on specimens are not reliable, a calibration procedure based on transient tests is required to estimate the viscoelastic
parameters. In this paper, the trade-off between viscoelastic component accuracy and simplicity is explored, based on the Akaike criterion. Several aspects of the calibration procedure are also
examined, such as the use of a frequency domain numerical model and of different standard optimization algorithms. The procedure is tested on synthetic data and then it is applied to experimental
data, acquired during transients on a high density polyethylene pipe. The results show that the best model among those used for the considered system implements the series of a spring with three
Kelvin–Voigt elements.
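The model-selection trade-off that the Akaike criterion formalizes can be sketched as follows. The residuals and sample size here are hypothetical, purely to illustrate how AIC rewards fit quality while penalizing extra Kelvin–Voigt parameters; this is the generic least-squares form of AIC, not necessarily the exact expression used in the paper:

```python
import math

def aic(rss, n, k):
    """Akaike information criterion for a least-squares fit:
    rss = residual sum of squares, n = data points, k = free parameters."""
    return n * math.log(rss / n) + 2 * k

# Hypothetical calibrations of one transient record (n = 500 samples)
# with rheological models of increasing complexity (1 + 2k parameters
# for a spring in series with k Kelvin-Voigt elements):
n = 500
candidates = {
    "spring + 1 KV element":  aic(rss=4.0, n=n, k=3),
    "spring + 2 KV elements": aic(rss=2.5, n=n, k=5),
    "spring + 3 KV elements": aic(rss=2.0, n=n, k=7),
}
best = min(candidates, key=candidates.get)   # lowest AIC wins
```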
The numerical models for the simulation of transients are an important tool for the design and the diagnosis of pressurized pipe systems (Covas et al. 2004; Duan et al. 2010b, 2012; Meniconi et al.
2011; Gong et al. 2013; Massari et al. 2014; Duan 2017; Laucelli et al. 2016; Sun et al. 2016). Such models require the definition of a functional relationship between stresses and strains in the
pipe material, i.e., its rheological model. The spread of polymeric pipes, such as high density polyethylene (HDPE) and polyvinyl chloride (PVC), and their use in pressurized pipe systems instead of metallic pipes, increased the complexity of the rheological models, from elastic to viscoelastic. The increase in model complexity corresponded to an increase in the number of parameters and to the need for reliable and efficient algorithms to calibrate them.
For the specific purpose of transient modeling in polymeric pipes, the evaluation of the rheological parameters on specimens, i.e., by creep and relaxation tests in the laboratory, is not successful (
Ghilardi & Paoletti 1986; Covas et al. 2005). For this reason, the most common procedure used so far is the calibration of the parameters of the implemented rheological models using transient tests.
In the literature, several case studies and calibration procedures have been presented. As an example, Pezzinga & Scandura (1995) used a trial and error procedure to calibrate a rheological model
consisting of one Kelvin–Voigt (KV) element with two parameters, by means of transient tests on a HDPE pipe. In the following papers, Pezzinga (2000, 2002) and Pezzinga et al. (2014) calibrated the
same rheological model by means of a micro-genetic algorithm. The calibration procedure is based on the maximization of a function evaluated as the inverse of the sum of the squared residuals (RSS)
between simulated and experimental data. The numerical model is obtained by the integration of the governing equations with the method of the characteristics (MOC).
Covas et al. (2005) used the experimental data collected from a PE pipe-rig to calibrate a rheological model composed of a different number of KV elements, fixing some of the parameters and varying
the others. The calibration is based on the minimization of the RSS by means of a Levenberg–Marquardt algorithm. A sensitivity analysis on the RSS suggests that the optimal results are obtained with
five KV elements and five calibration parameters. The need for a proper choice of the time step in the MOC model is discussed.
Soares et al. (2011) used a genetic algorithm to calibrate different rheological models on data coming from tests carried out in an experimental hydraulic circuit composed of PVC pipes. Numerical MOC
models implementing different rheological models, with up to three KV elements, are calibrated minimizing the RSS by means of both a genetic and a Levenberg–Marquardt algorithm. The algorithms are
used in series, since the genetic algorithm is considered a good tool for exploring the parameter space and avoiding local minima, but not accurate enough for the precise determination of the global minimum. The optimal result is obtained with one KV element, fixing some of the parameters.
Although different choices are made in the available literature with regard to model complexity, the number of involved parameters is not considered as a parameter itself. In other words, the complexity level of the used viscoelastic models is not investigated as a variable of the problem and determined by means of an objective and rational criterion.
In this paper, several aspects of the calibration of the viscoelastic parameters are investigated. In the first part, the implementation of the viscoelastic elements in a frequency domain model is
described. The choice of the optimization function and algorithm is discussed, giving some general remarks regarding the calibration procedure. A method for the estimation of the appropriate number
of the viscoelastic model parameters is then presented. The proposed approach is applied to synthetic data, obtained by corrupting the numerical results of an implemented viscoelastic model with white noise, and then to experimental data, obtained by tests carried out on an HDPE pipe. The obtained results and the original contributions of the paper are summarized in the conclusions.
The rheology of a pipe material can be described by introducing simple components, corresponding to elementary behaviors, and combining them to describe more complex behaviors. A linear viscoelastic material, being in between a linear elastic and a linear viscous material, can be modeled as a combination of elementary linear elastic (spring) and linear viscous (dashpot) elements. The combination of a spring and a dashpot in parallel is the so-called KV element. It can be analytically described as:

$$\sigma_K = E_K \varepsilon_K + \eta_K \frac{d\varepsilon_K}{dt}$$

where the first and second terms on the right represent the spring and the dashpot, respectively. In this equation, $\sigma_K$ are the stresses and $\varepsilon_K$ are the strains, $E_K$ denotes the Young's modulus of the spring and $\eta_K$ the viscosity of the dashpot, $t$ is the time, while the subscript $K$ denotes the model.
A KV element coupled in series with a spring is usually referred to as the standard linear solid (S1). More complex systems can be obtained by putting in series with the spring not just a single KV element but a series of m KV elements (Sm). In this case, it is:

$$\sigma = E_0 \varepsilon_0, \qquad \sigma = E_k \varepsilon_k + \eta_k \frac{d\varepsilon_k}{dt}, \qquad \varepsilon = \varepsilon_0 + \sum_{k=1}^{m} \varepsilon_k$$

where $\sigma$ is the stress common to all the elements in series, $\varepsilon$ is the total strain, $k = 1, \ldots, m$ denotes the KV elements, and the subscript 0 denotes the spring in series with the KV elements.
Many authors (e.g., Ghilardi & Paoletti 1986, 1987; Pezzinga & Scandura 1995; Covas et al. 2005; Lee et al. 2009; Duan et al. 2012; Keramat et al. 2012; Pezzinga 2014; Evangelista et al. 2015) have
presented the implementation of S rheological components in MOC numerical models, to relate hoop stresses and strains in the pipe. Other authors have presented the implementation of the same
rheological components in frequency domain integration models (e.g., Duan et al. 2010a, 2012; Vítkovský et al. 2011; Lee et al. 2013; Capponi et al. 2017a; Ferrante & Capponi 2017b). With respect to
the time domain, the frequency domain integration is faster and easier when the variation in time at one section of the system is sought. The frequency domain integration requires a linearization of
the governing equations but limited to the second order of the steady-friction resistance term, when the boundary conditions are linear (Capponi et al. 2017c). Since the viscoelastic term in the
governing equations can be represented as a convolution in time (Weinerowska-Bords 2015) it does not require a linearization and, due to the well-known properties of Fourier and Laplace transforms of
such integrals, it is easier to handle in the frequency domain than in the time domain. Similarly, the unsteady-friction term is a convolution integral in the time domain as well (Weinerowska-Bords 2015) and can be easily implemented in frequency domain models. With reference to the considered conditions and to the results presented in the following, the
effects of viscoelasticity prevail over those introduced by the friction term (Duan et al. 2010a, 2012). Hence, for the sake of simplicity, and considering that the main purpose of this paper is the comparison of the viscoelastic models, a frequency domain numerical model without an unsteady-friction term is used.
For the pipe material rheological model Sm, the total hoop strain, ε, is the sum of the spring strain, ε₀, and of the strain associated with the remaining part of the model. Due to the series arrangement, the stress applied to the spring is the same as that applied to the remaining part of the model. Under this and other common assumptions (Wylie & Streeter 1993; Chaudhry 2014), the Fourier transform of the linearized equations governing the one-dimensional transient flow can be obtained, where ω is the angular frequency. The dependent variables h, q, and e are the Fourier transforms of the pressure head, H, the flow, Q, and the pipe hoop strain, ε, respectively. In these equations, the capacitance, C, the inertance, L, and the resistance, R, are introduced, where f is the friction factor, g is the gravitational acceleration, and A and D are the pipe cross-sectional area and diameter. Only the strain component of the spring, ε₀, contributes to the evaluation of the wave speed, a, while the term f(ω) takes into account the characteristics of the remaining part of the rheological model, i.e., the introduced KV elements. Combining these equations with the equations of the rheological model, two symmetric equations in the dependent variables h and q can be derived by simple algebraic manipulations.

The function f(ω) depends on the chosen rheological model of the pipe material (Ferrante & Capponi 2017b). In the fifth column of Table 1 the formulae are given for the S models, for different values of m. In these formulae, the pipe wall thickness, the pipe constraint coefficient, and the water density are also involved. In the same table the model parameters are also specified. Since all the used models require a spring in series with the rest of the element arrangement, the Young's modulus of this spring, E₀, should be added to the list either directly or indirectly, i.e., considering the wave speed a as a further parameter.
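One common frequency-domain form of the generalized KV behavior is the complex creep compliance J*(ω) = J₀ + Σ Jₖ/(1 + iωτₖ); whether this matches the exact f(ω) expressions of Table 1 is an assumption, but it illustrates how each KV element enters as a frequency-dependent term. The numerical values below are illustrative only.

```python
def kv_compliance(omega, J0, kv):
    """Complex creep compliance J*(omega) of a spring (compliance
    J0 = 1/E0) in series with Kelvin-Voigt elements.
    kv is a list of (J_k, tau_k) pairs, with tau_k = eta_k / E_k."""
    J = complex(J0, 0.0)
    for J_k, tau_k in kv:
        J += J_k / (1.0 + 1j * omega * tau_k)  # contribution of one element
    return J

# Illustrative compliances and retardation times
elements = [(2.2e-12, 0.02), (2.2e-11, 0.2), (2.2e-10, 2.0)]
J_static = kv_compliance(0.0, 3.3e-10, elements)  # omega = 0: all elements relax
J_fast = kv_compliance(1.0e9, 3.3e-10, elements)  # high omega: spring only
```

At zero frequency the full (long-term) compliance is recovered, while at high frequency the KV terms vanish and only the series spring remains, consistent with the wave speed depending on E₀ alone.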
Table 1

Model | Sketch | N_p | Parameters | f(ω)
EL | – | 1 | E_0 | 0
S1 | – | 3 | E_0, E_S1, η_S1 |
S2 | – | 5 | E_0, E_S1, η_S1, E_S2, η_S2 |
… | … | … | … | …
Sm | – | 2m + 1 | E_0, E_S1, η_S1, …, E_Sm, η_Sm |
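The parameter count in Table 1 follows directly from the arrangement: one modulus for the series spring plus a (modulus, viscosity) pair per KV element. A trivial sketch:

```python
def n_parameters(m):
    """Free parameters of an Sm model: E0 for the series spring plus an
    (E_k, eta_k) pair for each of the m Kelvin-Voigt elements."""
    return 2 * m + 1

counts = {f"S{m}": n_parameters(m) for m in range(1, 6)}
# m = 0 degenerates to the purely elastic (EL) model with E0 only.
```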
The integration of the governing equations for the case of a reservoir-pipe-valve (RPV) system yields the expression in Equation (6) (Wylie & Streeter 1993; Ferrante & Brunone 2003), where Z is the impedance function at the pipe downstream end, that is, just upstream of the valve, and L is the length of the pipe.
In Equation (6), the pipe length, L, is introduced as a parameter, without any need of further space discretization or of compatibility conditions between space and time grids. If a reduced number of measurement sections is used in the simulation, the large amount of calculations needed by the time domain integration, at each time step and for each node of the grid, can be strongly reduced by the frequency domain integration. Since for the calibration procedures the dependent variables are evaluated at only a few sections, frequency domain models are much more efficient than their time domain counterparts. The reliability and the accuracy of frequency domain solutions depend on other parameters (Lee & Vítkovský 2010; Lee 2013).
With respect to the approximations introduced by the steady-friction term linearization, Capponi et al. (2017c) have shown that, with reference to the test conditions of this investigation, their effects are negligible when compared to those due to viscoelasticity. Furthermore, the computational burden is reduced by about two orders of magnitude with respect to the time domain (e.g., Capponi et al. 2017a, 2017b; Ferrante & Capponi 2017b). Hence, considering that the increase in the simulation efficiency justifies the introduced approximations, the frequency domain model is used in the following.
The values of the parameters of the numerical model can be evaluated by means of a calibration procedure, i.e., by the comparison of the results of the numerical model with the experimental data. For
a given model, the values of the parameters are varied to minimize the distance between the model and the actual phenomenon that produced the experimental data. Three different aspects can be
considered: (i) the definition of a measure for the distance between the chosen model and the actual phenomenon, (ii) the algorithm to minimize such a distance, and (iii) the definition of a measure
to compare the distance from the actual phenomenon of models with a different number of parameters.
The measure of the distance between a model and the actual phenomenon
Several functions can be used to measure the distance between the actual phenomenon and the model, or at least the distance between the pressure head values acquired during a transient and the corresponding simulated values, H, produced by a model (Savic et al. 2009). The calibration procedures aim to estimate the parameter values that minimize or maximize the value of this function, defined as the optimization function.
For transients in pressurized pipe systems, the sum of the squares of the residuals, RSS, can be used as the optimization function (e.g., Covas et al. 2005; Jung & Karney 2008; Savic et al. 2009):

$$\mathrm{RSS} = \sum_{i=1}^{N} \left( H_i - \hat H_i \right)^2$$

where $i$ denotes the sample order, $N$ is the total number of samples, and $H_i$ and $\hat H_i$ are the measured and simulated values, respectively. Other functions based on RSS can also be used, such as the inverse (Pezzinga 2014; Pezzinga et al. 2014):

$$F = \frac{1}{\mathrm{RSS}}$$

or a weighted RSS (Kapelan et al. 2003):

$$\mathrm{RSS}_w = \sum_{i=1}^{N} w_i \left( H_i - \hat H_i \right)^2$$

where $w_i$ are the weights.
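The three objective functions just described can be sketched in a few lines; the measured and simulated head series below are illustrative placeholders.

```python
def rss(h_meas, h_sim):
    """Sum of squared residuals between measured and simulated heads."""
    return sum((hm - hs) ** 2 for hm, hs in zip(h_meas, h_sim))

def inverse_rss(h_meas, h_sim):
    """Objective maximized in Pezzinga-style calibrations."""
    return 1.0 / rss(h_meas, h_sim)

def weighted_rss(h_meas, h_sim, w):
    """RSS with per-sample weights w_i (Kapelan-style)."""
    return sum(wi * (hm - hs) ** 2
               for wi, hm, hs in zip(w, h_meas, h_sim))

# One residual of 1 m on the last sample:
h_meas, h_sim = [1.0, 2.0, 3.0], [1.0, 2.0, 4.0]
```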
The Nash & Sutcliffe (1970) coefficient, NS:

$$\mathrm{NS} = 1 - \frac{\sum_{i=1}^{N} \left( H_i - \hat H_i \right)^2}{\sum_{i=1}^{N} \left( H_i - \bar H \right)^2}$$

where $\bar H$ is the mean value of $H$, is the most used efficiency index in hydrology when measured and forecasted data are compared, and it has also been used for transient tests in pressurized pipes (Ferrante et al. 2015; Ferrante & Capponi 2017a).
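A minimal sketch of the NS coefficient as defined above:

```python
def nash_sutcliffe(h_meas, h_sim):
    """NS = 1 - RSS / sum((H_i - mean(H))^2)."""
    mean_h = sum(h_meas) / len(h_meas)
    rss = sum((hm - hs) ** 2 for hm, hs in zip(h_meas, h_sim))
    spread = sum((hm - mean_h) ** 2 for hm in h_meas)
    return 1.0 - rss / spread

# A perfect model gives NS = 1; always predicting the mean gives NS = 0.
```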
The optimization algorithm and the comparison of calibrated models with a different number of parameters
Various algorithms can be used to minimize the optimization function in the parameter space. Optimization by trial and error is not a reliable and reproducible process, while optimization by brute force, i.e., by the direct scrutiny of the function on a grid of the parameter space, is numerically cumbersome and can be used only for efficient models with a reduced number of parameters. A genetic algorithm and two other nonlinear minimization techniques, i.e., the Nelder–Mead and the Levenberg–Marquardt algorithms, were used for the presented calibration procedure. All the mentioned
algorithms are available as libraries of interpreters and compilers. A freeware collection of minimization functions is provided in Octave (Eaton et al. 2015).
The minimization of the optimization function brings the chosen model as close as possible to the actual phenomenon, but it does not provide enough information about the relative distance of different models from the actual phenomenon, at least when the comparison is among calibrated models with a different number of parameters. In this case, the measure of the relative distance cannot be based solely on the residual analysis and on RSS but needs also to consider the number of the parameters involved.
The Akaike criterion is a rigorous model selection criterion based on the Kullback–Leibler information (Burnham & Anderson 2003). It provides a measure of the relative distance between models and the unknown actual phenomenon that takes into account the number of the parameters. In the case of least squares estimation with normally distributed residuals it can be written as:

$$\mathrm{AIC} = N \ln\left(\frac{\mathrm{RSS}}{N}\right) + 2K \qquad (12)$$

where $N$ is the total number of samples and $K$ is the number of estimated parameters. The trade-off between the minimization of RSS and the minimization of the number of the parameters, $K$, that is, between underfitting and overfitting, is expressed by the balance of the two terms of Equation (12). The measure provided by the AIC is relative, in the sense that it can only be used to compare different models in a chosen set of models fitting the same data. It does not provide an absolute value for just one model and cannot be used to compare models fitted to different experimental data.

The model assessed to be the best among the others in a set is the one with the minimum value of AIC, $\mathrm{AIC}_{\min}$. For all the other fitted models, the larger the value of $\Delta_i = \mathrm{AIC}_i - \mathrm{AIC}_{\min}$, the less plausible it is that they can be considered as the best model. As a rule of thumb (Burnham & Anderson 2003), all the models with $\Delta_i \le 2$ still have substantial support to be estimated as the best, models with $4 \le \Delta_i \le 7$ have considerably less support, while models with $\Delta_i > 10$ have essentially no support.
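A small sketch of the least-squares AIC and the Δᵢ rule of thumb; the RSS values and sample count below are hypothetical, chosen so that a tiny RSS gain does not justify two extra parameters.

```python
import math

def aic_ls(rss_value, n, k):
    """AIC for least-squares fits with normally distributed residuals:
    n * ln(RSS / n) + 2k, with k the number of estimated parameters."""
    return n * math.log(rss_value / n) + 2 * k

def support(delta):
    """Burnham & Anderson rule of thumb for Delta_i = AIC_i - AIC_min."""
    if delta <= 2:
        return "substantial"
    if delta <= 7:
        return "considerably less"
    return "essentially none"    # the strict rule states Delta_i > 10

# Hypothetical fits to n = 2000 samples: going from 7 to 9 parameters
# shaves RSS only from 20.02 to 20.00, which the penalty term outweighs.
a3 = aic_ls(20.02, 2000, 7)   # S3-like model
a4 = aic_ls(20.00, 2000, 9)   # S4-like model
```

Here the S4-like fit has the larger AIC despite its smaller RSS, which is exactly the overfitting penalty the criterion is meant to impose.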
It is worth noting that Equation (12) depends on the value of RSS. If the parameter estimation procedure is carried out perfectly, the obtained value of RSS for a chosen model is the minimum. Hence, the efficiency and reliability of the calibration algorithm in reaching the minimum of the objective function plays a crucial role in the comparison of the models. This aspect is of great importance for the fitting of the viscoelastic models in the case of diagnosis by means of transients, because of the low gradients of the optimization functions in the parameter space region close to the minimum. This flat region implies that the sensitivities of the optimization function with respect to the parameters are low and, hence, that the minimization algorithm has to be efficient, since large variations in the parameters correspond to small variations in the optimization function values.
To verify the reliability of the calibration procedure and of the proposed criterion of model comparison, a synthetic data set was generated by means of a numerical model based on Equation (6). Two RPV systems (Figure 1), with pipes of a total length, L, of 102.58 m and diameter, D, of 96.8 mm, were considered, made of two different pipe materials, similar to HDPE. For the first material, A, the rheology was considered completely defined by an S3 model with a = 353.73 m/s, KV moduli of 4.500 × 10^9 Pa, 4.500 × 10^10 Pa, and 4.500 × 10^11 Pa, and a common viscosity of 9.000 × 10^9 Pa s. As a result, the KV elements were characterized by the retardation times τ1 = 0.020 s, τ2 = 0.200 s, and τ3 = 2.000 s. The second material, B, was still defined by an S3 model and had the same value of a as the first one, but a common KV modulus of 4.500 × 10^10 Pa and viscosities of 9.00 × 10^8 Pa s, 9.00 × 10^9 Pa s, and 9.000 × 10^10 Pa s. As a result, the KV elements were characterized by the same retardation times as the previous model, i.e., τ1 = 0.020 s, τ2 = 0.200 s, and τ3 = 2.000 s.
The steady-friction term was evaluated during the simulation with the Blasius formula for turbulent flow in smooth pipes. The pressure head variations in time, or pressure signals, generated in the
two systems with the two different pipe materials, were then corrupted with a white noise. The same generated noise was used for both signals, with a zero mean and a standard deviation of 10^−1 m, to
simulate the acquisition by an actual pressure transducer. The sample variance of the generated random noise was 1.005 × 10^−2 m^2.
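Corrupting a signal with zero-mean Gaussian noise of standard deviation 10^−1 m and checking the resulting sample variance can be sketched as follows; the seed and sample size are arbitrary choices, so the variance only approximates σ² = 10^−2 m², much as the paper's realization gave 1.005 × 10^−2 m².

```python
import random

random.seed(42)                      # reproducible noise realization
noise = [random.gauss(0.0, 0.1) for _ in range(20000)]   # sigma = 1e-1 m

mean = sum(noise) / len(noise)
sample_var = sum((x - mean) ** 2 for x in noise) / (len(noise) - 1)
# sample_var is close to, but not exactly, sigma^2 = 1e-2 m^2
```

In a full reproduction, each noise sample would be added to the corresponding simulated pressure head value before calibration.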
Assuming that the generated synthetic signal, affected by the superimposed random noise, represents the truth and that the model based on the implemented S3 contains all and only the actual governing
equations of the phenomenon, a blind calibration procedure was applied as in cases where the actual model is unknown. Since, in a real case, the number of the parameters is also unknown,
models with a different number of KV elements and parameters were considered.
The calibration procedure for the five models (S1, S2, S3, S4, and S5) and the two pipe materials (A and B) required the use of a nonlinear algorithm to minimize RSS. The first 20 seconds of the pressure signal were considered for the evaluation of RSS. The upper and lower bounds of 10^12 and 10^6 were used for the viscoelastic parameters. A genetic algorithm, with populations of 200 individuals, a number of populations equal to the number of the parameters, and up to 100 generations, produced values of RSS always greater than 3.0, far from the known solution. As a consequence, the heuristic Nelder–Mead algorithm was used instead, with a minimum tolerance of 10^−16 on the function and on the parameter variations and a maximum number of 1,000 function evaluations. Since references to the use of the Levenberg–Marquardt algorithm can be found in the literature (Covas et al. 2005; Soares et al. 2011), a check was made to verify that the Nelder–Mead solution was the same as that of the Levenberg–Marquardt algorithm, with the same tolerances. At the end of the calibration procedure, the optimal set of parameters and the minimized value of RSS were available for each combination of model and material.
In Figure 2 the synthetic signals are compared to those simulated implementing the viscoelastic models S1, S2, S3, S4, and S5 and considering the system made of the two materials, A and B (Figure 2
(a) and 2(b), respectively). Only small differences can be appreciated between the synthetic signals and those simulated by S1 and S2, while other simulated signals overlap for both materials.
Differences among results of the models S1, S2, and S3 are much more evident in Figure 3, where the absolute values of the functions of Equation (5) are plotted for the pipe materials A and B (
Figure 3(a) and 3(b), respectively). The results of the models S4 and S5 were very close to those of S3 and are not plotted.
The wave speed was included as a parameter in the calibration procedure. The calibrated values of a were between 353.2 and 354.1 m/s, and hence with differences of less than ±0.5 m/s with respect to the value used to generate the synthetic data.

The minima reached for RSS decrease with the number of KV elements, m, and tend towards the sample variance of the added random noise.
Figure 4 presents the calibration results in the parameter space. Due to their physical significance, the compliance J = 1/E and the retardation time τ = η/E are chosen instead of E and η. Dashed lines and solid circles denote the true solutions.
With reference to the τ values, we can conclude that, for both materials, the values of S1 and S2 fall in the range of variation of the true values. For the model S3, the values corresponding to the obtained minimum are very close to the true ones in the case of material A (Figure 4(a)), while for material B both J and τ are clearly far from the true ones (Figure 4(b)). For S4 and S5, the missing solutions overlap with others or are outside the limits of the logarithmic axis.
No clear correlation can be indicated between the calibrated values of τ and the maneuver duration (about 0.15 s), the pipe characteristic time (0.58 s), or the duration of the synthetic signals used for the calibration (20 s).

Figure 4(a) shows an interesting correlation between the estimated τ and J values, which leads the calibration to end along the line defined by the true solutions in the parameter space.
The variation of AIC with the number m of the KV elements of the used models is shown in Figure 5. While RSS is monotonically decreasing with m, the minima of AIC are obtained for m = 3, corresponding to the value of the true model for both materials, A and B. The models S1 and S2 are characterized by a large Δ and hence can reasonably be excluded if compared to S3. The obtained values of Δ, close to 4 and 7 for models S4 and S5, respectively, suggest that these models are significantly more distant from the true model than S3.
The crosses in Figure 5 denote the theoretical values obtained by assuming RSS/N = 1.005 × 10^−2 m^2 in Equation (12), i.e., considering that the differences between calibrated and actual data are completely explained by the random noise. A calibration that does not reach the theoretical minimum produces a higher value of AIC and affects the measure of the distance from the true model. In this sense, a reliable calibration procedure is crucial. Since hollow circles and crosses coincide in Figure 5, the used algorithms, with the given tolerances and settings, can be considered sufficiently accurate for the aims of the comparison.
To reduce the number of the parameters, some authors (Covas et al. 2005; Soares et al. 2011) prefer to fix the values of τ and calibrate the others. To assess this strategy, five reduced S models, with fixed values of τ, were also considered in the set of model candidates, with m ranging from 1 to 5. The fixed values of τ were the first m values of 0.01, 0.10, 1.00, 10.00, and 100.0 s. In this case, the value of AIC obtained for the reduced S2 model, with fixed values of τ1 = 0.01 s and τ2 = 0.1 s, was less than that obtained by the best calibration of the S3 model estimating all the parameters. This result suggests that, following this strategy, the number of parameters could be further decreased.
The calibration procedure based on the Nelder–Mead algorithm and the model comparison methodology verified on the synthetic data were applied to the experimental data to determine the optimal number
of the KV elements needed to model transients in an actual HDPE pipe.
The experimental apparatus
The tests were carried out at the Water Engineering Laboratory of the University of Perugia, Italy, on the same RPV system used to generate the synthetic data, with the same and D. The pipe was a
HDPE DN110 PN10, according to UNI EN 12201 and UNI EN ISO 15494. An electromagnetic flowmeter was used to measure the discharge in the steady-state initial conditions, with an accuracy of 0.2% of the
measured value. A piezo resistive pressure transducer, PT, with a full scale of 6 bar and an accuracy of 0.15% of the full scale was used to measure the pressure upstream of the maneuver valve.
At the downstream end of the pipe, a hand-operated ball valve, DV, discharging into the air, and a remotely controlled butterfly valve, MV, immediately upstream of DV (Figure 1), were used to establish an initial discharge of 3.64 l/s and to produce the maneuver of Equation (13), with the same parameters used for the generation of the synthetic data. The obtained variation in time of the pressure head at PT is shown in Figure 6.
The calibration of the models by the experimental data
Six models, differing in the number m of the KV elements, were implemented and calibrated to the experimental data. The Nelder–Mead algorithm was used to evaluate the parameters of the models minimizing the value of RSS, with the same tolerances and limits used for the synthetic data. The comparison of the simulated and experimental data in Figures 6 and 7 confirms the results obtained for the calibrations by synthetic data. While the differences in the pressure signal are small, differences in the function f(ω) are evident, at least for models S1, S2, and S3. Results of the S4, S5, and S6 models, not shown in the figures, are very close to the results of the S3 model.
The values of the parameters obtained for the six models by the calibration procedure are shown in Figure 8.
As expected, the values of RSS obtained at the end of the calibration for each model decrease monotonically with the number of used KV elements.
The values of AIC shown in Figure 9 allow assessment of whether the decrease is justified by the increase in the model complexity. Based on AIC, S3 is the model closest to the actual phenomenon that generated the experimental data, among the set of models containing S1, S2, S3, S4, S5, and S6. No other model in the set shows a Δ ≤ 2 and hence there is no substantial support to estimate another model in the set as the best one.

Assuming that the value of RSS of the model S3 corresponds to that of the noise of the experimental data, the theoretical values of AIC can be estimated for S4, S5, and S6. The comparison of these values (crosses) with the data obtained by the calibration confirms that the increase of the number of the parameters, for m > 3, cannot be justified by the decrease in RSS. On the other hand, the decrease in RSS from S1 and S2 to S3 justifies the increase in the number of the parameters.
The results shown allow some remarks on the calibration of viscoelastic models for transient tests in polymeric pipes. Several issues need to be considered to achieve reliable results.
The first issue is related to the choice of the viscoelastic model. Although other models could be considered (Ferrante & Capponi 2017b), the most common choice is a series of one or more KV elements
with a spring. A set of viscoelastic models with up to six KV elements is considered in this paper.
The parameters of the models can be estimated by means of transient tests, since creep and relaxation tests on specimens do not provide reliable results. This leads to the second issue, that is to
define a measure of the estimation reliability of a set of model parameters. This measure is usually expressed as a function of the differences between simulated and experimental data. The estimate
of the parameters requires the minimization of this function. The sum of the squares of the residuals is one of the most common choices and is used in this paper. Unfortunately, this function
presents wide flat regions in the parameter space close to the optimal solutions and this leads to the third issue, that is the choice of the algorithm to be used to estimate the parameters
minimizing the distance between simulated and measured data. The most used algorithms in the literature are also applied in this paper to synthetic and experimental data. The genetic algorithm with the shown settings is confirmed to be neither reliable nor efficient in reaching the global optimal solution. In contrast, the Nelder–Mead algorithm is able to attain the optimal solution in a reasonable computational time, even for low tolerances. This algorithm is simpler and more robust than the Levenberg–Marquardt algorithm, at least for the considered cases.
All the models in a set, with a different number of viscoelastic elements, provide optimal solutions, and this leads to the fourth issue, that is, the comparison of models with a different number of parameters. The need for a trade-off between the reduction of the squared sum of the residuals and the increase in the number of the parameters can be handled using different approaches. The Akaike
information criterion is used in this paper to introduce into the calibration procedure the number of the parameters as a further parameter, although other techniques can also be used for an
efficient investigation of the behavioral model space (Beaumont et al. 2002; Sadegh & Vrugt 2014). The use of this criterion is tested on synthetic data and is applied to estimate the best model for
experimental data. The obtained results show that the criterion can be successful and that a viscoelastic model with three KV elements, S3, is the best estimated model in the set of models with up to
six KV elements.
The calibrations presented in this paper for the experimental data did not show any correlation of τ with the duration of the maneuver, the duration of the acquisition, or the characteristic time of the system. For the synthetic data, these durations are taken into account in other model components and are not correlated to the parameters in the used viscoelastic model equations. The difficulties in the calibration procedure suggest that the sensitivity of the sum of the squared residuals to τ is low: variations of one order of magnitude of τ can cause small variations in the sum of the squared residuals. Probably due to this low sensitivity, the strategy of reducing the number of parameters by fixing values of τ can be successful, as for the shown application of a reduced S2 model. The reduction of the number of the parameters by fixing τ values with a different order of magnitude (e.g., 0.01, 0.10, 1.00, 10.00 s) is based on a subjective choice of the fixed values and unfortunately does not allow a general conclusion to be drawn here.
With reference to the applications, different model uses, such as the preliminary design of a simple pipe or the diagnosis of a complex system, require different levels of accuracy. In this paper, the obtained minima are comparable to the accuracy of a pressure transducer, as required by the diagnosis of pressurized pipe systems by transient tests. Furthermore, the initial 20 s of the experimental pressure signal are used to evaluate RSS. This duration should also be changed according to the aims of the calibration and of the model use, since the design of a simple system could require the simulation of the first characteristic time only, while a long-term analysis could require longer durations than that used here.
The implementation of the rheology of a polymeric material pipe in the transient governing equations requires the estimation of the viscoelastic parameters. The most common way to define such
parameters is by means of a calibration procedure on experimental data. The main contribution of this paper is to address the problem of overfitting of viscoelastic models for transient analysis.
The implementation of the viscoelastic component in a frequency domain model reduces the computational time and speeds up the calibration. Furthermore, the presence of the viscoelastic parameters in
the governing equations is confined to a single complex function. The analysis of this function for the considered calibrations shows that it could be used to highlight the differences from one model
to another. The standard procedure used, based on the minimization of the sum of the squares of the residuals by the combined use of a genetic algorithm and the Nelder–Mead algorithm, shows that
the Akaike criterion can be used to define the optimal number of parameters.
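As a hedged sketch (the paper's exact formulation is not reproduced here), a common least-squares form of the Akaike criterion, AIC = n ln(SSR/n) + 2k, can be used to rank calibrations with different numbers of parameters. All values below are purely illustrative:

```python
import math

def aic(ssr, n, k):
    """Akaike information criterion for a least-squares fit:
    ssr = sum of squared residuals, n = number of samples,
    k = number of calibrated parameters."""
    return n * math.log(ssr / n) + 2 * k

# Hypothetical calibration results, mapping parameter count k -> SSR.
# The 6-parameter model barely improves the fit over the 4-parameter one.
candidates = {2: 1.00, 4: 0.60, 6: 0.599}
n = 2000  # e.g. 20 s of pressure signal sampled at 100 Hz

best_k = min(candidates, key=lambda k: aic(candidates[k], n, k))
# best_k == 4: the marginal SSR gain of the 6-parameter model
# does not offset its extra-parameter penalty.
```

Lower AIC is better; the criterion trades goodness of fit against parameter count, which is exactly what guards against overfitting.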
Since all the algorithms and methodologies used are readily available, this criterion, or other available criteria, can easily be used to avoid overfitting. Additional investigations are needed to
confirm some of the results shown and to assess the proper choices regarding different or more sophisticated viscoelastic models, optimization functions, calibration algorithms, and criteria for the
comparison of different models.
Automated reasoning
Automated reasoning is an area of computer science and mathematical logic dedicated to understanding different aspects of reasoning. The study of automated reasoning helps produce computer programs
that allow computers to reason completely, or nearly completely, automatically. Although automated reasoning is considered a sub-field of artificial intelligence, it also has connections with
theoretical computer science, and even philosophy.
The most developed subareas of automated reasoning are automated theorem proving (and the less automated but more pragmatic subfield of interactive theorem proving) and automated proof checking
(viewed as guaranteed correct reasoning under fixed assumptions). Extensive work has also been done in reasoning by analogy, induction, and abduction.
Other important topics include reasoning under uncertainty and non-monotonic reasoning. An important part of the uncertainty field is that of argumentation, where further constraints of minimality
and consistency are applied on top of the more standard automated deduction. John Pollock's OSCAR system^[1] is an example of an automated argumentation system that is more specific than being just
an automated theorem prover.
Tools and techniques of automated reasoning include the classical logics and calculi, fuzzy logic, Bayesian inference, reasoning with maximal entropy, and a large number of less formal ad hoc
techniques.
Early years
The development of formal logic played a big role in the field of automated reasoning, which itself led to the development of artificial intelligence. A formal proof is a proof in which every logical
inference has been checked back to the fundamental axioms of mathematics. All the intermediate logical steps are supplied, without exception. No appeal is made to intuition, even if the translation
from intuition to logic is routine. Thus, a formal proof is less intuitive, and less susceptible to logical errors.^[2]
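As a small illustrative sketch (not from the article), here is what such a machine-checked formal proof looks like in Lean 4, for the statement that the sum of two even numbers is even; every inference is verified by Lean's kernel against the definitions of `Nat`, with no appeal to intuition (lemma names such as `Nat.mul_add` may vary between Lean versions):

```lean
-- If m = 2a and n = 2b, then m + n = 2(a + b), so m + n is even.
theorem even_sum_even (m n : Nat)
    (hm : ∃ a, m = 2 * a) (hn : ∃ b, n = 2 * b) :
    ∃ c, m + n = 2 * c :=
  match hm, hn with
  | ⟨a, ha⟩, ⟨b, hb⟩ => ⟨a + b, by rw [ha, hb, Nat.mul_add]⟩
```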
Some consider the Cornell Summer meeting of 1957, which brought together a large number of logicians and computer scientists, as the origin of automated reasoning, or automated deduction.^[3] Others
say that it began before that with the 1955 Logic Theorist program of Newell, Shaw and Simon, or with Martin Davis’ 1954 implementation of Presburger’s decision procedure (which proved that the sum
of two even numbers is even).^[4] Automated reasoning, although a significant and popular area of research, went through an "AI winter" in the eighties and early nineties. The field subsequently
revived, however. For example, in 2005, Microsoft started using verification technology in many of their internal projects and is planning to include a logical specification and checking language in
their 2012 version of Visual C.^[3]
Significant contributions
Principia Mathematica was a milestone work in formal logic written by Alfred North Whitehead and Bertrand Russell. Principia Mathematica - also meaning Principles of Mathematics - was written with a
purpose to derive all or some of the mathematical expressions, in terms of symbolic logic. Principia Mathematica was initially published in three volumes in 1910, 1912 and 1913.^[5]
Logic Theorist (LT) was the first ever program developed in 1956 by Allen Newell, Cliff Shaw and Herbert A. Simon to "mimic human reasoning" in proving theorems and was demonstrated on fifty-two
theorems from chapter two of Principia Mathematica, proving thirty-eight of them.^[6] In addition to proving the theorems, the program found a proof for one of the theorems that was more elegant than
the one provided by Whitehead and Russell. After an unsuccessful attempt at publishing their results, Newell, Shaw, and Simon reported in their 1958 publication, The Next Advance in Operations
Research:
"There are now in the world machines that think, that learn and that create. Moreover, their ability to do these things is going to increase rapidly until (in a visible future) the range of
problems they can handle will be coextensive with the range to which the human mind has been applied."^[7]
Proof systems
Boyer-Moore Theorem Prover (NQTHM)
The design of NQTHM was influenced by John McCarthy and Woody Bledsoe. Started in 1971 at Edinburgh, Scotland, this was a fully automatic theorem prover built using Pure Lisp. The main aspects of
NQTHM were:
1. the use of Lisp as a working logic.
2. the reliance on a principle of definition for total recursive functions.
3. the extensive use of rewriting and "symbolic evaluation".
4. an induction heuristic based on the failure of symbolic evaluation.^[12]
HOL Light
Written in OCaml, HOL Light is designed to have a simple and clean logical foundation and an uncluttered implementation. It is essentially another proof assistant for classical higher-order logic.
Coq
Developed in France, Coq is another automated proof assistant, which can automatically extract executable programs from specifications, as either Objective CAML or Haskell source code.
Properties, programs and proofs are formalized in the same language, called the Calculus of Inductive Constructions (CIC).^[14]
Automated reasoning has been most commonly used to build automated theorem provers. Oftentimes, however, theorem provers require some human guidance to be effective and so more generally qualify as
proof assistants. In some cases such provers have come up with new approaches to proving a theorem. Logic Theorist is a good example of this. The program came up with a proof for one of the theorems
in Principia Mathematica that was more efficient (requiring fewer steps) than the proof provided by Whitehead and Russell. Automated reasoning programs are being applied to solve a growing number of
problems in formal logic, mathematics and computer science, logic programming, software and hardware verification, circuit design, and many others. The TPTP (Sutcliffe and Suttner 1998) is a library
of such problems that is updated on a regular basis. There is also a competition among automated theorem provers held regularly at the CADE conference (Pelletier, Sutcliffe and Suttner 2002); the
problems for the competition are selected from the TPTP library.^[15]
A quadratic equation will have a solution based on the value of its discriminant. The term inside a radical symbol (square root) of a quadratic formula is said to be a discriminant. The discriminant
in Math is used to find the nature of the roots of a quadratic equation. The value of the discriminant will determine if the roots of the quadratic equation are real or imaginary, equal or unequal.
Discriminant Definition in Math
The discriminant of a polynomial is a function of its coefficients which gives an idea about the nature of its roots. For a quadratic polynomial ax^2 + bx + c, the discriminant is given by
the following equation:
D = b^2 – 4ac
For a cubic polynomial ax^3 + bx^2 + cx + d, its discriminant is expressed by the following formula
D= b^2c^2−4ac^3−4b^3d−27a^2d^2+18abcd
Similarly, for polynomials of higher degrees, the discriminant is always a polynomial function of the coefficients. For higher-degree polynomials, the discriminant equation is significantly
larger: the number of terms in the discriminant increases exponentially with the degree of the polynomial. For a fourth-degree polynomial, the discriminant has 16 terms; for a fifth-degree
polynomial, it has 59 terms; and for a sixth-degree polynomial, there are 246 terms.
Discriminant Formula
In algebra, the quadratic equation is expressed as ax^2 + bx + c = 0, and the quadratic formula is represented as \(x =\frac{-b\pm \sqrt{b^{2}-4ac}}{2a}\).
Therefore, the discriminant formula for the general quadratic equation is
Discriminant, D = b^2 – 4ac
where
a is the coefficient of x^2
b is the coefficient of x
c is a constant term
Discriminant of a Polynomial
The discriminant of a quadratic polynomial is the portion of the quadratic formula under the square root symbol: b^2-4ac, that tells whether there are two solutions, one solution, or no solutions to
the given equation.
The discriminant is a homogeneous polynomial in the coefficients; it is quasi-homogeneous in the coefficients because it is also a homogeneous polynomial in the roots. The discriminant of a
polynomial of degree n is homogeneous of degree 2n − 2 in the coefficients.
Relationship Between Discriminant and Nature of Roots
The discriminant value helps to determine the nature of the roots of the quadratic equation. The relationship between the discriminant value and the nature of roots are as follows:
• If discriminant > 0, then the roots are real and unequal
• If discriminant = 0, then the roots are real and equal
• If discriminant < 0, then the roots are not real (we get a complex solution)
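This classification can be sketched in a few lines of Python (an illustrative helper; `cmath` is used so the negative-discriminant case yields the complex roots directly):

```python
import cmath

def quadratic_roots(a, b, c):
    """Classify and solve ax^2 + bx + c = 0 using the discriminant."""
    d = b**2 - 4*a*c
    if d > 0:
        nature = "real and unequal"
    elif d == 0:
        nature = "real and equal"
    else:
        nature = "not real (complex)"
    # cmath.sqrt handles negative discriminants without raising an error
    r1 = (-b + cmath.sqrt(d)) / (2*a)
    r2 = (-b - cmath.sqrt(d)) / (2*a)
    return d, nature, (r1, r2)

d, nature, roots = quadratic_roots(3, 2, 5)
# d == -56, so the roots of 3x^2 + 2x + 5 are complex
```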
Discriminant Example
Example 1: Determine the discriminant value and the nature of the roots for the given quadratic equation 3x^2+2x+5.
Given: The quadratic equation is 3x^2+2x+5
Here, the coefficients are:
a = 3
b = 2
c = 5
The formula to find the discriminant value is D = b^2 – 4ac
Now, substitute the values in the formula
Discriminant, D = 2^2 – 4(3)(5)
D = 4 – 4 (15)
D = 4 – 60
D = -56
The discriminant value is -56, which is less than zero.
I.e., -56 < 0
Therefore, the roots are not real.
Hence, the quadratic equation has no real roots.
Example 2: Determine the discriminant value and the nature of the roots for the given quadratic equation 2x^2+8x+8.
Given: The quadratic equation is 2x^2+8x+8
Here, the coefficients are:
a = 2, b = 8 and c = 8
The formula to find the discriminant value is D = b^2 – 4ac
Now, substitute the values in the formula
Discriminant, D = 8^2 – 4(2)(8)
D = 64 – 4 (16)
D = 64 – 64
D = 0
The discriminant value is 0
I.e., D = 0
Therefore, the roots are real and equal
Hence, the quadratic equation has a double root (repeated roots)
How Are Fantasy Points Calculated?
Are you tired of losing in fantasy sports because you don’t know how to calculate fantasy points?
Fear not, dear reader! With this guide, you’ll be able to accurately calculate fantasy points and dominate your league in no time.
So sit back, relax, and let’s dive into the wonderful world of fantasy points calculation.
Basic Formula for Fantasy Points Calculation:
Let’s start with the basics. In most sports, the formula for calculating fantasy points is pretty straightforward.
You simply add up the points earned by each player based on their performance in the game. For example, in football, a player may earn 6 points for a touchdown, 1 point for every 10 rushing or
receiving yards, and 1 point for every reception.
By adding up all of a player’s points, you get their total fantasy points for that game.
Here’s an example. Let’s say your starting quarterback throws for 250 yards, 2 touchdowns, and 1 interception in a game.
Using the basic formula for fantasy points in football, their total fantasy points for that game would be:
250 passing yards = 10 fantasy points (1 point for every 25 passing yards)
2 touchdowns = 12 fantasy points (6 points for each touchdown)
1 interception = -2 fantasy points
Total fantasy points = 20
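The arithmetic above can be sketched in a few lines (an illustrative helper, not an official calculator; point values vary by league):

```python
def qb_fantasy_points(pass_yds, pass_tds, interceptions):
    """Standard scoring used in the example above:
    1 pt per 25 passing yards, 6 pts per TD, -2 per interception."""
    return pass_yds / 25 + 6 * pass_tds - 2 * interceptions

points = qb_fantasy_points(250, 2, 1)
# 250/25 + 2*6 - 1*2 = 10 + 12 - 2 = 20.0
```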
Adjusting Scoring for Different Leagues and Formats:
But what if you’re playing in a league with different scoring rules or formats?
Don’t worry, we’ve got you covered. Here are a few examples of how to adjust scoring for different leagues or formats:
• In PPR (points per reception) leagues, you would add an additional point for each reception made by a player.
• In some leagues, you may earn bonus points for long plays or for reaching certain yardage milestones (e.g. 100 rushing yards, 200 passing yards).
• In daily fantasy sports, the scoring system may be more complex and may vary depending on the specific site or platform being used.
By understanding the unique scoring rules and formats of your league, you can adjust your fantasy points calculations accordingly and gain a competitive advantage over your opponents.
Advanced Fantasy Points Calculations:
Ready to take your fantasy points calculation skills to the next level?
Let’s explore some advanced methods for calculating fantasy points in more complex sports.
In basketball, for example, you may earn points for different statistical categories such as points scored, rebounds, assists, steals, and blocks.
Each category may have a different point value assigned to it, and some leagues may even award negative points for missed shots or turnovers.
In baseball, fantasy points are typically calculated based on a player’s performance in different statistical categories such as hits, home runs, RBIs, and stolen bases.
However, some leagues may also take into account a player’s defensive performance, such as their fielding percentage or number of assists.
By mastering these more complex methods for calculating fantasy points, you can gain a deeper understanding of the game and potentially gain an edge over your opponents.
Common Mistakes in Fantasy Points Calculation:
As with any type of calculation, there are some common mistakes that people often make when calculating fantasy points. Here are a few to watch out for:
• Forgetting to include certain scoring categories or bonuses in your calculation.
• Failing to deduct points for penalties or negative plays (such as interceptions or missed shots).
• Relying too heavily on automated scoring systems without double-checking the results.
To avoid these mistakes, it’s important to carefully review the scoring rules and formats of your league and to double-check your calculations before submitting your final lineup.
Tools for Fantasy Points Calculation:
There are many tools available to help you calculate fantasy points more easily and accurately. Some popular options include:
• Fantasy sports apps or websites: Many fantasy sports platforms provide built-in tools for calculating fantasy points based on your league’s specific scoring rules and formats. These can be a
convenient and reliable way to ensure you’re calculating points correctly.
• Excel spreadsheets: For more advanced users, creating your own Excel spreadsheet to calculate fantasy points can provide even greater flexibility and customization options. There are also many
pre-made templates available online that you can use as a starting point.
• Online calculators: If you’re looking for a quick and easy way to calculate fantasy points without needing to set up a full spreadsheet or app, there are many online calculators available that
can help. These typically allow you to enter player stats manually and then automatically calculate the total fantasy points earned.
Calculating fantasy points can seem daunting at first, but with a little practice and understanding of the rules and formats of your league, it can become second nature.
By mastering these skills, you’ll be able to make more informed roster decisions and potentially gain a competitive advantage over your opponents.
So go forth, calculate those fantasy points, and dominate your league like a boss!
Folding patterns
Take a 30 cm long strip of paper and fold it in two.
Then unfold it and ask students: How many parts are there?
Point out that the two parts are the same size (because they match) so that each part is a half of the strip. Also count the number of folds.
Fold the strip again and then fold it in two a second time.
Ask the students to predict the number of parts and folds.
Unfold the strip and check that there are four equal parts and three folds.
Ask the students to identify quarters.
Repeat this process at least twice more and make a table like this.
│ No. of times folded │ No. of equal parts │ Name of each part │ No. of folds │
│ 1 │ 2 │ half │ 1 │
│ 2 │ 4 │ quarter │ 3 │
│ 3 │ │ │ │
│ 4 │ │ │ │
Now ask students to describe and explain the patterns in the table. You can download Patterns in the Table which gives a completed table and some discussion points.
Note especially the doubling pattern in the second column and the way in which unitary fractions are named.
Able students could be asked to predict the 10th row of the table.
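For teachers who want to check students' predictions, the table's pattern can be sketched in a few lines of Python (an illustrative aid, not part of the activity): each fold doubles the number of equal parts, and the number of creases is always one less than the number of parts.

```python
# Build the rows of the table: (times folded, parts, name of part, folds)
rows = []
for times_folded in range(1, 5):
    parts = 2 ** times_folded        # doubling pattern in column two
    folds = parts - 1                # creases: one fewer than the parts
    rows.append((times_folded, parts, f"1/{parts}", folds))

# Predicting the 10th row, as suggested for able students:
tenth_row = (10, 2 ** 10, f"1/{2 ** 10}", 2 ** 10 - 1)
# (10, 1024, '1/1024', 1023)
```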
See Growing fractions for another pattern activity involving fractions.
Quiz on Dynamic Programming
Here's a quiz to test your dynamic programming skills
Which two properties must all dynamic programming problems have?
Optimal substructure and divisibility
Overlapping subproblems and optimal substructure
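As a quick refresher (an illustrative sketch, not part of the quiz), both properties show up in the textbook memoized Fibonacci:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    # Overlapping subproblems: fib(n-1) and fib(n-2) share sub-calls,
    # so each value is computed once and then reused from the cache.
    # Optimal substructure: fib(n) is built directly from the answers
    # to smaller instances of the same problem.
    return n if n < 2 else fib(n - 1) + fib(n - 2)

fib(50)  # instant with memoization; the naive version is exponential
```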
Differential equations on a torus
From Encyclopedia of Mathematics
flows on a torus
A class of dynamical systems (cf. Dynamical system). An example is the flow generated by all translations of a torus (considered as a Lie group) by the elements of some one-parameter subgroup of the
torus. In "angular" or "cyclic" coordinates on the torus counted modulo 1 (which may be considered as ordinary coordinates in a Euclidean space $ \mathbf R ^ {n} $ from which the torus $ T ^ {n} $ is
obtained as a quotient group modulo the integer lattice $ \mathbf Z ^ {n} $), this flow is described as follows: Within time $ t $ a point $ x = ( x _ {1} \dots x _ {n} ) $ transforms to the point
$$ \tag{1 } T ^ {t} x = x + t \omega , $$
where $ \omega = ( \omega _ {1} \dots \omega _ {n} ) $ is the set of so-called basic frequencies. All trajectories of this flow are quasi-periodic functions (cf. Quasi-periodic function) of time;
their properties are determined by the arithmetical properties of the basic frequencies. Thus, the trajectories are periodic if all $ \omega _ {i} $ are integer multiples of the same number. In the
other extreme case, when the $ \omega _ {i} $ are linearly independent over $ \mathbf Z $( i.e. no non-trivial linear combination $ \sum k _ {i} \omega _ {i} $ with integers $ k _ {i} $ is equal to
zero), each trajectory is dense in the torus (an irrational winding of the torus), while the flow is ergodic (with respect to Haar measure on $ T ^ {n} $; Haar measure is naturally obtained from
Lebesgue measure in $ \mathbf R ^ {n} $ under factorization by $ \mathbf Z ^ {n} $ and is preserved under the translations $ T ^ {t} $) and is even strictly ergodic; its spectrum is discrete.
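As a hedged illustration of (1) (not part of the original article), the following Python sketch samples the flow $ T ^ {t} x = x + t \omega $ modulo 1 on the two-dimensional torus with rationally independent frequencies, and counts how many cells of a coarse grid the orbit enters; the density of the irrational winding shows up as the orbit visiting essentially every cell. The golden-ratio frequency is an arbitrary illustrative choice.

```python
import math

def orbit(x0, omega, t_max, dt=0.1):
    """Sample the flow T^t x = x + t*omega (mod 1) on the 2-torus."""
    pts = []
    t = 0.0
    while t <= t_max:
        pts.append(tuple((xi + t * wi) % 1.0 for xi, wi in zip(x0, omega)))
        t += dt
    return pts

# omega = (1, alpha) with alpha irrational: no non-trivial integer
# combination k1*1 + k2*alpha vanishes, so the trajectory is dense.
alpha = (math.sqrt(5) - 1) / 2
pts = orbit((0.0, 0.0), (1.0, alpha), t_max=500.0)

# Count the cells of a 10x10 grid that the sampled orbit has entered;
# with these settings it reaches essentially every cell.
cells = {(int(10 * x), int(10 * y)) for x, y in pts}
```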
Such flows often appear in various problems. Thus, in the case of integrable Hamiltonian systems (cf. Hamiltonian system), "typical" motions with compact support (i.e. remaining in a finite domain of
the phase space) lead to such flows (the corresponding tori are level manifolds of the system of first integrals [8]). Such invariant tori with irrational windings are also frequently encountered in
Hamiltonian systems sufficiently close to integrable ones (this problem is closely connected with small denominators).
The possible types of qualitative behaviour of trajectories of flows without equilibrium positions were fully clarified for a two-dimensional torus by H. Poincaré [1], A. Denjoy [2] (see also [3])
and H. Kneser [4] (for a modified presentation see [5], [6]). (Of all closed surfaces, only the torus and the Klein surface admit such flows, while the study of flows on the latter surface can be
reduced, in principle, to the study of flows on the torus which is its two-sheeted covering surface.)
Figure: d032070a
The following is known about these flows. If there is a doubly-connected domain (a "Kneser ring") on the surface, bounded by two closed trajectories, while inside that domain the
trajectories spiral away from one of them and spiral towards the other in the opposite direction (see Fig.), then the qualitative behaviour of the trajectories on the surface resembles that of the
trajectories in a bounded domain in the plane. In particular, all non-periodic trajectories in both time directions tend to become periodic. The case (which is possible only on a torus) when there
are no Kneser rings is more interesting; this is equivalent to the existence of a closed transversal $ L $( i.e. a closed curve which is nowhere tangent to the trajectories) intersecting each
trajectory an infinite number of times. On $ L $ the Poincaré return map of $ S $ — the homeomorphism sending a point $ x \in L $ into the first intersection point of the positive semi-trajectory
through $ x $ with $ L $ — is defined. The cascade $ \{ S ^ {n} \} $ on $ L $ is characterized by its Poincaré rotation number $ \alpha $( see, for example, [3]; it is partly dependent on the
specific choice of $ L $; the asymptotic cycle is a completely-invariant characteristic of the original flow [14]). According to Denjoy's theorem, if $ S $ is of class $ C ^ {2} $( which is always
the case if the transversal and the initial flow on the torus are both suitably smooth), while $ \alpha $ is irrational, then $ S $ is topologically conjugate to the rotation of the circle through
the angle $ 2 \pi \alpha $, i.e. it is possible to introduce a cyclic coordinate $ x $ on $ L $ so that $ S $ can be represented as $ x \rightarrow x + \alpha $ $ \mathop{\rm mod} 1 $. (If $ S $ is
of class $ C ^ {1} $, this is not necessarily true [2].) The partition of the torus into trajectories will then be the same, up to a homeomorphism, as in the case described by (1) (except for the
velocity of motion along the trajectories). The smoothness of the coordinate change, which is ensured by Denjoy's theorem, will depend (in addition to the smoothness of $ S $) on the arithmetical
properties of the rotation number $ \alpha $. For almost-all $ \alpha $ it follows from $ S \in C ^ {n} $, $ n \geq 3 $, that the coordinate change belongs to the class $ C ^ {n-} 2 $[9], but such a
change need not be smooth for rotation numbers which can be very rapidly approximated by rational numbers, even if the transformation $ S $ is analytic [7].
If the original flow on $ T ^ {2} $ has an integral invariant, a Kneser ring cannot exist, and $ S $ is smoothly conjugate to a rotation of the circle, irrespective of whether $ \alpha $ is rational
or irrational. Thus, in the absence of equilibrium positions there exist on the torus cyclic coordinates $ x , y $ of the same smoothness class as the flow itself, and the form of the flow in these
coordinates becomes
$$ \tag{2 } \dot{x} = f ( x , y ) ,\ \dot{y} = \alpha f ( x , y ) ,\ f ( x ,\ y ) > 0 $$
(where $ \alpha $ is the rotation number corresponding to the closed transversal $ x = \textrm{ const } $). If $ f $ is sufficiently smooth and if $ \alpha $ displays suitable properties, the flow
(2) can be reduced to (1) (with $ n = 2 $ and $ \omega = ( 1 , \alpha ) $) with the aid of some diffeomorphism, but in the general case this is not always possible, and even ergodic properties of the
flow (2) may differ from those of the flows (1) (a continuous spectrum is possible, but mixing is not possible in the smooth case). See [10]; the missing proofs are produced in [11], [12], [13], [15].
[1] H. Poincaré, "Mémoire sur les courbes définiés par une équation différentielle" J. Math. Pures Appl. , 1 (1885) pp. 167–244 (Also: Oeuvres, Vol.1)
[2] A. Denjoy, "Sur les courbes définies par les équations différentielles à la surface du tore" J. Math. Pures Appl. (9) , 11 : 4 (1932) pp. 333–375
[3] E.A. Coddington, N. Levinson, "Theory of ordinary differential equations" , McGraw-Hill (1955) pp. Chapts. 13–17
[4] H. Kneser, "Reguläre Kurvenscharen auf Ringflächen" Math. Ann. , 91 : 1–2 (1924) pp. 135–154
[5] B.L. Reinhart, "Line elements on the torus" Amer. J. Math. , 81 : 3 (1959) pp. 617–631
[6] A. Aepply, L. Markus, "Integral equivalence of vector fields on manifolds and bifurcation of differential systems" Amer. J. Math. , 85 : 4 (1963) pp. 633–654
[7] V.I. Arnol'd, "Small denominators I. On maps of the circumference onto itself" Transl. Amer. Math. Soc. , 46 (1965) pp. 213–284 Izv. Akad. Nauk SSSR Ser. Mat. , 25 : 1 (1961) pp. 21–86
[8] V.I. Arnol'd, "On a theorem by Liouville concerning integrable problems" Sibirsk. Mat. Zh. , 4 : 2 (1963) pp. 471–474 (In Russian)
[9] M.R. Herman, "Conjugaison C.R. Acad. Sci. , 283 : 8 (1976) pp. 579–582
[10] A.N. Kolmogorov, "On dynamical systems with an integral invariant on the torus" Dokl. Akad. Nauk SSSR , 93 : 5 (1953) pp. 763–766 (In Russian)
[11] S. Sternberg, "On differential equations on the torus" Amer. J. Math. , 79 : 2 (1957) pp. 397–402
[12] M.D. Shklover, "Classical dynamical systems on the torus with continuous spectrum" Izv. Vyssh. Uchebn. Zaved. Mat. , 10 (1967) pp. 113–124 (In Russian)
[13] A.V. Kochergin, "On the absence of mixing in special flows over the rotation of a circle and in flows on a two-dimensional torus" Soviet Math. Dokl. , 13 (1972) pp. 949–952 Dokl. Akad. Nauk SSSR
, 205 : 3 (1972) pp. 515–518
[14] S. Schwartzman, "Asymptotic cycles" Ann. of Math. , 66 : 2 (1957) pp. 270–284
[15] D.V. Anosov, "On an additive functional homology equation connected with an ergodic rotation of the circle" Math. USSR-Izv. , 7 : 6 (1973) pp. 1257–1271 Izv. Akad. Nauk. SSSR , 37 : 6 (1973) pp.
Instead of [8] (the existence of invariant tori for integrable Hamiltonian systems) one may also consult § 49 in [a1]. As to the appearance of flows on tori one may add the fact that every compact
minimal subset of a $ C ^ {2} $- flow on a two-dimensional manifold is either a point, a periodic orbit or equal to the full manifold, in which case the manifold is homeomorphic to a torus; see [a5].
A good introduction into the classical theory of flows on tori can be found in [a3]. Instead of [7] and [9] (smoothness of coordinate transformation) one may consult [a4]. For the results concerning
flows with an integral invariant (including the results of [10] and [12]), see Chapt. 16 of [a2].
[a1] V.I. Arnol'd, "Mathematical methods of classical mechanics" , Springer (1978) (Translated from Russian)
[a2] I.P. [I.P. Kornfel'd] Cornfel'd, S.V. Fomin, Ya.G. Sinai, "Ergodic theory" , Springer (1982) (Translated from Russian)
[a3] C. Godbillon, "Dynamical systems on surfaces" , Springer (1983) (Translated from French)
[a4] M.R. Herman, "Sur la conjugaison différentiable des difféomorphismes du cercle à des rotations" Publ. Math. IHES , 49 (1979) pp. 5–234
[a5] A.J. Schwartz, "A generalization of a Poincaré–Bendixson theorem to closed two dimensional manifolds" Amer. J. Math. , 85 (1963) pp. 453–458
This article was adapted from an original article by D.V. Anosov (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098.
Bilateral Export Demand Function of India An Empirical Analysis
1. Introduction
International trade has played an important role in the development of both developed and developing countries as countries are dependent on each other due to uneven distribution of scarce resources.
The role of international trade in the development of a country is undoubtedly undeniable. Perhaps more firmly established than the relationship between exports and growth is that between
fluctuations in exports and cyclical variations in economic activity. If shifts in demand were the major factor, then a study of the demand function for exports would provide a fuller understanding
of the causes of economic fluctuations [1].
India’s exports to the whole world showed an increasing trend during the period 1993:Q1 to 2008:Q2, rising from 5682.49 million USD in 1993:Q1 to 56433.8 million USD in 2008:Q2, a roughly tenfold
increase. India’s exports to the whole world began falling after 2008:Q2 (Figure 1). While exports fell temporarily in the aftermath of the Global Financial Crisis, which started in the USA and
gradually spread across the world, the value of exports has remained essentially flat since 2011. Raissi and Tulin (2015) [2] noted that global factors have adversely affected India’s exports, as
potentially did the appreciation of the real effective exchange rate. It is believed that exports play an important role in the growth of the economy. Since India’s exports to the whole world are no
longer showing an increasing trend, it becomes important to examine whether India’s exports to the USA are increasing, as the USA has been India’s major trading partner for a couple of decades.
India's exports to the USA fell from 6210.94 million USD in 2008:Q4 to 4199.36 million USD in 2009:Q2. Though the USA remained a major export partner of India, exports fell from 2008:Q4 because of the Global Financial Crisis. That the volume of exports fell only after some lag might be due to a consumer response lag and a weak consumer confidence index. Since 2009:Q2, India's exports to the USA have picked up and shown an increasing trend (Figure 2).
It is believed that export growth should play an important role in economic growth in developing countries. Given the importance of export expansion for growth and balance-of-payments concerns, the worrisome fact is that the percentage share of India's exports to the USA in its exports to the whole world declined from 22.52% in 1999 to 13.38% in 2015 (Figure 3). Hence, this is a serious concern for the Indian economy, as the USA is India's major export partner.
Figure 1. India's exports to the whole world (millions of USD). Source: Authors' calculations.
Figure 2. India's exports to the USA (millions of USD). Source: Authors' calculations.
The trade policy announced in 2014 by the government envisaged total exports of $900 billion by 2020; however, this would be possible only if exports grew by 40% per annum from then on [3].
Another reason why the study of a country's export demand function is important is that many developing countries are on the brink of balance-of-payments problems.
Keeping in view the importance of exports to the economic growth of a country, this paper attempts to examine the determinants of India's bilateral export demand function in the short run and the long run. As the US is India's major trading partner, we examine the India-USA bilateral trade relationship at the aggregate level. Rahman et al. [4] pointed out that if bilateral responses to exchange rates and the other variables that determine trade flows differ, then aggregate trade flows yield misleading results. In addition, if the response of trade flows to the real exchange rate varies by country with the nature of the trade, disaggregation will give a clearer picture.
Given this backdrop, the study has two objectives. The first is to identify the determinants of India's bilateral export demand function in the short run and the long run. The second is to identify which of the macroeconomic variables plays the most important role in affecting exports. Finally, we hope that the conclusions of this paper will help policy makers formulate trade policies and stimulate scholars to conduct further research that would benefit policy makers in the future.
The paper is organized as follows. Following the introduction in Section 1, Section 2 reviews selected literature in this area. Section 3 discusses the model specification. Section 4 describes the methodology used. The variables and data sources are presented in Section 5. Section 6 presents the empirical results, followed by conclusions and policy implications in Section 7.
Figure 3. Percentage share of India's exports to the USA in its exports to the whole world. Source: Authors' calculations.
2. Review of Studies
Considering the importance of international trade to economic growth and development, especially in third-world countries, a number of empirical studies on the determinants of export demand functions have been carried out.
The contributions by Orcutt in 1950 [5] and Houthakker and Magee in 1969 [6] were noteworthy. The level of real income and price competitiveness are the main factors affecting exports. Analyzing models of export demand, Houthakker and Magee [6] found that the level of real income and price competitiveness significantly affected exports in both importing and exporting countries. Hatton [1] and Wong et al. [7] also argued that prices play an important role in determining exports for developing countries. Wong et al. [7] found a unique long-run relationship between export quantities, relative prices, real foreign income and real exchange rate variability in Malaysia. Forest and Turner [8] also confirmed the relationship between aggregate exports, world trade and the real exchange rate in their study of the USA.
The concept of the elasticity of foreign demand plays an important role in international trade research, and a major part of the research in this area over the last 20 years has been devoted to it. Exports respond significantly to changes in relative prices, as is evident from the studies by Goldstein and Khan [9]; Hossain et al. [10]; Raissi et al. [11]; Cocar [12] and Islam [13]. Goldstein and Khan [9] found that six of the eight countries they studied had relatively elastic demand and supply. Hossain et al. [10] used the Pesaran bounds tests and the Johansen cointegration tests in a study of Indonesia and found that the country's exports had highly elastic demand in the long run. Cocar [12] applied panel unit root and cointegration tests and showed that the real exchange rate elasticity of total export demand in the USA is inelastic, whereas the income elasticity is relatively elastic. Narayan et al. [14] re-estimated the import and export demand functions for Mauritius and South Africa using time series data and found a long-run relationship between import demand, income and prices for both countries. Islam [13] found that GDP, the exchange rate and the inflation rate are important determinants of Bangladesh's foreign trade. Bolaji et al. [15] confirmed a uni-directional causal link from exports to growth in Nigeria.
Non-price factors such as patent applications and the government and business characteristics of a country are also important for understanding international competitiveness. Verheyen [16] showed that these factors also have significant positive effects on export demand.
While most of the literature concentrates on developed economies, little research has so far examined the bilateral export demand function of an emerging country like India. For instance, Raissi et al. [11] studied the short-run and long-run price and income elasticities of Indian exports and found that international relative-price competitiveness, world demand and energy shortages had considerable bearing on Indian exports. Takeshi [17] conducted an empirical analysis of the aggregate export demand function in post-liberalization India. The empirical results indicate that all estimated coefficients are statistically significant with the expected signs and that the absolute value of the coefficient is largest for the world price, followed by world income and domestic income. Further, the results reveal that price competitiveness has improved India's export market. Moreover, the statistically significant world income elasticity suggests that a global economic boom may increase India's exports, whereas a global recession likely has an adverse impact on the Indian economy through the trade channel.
We believe these studies are not sufficient, so far as an emerging country like India is concerned, to reach any definite conclusion, and any new study will add to the literature. With this in mind, the present study investigates India's bilateral export demand function by applying the ARDL model to the latest data available. We believe our study will be of use to policy makers and will open scope for future research.
3. Model Specification
A country's export demand is affected by two important factors: 1) foreign income, an indicator of the economic activity and purchasing power of the trading partner, and 2) the terms of trade (or competitiveness effect), which depend on the ratio of the respective price levels and the nominal exchange rate. The econometric model presented below shows a standard long-run relationship between real exports, the nominal exchange rate, the relative price and foreign income.
$Ex_t = \beta_0 + \beta_1 Fy_t + \beta_2 Erate_t + \beta_3 Rp_t + u_t$ (1)
where $Ex_t$ = real exports of India at time t; $Fy_t$ = foreign income (foreign economic activity) at time t; $Erate_t$ = the nominal exchange rate at time t (rupees per USD); $Rp_t$ = the relative price (a measure of competitiveness) at time t; and $u_t$ = the normally distributed error term with all the classical properties.
As exports depend on the foreign country's income, we expect a positive sign for the coefficient of foreign income. However, if the rise in the foreign country's real income is due to an increase in its production of import-substitute goods, its imports (India's exports) may decline as its income increases, in which case the coefficient of foreign income would be negative. An increase in the nominal exchange rate (i.e. a depreciation) makes domestic goods cheaper in other countries, which increases the competitiveness of domestic goods in international markets and raises demand for them, so we expect the coefficient of the exchange rate to be positive. A fall in a country's relative price makes domestic goods more competitive compared with foreign goods, which increases exports and decreases imports, and vice versa. Therefore, we expect the coefficient of the relative price to be negative.
The above equation includes the constant (intercept) term $\beta_0$ because there will be some exports even if all other variables are zero. $\beta_1$, $\beta_2$ and $\beta_3$ are the elasticity coefficients with respect to $Fy_t$, $Erate_t$ and $Rp_t$ respectively. $u_t$ is the residual term, which captures the effects on exports of variables not included in the model.
The exchange rate variable used here is the nominal exchange rate, not the real exchange rate, even though it is the real exchange rate that is normally understood to affect the volume of trade. We have split the real exchange rate into its two components, the nominal exchange rate and the relative price level. The reason for analyzing these components separately is that the response of trade volumes to real exchange rate changes may differ according to whether those changes are due to changes in the nominal exchange rate or in the relative price level.
4. Research Methodology
Before proceeding to the empirical estimation, each macroeconomic variable is tested for its stationarity properties and order of integration using the Augmented Dickey-Fuller (ADF) and Phillips-Perron (PP) tests. Subsequently, the bounds testing approach to the ARDL model developed by Pesaran, Shin and Smith [18] is used to check whether the variables are cointegrated. This approach has a number of advantages over the Johansen and Juselius (1990) [19] cointegration technique. Firstly, ARDL requires a smaller sample size than the Johansen technique [20]. Secondly, the Johansen technique requires the variables to be integrated of the same order, whereas the ARDL approach does not: it can be applied whether the variables are purely I(0), purely I(1) or mutually cointegrated. Thirdly, the ARDL approach provides unbiased long-run estimates with valid t-statistics even if some of the model's regressors are endogenous [21] [22]. Fourthly, it assesses the short-run and long-run effects of one variable on another simultaneously while keeping them separate (Bentzen and Engsted, 2001) [23]. Finally, different variables can be assigned different lag lengths in the ARDL model.
The following ARDL model is estimated to check for the presence of cointegration and to investigate the relationship between real exports, foreign income, the nominal exchange rate and the relative price:

$\Delta \ln Ex_t = \alpha_0 + \alpha_1 \ln Ex_{t-1} + \alpha_2 \ln Fy_{t-1} + \alpha_3 \ln Erate_{t-1} + \alpha_4 \ln Rp_{t-1} + \sum_{i=1}^{n} \alpha_{5i} \Delta \ln Ex_{t-i} + \sum_{i=0}^{n} \alpha_{6i} \Delta \ln Fy_{t-i} + \sum_{i=0}^{n} \alpha_{7i} \Delta \ln Erate_{t-i} + \sum_{i=0}^{n} \alpha_{8i} \Delta \ln Rp_{t-i} + \pi_1 EC_{t-1} + e_t$ (2)
The existence of a cointegrating relationship between the variables in the ARDL specification above is examined with an F (Wald) test of the joint null hypothesis of no cointegration against the alternative of cointegration. The calculated F-statistic is compared with two sets of critical values computed by Pesaran, Shin and Smith [18] for a given level of significance: the lower bound assumes all regressors are I(0), and the upper bound assumes they are all I(1). If the computed F-statistic exceeds the upper critical value, the null hypothesis of no cointegration is rejected. If it falls below the lower critical value, the null of no cointegration cannot be rejected. If it falls between the lower and upper bounds, the test is inconclusive. If the null hypothesis of no cointegration is rejected, the ARDL model is estimated to study the short-run dynamics. The error correction term measures the speed with which a deviation from the long-run equilibrium is corrected in each period; it is expected to be negative and statistically significant.
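The three-way decision rule described above can be written as a small helper. The critical bounds passed in below (2.79 and 3.67) are the 5% values tabulated by Pesaran, Shin and Smith for k = 3 regressors with a restricted intercept and no trend; treat them as illustrative and check the appropriate table for any other specification.

```python
# The bounds-test decision rule as a small helper function.
def bounds_decision(f_stat, lower, upper):
    """Classify a Wald/F statistic against the I(0)/I(1) critical bounds."""
    if f_stat > upper:
        return "cointegrated (reject H0 of no cointegration)"
    if f_stat < lower:
        return "not cointegrated (cannot reject H0)"
    return "inconclusive"

# The paper reports F = 4.6528, which exceeds the upper bound, so the
# null of no cointegration is rejected.
print(bounds_decision(4.6528, 2.79, 3.67))
```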
Finally, regression diagnostic tests are performed for the ARDL models estimated as per Equation (2). The Lagrange Multiplier (LM) test is used to check whether the estimated ARDL model suffers from residual serial correlation. The White test is used to test the null hypothesis that the errors are homoscedastic and independent of the regressors, against the alternative of heteroscedasticity of unknown, general form. The Jarque-Bera (J-B) test is used to test the null hypothesis that the residuals are normally distributed. Parameter stability tests play a pivotal role in ensuring the reliability of policy simulations based on the model. To test for parameter stability, we apply the CUSUM (cumulative sum) and CUSUMQ (cumulative sum of squares) tests developed by Brown, Durbin and Evans in 1975 [24]. These tests plot the cumulative sum of recursive residuals (or of their squares) together with 5% critical lines; movement outside the 5% critical lines indicates parameter instability, while statistics that stay within them indicate that the model is stable enough for forecasting and policy analysis.
5. Variables Defined and Data Sources
We generate monthly real exports by dividing India's monthly exports to the USA (in USD) by the unit value of export prices in the respective month. Since the unit value of export prices is not available monthly, we interpolated the yearly series into a monthly one using the quadratic method, which is widely used by researchers. Although in the Indian context export volumes are expressed both in millions of USD and in rupee terms, we prefer USD because the USD fluctuates less than the rupee and is treated as a safe-haven currency in international markets. Unit value of export indices are treated as export price indices and used as a deflator to compute the volume of exports from the value of exports. Economic theory suggests that foreign income is an important determinant of exports; we use USA GDP to represent foreign income. The relative price (a measure of competitiveness) is measured by the ratio of India's unit value of exports to the USA CPI. Finally, the study uses the bilateral rupee-dollar nominal exchange rate, as it is widely believed that the exchange rate also plays an important role in affecting a country's exports and imports.
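The quadratic interpolation of the annual unit-value series into a monthly one can be sketched as below. The annual index values used here are invented placeholders, not the paper's data; the point is only that fitting a quadratic through neighbouring annual observations lets one read off monthly values while reproducing the annual anchors exactly.

```python
# Minimal sketch: quadratic interpolation of an annual unit-value index
# to a monthly frequency (placeholder data, not the paper's series).
import numpy as np

years = np.array([2000.0, 2001.0, 2002.0])
unit_value = np.array([100.0, 108.0, 119.0])   # hypothetical index levels

# Fit a quadratic through the three annual observations ...
coeffs = np.polyfit(years, unit_value, deg=2)
poly = np.poly1d(coeffs)

# ... and evaluate it at 25 monthly points spanning the two years.
months = np.linspace(2000.0, 2002.0, 25)
monthly_uv = poly(months)

# A quadratic through three points is exact, so the interpolant
# reproduces the annual anchors.
print(np.round(poly(years), 6))
```

In practice one would slide this fit along overlapping three-year windows (or use a spline) to cover the full 1993-2015 sample.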
The study uses quarterly data for the period 1993:Q1 to 2015:Q1, amounting to 89 observations. We chose 1993 as the starting point because the Reserve Bank of India followed a market-determined, managed floating exchange rate system from that period. The study uses secondary data collected from various sources. India's exports to the USA (in millions of USD) are taken from the Direction of Trade Statistics (DOTS), a publication of the International Monetary Fund (IMF). India's unit value of export prices, USA GDP, the USA CPI and the rupee-dollar nominal exchange rate are taken from International Financial Statistics (IFS), also an IMF publication.
6. Empirical Results
Table 1 presents the results of the ADF and PP tests of the stationarity properties of the macroeconomic variables. The optimal lag length for the ADF and PP tests for each variable is chosen on the basis of the Akaike Information Criterion (AIC). Table 1 shows that all the variables are non-stationary in levels except the exchange rate, which is stationary in levels, i.e. I(0). The remaining variables become stationary only at first differences; that is, they are integrated of order 1, or I(1). Since the variables are a mix of I(0) and I(1) and none is I(2), the bounds testing approach to the ARDL model can be implemented to check for a cointegrating relationship among them. The ARDL approach to cointegration cannot be applied if any of the variables is found to be I(2).
Table 2 shows the results of the ARDL model, which includes both lagged and current variables. The ARDL(4, 2, 2, 0) model is chosen on the basis of the Akaike Information Criterion (AIC): lag 4 corresponds to LNEX, lag 2 to LNRP, lag 2 to LNFY and lag 0 to LNERATE. After estimating the ARDL model, we proceed to the cointegration test, reported in Table 3.
The bounds testing approach to cointegration is reported in Table 3. The Wald test for the ARDL model yields an F-statistic of 4.6528. Since the computed F-statistic exceeds the upper critical bound at the 5% level of significance, the null hypothesis of no cointegration is rejected. This implies that there exists a long-run equilibrium relationship between exports, foreign income, the exchange rate and the relative price.
The long-run coefficients estimated from the ARDL model are reported in Table 4.
EX stands for Exports. Fy stands for foreign income (US income), RP stands for relative price. Erate stands for nominal exchange rate (Rupee/Dollar). Note: The critical values of Augmented Dickey
Fuller test with trend and intercept in level and first differences are −4.0533, −3.4558 and −3.1537 at 1%, 5% and 10% level of significance and the critical values of Phillips-Perron test with trend
and intercept in level and first differences are −4.0524, −3.4553 and −3.1534 at 1%, 5% and 10% level of significance. Source: Authors’ calculations.
Note: R Squared 0.9554, Adjusted R Squared 0.9492, Durbin Watson Test 1.82, Probability F statistic 0.0000. Source: Authors’ calculations.
Table 3. Results of ARDL Cointegration test (Bounds Testing to Cointegration).
Note: * denotes 5% level of significance. Source: Authors’ calculations.
Table 4. Estimated long run coefficient using the ARDL approach.
Source: Authors’ calculations.
A fall in relative prices makes domestic goods more competitive than foreign goods in the international market, so exports increase and imports fall, and vice versa; we therefore expect the coefficient of the relative price to be negative. The relative price carries a negative sign and is statistically significant, implying that a 1% decrease in relative prices increases real exports by 0.22%. Foreign income measures the economic activity and purchasing power of the trading partner, and the expected sign of its coefficient could be positive or negative. Economic theory suggests that the volume of exports to a foreign country ought to increase as the real income and purchasing power of that country rises, and vice versa: an increase in foreign income leads foreigners to import more goods from the domestic country, making the coefficient of foreign income positive. However, if the increase in foreign income is associated with increased production of import-substitute goods, the domestic country's exports will fall and the coefficient of foreign income carries a negative sign. In our empirical analysis, foreign income carries a positive sign and is statistically significant, implying that a 1% increase in foreign income increases real exports by 1.63%. The nominal exchange rate carries a negative sign and is statistically significant, which suggests that a depreciation of the nominal exchange rate would not stimulate export volumes. From Table 4, we conclude that in the long run real exports are influenced most by foreign income, followed by the nominal exchange rate and the relative price.
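Because the model is log-log, each long-run coefficient is an elasticity: the approximate percentage change in real exports per 1% change in that regressor, holding the others fixed. A back-of-the-envelope combination of the reported elasticities looks like this (the 2% and 1% shocks below are hypothetical, chosen only for illustration):

```python
# Combining the paper's reported long-run elasticities for small shocks.
elasticity = {"foreign_income": 1.63, "relative_price": -0.22}

def predicted_export_change(pct_changes, elasticity):
    """Approximate % change in real exports from small % changes in regressors."""
    return sum(elasticity[k] * v for k, v in pct_changes.items())

# E.g. foreign income up 2% while relative prices rise 1%:
change = predicted_export_change(
    {"foreign_income": 2.0, "relative_price": 1.0}, elasticity
)
print(f"{change:+.2f}% change in real exports")  # 2*1.63 - 1*0.22 = +3.04%
```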
The short-run dynamics of the ARDL model are shown in Table 5. In the short run, the relative price is negatively related to real exports: a 1% decrease in relative prices increases real exports by 1.06% at one lag. The current relative price has no impact on real exports in the short run, perhaps because of consumer response lags, production lags and the like. Foreign income has a positive impact on real exports: a 1% increase in foreign income increases real exports by 3.41% in the current period. When the foreign country's income increases, foreign consumers are better off and their enhanced standard of living allows them to demand more imports. The nominal exchange rate coefficient carries a negative sign and is statistically significant in the short run, implying that a depreciation of the Indian rupee did not promote exports during our study period. The error correction term is negative and statistically significant, providing further empirical evidence of cointegration between the variables. An ECT value of −0.46 implies that about 46% of the short-run disequilibrium between these variables is corrected every quarter.
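The adjustment speed implied by the error-correction coefficient can be turned into a half-life: if 46% of any disequilibrium is corrected each quarter, the remaining gap shrinks by a factor of 0.54 per quarter, so half the gap is closed in just over one quarter.

```python
# Half-life of a deviation from long-run equilibrium implied by ECT = -0.46.
import math

ect = -0.46
retained = 1 + ect          # fraction of the disequilibrium left each quarter
half_life = math.log(0.5) / math.log(retained)
print(f"half-life of a shock: {half_life:.2f} quarters")  # about 1.12 quarters
```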
Table 5. Short run representation of ARDL model.
Source: Authors’ calculations.
Model Robustness Check
To check the robustness of our results (and the stability of our export equation), we performed the CUSUM and CUSUM of squares tests, a serial correlation test and a heteroscedasticity test, shown in Figure 4 and Figure 5 and Table 6 and Table 7 respectively.
Figure 4 plots the CUSUM test and Figure 5 the CUSUMQ test for the ARDL(4, 2, 2, 0) model. In both figures, if the test statistic (blue line) stays within the two 5% critical lines (red), the estimated relationship is stable; if it moves outside them, the model is unstable and cannot be relied upon for policy implications. Both the CUSUM and CUSUMQ statistics lie within the 5% critical lines, so we find no evidence of major parameter instability. The stability of the estimated elasticities suggests that the model can be considered stable enough for forecasting and policy analysis.
The Breusch-Godfrey test is used to check for the presence of serial correlation in the residuals; the results are presented in Table 6. If the error term is serially correlated, the estimated OLS standard errors are invalid and the estimated coefficients are biased and inconsistent because of the presence of a lagged dependent variable on the right-hand side. The null hypothesis of the test is that there is no serial correlation in the residuals. Table 6 shows that no serial correlation exists in our data.
Table 7 presents the heteroscedasticity test. The White test is used to test the null hypothesis that the errors are homoskedastic and independent of the regressors, against the alternative of heteroskedasticity of unknown, general form. The White test in Table 7 shows that no heteroscedasticity is present in our data.
Figure 4. Cumulative sum of recursive residuals. Note: The straight lines represent critical bounds at the 5% level of significance. Source: Authors' calculations.
Figure 5. Cumulative sum of squares of recursive residuals. Note: The straight lines represent critical bounds at the 5% level of significance. Source: Authors' calculations.
Table 6. Breusch-Godfrey serial correlation LM test.
Source: Authors’ calculation.
Table 7. Heteroscedasticity test: white.
Source: Authors’ calculation.
7. Conclusions and Policy Implications
In this paper, we examine the determinants of India's bilateral export demand function during 1993:Q1-2015:Q1, using an ARDL model with real exports, foreign income, the nominal exchange rate and the relative price. In our empirical estimation, we find a long-run equilibrium relationship between real exports, foreign income, the exchange rate and the relative price. In both the short run and the long run, real exports are influenced most by foreign income, followed by the relative price. Foreign income carries a positive sign and is statistically significant, implying that a 1% increase in foreign income increases real exports by 1.63% in the long run. Likewise, the relative price carries a negative sign and is statistically significant, implying that a 1% decrease in relative prices increases real exports by 0.22% in the long run. The nominal exchange rate carries a negative sign and is statistically significant in both the short run and the long run, which suggests that a depreciation of the nominal exchange rate would not have stimulated export volumes during our study period. Hence, from a policy point of view, attempting to promote exports by depreciating the rupee will not give the desired results. Rather, policy makers should focus on controlling inflation, which would make Indian products more competitive in the international market and help boost exports. Though the foreign country's income plays an important role in affecting real exports, policy makers have no control over foreign income, as it is driven by external factors.
Although the current results provide useful information for policymakers, they should be treated with caution. The study considered only the USA, as it has topped the list of India's major trading partners for many decades. This is one limitation of the study, and it provides scope for future research: a bilateral analysis of the export demand function with India's top ten trading partners would further contribute to the existing literature.
Conflicts of Interest
The authors declare no conflicts of interest regarding the publication of this paper.
Goodbye infinity and all that infinite singularity and infinite density descriptions
As many of you are aware, I have serious problems with the application of infinity, and related infinite descriptions, to non-mathematical (reality) situations. Here is a post from 2022:
Astrophysics > Cosmology and Nongalactic Astrophysics [Submitted on 22 Apr 2022]: How the Big Bang Ends up Inside a Black Hole, by Enrique Gaztanaga. "The standard model of cosmology assumes that our Universe began 14 Gyrs (billion years) ago from a singular Big Bang creation. This can explain a vast..."
Why do you bring infinity into it? The CMB is good evidence of BB theory, which can be dated back to approx. 13.8 billion ya. I do not believe that science can support the idea of a singularity.
Any idea of division by zero (in GR), giving infinite density and temperatures, is unsupportable.
No one knows what happened at t = 0 (where the singularity would be). There are ideas (and only ideas) about what might have occurred. One idea (and it is just imaginary) is that, at t = 0, there may have been a nexus, perhaps associated with a cyclic process similar to a final 'black hole' leading to a big bang, continuing a cyclic process. Some might call this infinite, but infinity is a mathematical term which has no correspondence with reality. But no one knows what happened. Any idea of a singularity is definitely 'out'.
I am delighted to find that serious attention is being drawn to these difficulties by Open University texts published by Cambridge University Press. Here is just a start from what I have been so
pleased to read regarding cosmology, and the problems arising from division by zero and like mathematical operations and trying to apply the results to reality.
In the OU text on
Galaxies and Cosmology
we find:
. . . . . . the cosmic microwave background and the expansion of the Universe imply that there was an early phase of the history of the Universe which was characterized by high temperatures and
high densities . . . . . . A natural question to ask then is how far back towards t = 0 can we go in understanding processes in the Universe.
. . . . . . the Friedmann equation gives a model for a radiation dominated Universe that is characterized by the scale factor having a value of zero at the instant of t = 0 . . . . . . the naive
interpretation of this is that the Universe came into existence with an infinitely high temperature; the truth of the matter is that we don't really understand the physical processes in the very
early Universe
So, how early in the history of the Universe can we be confident that our physical theories really do apply? There are essentially two answers . . . . . .
The first is to say that the theories are only well tested for the ranges of physical conditions that can be explored by experiments. Thus, we have a good deal of confidence in describing the
Universe at times when the particle energies were similar to the highest values that can be imparted in large accelerator experiments .
Goodbye INFINITE temperatures and densities etcetera.
(A second approach) is to apply physical theories to conditions that never have been, and probably never will be, tested in the Earth-bound laboratory and to look for observable consequences in
Nature. Clearly, this is a more speculative approach than having to rely on 'tried and tested' physical theory.
While it might be expected that physical theories could be extrapolated to describe processes at ever increasing temperatures, it turns out that there is a limit to our theoretical understanding
of the processes of Nature . . . . . .
I am still reading further and will add as appropriate.
Last edited:
Now I need your assistance please. I am not making a statement, but asking you what is the highest temperature which has actually been achieved scientifically? I am not interested in theoretical
estimates like 10^32 deg. C. What has actually been achieved?
I have Googled extensively and the highest I have found so far is this:
4 trillion degrees C is 4 x 10^12.
I am sure you will be able to better this, as some persons have guessed (is this comment unkind or unjustified) that INFINITE temperatures are possible. Is this utter unscientific nonsense?
The reason I am asking is to question statements like this:
One second after the Big Bang - The universe was made up of fundamental particles. The universe continued to expand, but not as quickly as during inflation. As the universe cooled, the four fundamental forces in nature emerged: gravity, the strong force, the weak force and the electromagnetic force began to form. The temperature of the universe was around 10^32 Kelvin. (My emphasis).
Rather different from 4 trillion degrees C is 4 x 10^12.
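The gulf between those two figures can be made concrete with a couple of lines of arithmetic (a sketch, using both figures exactly as quoted above; at these scales the Celsius/Kelvin offset of ~273 is negligible):

```python
import math

# Theoretical early-universe figure quoted above (10^32 Kelvin)
T_theory_K = 1e32
# Highest laboratory figure found so far (~4 trillion degrees)
T_lab_K = 4e12

orders_of_magnitude = math.log10(T_theory_K / T_lab_K)
print(f"gap: about {orders_of_magnitude:.1f} orders of magnitude")
# roughly 19 orders of magnitude -- the "ranges of physical conditions
# that can be explored by experiments" stop far short of the theory
```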
Hence I consider the comment from the previous post particularly apt:
The first is to say that the theories are only well tested for the ranges of physical conditions that can be explored by experiments.
No matter what instrumentality they used, they could not have measured that kind of heat on the spot! They had to have measured, and even then still just estimated, the temperature from a distance!
The Planck (Big Bang) temperature, whatever it is or ever will be measured to be in the future, is the Big Bang (Planck) temperature! Up and out top of universe . . . and down and in bottom of
universe (including distantly in levels' layers, hyperspaces' and times' horizons to the collapsed cosmological constant of P/BB 'Mirror Horizon' within us), one and the same entity.
Last edited:
Jun 1, 2020
4 trillion degrees C is 4 x 10^12.
Yes. That should be it. Achieving greater temperatures would require more energy from the collider, which would require a much larger collider at this point, as I understand it.
I am sure you will be able to better this, as some persons have guessed (is this comment unkind or unjustified) that INFINITE temperatures are possible. Is this utter unscientific nonsense?
This might fall into a category you may wish to make... Infinitely dumb.
The evidence strongly supports that matter was born from the early nanoseconds, or earlier, perhaps just prior to when the BB theory slides from the realm of metaphysics into physics. This seems to be at a time when t = 1E-12 sec., approx.
There is only so much energy and matter in the universe, so "infinite" would not apply, though there will always be those who view the universe as being infinite in size, which seems strange given its start from a very tiny spot. For that very reason, BBT had a rough start, including Einstein's rejection of it.
IIRC, there was a recent fusion driving event that reached temperatures around 40 million degrees, which is a little greater than the laser fusion event out of Livermore a few months ago.
I find it odd so few people realize an infinite density has to have duality with an infinite void (has to double as an infinite "Abyss"), an absolute void! That there can't be anything colder in the
universe than an infinite heat where the switch flips, has flipped, somewhere along the line over and off. Everywhere being nowhere, and vice-versa, leaving a relative and localized somewhere;
Everything being nothing, and vice-versa, leaving a relative and localized something (a finite and, at once, an infinite potential (an infinite finite) . . . only. No more. No less)! Oh, well!
What is infinitely dumb is when someone can't and won't think "+1" ("+n") and/or "-1" ("-n") doesn't mean "to infinity!"
"Great spirits have always encountered violent opposition from mediocre minds." -- Albert Einstein.
The time of the universe's "Creation" wraps, circles, around to today, this universal instant in time (New Beginnings . . . an endless beginning). The constant of the Planck (Big Bang) instant!
Last edited:
Some things like indirect face slapping words, clearly meant to slap other forum members but by indirection, get awfully tiresome! ("Science begets knowledge, opinion ignorance."):
Scientists, when made, and self-make themselves, elite GODS of science, are inhumanly stupid gods in the extreme of "stupid" who create tyranny, anarchy, mayhem and murder 'en masse' "for the good of
all mankind" in their 'Brave New World'! Their arrogance against, and ignorance of, life, wisdom, freedom, liberty, knows no bounds! Is anyone on the forums (in their writings) "infinitely dumb"? Is any try to describe and understand a truly indiscernible universe "utter unscientific nonsense"? Though some obviously think others are too dumb, or too into the "magic", to be here, in my "OPINION"!!! NO ONE ALLOWED TO BE HERE AND WORK ON THEORIES THAT MIGHT JUST HAVE THE TINIEST, SLIGHTEST, GRAIN OF BREAKTHROUGH IS THAT DUMB OR NONSENSICAL!
There is such a thing as "invariant relativity." A temperature, like a speed, or other quantity or quality, might go higher and ever higher on the spot, or ever broader and/or ever deeper on the
spot, but at a certain point of uncertainty, relativity takes over what has become a treadmill to nowhere in an observer's observation or detection, and won't allow any nonlocal, nonrelative,
observer to observe what is then a nonrelative continuance on the spot. A speeder has left the observer's "observable universe," leaving behind a belief he is still in the observer's "observable
universe" but asymptotically slowing down in still going, fading, away. A possibly still climbing temperature has become locked into an "observable" upper limit, the actual reality on the spot of
climb slowing in distant view to become asymptotically "relatively" meaningless to the distant observer's own local detection.... Truly meaningless in the case of temperature.
Last edited:
At the risk of venturing into an area of my total ignorance, does this make any sense:
1. Temperature is the vibration of 'bits'
2. It needs time to 'be'
3. The vibration implies movement (over a tiny distance (?))
4. A limiting factor: the speed of light (if it applies at a quantum 'level')
5. Therefore an upper limit must exist
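Point 4 can be given a rough number (my own back-of-envelope illustration, not from the thread): setting the classical mean kinetic energy (3/2) k T of a proton equal to its rest energy m_p c^2 marks the temperature at which non-relativistic "vibration" reasoning has clearly broken down:

```python
# Back-of-envelope sketch: temperature at which the classical rms
# kinetic energy of a proton equals its rest energy, i.e. where
# non-relativistic kinetic theory stops making sense.
k_B = 1.380649e-23      # Boltzmann constant, J/K
m_p = 1.67262e-27       # proton mass, kg
c   = 2.99792458e8      # speed of light, m/s

T_breakdown = m_p * c**2 / (3 * k_B)
print(f"{T_breakdown:.2e} K")
# ~3.6e12 K -- interestingly, the same few-trillion-degree scale as
# the collider quark-gluon plasmas mentioned elsewhere in the thread,
# and nowhere near 10^32 K
```

This does not prove an upper limit exists, only that above this scale the simple "temperature = vibration speed" picture has to be replaced by relativistic statistical mechanics.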
Scientists at CERN's Large Hadron Collider may have created the world's hottest man-made temperature, forming a quark-gluon plasma that could have reached temperatures of 5.5 trillion degrees Celsius
or 9.9 trillion Fahrenheit.
As many of you are aware, I have serious problems in the application of infinity, and related infinite descriptions, to non-mathematical (reality) situations. Here is a post from 2022:
I am delighted to find that serious attention is being drawn to these difficulties by Open University texts published by Cambridge University Press. Here is just a start from what I have been so
pleased to read regarding cosmology, and the problems arising from division by zero and like mathematical operations and trying to apply the results to reality.
In the OU text on Galaxies and Cosmology we find:
Goodbye INFINITE temperatures and densities etcetera.
I am still reading further and will add as appropriate.
Nah, not a final black hole, just a plain old regular black hole, but big, that reached cosmic mass limit #3, resulting in transition from primordial matter to regular matter in a natural pulverizing big bang explosion from a single hot dark dense state into the existing open spaces of the universe. Just a natural occurrence; it happens from time to time. No hotter than any other black hole. No denser than any other black hole. Just more massive. Black holes aren't infinitely hot; they only hold the heat of the trillions of stars they took in, and upon transition back to regular matter, temperatures revert to the temperatures of the original stars before transition to primordial matter in a black hole.
Anything that might have been in the way was either pushed back or pulverised, resulting in a big bang bubble in a pushed back section of the rest of the universe, except the biggest black holes and galactic cores were moved less and partially held their relative positions in the new section of the universe. They survived the big bang, but were mostly stripped of stars. These galactic remnants became some of the drivers for very early galaxy formation and quasars very soon after our big bang.
Just as we can see that feeding or forming black holes remove regular matter from existing spaces, leave the original space behind, and safely store the primordial matter for accumulation in a black hole by denying any further access to open spaces, we can surmise that the opposite is true: namely, that big bangs replace matter into the open spaces of the universe, as compared to space itself expanding. If the universe is already everywhere and already holds all of the open spaces in existence, then expansion of space is nonsensical, because space can't expand beyond everywhere - again pointing to expansion of regular matter into existing open spaces, not expansion of space itself or expansion of the universe. And if everything occurs naturally and occurs inside the universe, then big bangs must be natural occurrences in the greater universe. And if big bangs occur from primordial matter transitioning into regular atomic matter, and the only source of primordial matter is inside black holes, then big bangs must come from black holes.
MDS: Not sure how relevant this is to infinite/infinity, but I will return to that.
Meanwhile, re: "And if big bangs occur from primordial matter transitioning into regular atomic matter, and the only source of primordial matter is inside black holes, then big bangs must come from
black holes."
Are you meaning regular atomic as baryonic? I am thinking in terms of the following Google.
Astronomers therefore use the term 'baryonic' to refer to all objects made of normal atomic matter, essentially ignoring the presence of electrons which, after all, represent only ~0.0005 of the
mass. Neutrinos, on the other hand, are (correctly) considered non-baryonic by astronomers.
Re: first "quote" above, are you suggesting a form of cyclic Universe? If so, would this be a higher dimensional version of Moebius Strip or Klein Bottle? I mean as opposed to a simple "BB>BH>BB>BH .
. . ".
To explain a little . . . . . .
To a flatlander, the surface of a sphere (viable consideration of a two-dimensional surface existing in three space dimensions) is "all there is". To a higher dimensional being, the surface of a sphere expands (compare expansion of "a universe" - this equates to the observed universe of a flatlander in time) as the radius of the sphere expands. This explains the "expands into" question by invoking a higher dimensional observer. One difference is that the "BB>BH>BB>BH . . . " model cycles in time, whereas the "strip/bottle . . . " models contain their time elements.
If you consider the "BB>BH>BB>BH . . . " model, presumably there would be a return to primordial matter. Of course, you may consider that the "strip/bottle . . . " alternative exists in a similar "time" dimension. I have yet to see translations of these ideas into a space-time framework.
I want to return to the original infinite/infinity BBT ramifications regarding FTL expansion of the Universe involving matter. (Vide matter/energy).
mdswartz. Just noticed that was only your second post. Welcome to the forum. That was quite a heavyweight second post, and I can see that you will be livening up the proceedings here.
Well done, and please hang around. You are very welcome in keeping the forum on its toes.
However, another question, please, to quickly appreciate where you are coming from.
You raise another interesting question, which I have alluded to above, but which probably merits further consideration:
This has, imho, considerable importance in discussing such matters as "Where did the Universe expand into?". Although analogies have their shortcomings, this one has helped me enormously, and I
therefore feel justified in passing it on. It concerns a flatlander confined to the surface of a sphere. I have checked that it is admissible to consider this mathematically, although I crave a
little indulgence in allowing a higher dimensional being the ability to observe the radius as the sphere expands.
From a drop of water.... | Page 18 | Space.com Forums
So our flatlander, existing on his surface, considers
as something separate, as so many of us do, for practical purposes. Please accept this as part of the setting up of the analogy.
So, having much in common with humanity in our observed universe, he works out that his observed universe is expanding, although the sizes of material objects do not increase in size in relation to the perceived expansion of his observed universe. He wonders what his perceived universe is expanding into, since all there is should not have an outside.
However, unbeknownst to our flatlander, there is a superior being who appreciates that the flatlander's observed universe is simply the surface of a sphere which is really quite an insignificant part of the superbeing's perceived universe. This superbeing sees that the flatlander's observed universe is simply his little playground, and does not merit the term "Universe". The flatlander's two-dimensional surface still has no boundary to its surface area, which, nevertheless, is increasing in area as the radius increases, and still (in the flatlander's mind) has nowhere to expand into. Problem solved. We just have to see such problems from the standpoint of beings with the ability to perceive higher dimensions.
Incidentally, this analogy also has something to offer on the question of semantics. What do we mean by "Universe"? Either we keep "Universe" as "all there is", or we use a global observable universe to cover all that might be appreciated by all sentient beings everywhere (and perhaps including all observable by them and by their instruments), which we might term "Superverse". The term "Universe" would then no longer exist, and the various "sub-domains" would then be observable universe sub-groups defined in terms of their observers. Apart from being differentiated by location, their observable universes would also change over time, as newer observing capability is added - expansion from unaided sight to invention of the telescope to similar instruments attached to cameras and capable of processing a wider range of electromagnetic radiation, and so on.
I hope that the above has summarised more concisely a few talking points previously spread over different threads.
Last edited:
Cat, very sorry to barge in on your thread. The first time I read it I thought I saw a part questioning if everything comes back to a universal black hole, but I'm not sure if I found that part when I looked again, because I may have checked out the links and replies differently. My response was meant to say: no, not a universal black hole, just a regular black hole. My views aren't science based; my starting point is that everything occurs naturally, so big bangs must be natural, and the only source of primordial matter is inside black holes, so big bangs must come from black holes.
And since gravity, though dumb as brass since it only does what mass indicates, is so perfect at its job, and our math says there's no escape from black holes, then matter must have power over gravity, but it only uses it in transitions between states of matter, letting gravity do its perfect job the rest of the time. Aside from various hybrid stellar collapses, I felt there are 3 primary states of matter, 3 cosmic mass limits, and 3 transitions. At #1, roughly 1.4 solar masses, regular atomic matter transitions to a neutron star, maybe even faster than the free-fall scale, so the core and edge form first, followed soon by the shock wave and rebounding free-falling residual matter. At #2, roughly 2.3 solar masses, a neutron star transitions to primordial matter of bottled-up quarks, gluons, nuclear and degeneracy forces, with everything squeezed to the center of a black hole. At #3, roughly 1 big bang of mass, primordial matter transitions to regular matter in a pulverizing big bang explosion. So yes, kind of like a cyclical model, but they happen somewhere in the universe, wherever cosmic mass limit #3 is surpassed - cyclical in that the universe can seemingly go on forever "running itself".
And yes, I'm thinking transition to baryonic matter, then atomic matter, and quickly. I felt electrons and neutrinos and photons didn't count at the initial big bang stage because neutrons come first, and electrons and neutrinos and photons come from decaying free neutrons. So I felt the transition was designed to cause a pulverizing explosion that breaks the bounds of gravity, allowing escape from the black hole and leading to our ever expanding section of the universe. The explosion goes at a significant fraction of the speed of light but need not exceed the speed of light, because it's achieved by the repulsive forces of primordial matter pushing out from the inside. By breaking the bounds of gravity it's a cosmic trade, whereby one center of gravity is instantaneously traded in for an ever outward expansion of matter permanently freed from the original center of gravity.
The primordial matter is sent in all directions and to all places, pulverized to the point of matter eventually recombining similarly to what we see in particle colliders: baryons and hadrons first, then free neutrons, which then decay, bringing the first protons, electrons, photons, and neutrinos, and the first electrical charges, followed by nucleosynthesis, giving a new expanding section of the universe composed of nearly all hydrogen. The transition does take time, but to me it starts immediately and takes days, weeks and years, not eons. If the blast speed is 1/2 the speed of light you have a big bang bubble with a diameter of roughly 8 billion miles after day 1 and 3 trillion miles after a year - plenty of space to allow the cooling needed for regular matter formation during and after the first year.
The big bang bubble goes mostly into empty spaces, but if anything is in the way it's pushed back or pulverised, except the largest black holes hold relative positions. What's pulverised adds to the formation of heavy elements available in the universe and gives an early start on the building blocks for planet formation alongside star formation and very early galaxy formation around any galactic remnants that survived the big bang. Thanks for asking Cat; again, not science based, but based in a firm belief that everything occurs naturally in the universe, so big bangs must be natural and must come from black holes. Everything else is just a guess as to how it could all have possibly happened, and that's the guess I came up with, but I really don't know and I don't think anyone else does either. So my apologies again for barging into your scientific thread with unscientific ideas.
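The bubble-size figures in that post can be sanity-checked with a couple of lines (a sketch using the post's own assumption of a blast speed of c/2; note the results match the quoted numbers if they are read as the one-way travel distance, i.e. the radius rather than the diameter):

```python
# Sanity check of the "8 billion miles after day 1, 3 trillion miles
# after a year" figures, assuming matter travelling at half the speed
# of light, as stated in the post.
C_MILES_PER_S = 186_282               # speed of light, miles per second
SECONDS_PER_DAY = 86_400
SECONDS_PER_YEAR = 365.25 * SECONDS_PER_DAY

v = C_MILES_PER_S / 2                 # assumed blast speed, c/2
radius_day = v * SECONDS_PER_DAY      # ~8.0e9 miles after one day
radius_year = v * SECONDS_PER_YEAR    # ~2.9e12 miles after one year
print(f"{radius_day:.2e} mi, {radius_year:.2e} mi")
```

So the arithmetic holds to within rounding, provided the quoted figures are understood as the distance travelled outward from the centre.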
Apr 3, 2020
Please edit in some paragraph breaks. Walls of text like this are very difficult to read.
Hi mdswartz, don't apologise - it is there to be replied to. Thank you for providing an interesting reply.
Next point(s). Do you have a short handle, like mds or, in directly replying, perhaps just M?
Anything for understandable brevity. I like Cat at the bottom, but it is not essential.
Next, please take notice of COLGeek. Even in this short post, see how much easier it is to read than the following. I like "quotes" and italics or underlinings where they help understanding, and sometimes colour (maybe in brackets) to break up longer sentences - but not essential.
By the way, go to the end of the line at the top (starting B), go to the end of that line and select the 3 vertical dots, and select the quote marks '' (99), and type in, or paste, your text.
OK - I will go back and read your post now.
Last edited:
Thanks again Cat. The universe I envision is everywhere, so expansion of our section consists of matter moving ever outward into the rest of the universe, in just a tiny section of everywhere, maybe 40 to 50 billion light years wide or so. And since the big bang I envision breaks the bounds of gravity, the force of the blast pushes back or pulverises everything that might be in the way, except that gravitational forces close to galactic cores are stronger than the force of the blast, so only stars out from the galactic center are pushed back, but not the galactic cores. This creates a big bang "bubble of safety" that makes it look like we're unique and alone, and that allows evolution of our section of the universe to carry on mostly undisturbed by the rest of the universe, which is still too far away to see, yet.
For 10 billion years or so, the force of the explosion powers expansion of our section of the universe, but as the ever expanding size of the big bang bubble grows, the force of the blast wanes, and gravity from the rest of the universe eventually becomes primary and acts to help pull our section apart at an increased rate, making it look to us like our expansion is accelerating.
This and my other posts are non-scientific guesses as to how big bangs can be natural occurrences, designed to answer questions like: why is expansion accelerating, why are there such big galaxies and black holes shortly after the big bang, where did all the heavy elements come from, why does expansion continue in the first place, and where did the primordial matter in our big bang come from. Starting from the standpoint that everything must be naturally occurring within the universe, I came up with these guesses. I hope even a small part might be right, because I believe in nature, not one-time mystical occurrences. Sorry again for barging in Cat.
So my apologies again for barging into your scientific thread with unscientific ideas.
Black holes. Yes, you are quite correct. You can have small ones all over the place, but I also use a sort of shorthand to denote your linear cyclic mechanism = BB>BH>BB>BH>BB>BH, suggesting just one large, overwhelming one of each in repetitive succession.
My preferred approach is the Moebius Strip / Klein Bottle idea.
Just take a strip of paper and, instead of joining end to end, first make half a twist before you join. (Just Google it, and Klein Bottle, for diagrams.)
With the Moebius Strip, if you run your finger along the surface, or (better still) draw a pen along the surface, you will come back to where you start. Although you can see two sides of the strip, there is actually only one, continuous side.
A Klein Bottle is similar one dimension higher up. Care, as you cannot actually construct one in three dimensions. Diagrams show the return through the side.
My point is that this type of structure, in higher dimension(s), cannot actually be made. Hence they show a bottle with a tube coming back through the side and opening out to become the inside.
My point is that the Universe may be analogous. Imagine a wide Moebius Strip. Start writing along the surface. You will get back to where you start. You do not repeat, you keep writing more lines
below the older ones. You could end up with a whole book written thus.
Of course, this is only an analogy, but it is suggestive of what might be vaguely similar in higher dimensions. The BB>BH>BB>BH>BB>BH, or other "join" has no direct meaning but could be looked on as
a repetitive "location" (meaning nothing) - the junctions in the paper corresponding to the BB>BH nexi (alternative plural of nexus as nexuses). No meaning as far as we can guess - but nobody knows anyway.
The other question I dealt with already with the flatlander.
Brought this over in whole from "From a drop of water...." because I don't want to rewrite the most of it here:
"Woke up thinking about black hole "electroweak stars" and galaxies, and galaxy centers! Couldn't get back to sleep because of what I was seeing in my own "mind's eye":
"Turn a 'black hole electroweak star' destroyer inside out and you have the reverse, a 'white hole star' factory' (of a piece with the collapsed cosmological constant (/\) Planck (Big Bang) 'Mirror
" ** (Always remembering the fundamental binary base2 character (smooth to chunky coarse grain and back.... repeating to infinities) duality of Chaos Theory's self-similar fractal 'zooms' universe
structure ("emergent gravity" ("emergent (hyperspace) SPACE")), including its constant reduction of its infinities to a further fundamental binary base2 constant of "setting" and reset ("gravity's
infinities" combine with the "strong (nuclear) binding force" finite)!) ** "
"'White hole star' factory" should probably expand to read and describe 'Transparent 'white hole star' stuff factory."
A black hole purple people eater 'electroweak star' destroyer doesn't blow up at its end or just go dead (impossible to what it is). It is said to eventually simply blow away, dissipate, fade away to nothingness. As I claim above in another Schrodinger-like functionality, that fade away to nothingness isn't necessarily so from how it is created, how it is caused, in the first place. It goes
from caterpillar (black hole micro / macro particle point-singularity) to (transparent white hole new star stuff) butterfly or moth over its life. Its very strength (its very point-singularity) is
its inherent, intrinsic, weakness. It is its own inevitable 'mirror' equal but opposite "(of a piece with the collapsed cosmological constant Planck (Big Bang) 'Mirror Horizon')!"
M, yrpost #16.
I think I covered this. No problem, but you do take a lot of words
My part of the flatlander analogy describes a higher dimensional being thinking of the flatlander's Universe as a toy. I think it not too inaccurate to say "Your observed universe is what you make of it", as your view depends on what instruments you use (or accept others' use of).
By higher dimensional being I do not intend any religious significance. More like human versus rabbit.
Atlan, thank you for explaining. Of course, as you will know, you can effect the same with this:
As many of you are aware, I have serious problems in the application of infinity, and related infinite descriptions, to non-mathematical (reality) situations. Here is a post from 2022: https://forums.space.com/threads/big-bang-evidence.55635/#post-568525 Why do you bring infinity into it...
So my apologies again for barging into your scientific thread with unscientific ideas.
STOP THIS
Thanks again Cat. I view the universe as everywhere in all directions, and we're in a small section, safely evolving in our big bang bubble. And not really BB, BH, BB, BH. Any black hole anywhere
might surpass cosmic mass limit #3, most will never pass it, you never know where or when one might get that big. It just happens somewhere, and not often. Can't really see it as a Mobius strip or
any other shape, but mostly I envision it as a giant cube in all directions, but the edge of the universe becomes nonsensical to me so I don't ever think about it beyond that.
The Moebius Strip is only meant to suggest a mechanism for a cyclic Universe. It is but a poor analogy, but I find it useful.
Simply doesn't matter, Cat! I'm enjoying, very much, your own "mind's eye" tracking here and simply wanted to get in to augment it with my own track! As the saying goes, "Go, Cat, go!"
Atlan, thank you for your kind words. I really do not know, or cannot remember, how our "overcompetitive streak" came about. I wish it would end.
I never intend any adverse criticism. As I say at the bottom of my posts:
"There never was a good war nor a bad peace."
Benjamin Franklin was not one of my countrymen and iirc had a little to do with the separation of the USA and UK. Nevertheless, I am happy to quote him.
M, don't make too much of the BBT thing. I think that there is an infinite (OOOps) difference between BBT and t = 0. That interceding difference - reported as getting closer and closer to trillionths
(or smaller) of a second is, in my opinion, due to the approaching infinitely (OOOps) large difference caused by division by zero.
My guess is that these tiny time spans are entirely due to the vast temperatures (reported elsewhere). Inordinately high temperatures 10^32 K obviously require exceedingly rapid cooling. | {"url":"https://forums.space.com/threads/goodbye-infinity-and-all-that-infinite-singularity-and-infinite-density-descriptions.66352/","timestamp":"2024-11-04T21:45:48Z","content_type":"text/html","content_length":"429469","record_id":"<urn:uuid:89532e91-24a0-4882-96a9-321196040eb9>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.16/warc/CC-MAIN-20241104194528-20241104224528-00070.warc.gz"} |
Square wave to Triangle Wave with LM324 Op-amp
Here is a demonstration of how to convert a square wave to a triangular wave using the LM324 op-amp. To convert a square wave to a triangle wave with an op-amp, we integrate the square wave over the period of the input signal. The LM324 has four op-amps built into one IC, and we can use one of them to generate the square wave and another to act as the integrator.
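The integration step can be sketched numerically before touching hardware: running a cumulative sum over a ±1 square wave traces out a triangle. The sketch below only illustrates the principle, not the LM324 circuit itself (the sample count and amplitude are arbitrary choices, not taken from the article):

```python
# Discrete-time integration of one period of a +/-1 square wave.
# The running sum ramps up while the input is +1 and ramps back
# down while it is -1, producing a triangle wave.
n = 100  # samples per half-period (arbitrary choice)
square = [1.0] * n + [-1.0] * n

triangle = []
acc = 0.0
for sample in square:
    acc += sample  # integrate with a unit time step
    triangle.append(acc)

peak = max(triangle)
print(peak)          # 100.0 -- top of the triangle, at the end of the +1 half
print(triangle[-1])  # 0.0   -- the ramp returns to its starting level
```

In the real circuit the op-amp integrator performs the same accumulation continuously, with the RC time constant playing the role of the step size.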
In the previous tutorial LM324 op-amp Integrator testing with Matlab Simulink oscilloscope we explained how to create an integrator circuit using the LM324 operational amplifier, and in the tutorial Square wave generator using LM358 we showed how to create a square wave. Since the LM358 and LM324 are the same op-amp, differing only in the number of op-amps inside the IC and in power dissipation, we can use the same square-wave circuit with the LM324 as with the LM358. So we will use one of the LM324's four op-amps to generate the square wave and a second op-amp to build the integrator.
The following picture shows how the LM324 op-amp is used to build the square-to-triangle circuit on a breadboard.
The circuit diagram of the above square to triangle wave converter using the op-amp is shown below.
In the above circuit wiring diagram, the first op-amp section generates a square wave of 1.2 kHz. The frequency of the square wave is set by the resistor R1 and capacitor C1. Here R1 = 470 kOhm; the higher the resistance R1, the lower the frequency of the square wave, provided the capacitor C1 remains 0.001 uF. To make the frequency variable, one can simply replace R1 with a potentiometer: a 1 MOhm potentiometer gives a wide frequency range, from hertz up to kilohertz, while with a 100 kOhm pot the lowest frequency is around 3 kHz and upwards from there. The second op-amp integrates the square wave coming from the output of the first op-amp. Between the square wave generator and the integrator, a 10 kOhm potentiometer limits the amplitude going into the integrator. The integrator is voltage-divider biased since we are using a single supply voltage; this was explained in the previous tutorial LM358 Op-Amp Integrator Circuit Analysis with single and dual power supply. The underlying theory of How to Design LM358 Op-Amp Practical Integrator, including how to calculate the capacitor value and feedback resistor value for a particular gain and frequency, was also explained previously.
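As a rough cross-check of the R1 and C1 values quoted above: for a symmetric op-amp relaxation oscillator (equal resistors in the feedback divider), a common approximation for the output frequency is f ≈ 1/(2·R·C·ln 3). The exact 1.2 kHz figure from the article depends on the actual divider ratio used in the circuit, so this sketch only shows that the component values are in the right range:

```python
import math

R1 = 470e3  # ohms, from the article
C1 = 1e-9   # farads (0.001 uF), from the article

# Symmetric-divider relaxation oscillator approximation:
#   f = 1 / (2 * R * C * ln(3))
f = 1.0 / (2.0 * R1 * C1 * math.log(3))
print(round(f))  # roughly 968 Hz, the same order as the quoted 1.2 kHz
```

Halving R1 (or C1) roughly doubles the frequency, which is why swapping R1 for a potentiometer gives a variable-frequency generator.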
Once the circuit is built, we can test it with the Matlab/Simulink oscilloscope and spectrum analyzer. Before doing that, we connect the input and output signals to the PC line-in/microphone port. The following picture shows how the above circuit is connected to the PC.
Now we can test the square wave to triangle wave converter circuit with Matlab/Simulink. For details on how the PC oscilloscope and spectrum analyzer work, see How to use Matlab Simulink as Oscilloscope.
The following video demonstrates testing of the constructed circuit with Matlab/Simulink software.
The time scope displays the input square wave and output triangle wave as shown below.
The spectrum analyzer displays the frequency and magnitude of the square wave and triangle wave as shown below.
So in this example we explained and showed how to make a square wave to triangle wave converter using the LM324 operational amplifier.
Post a Comment | {"url":"https://www.ee-diary.com/2021/11/square-wave-to-triangle-wave-with-lm324.html","timestamp":"2024-11-12T15:40:03Z","content_type":"application/xhtml+xml","content_length":"205146","record_id":"<urn:uuid:3541030c-75fa-4cfa-9316-02ef923c7152>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.63/warc/CC-MAIN-20241112145015-20241112175015-00179.warc.gz"} |
How to add assumptions in limit in sympy?
In SymPy, assumptions that affect limit evaluation are attached to the symbols themselves when they are created with Symbol(), rather than passed to Limit(). The Limit() function accepts only the expression, the variable, the limit point, and an optional direction argument dir ('+' by default, for a right-hand limit); it has no assumptions parameter.
Here is an example of building an unevaluated limit with Limit() and evaluating it with doit():
from sympy import Limit, Symbol, sin

x = Symbol('x')
expr = sin(x)/x
limit_expr = Limit(expr, x, 0)
print(limit_expr.doit())  # prints 1
In this example, we compute the limit of sin(x)/x as x approaches 0. The doit() method evaluates the unevaluated Limit object, and the result is printed.
To add an assumption, declare it on the symbol when you create it. Here is an example where the assumption is what allows the limit to be evaluated:
from sympy import Symbol, exp, limit, oo

a = Symbol('a', positive=True)  # assume a > 0
x = Symbol('x')
expr = exp(-a*x)
limit_expr = limit(expr, x, oo)
print(limit_expr)  # prints 0
In this example, declaring a as positive lets SymPy conclude that exp(-a*x) decays to 0 as x approaches infinity. Other assumptions such as real=True, negative=True, or integer=True are set the same way when creating the symbol. The limit expression is evaluated using the limit() function, and the
result is printed. | {"url":"https://devhubby.com/thread/how-to-add-assumptions-in-limit-in-sympy","timestamp":"2024-11-13T02:34:22Z","content_type":"text/html","content_length":"124427","record_id":"<urn:uuid:0381c4ff-9c49-464b-8df3-210562ddcb8b>","cc-path":"CC-MAIN-2024-46/segments/1730477028303.91/warc/CC-MAIN-20241113004258-20241113034258-00444.warc.gz"} |
a risk is switch to the second
present a sent in this in this you in this a we we you present the fast they might those
new with them
and is
this is a but you on what of well with a bit that's caff be and i C okay
and uh
so a
i you already know that the um
yeah are another
kind of but with some for at the end and for C be that
the the on a the on S one egg with um
and one as the only one in some these
yeah the dam goes you than it them
but usually for them most new data can you we we on we need to compute the that be beyond
oh the gradient and the hessian and and usually do they has and very very few already last
uh it basically that's
was up we need to in but very hot
but a at that possibly has an and in in this
in this present is that we we we to do how to
avoid this problem
how to how to avoid
how to
a lot but enough that has an and all so how to a the
the lower the last
in but up last
has yeah
and now is the a life
might talk that we we you
resent deeply side it press for the good
the copy of method
and we will find the
the fast way to compare
to compute looks
the gradient
i also to
compute a process as N
and it here we we
with an
the low rain and that's one for the a possibly has and
and then by on this
but you by on this low when it just when we we fight
we we i
we we we present
a very it's so that need to
to avoid
the input of last has
i skip this require already meant something
reverse light
a you know the to derive with that
the the atlas electrics um we can see that the same the similar conversant robot whereabouts roy
in nineteen seventy nine
but here you see here this is a conference on
two we to approximate the ten so by
but extract the
this the two
to write really says that um is the
okay this term
is rest and it's right here and this time we present here
this time is to inform the nonnegative trends on the front end
i N
and this time to an problems
spice you can train
and the car far and bit eyes
zero are uh plus they've
and is a see here
to that
the on as one of the lead that we have best
all that on the fact that i the same time
and he'll to be two two we before all this we use this
but that's room
with i we've fit i is that that's
okay you C
this the best the is a set up fact the i one
and then you
code for i on the on the but is a seven
to very long for that
but long list that and we employ this
a back room
do you best
the fact
you see here is we we need to compute the the copy and here i'm node that has sent the
possibly may has sent here
and also the gradient here
and the
a call we the also we of go we need to need to compute you
the the cool and
from the be and we compute the gradient
for a for in this fall
and also so be that has an approach me has an
for in this fall
a probably C come men
i i a high computational cost
and of course the of this but might is side
you one here
and a lot of time and also
we we we present here than
the way to avoid this problem
it this line we'll
so that how to
compute the will be and matters
this do though it the bustle
the river a of the approximate ten so
which search but only one come but
we saw this this
we be is spread in the very
come come
this one is that correct up
correct upper up all the
common a
in in on the fact that it's that the the fact that and
and B and is that the rotation matrix
it better
okay to say here this this breath the relation between
the vector is a stand up
okay so
and vector is a
and vector is a set of the most and
my matrices a send up to time so
quite complicated here
you see if you were to i don't the data and
and this one leader
you where the wide the mode and
methods and of the ten so
and this italy's relation between
two batteries and
and we fine
the compact form for that the the copy be and map so here
and with few we we from this result we
find the weighted method
this one is the uh
the final
result for the gradient vectors
and it with you see here this one is the gradient of the tensor with respect to the fact though
the fact that one
and this one is the right
the great and
of the
a cross and tensor
which are but to the fact the and
and the got my mike this is press here the that the how to map product up
this time
what is that the fact the and
is that the the something
and of some this this crime matter
at the nest
we we present
how to compute the
the idea is this we we've split
we can see that the the possibly has and that though
and by and
block might be
and we we fight the plug i that is the press for for every block here
this dist
to and you with
how to you
that was sub just
has us a but just
like did you have a you
if and by it
say this is them
the diagonal block this
is and equal to M you you have to um the diagonal plot mattress and C
compute by
yeah my
i am and go product up a got my
with the and it it matters
a a um of sci ice ice of N
and is it that the diagonal vector
and for that off diagonal my
for the up
diagonal diagonal method
and you know in a equal to N
we we we can decompose
supp might in into
this four
this it is a diagonal matrix
this the that block diagonal matrix enough for
i i know what it
is it problem
i i matter and is also the block diagonal matter
we be spread here
in the next line
a very in i property of that of possibly has an that that we can
it's spread
the approximate has an and that the low end
and that one
you see year
so i say yeah that a i all method
we we already defined though
re slide
and is that the point
but but and method
see here here that the right block low matter
or within this fall
i K yeah A the
square matrix
oh side and
and ask way by and ask where
and this well
in the is the mac of the K method here even in this fall
be here
it in S like we you see here how to
a which to like the case and
and how
how we can
and to spend a little way that's one
i just one for the prosody has and yeah
the the has the a to my has an
and it is a metric
see you the plot like no mapping
see you that the also plot that all method
and case have where
such that
and of car yeah if we employ
deep if we employ the by no
it but to your we can it
we can but we can of one
the in but a blast scale
a little the last has in method
by this
and if we compute here is that a compute the in but uh
the approximate
has and we
all only compute it looked at the you might rate the came at rate and also in lot of this
and this time have very
the by some because this is this term is
has i'll up and by K and
and K
and a
and a square by and ask square
and in but
the same her
we can quickly compute
a because it brought no matrix
so is slide up problem
is that of compute the in of the whole that the whole take we compute in but it's
sub method along the diagonal
so do you have here
and we also had how to compute the in but at that a least
is have very is so uh
it press here
and finally from that them "'cause" new done of that room we fight with that
we we formulate the fast them let's be learned with some that to we reply here
i the approximation has has
and we define you
the fast them book use like
the with uh okay this
this term
this term we that
no this um
i you see
we so
this them with that this this
this a
and the L make
this border
with uh this the
okay also it's spread by the
block diagonal i dunno method
so finally
you see here the the the true this
most one
with the of the
the the five but to the much smaller than the this the approximation
uh has and then and these K we can we do the at the clear do the commit
so the of the
in a of this method
in the very don't to the in but that this
the possibly has and
and a call we don't need to control the to go beyond method
we not only to compute the
a possibly has an
maybe a nice
okay okay
so in this snow the next slide you see here
how to to the power with a how do to the the but but that in in the
you know an increase um
a very simple it and also very if if you soon that's a proposal were about rowing two thousand nine
that's of and
you nine
uh that's but more like that it we it in line
on the
the low power by real met um but power what's so by the la
enough a value and then we greater only
we do
you do the bar with and then we can
after the fire ten iteration we can
the can was and that the that's solution
we don't use this approach
we you are not a
and this you this one T by on the cake Q T condition
and this list to
the medium i of this problem
this for that's of that too
this condition
and as to the simulation
is assume a some place and you see here for three dimensional tensor a with side what one and right
by one hundred by one hundred combo by ten component
the common
we for to be collinear we the auto by this form
and we see here
with the
multiplicative kl K with them what what deeply get the L
the green light
and that the i
is from a up the
one one right it ways that no one that fred
i not yet and
one ten held and B
that's how the in twice and then the and with only
get a convert to the solution and
and i like that on the L
the fast lm level and is them with quickly can what up the only
one at right right
but not case and not a that it we one that we
test with one tell than by one column
i want how than i with the common
you to here one attract recommended
it's really he'll
and the
fast and where make on is also quickly
fit the data up the
fifteen no sitting it weights and
i and all right with some can of fit the data
i is to to much but maybe i skip this like
is used for to this prior to
for of the method but maybe there with
this you know uh an application with the leave at the faded the way that we
we classified the data
that's we from the data that we have forty four interest for a and that we put from the fight
by the cable with flat it to this just kind of ten so C to by sitting
that T two to T two because we have yeah i
i orientation and for scale
and of call we have yet the last of and is that number of plays
for right
it truck of from this
ten so
for the first
first we track by
thirty two
and T F that we have here
and the S like that we employ an an map to talk dove
three too
from the last
the life factor
and the but form we
of and here you see
but the K and it with on is only
i was N
with the us
i T i percent
maybe maybe though with the
are you less
with night night it's three was and but our and with need
not for was a and if we employ this was can trying the ten i
one as that was sent with ten
face and all is not enough ten fate take class
that's me we have one and right
and we
it to the good clothes and
a you see here we already but both a fast and with them too
for for et and a map the fast that was knew that for N
and the M
and that was don't we
a you don't
we to compare you are the last scale that will be and and also that scale
has has C and
the process may has them
and we avoid
they vote of the the last
and it
and you are virtually you see here would have the a into trend
the fast them was new than
we we the fast lemme i rammed
and we term for canonical only
um only to decompose and
that these these two D dizzy
similar to the one the one of the wooden and we can where you but row and
and the mostly but
oh and we is further
thank you
uh yeah think to the
okay we we we we may the the net and also is similar to we man that the relative error
you see here
this is the right of the arrow
but we we do is read how to compute this error of this is very simple that's we
from the conference and you
this so we compute this error and divide by norm up the so
this your L T you ever
no i i you these
for from one is known so we don't you are not the kind of um is a one | {"url":"https://www.superlectures.com/icassp2011/downloadFile.php?id=187&type=subtitles&filename=fast-damped-gauss-newton-algorithm-for-sparse-and-nonnegative-tensor-factorization","timestamp":"2024-11-14T12:31:29Z","content_type":"text/plain","content_length":"33174","record_id":"<urn:uuid:c206f852-0ffb-48ca-bac7-ad9d9eedf487>","cc-path":"CC-MAIN-2024-46/segments/1730477028558.0/warc/CC-MAIN-20241114094851-20241114124851-00058.warc.gz"} |
The Expected Growth Factor
Our working paper titled "An augmented q-factor model with expected growth" (with Kewei, Haitao, and Chen) is now forthcoming at Review of Finance. The paper is formerly titled "q5." Alas, who knew that the compiled output of the LaTeX source code "$q^5$" would be invisible to Google Scholar? Oh well, live and learn.
The expected growth factor, its 2 by 3 benchmark portfolios on size and expected growth, the expected growth deciles, and the 3 by 5 testing portfolios on size and expected growth are all available to download at . We're waiting for Compustat to update its data in early February. Once the data become available, we will update and circulate the testing portfolios on all 150 anomalies examined in our q5 paper.
Conceptually, in the investment CAPM, firms with high expected investment growth should earn higher expected returns than firms with low expected investment growth, holding current investment and
profitability constant. Intuitively, if expected investment is high next period, the present value of cash flows from next period onward must be high. Consisting mainly of this present value, the
benefit of current investment must also be high. As such, if expected investment is high next period relative to current investment, the current discount rate must be high to offset the high benefit
of current investment to keep current investment low.
Empirically, we estimate expected growth via cross-sectional forecasting regressions of investment-to-assets changes on current Tobin's q, operating cash flows, and changes in return on equity. Independent 2 by 3 sorts on size and expected growth yield the expected growth factor, with an average premium of 0.84% per month (t = 10.27) and a q-factor alpha of 0.67% (t = 9.75). The t-values far exceed any multiple-testing adjustment that we are aware of.
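As a back-of-the-envelope check on those numbers (a sketch assuming the sample runs January 1967 through December 2018, i.e., 624 monthly observations, and using the standard t ≈ mean/(sd/√N) relation): the premium and its t-statistic imply a monthly volatility of premium·√months/t and an annualized Sharpe ratio of about t/√years:

```python
import math

premium = 0.84  # average monthly premium, in percent (from the post)
t_stat = 10.27  # its t-statistic (from the post)
months = 624    # assumed sample length: Jan 1967 - Dec 2018
years = months / 12

# t = mean / (sd / sqrt(N))  =>  sd = mean * sqrt(N) / t
vol_monthly = premium * math.sqrt(months) / t_stat  # about 2.04 percent
sharpe_annual = t_stat / math.sqrt(years)           # about 1.42
print(round(vol_monthly, 2), round(sharpe_annual, 2))
```

An annualized Sharpe ratio above 1.4 for a single factor is unusually high, which is consistent with the remark that the t-value clears any multiple-testing hurdle.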
We augment the q-factor model (“q”) with the expected growth factor to form the q5 model (“q5”). We then perform a large-scale horse race with other recently proposed factor models, including the Fama-French (2018) 6-factor model (“FF6”) and their alternative 6-factor model (“FF6c”), in which the operating profitability factor is replaced by a cash-based profitability factor, as well as several other factor models.
As testing portfolios, we use the 150 anomalies that are significant (|t| ≥ 1.96) with NYSE breakpoints and value-weighted returns from January 1967 to December 2018 (Hou, Xue, and Zhang 2019). The large set includes 39, 15, 26, 40, and 27 anomalies across the momentum, value-versus-growth, investment, profitability, and intangibles categories.
The q5 model is the best performing model. The figure below shows the fractions of significant alphas across all and different categories of anomalies. Across all 150, the q5 model leaves 15.3% significant, a fraction that is lower than 34.7%, 49.3%, and 39.3% across the q, FF6, and FF6c models, respectively. In terms of economic magnitude, across the 150 anomalies, the mean absolute high-minus-low alpha in the q5 model is 0.19% per month, which is lower than 0.28%, 0.3%, and 0.27% across the q, FF6, and FF6c models, respectively.
The q5 model is also the best performer in each of the categories. In particular, in the momentum category, the fraction of significant alphas in the q5 model is 10.3%, in contrast to 28.2%, 48.7%, and 35.9% across the q, FF6, and FF6c models, respectively. In the investment category, the fraction of significant alphas in the q5 model is 3.9%, in contrast to 34.6%, 38.5%, and 30.8% across the q, FF6, and FF6c models, respectively.
While bringing expected growth to the front and center of empirical asset pricing, we acknowledge that the (unobservable) expected growth factor depends on our specification, and in particular, on operating cash flows as a predictor of future growth. While it is intuitive why cash flows are linked to expected growth, we emphasize a minimalistic interpretation of the q5 model as an effective tool for dimension reduction.
The Fractions of Significant (|t| ≥ 1.96) Alphas Across Different Categories of Anomalies
1 Comment
Cesare Robotti
3/18/2020 06:10:21 am
The performance of the "q5" model is impressive from an investment perspective. Based on the pairwise non-nested model comparison tests proposed by Barillas, Kan, Robotti, and Shanken (2020), it can
be easily verified that the "q5" model outperforms the Fama and French (2018) 6-factor model over the 1967-2019 sample period. The difference in bias-adjusted sample squared Sharpe ratios between
the two models is 0.253 and the asymptotic p-value of the test is 0.000. The outperformance of the "q5" factor model is even more impressive when the competitor is FF5, the five-factor model
proposed by Fama and French (2015). In this case, the difference in bias-adjusted sample squared Sharpe ratios between the two models is 0.280 and the asymptotic p-value of the test is 0.000.
How To Play Cribbage
Cribbage, a classic card game, originated in the early 17th century, credited to the English poet Sir John Suckling, who adapted it from an older game called "Noddy." Known for its unique scoring
system and distinctive pegging board, Cribbage quickly gained popularity in England and spread to the American colonies, becoming a favorite among sailors and fishermen for its portability and
engaging gameplay.
The Scoreboard
Cribbage typically utilizes what is known as a "Cribbage board" for scoring. A typical Cribbage board consists of four pegs (two for each player) and a wooden board marked with two tracks of 121 holes, with some variations.
Players use the two pegs to mark out how many points they have. The peg further along indicates how many points the player currently has, while the back-peg indicates how many points the player had
before that. Each time a player scores a point, no matter how few or under what circumstance, they move their back peg that many points ahead of the front peg, leap frogging it to their current
total. The difference between the two pegs thus always indicates how many points were scored last. The owner of the first peg to occupy the 121 point hole is the winner.
While recommended, the Cribbage board is technically an optional component, as you can keep track of your score with pen and paper. However, Cribbage boards are typically easy to find and relatively cheap.
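The leapfrog rule above can be modeled directly: when a player scores, the back peg jumps past the front peg to the new total, so the gap between the two pegs always shows the last score. A small sketch (the class and names are mine, not part of any standard):

```python
class PegTrack:
    """One player's two pegs; the peg gap always shows the last score."""

    def __init__(self):
        self.front = 0  # current total
        self.back = 0   # total before the last score

    def score(self, points):
        # The back peg leapfrogs the front peg to the new total.
        self.back, self.front = self.front, self.front + points
        return self.front >= 121  # True once this player has won

track = PegTrack()
track.score(2)        # a pair: front=2, back=0
won = track.score(6)  # a pair royal: front=8, back=2
print(track.front, track.back, won)  # 8 2 False
```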
The objective in Cribbage is to be the first player to reach 121 points. The gameplay is divided into three distinct parts: The Deal, The Play, and The Show. Each part is explained in detail below.
This version of Cribbage is for two players. Many other variations are possible, but these rules cover only the variation we've chosen for this site. There are a lot of rules; I've tried to explain them as best I can here, but you can also look at the rules at www.pagat.com or at Cribbage Corner, both of which are good places to learn how Cribbage works.
The Deal
The game starts with both players drawing a card from the deck to find out who is the dealer. The player who draws the lower card is the dealer. If the players draw equal cards, they draw again until the dealer can be determined. This method of determining the dealer is used only in the first round; in subsequent rounds the deal alternates between the two players.
The dealer deals 6 cards to himself and 6 cards to the opponent. Each player then chooses two cards from their hand to put face down into the crib. The crib belongs to the dealer and is used at the
end of the round to gain extra points. Which cards you choose to put in the crib is very important, as it affects how many points you can get in later parts of the game.
At this point each player has four cards in their hand, and the crib has four cards. The deck of cards is then put to the side, and the non-dealer (also called the pone) cuts the deck and then reveals
the top card. This card is referred to as the starter or the cut. If the starter is a Jack then the dealer immediately scores 2 points. This is known as Two for his heels. Once the starter card has
been shown, the players are ready to proceed to the next part of the game.
The Play
The pone (the player who is not the dealer) starts by laying down a card on the table and announcing its value, e.g. lays down a 6 and announces "Six". The dealer then lays down a card and announces
the cumulative value of the cards on the table, e.g. he lays down a 5 and announces "Eleven". This continues with the players laying down one card each until a player cannot lay down another card
without the cumulative value going over 31. The player then says "Go" and the other player can then continue to lay down his cards until he also can't lay down a card without going over 31. He then
says "Go" as well, and the player who laid down the last card will score 1 point if the total value is under 31 but 2 points if the value on the table is exactly 31. They then reset the count to 0
and continue with their remaining cards, starting with the player who did not lay down the last card. An ace is counted as 1, face cards are counted as 10 and other cards are their normal value.
During this phase there are several ways to score points, based on how you lay down your cards. Points are scored as you lay down your cards, e.g. if your opponent has just laid down a 4 and then you
lay down another 4 on top of it then you will score a pair. The starter/cut card is not used at all in this part of the game.
Players always announce the cumulative value of the cards on the table when they lay down a new card. If they score points they will announce the points as well, e.g. 15 for 2, or 31 for 2. When a
player has said "Go" then the other player will say "1 for the Go" when he's claiming the point from laying down the last card. He might also say "1 for last", if the other player has not laid down
any cards since the value was last reset. 1 for the Go or 1 for last are just different ways of announcing the same thing, that the player gets 1 point because he laid down the last card under 31.
Scoring during The Play
• Fifteen: For adding a card that makes the total 15, score 2 points.
• Pair: For adding a card of the same rank as the card just played, score 2 points.
• Pair Royal (Three of a kind): For adding a card of the same rank as the last two cards, score 6.
• Double Pair Royal (Four of a kind): For adding a card of the same rank as the last 3 cards, score 12.
• Run (sequence) of three or more cards: Score 1 point for each card in the sequence. The cards do not need to be laid in order, but they do need to be the most recently played cards, with no interruption. E.g. H2 C8 D6 H7 S5 is a 4-card sequence because C8 D6 H7 S5 can be rearranged into S5 D6 H7 C8, but H2 C5 C7 D7 S6 is not a sequence because the extra 7 in the middle breaks up the 5-6-7 sequence. Basically, if the last n cards played can be rearranged so that they form a numerical sequence, then it's a run of n.
• Last card, total value less than 31: Score 1 point.
• Last card, total value exactly 31: Score 2 points.
It's worth noting that even though all face cards count as 10, you cannot create a pair, pair royal or double pair royal with cards unless they have the same "real" rank. E.g. two queens are a pair,
a queen and a king aren't, even though they are both valued at 10. For sequences an ace is always low, you can't make a sequence with a king and an ace next to each other.
It's also worth noting that you can make points in many ways with the same cards. E.g. if the cards on the table are DA C7 and you lay down H7 you will get 2 points because 1+7+7=15 and 2 points
because 7+7 is a pair of sevens. So, in that case you would announce "Fifteen for 4".
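The pegging rules above can be captured in one scoring function for the newest card laid. A sketch (the function name and rank encoding are my own; ranks run 1=ace through 13=king, and the Go/last-card points are handled separately by the flow of play):

```python
def score_play(ranks):
    """Score the most recent card in a pegging sequence.

    `ranks` are the real ranks (1=ace .. 13=king) of the cards laid since
    the count was last reset, in play order. Face cards count 10 toward
    the running total but keep their real rank for pairs and runs.
    """
    points = 0
    total = sum(min(r, 10) for r in ranks)
    if total == 15:
        points += 2
    # Pairs: how many trailing cards share the newest card's rank?
    same = 1
    while same < len(ranks) and ranks[-1 - same] == ranks[-1]:
        same += 1
    points += {2: 2, 3: 6, 4: 12}.get(same, 0)
    # Runs: longest trailing group that rearranges into a numerical sequence.
    for n in range(len(ranks), 2, -1):
        tail = ranks[-n:]
        if len(set(tail)) == n and max(tail) - min(tail) == n - 1:
            points += n
            break
    return points

print(score_play([1, 7, 7]))        # fifteen (1+7+7) plus a pair: 4
print(score_play([2, 8, 6, 7, 5]))  # run of four (5-6-7-8): 4
```

The second call is the H2 C8 D6 H7 S5 example from the run rule above: the 2 is excluded, but the last four cards rearrange into 5-6-7-8.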
This part of the game continues until both players have played all their cards. The scores are updated as soon as a player gets points, and if a player reaches the target score, 121, the game is
finished immediately.
The Show
Once The Play is finished, the players take back their cards from the table and it's time to calculate the score for their hands, and the crib. These are always scored in the same order: pone's hand,
dealer's hand, dealer's crib. As before, the scores are added to the scoreboard as soon as they are calculated, and if a player reaches 121 the game is over immediately, the other player doesn't get
to count his score. This means that there's no chance of a tie, or both players going over 121 in the same round. The dealer will normally get more points since he scores both his hand and the crib,
but the pone scores his hand first, so if they're both close to 121 the pone might win, even though the dealer would have gotten more points if he were allowed to count them.
The Show Scoring
The scoring for The Show is similar to the scoring for The Play, but with some important differences. The starter card is used here with both hands and the crib, so a hand is the hand + the starter,
and the crib is the crib + the starter. You can use the same card for many different combinations, e.g. it can be part of a pair and also part of a sequence.
• One for his nob: For having the jack of the same suit as the starter, score 1 point. E.g. starter is H4, you have HJ.
• Fifteen: Any combination of cards that sum to 15. You can re-use cards, so if you have HJ, SJ and C5 you get 2 points for HJ C5 and another 2 points for SJ C5.
• Pair: For any pair of cards, e.g. SQ DQ, score 2 points.
• Pair Royal (Three of a kind): For any three cards of the same rank, e.g. S8 C8 H8, score 6 points.
• Double Pair Royal (Four of a kind): For any four cards of the same rank, e.g. HA SA DA CA, score 12 points.
• Run (sequence) of three or more cards: Score 1 point for each card in the sequence. E.g. for SA H2 C3 D4, score 4 points.
• Flush, 4 cards: If all the cards in your hand are of the same suit, e.g. SA S5 S9 SJ, score 4 points. These four cards all have to be in your hand, you cannot have three cards in your hand + the
starter count as a flush. A 4 card flush also can't be used for the crib, only for your hand.
• Flush, 5 cards: If all the cards in your hand, and the starter card, are of the same suit, e.g. SA S5 S9 SJ SQ, score 5 points. You can also get a 5 card flush for your crib, if all the cards in
the crib and the starter are of the same suit.
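Counting fifteens and pairs in The Show is naturally done by checking every combination of cards in the hand plus the starter. A sketch covering just those two categories (the function name is mine; runs, flushes, and his nob would be scored on top of this). Note that counting every distinct pair automatically yields 6 for a pair royal (three pairs) and 12 for a double pair royal (six pairs), matching the table above:

```python
from itertools import combinations

def score_fifteens_and_pairs(ranks):
    """Count fifteens and pairs in hand + starter (ranks 1=ace .. 13=king).

    Every distinct combination of cards summing to 15 scores 2; every
    distinct pair of equal ranks scores 2 (so three of a kind = 3 pairs).
    """
    values = [min(r, 10) for r in ranks]  # face cards count 10
    points = 0
    for n in range(2, len(values) + 1):
        for combo in combinations(values, n):
            if sum(combo) == 15:
                points += 2
    for a, b in combinations(ranks, 2):
        if a == b:
            points += 2
    return points

# Two jacks and a five: HJ+C5 and SJ+C5 are both fifteens, plus the pair.
print(score_fifteens_and_pairs([11, 11, 5]))  # 2 + 2 + 2 = 6
```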
Skunks and Double Skunks
A skunk is when a player wins by over 30 points, i.e. the opponent has fewer than 91 points when the game is over. A double skunk is when a player wins by over 60 points, i.e. the opponent has fewer than 61 points. Normally a skunk counts as two games and a double skunk as three. However, on this site we're not playing multiple games; we only track each game individually. We will, however, show you an image of a skunk or two if you get a skunk, and we do keep track of skunk counts for the statistics page.
And that's it!
Want to play Cribbage and put your newfound skills to the test? Play a round at Cardgames.io.
Lyngdorf Audio TDAI-1120 Integrated Amplifier-DAC
Link: reviewed by Roger Kanno on SoundStage! Hi-Fi on August 15, 2021
General Information
All measurements taken using an Audio Precision APx555 B Series analyzer.
The TDAI-1120 was conditioned for 1 hour at 1/8th full rated power (~6W into 8 ohms) before any measurements were taken. All measurements were taken with both channels driven, using a 120V/20A
dedicated circuit, unless otherwise stated.
The TDAI-1120 offers a multitude of inputs, both digital and analog (unbalanced), a set of configurable line-level analog outputs (fixed or variable, full-range or with crossover), and a pair of
speaker-level outputs. For the purposes of these measurements, the following inputs were evaluated: digital coaxial (RCA) and optical (TosLink) S/PDIF, and the line-level and moving-magnet (MM) phono
analog unbalanced (RCA) inputs. The TDAI-1120 is a sophisticated integrated amplifier capable of room correction (RoomPerfect) and bass management. As such, a factory reset was performed before any measurements were taken, and particular attention was paid to ensure room correction was disengaged, and that both the line-level and speaker-level outputs were set to full-range (i.e., no crossover applied).
Based on the accuracy and repeatability of the left/right channel matching at various volume levels (see table below), the TDAI-1120 volume control is likely operating in the digital domain.
Consequently, all analog signals are digitized at the TDAI-1120’s inputs so the unit may perform volume, bass management, and room correction.
All measurements, with the exception of signal-to-noise ratio (SNR), or otherwise stated, were made with the volume set to or near unity gain (0dB) on the volume control. SNR measurements were made
with the volume control set to maximum. At the unity gain volume position, to achieve 10W into 8 ohms, 2Vrms was required at the line-level input and 11mVrms at the phono input. For the digital
inputs, 0dBFS required the volume set to -8.5dB to achieve 10W into 8 ohms at the output.
Because the TDAI-1120 uses a digital amplifier technology that exhibits considerable noise just above 20kHz (see FFTs below), our typical input bandwidth filter setting of 10Hz-90kHz was necessarily
changed to 10Hz-22.4kHz for all measurements, except for frequency response and for FFTs. In addition, THD versus frequency sweeps were limited to 6kHz to adequately capture the second and third
signal harmonics with the restricted bandwidth setting.
Volume-control accuracy (measured at speaker outputs): left-right channel tracking
Volume position Channel deviation
-60 0.039dB
-40 0.052dB
-30 0.055dB
-20 0.053dB
-10 0.057dB
0 0.058dB
5 0.057dB
12 0.055dB
Published specifications vs. our primary measurements
The table below summarizes the measurements published by Lyngdorf Audio for the TDAI-1120 compared directly against our own. The published specifications are sourced from Lyngdorf’s website, either
directly or from the manual available for download, or a combination thereof. With the exception of frequency response, where the Audio Precision bandwidth is set at its maximum (DC to 1MHz), assume,
unless otherwise stated, a measurement input bandwidth of 10Hz to 22.4kHz, and the worst-case measured result between the left and right channels.
Parameter Manufacturer SoundStage! Lab
Rated output power into 8 ohms 60W 71W (1% THD)
Rated output power into 4 ohms 120W 136W (1% THD)
Frequency response (20Hz-20kHz) ±0.5dB -1.5, +0.5dB
THD (60W, 20Hz - 6kHz) <0.05% <0.03%
THD+N (1kHz, 1W, 8ohm, A-Weighted) <0.04% <0.085%
THD+N (1kHz, 1W, 4ohm, A-Weighted) <0.04% <0.065%
Phono Input Impedance 47k ohms 47.5k ohms
Line-level output impedance 75 ohms 76 ohms
Our primary measurements revealed the following using the line-level inputs (unless specified, assume a 1kHz sine wave, 10W output, 8-ohm loading, 10Hz to 22.4kHz bandwidth):
Parameter Left channel Right channel
Maximum output power into 8 ohms (1% THD+N, unweighted) 71W 71W
Maximum output power into 4 ohms (1% THD+N, unweighted) 136W 136W
Continuous dynamic power test (5 minutes, both channels driven) passed passed
Crosstalk, one channel driven (10kHz) -68.9dB -70.2dB
Damping factor 39 75
Clipping headroom (8 ohms) 0.7dB 0.7dB
Gain (maximum volume) 25.6dB 25.6dB
IMD ratio (18kHz + 19kHz stimulus tones) <67dB <67dB
Input impedance (line input) 10.2k ohms 10.2k ohms
Input sensitivity (maximum volume) 1.15Vrms 1.15Vrms
Noise level (A-weighted) <300uVrms <300uVrms
Noise level (unweighted) <450uVrms <450uVrms
Output impedance (line out) 76 ohms 76 ohms
Signal-to-noise ratio (full rated power, A-weighted) 94.6dB 94.9dB
Signal-to-noise ratio (full rated power, 20Hz to 20kHz) 91.1dB 91.1dB
Dynamic range (full rated power, A-weighted, digital 24/96) 108.9dB 109.3dB
Dynamic range (full rated power, A-weighted, digital 16/44.1) 95.7dB 95.8dB
THD ratio (unweighted) <0.011% <0.008%
THD ratio (unweighted, digital 24/96) <0.009% <0.008%
THD ratio (unweighted, digital 16/44.1) <0.009% <0.008%
THD+N ratio (A-weighted) <0.011% <0.009%
THD+N ratio (A-weighted, digital 24/96) <0.011% <0.008%
THD+N ratio (A-weighted, digital 16/44.1) <0.011% <0.008%
THD+N ratio (unweighted) <0.011% <0.009%
Minimum observed line AC voltage 124VAC 124VAC
For the continuous dynamic power test, the TDAI-1120 was able to sustain 130W into 4 ohms using an 80Hz tone for 500 ms, alternating with a signal at -10dB of the peak (13W) for 5 seconds, for 5
continuous minutes without inducing a fault or the initiation of a protective circuit. This test is meant to simulate sporadic dynamic bass peaks in music and movies. During the test, the top of the
TDAI-1120 was just slightly warm to the touch.
Our primary measurements revealed the following using the phono-level input (unless specified, assume a 1kHz sine wave, 10W output, 8-ohm loading, 10Hz to 22.4kHz bandwidth):
Parameter Left channel Right channel
Crosstalk, one channel driven (10kHz) -62.6dB -69.2dB
Gain (default phono preamplifier) 44.6dB 44.6dB
IMD ratio (18kHz and 19 kHz stimulus tones) <-66dB <-67dB
IMD ratio (3kHz and 4kHz stimulus tones) <-76dB <-78dB
Input impedance 47.5k ohms 46.5k ohms
Input sensitivity (maximum volume) 6.7mVrms 6.7mVrms
Noise level (A-weighted) <400uVrms <400uVrms
Noise level (unweighted) <1000uVrms <1000uVrms
Overload margin (relative 5mVrms input, 1kHz) 15.4dB 15.3dB
Signal-to-noise ratio (full rated power, A-weighted) 84.5dB 84.5dB
Signal-to-noise ratio (full rated power, 20Hz to 20kHz) 76.2dB 76.9dB
THD (unweighted) <0.01% <0.008%
THD+N (A-weighted) <0.012% <0.009%
THD+N (unweighted) <0.014% <0.013%
Frequency response (8-ohm loading, line-level input)
In our measured frequency-response plot above, the TDAI-1120 is nearly flat within the audioband (20Hz to 20kHz). At the audioband extremes the TDAI-1120 is -1.5dB down at 20Hz and +0.4dB at 20kHz.
These data do not quite corroborate Lyngdorf’s claim of 20Hz to 20kHz (+/-0.5dB). The TDAI-1120 cannot be considered a high-bandwidth audio device as the -3dB point is just shy of 50kHz. In the chart
above and most of the charts below, only a single trace may be visible. This is because the left channel (blue or purple trace) is performing identically to the right channel (red or green trace),
and so they perfectly overlap, indicating that the two channels are ideally matched.
Frequency response vs. input type (8-ohm loading, left channel only)
The plot above shows the TDAI-1120’s frequency response as a function of input type. The green trace is the same analog input data from the previous graph. The blue trace is for a 16-bit/44.1kHz
dithered digital signal from 5Hz to 22kHz, the purple trace is for a 24/96 dithered digital signal from 5Hz to 48kHz, and, finally, pink is 24/192 from 5Hz to 96kHz. The behavior at low frequencies
is the same across input types: -1.5dB at 20Hz. The behavior approaching 20kHz for all input types is also identical, in that there is a rise in level beginning around 5kHz. However, the 16/44.1
signal exhibits a sharp, brick-wall-type attenuation right around 20kHz. The 24/96 digital input also exhibits a sharp, brick-wall-type attenuation near the limit of its frequency range (48kHz),
peaking around +2.2dB at around 40kHz. The 24/192 digital input frequency response is identical to the 24/96 plot, despite the extended theoretical range up to 96kHz.
Frequency response (8-ohm loading, MM phono input)
The plot above shows frequency response for the phono input (MM), and shows the same maximum deviation of -1.5dB at 20Hz and +0.4dB at 20kHz as seen for the line-level analog input. What is shown is
the deviation from the RIAA curve, where the input signal sweep is EQd with an inverted RIAA curve supplied by Audio Precision (i.e., zero deviation would yield a flat line at 0dB). In the flat
portion of the curve, the worst-case RIAA and channel-to-channel deviations are from 5 to 6kHz, where the left channel is -0.1dB down and the right about -0.2dB down.
Phase response (MM input)
Above is the phase response plot for the phono input from 20Hz to 20kHz. Since the phono input must implement the RIAA equalization curve, which ranges from +19.9dB
(20Hz) to -32.6dB (90kHz), phase shift at the output is inevitable. Here we find a worst case -400 degrees at 200-300Hz. This is an indication that the TDAI-1120 likely inverts polarity on the phono
input. If we look at the phase response at 20Hz, the phase shift is 200 degrees. If we assume a polarity inversion (180 degrees), then there would only be 20 degrees of extra phase shift at 20Hz.
Digital linearity (16/44.1 and 24/96 data)
The chart above shows the results of a linearity test for the coaxial digital input (the optical input performed identically) for both 16/44.1 (blue/red) and 24/96 (purple/green) input data, measured
at the line-level output of the TDAI-1120. The digital input is swept with a dithered 1kHz signal from -120dBFS to 0dBFS, and the output is analyzed by the APx555. The ideal response would be a
straight flat line at 0dB. Both digital input types performed similarly, approaching the ideal 0dB relative level from 0dBFS to -90dBFS. At -110dBFS, both channels at 16/44.1 overshot the ideal
output signal amplitude by about 1dB, while the left/right channels at 24/96 undershot by 1dB.
Impulse response (16/44.1 and 24/96 data)
The graph above shows the impulse responses for a -20dBFS 16/44.1 dithered input stimulus (red), and 24/96 dithered input stimulus (green), measured at the line-level output of the TDAI-1120. The
shape is similar to that of a typical sinc function filter, although with less pre- and post-ringing for the 24/96 input data.
J-Test (coaxial and optical inputs)
The plot above shows the results of the J-Test test for the optical digital input (the coaxial input performed identically) measured at the line-level output of the TDAI-1120. J-Test was developed by
Julian Dunn in the 1990s. It is a test signal: specifically, a -3dBFS undithered 12kHz square wave sampled (in this case) at 48kHz (24 bits). Since even the first odd harmonic (i.e., 36kHz) of the 12kHz
square wave is removed by the bandwidth limitation of the sampling rate, we are left with a 12kHz sine wave. In addition, an undithered 250Hz square wave at -144dBFS is mixed with the signal. This
test file causes the 22 least significant bits to constantly toggle, which produces strong jitter spectral components at the 250Hz rate and its odd harmonics. The test file shows how susceptible the
DAC and delivery interface are to jitter, which would manifest as peaks above the noise floor at 500Hz intervals (e.g., 250Hz, 750Hz, 1250Hz, etc.). Note that the alternating peaks are in the test
file itself, but at levels of -144dBFS and below. The test file can also be used in conjunction with artificially injected sine-wave jitter by the Audio Precision, to show how well the DAC rejects jitter.
As mentioned, both the coaxial and optical S/PDIF TDAI-1120 inputs performed identically, showing only spurious peaks in the audioband at -125dBFS and below. When sine-wave jitter was injected at
2kHz, which would manifest as sidebands at 10kHz and 14kHz without any jitter rejection, absolutely no peaks were observed above the noise floor with both inputs on the TDAI-1120, even at the maximum
jitter level available of 1592ns, indicating excellent jitter immunity.
Wideband FFT spectrum of white noise and 19.1kHz sine-wave tone (coaxial input)
The plot above shows a fast Fourier transform (FFT) of the TDAI-1120’s line-level output with white noise at -4 dBFS (blue/red), and a 19.1 kHz sine wave at 0dBFS fed to the coaxial digital input,
sampled at 16/44.1. The sharp roll-off above 20kHz in the white-noise spectrum shows the implementation of a brick-wall-type reconstruction filter. The aliased image at 25kHz is extremely low at
-125dB, and any resultant intermodulated signals (between either the alias or signal harmonics) within the audioband are all very low, below -120dBrA, or 0.0001%. The second, third, and fourth
distortion harmonics (38.2, 57.3, 76.4kHz) of the 19.1kHz tone are much higher in amplitude, lying between -80 and -95dBrA, or 0.01% and 0.002%.
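The dBrA-to-percent conversions quoted throughout this report follow the standard amplitude relation, percent = 100 × 10^(dB/20). A quick sketch of the arithmetic:

```python
def db_to_percent(db):
    """Convert a relative amplitude in dB (re: 0dBrA) to a percentage."""
    return 100 * 10 ** (db / 20)

print(db_to_percent(-80))   # 0.01   (% -> matches "-80dBrA, or 0.01%")
print(db_to_percent(-120))  # 0.0001 (% -> matches "-120dBrA, or 0.0001%")
```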
RMS level vs. frequency vs. load impedance (1W, left channel only)
The plots above show RMS level (relative to 0dBrA, which is 1W into 8 ohms or 2.83Vrms) as a function of frequency, for the analog line-level input swept from 5Hz to 100kHz. The blue plot is into an
8-ohm load, the purple is into a 4-ohm load, the pink is an actual speaker (Focal Chora 806, measurements can be found here), and the cyan is no load connected. We find a maximum deviation within the
audioband of about 3dB (at 20kHz), which is an indication of a low damping factor, or high output impedance. The maximum variation in RMS level when a real speaker was used as a load is smaller,
deviating by a little less than 0.5dB within the flat portion of the curve (30Hz to 20kHz), with the lowest RMS level, which would correspond to the lowest impedance point for the load, exhibited
around 200Hz, and the highest RMS level, which would correspond to the highest impedance point for the load, at around 5kHz.
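The load dependence above follows from the voltage divider formed by the amplifier's output impedance and the load, and the 0dBrA reference itself comes from P = V²/R (2.83Vrms into 8 ohms is 1W). A sketch of both relations; the 0.2-ohm output impedance below is illustrative (roughly what a damping factor of 40 into 8 ohms would imply), not a measured value:

```python
import math

def power_watts(v_rms, load_ohms):
    """P = V^2 / R."""
    return v_rms ** 2 / load_ohms

def level_drop_db(z_out, z_load):
    """Level at the load relative to an ideal zero-impedance source."""
    return 20 * math.log10(z_load / (z_load + z_out))

print(power_watts(2.83, 8))    # ~1.0 W (the 0dBrA reference)
print(level_drop_db(0.2, 8))   # ~-0.21 dB into 8 ohms
print(level_drop_db(0.2, 4))   # ~-0.42 dB into 4 ohms (halving the load doubles the drop)
```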
THD ratio (unweighted) vs. frequency vs. output power
The plot above shows THD ratios at the output into 8 ohms as a function of frequency (20Hz to 6kHz) for a sine-wave stimulus at the line-level input. The blue and red plots are for left and right
channels at 1W output into 8 ohms, purple/green at 10W, and pink/orange at the full rated power of 60W. The power was varied using the volume control. All three THD plots are relatively flat. The 10W
data exhibited the lowest THD values, between 0.006% and 0.015%, although there is an almost 5dB difference between channels in favor of the right channel. The 1W data show THD values between 0.01
and 0.02%. At the full rated power of 60W, THD values ranged from 0.01 to 0.03%.
THD ratio (unweighted) vs. frequency at 10W (phono input)
Next is a THD ratio as a function of frequency plot for the phono input measured across an 8-ohm load at 10W. The input sweep is EQ’d with an inverted RIAA curve. The THD values vary from about
0.006% to just above 0.01%, but are fairly flat from 20Hz to 6kHz. Again, the right channel outperformed the left by as much as 5dB.
THD ratio (unweighted) vs. output power at 1kHz into 4 and 8 ohms
The plot above shows THD ratios measured at the output of the TDAI-1120 as a function of output power for the analog line-level-input, for an 8-ohm load (blue/red for left/right channels) and a 4-ohm
load (purple/green for left/right channels). Although there are fluctuations before the “knee,” both the 4-ohm and 8-ohm data are roughly the same, ranging from 0.005% to 0.02%. The “knee” in the
8-ohm data occurs just past 50W, hitting the 1% THD mark between 60 and 70W. For the 4-ohm data, the “knee” occurs around 100W, hitting the 1% THD around 130W.
THD+N ratio (unweighted) vs. output power at 1kHz into 4 and 8 ohms
The chart above shows THD+N ratios measured at the output of the TDAI-1120 as a function of output power for the analog line level-input, for an 8-ohm load (blue/red for left/right channels) and a
4-ohm load (purple/green for left/right channels). There’s a distinct 5dB jump in THD+N (also visible in the THD plot above, but to a lesser degree) when the output voltage is around 1.5-1.6Vrms (
i.e., 0.3W into 8 ohms, 0.7W into 4 ohms), and then a sharp 10dB decrease in THD+N at around 4-5Vrms at the output (i.e., 2/5W into 4/8 ohms). This behavior was repeatable over multiple measurement
trials. Overall, THD+N values before the “knee” ranged from around 0.01% (3 to 10W) to 0.2% (50mW).
THD ratio (unweighted) vs. frequency at 8, 4, and 2 ohms (left channel only)
The chart above shows THD ratios measured at the output of the TDAI-1120 as a function of load (8/4/2 ohms) for a constant input voltage that yielded 5W at the output into 8 ohms (and roughly 10W
into 4 ohms, and 20W into 2 ohms) for the analog line-level input. The 8-ohm load is the blue trace, the 4-ohm load the purple trace, and the 2-ohm load the pink trace. We find increasing levels of
THD from 8 to 4 to 2 ohms, with about a 5dB increase with each halving of the load. Overall, even with a 2-ohm load at roughly 20W, THD values ranged from 0.015% at around 1kHz to just below 0.04%.
FFT spectrum – 1kHz (line-level input)
Shown above is the fast Fourier transform (FFT) for a 1kHz input sine-wave stimulus, measured at the output across an 8-ohm load at 10W for the analog line-level input. We see that the signal’s
second harmonic, at 2kHz, is at -80dBrA (left), or 0.01%, and -95dBrA (right), or 0.002%. The third harmonic is at -85dBrA, or 0.006%; the remaining signal harmonics range from -90dBrA to -120dBrA,
or 0.003 and 0.0001%. Just above 20kHz, we see a steep rise in the noise floor, up to -70dBrA, or 0.03%. Below 1kHz, we see no power-supply noise artifacts.
FFT spectrum – 1kHz (digital input, 16/44.1 data at 0dBFS)
Shown above is the fast Fourier transform (FFT) for a 1kHz input sine-wave stimulus, measured at the output across an 8-ohm load at 10W for the coaxial digital input, sampled at 16/44.1. We see
essentially the same signal harmonic profile as with the analog input. The noise floor, however, is reduced here below 100Hz by about 15dB compared to the analog input.
FFT spectrum – 1kHz (digital input, 24/96 data at 0dBFS)
Shown above is the fast Fourier transform (FFT) for a 1kHz input sine-wave stimulus, measured at the output across an 8-ohm load at 10W for the coaxial digital input, sampled at 24/96. We see
essentially the same signal harmonic profile as with the analog input and the digital coaxial input with a 16/44.1 signal. The noise floor, however, is reduced here above 5kHz by about 5dB compared
to the 16/44.1 sampled FFT above.
FFT spectrum – 1kHz (digital input, 16/44.1 data at -90dBFS)
Shown above is the FFT for a 1kHz -90dBFS dithered 16/44.1 input sine-wave stimulus at the coaxial digital input, measured at the output across an 8-ohm load. We only see the 1kHz primary signal
peak, at the correct amplitude, with a slightly raised noise floor at low frequencies with respect to the 0dBFS FFT above.
FFT spectrum – 1kHz (digital input, 24/96 data at -90dBFS)
Shown above is the FFT for a 1kHz -90dBFS dithered 24/96 input sine-wave stimulus at the coaxial digital input, measured at the output across an 8-ohm load. We only see the 1kHz primary signal peak,
at the correct amplitude, with a slightly raised noise floor at low frequencies with respect to the 0dBFS FFT above.
FFT spectrum – 1kHz (MM phono input)
Shown above is the FFT for a 1kHz input sine-wave stimulus, measured at the output across an 8-ohm load at 10W for the MM phono input. We see essentially the same signal harmonic profile as with the
analog line-level and digital inputs. The highest peak from power-supply noise is at the fundamental (60Hz), reaching about -85dBrA, or 0.006%, and the third noise harmonic (180Hz) is just below
-90dBrA, or 0.003%.
FFT spectrum – 50Hz (line-level input)
Shown above is the FFT for a 50Hz input sine-wave stimulus measured at the output across an 8-ohm load at 10W for the line-level input. The X axis is zoomed in from 40Hz to 1kHz, so that peaks from noise artifacts can be directly compared against peaks from the harmonics of the signal. Once again, there are no noise peaks to speak of. Instead, the most predominant peaks are those of the signal’s second (100Hz) harmonic at -80/-95dBrA (left/right), or 0.01/0.002%, and third harmonic (150Hz) at about -85dBrA, or 0.006%.
FFT spectrum – 50Hz (MM phono input)
Shown above is the FFT for a 50Hz input sine-wave stimulus measured at the output across an 8-ohm load at 10W for the phono input. The most predominant peaks are those of the signal’s second (100Hz) harmonic at -80/-95dBrA (left/right), or 0.01/0.002%, and third harmonic (150Hz) at about -85dBrA, or 0.006%. Peaks due to power-supply noise are visible at 60Hz (-85dBrA or 0.006%) and 180Hz
(-90dBrA or 0.003%).
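The dBrA figures quoted throughout these measurements correspond to the percentage values via a standard relative-amplitude conversion. A minimal sketch of that conversion (an illustration, not part of the original measurement workflow):

```python
def dbra_to_percent(dbra):
    """Convert a level in dB relative to the reference amplitude (dBrA)
    to a percentage of the reference amplitude."""
    return 100 * 10 ** (dbra / 20)

# -85dBrA is about 0.006% and -90dBrA about 0.003%, matching the text
print(round(dbra_to_percent(-85), 4))  # → 0.0056
print(round(dbra_to_percent(-90), 4))  # → 0.0032
```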
Intermodulation distortion FFT (18kHz + 19kHz summed stimulus, line-level input)
Shown above is an FFT of the intermodulation distortion (IMD) products for an 18kHz + 19kHz summed sine-wave stimulus tone measured at the output across an 8-ohm load at 10W for the analog line-level input. The input RMS values are set at -6.02dBrA so that, if summed for a mean frequency of 18.5kHz, they would yield 10W (0dBrA) into 8 ohms at the output. We find that the second-order modulation product (i.e., the difference signal of 1kHz) is at -85/-95dBrA (left/right channels), or 0.006/0.002%, while the third-order modulation products, at 17kHz and 20kHz, are higher, at around -80dBrA, or 0.01%.
Intermodulation distortion FFT (18kHz + 19kHz summed stimulus, MM phono input)
Shown above is an FFT of the intermodulation distortion (IMD) products for an 18kHz + 19kHz summed sine-wave stimulus tone measured at the output across an 8-ohm load at 10W for the phono input. Here
we find essentially the same result as with the line-level analog input, with the exception of the expected elevated noise floor at low frequencies due to the RIAA equalization, and the absence of a
2kHz peak at -110dBrA, or 0.0003%, for the left channel that was visible in the line-level IMD FFT.
Square-wave response (10kHz)
Above is the 10kHz square-wave response using the analog line-level input, at roughly 10W into 8 ohms. Due to limitations inherent to the Audio Precision APx555 B Series analyzer, this graph should
not be used to infer or extrapolate the TDAI-1120’s slew-rate performance; rather, it should be seen as a qualitative representation of its limited bandwidth. An ideal square wave can be represented
as the sum of a sine wave and an infinite series of its odd-order harmonics (e.g., 10kHz + 30kHz + 50kHz + 70kHz . . .). A limited bandwidth will show only the sum of the lower-order harmonics, which
may result in noticeable undershoot and/or overshoot, and softening of the edges. The TDAI-1120’s reproduction of the 10kHz square wave is poor, with noticeable overshoot and undershoot due to its limited bandwidth, and with the 400kHz switching-oscillator frequency used in the digital amplifier section clearly visible modulating the waveform.
FFT spectrum of 400kHz switching frequency relative to a 1kHz tone
The TDAI-1120’s class-D amplifier relies on a switching oscillator to convert the input signal to a pulse-width modulated (PWM) square-wave (on/off) signal before sending the signal through a low-pass
filter to generate an output signal. The TDAI-1120 oscillator switches at a rate of about 400kHz. This chart plots an FFT spectrum of the amplifier’s output at 10W into 8 ohms as it’s fed a 1kHz sine
wave. We can see that the 400kHz peak is quite evident, at -25dBrA below the main signal level. There are also peaks at 800kHz and 1200kHz (the second and third harmonics of the 400kHz peak, at -50 and -65dBrA). Those three peaks—the fundamental and its second and third harmonics—are direct results of the switching oscillators in the TDAI-1120 amp’s left- and right-channel modules. The noise around those very-high-frequency signals is in the signal, but all of it is far above the audioband—and therefore inaudible—as well as so high in frequency that any loudspeaker the amplifier is driving should filter it all out anyway.
Damping factor vs. frequency (20Hz to 20kHz)
The final chart above is the damping factor as a function of frequency. Both channels show a general trend of a higher damping factor at lower frequencies, and lower damping factor at higher
frequencies. The right channel outperformed the left with a peak value around 80 from 20Hz to 1kHz, while the left channel achieved a damping factor of around 40 within the same frequency range. Both
channels’ damping factors are down to around 10 at 20kHz.
Diego Estan
Electronics Measurement Specialist
Why real interest rates can’t be calculated
Would it be possible in a world without money to establish the rate of return on present goods in terms of future goods? In a world without money, all that one would have are the rates of exchange between various present and future real goods. For instance, one present apple is exchanged for two potatoes in one year’s time. Or, one shirt is exchanged for three tomatoes in one year’s time.
All that we have here are various ratios.
There is, however, no way to establish from these ratios what the rate of return is for one present apple in terms of future potatoes (it is not possible to calculate a percentage, since potatoes and apples are not the same good). Likewise, we cannot establish the rate of return on a shirt in terms of future tomatoes. In other words, we can only establish that one present apple is exchanged for two future potatoes, and one present shirt is exchanged for three future tomatoes. Only in the framework of the existence of money can a rate of return be established.
For instance, the time preference of a baker, which is established in accordance with his particular set-up, determines that he will be ready to lend ten dollars – which he has earned by selling ten loaves of bread – for a borrower’s promise to repay eleven dollars in one year’s time.
Similarly, the time preference of a shoemaker, which is formed in accordance with his particular set-up, determines that he will be a willing borrower. In short, once the deal is accomplished, both parties to this deal have benefited. The baker will get eleven dollars in one year’s time, which he values much more than his present ten dollars. For the shoemaker, the value of the present ten dollars exceeds the value of eleven dollars in one year’s time.
As one can see here, both money and the real factor (time preferences) are involved in establishing the market interest rate, which is 10%. Note that the baker has first exchanged ten loaves of bread for money, i.e. ten dollars. He then lends the ten dollars against eleven dollars in one year’s time. The interest rate that he secures for himself is 10%. In one year’s time the baker can decide what goods to purchase with the eleven dollars obtained. As far as the shoemaker is concerned, he must generate enough shoes to enable him to secure at least eleven dollars to repay the loan to the baker.
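The 10% figure in the baker example is simply the percentage premium of the future sum over the present sum. A trivial sketch of that arithmetic:

```python
def implied_interest_rate(present_amount, future_amount):
    """Rate of return implied by lending present_amount today
    against a promise of future_amount in one year's time."""
    return (future_amount - present_amount) / present_amount

# The baker lends $10 against a promise of $11 in one year's time
print(implied_interest_rate(10, 11))  # → 0.1, i.e. 10%
```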
Observe, however, that without the existence of money – the medium of exchange – the baker is not able to establish how many future goods he must be paid for his ten loaves of bread to realize a rate of return of 10%.
Consequently, there cannot be any separation between real and financial market interest rates. There is only one interest rate, which is set by the interaction between individuals’ time preferences
and the supply and demand for money.
Since the influences of the demand and supply for money and of individuals’ time preferences on interest-rate formation are intertwined, there is no way to isolate these influences. Hence, the commonly accepted practice of calculating the so-called real interest rate by subtracting percentage changes in the consumer price index from the market interest rate is erroneous.
Can you calculate time in Smartsheet?
I am setting up a new sheet for when employees return to the office with staggered start times and duration of their lunch breaks. Is there a formula that would calculate their end time depending on
what time they started? i.e., I entered 7:15 and take a 45 minute lunch, automatically fill in 4:00 for my end time that day. Another employee may start at 9 and take a 30 minute lunch.
I have looked at the formulas posted in the community and am not finding anything. I know Excel has a Time formula but I'm not seeing it in Smartsheet. If this isn't available, is there a work around
until it would be available? Is a time function something that is being worked on?
Thank you,
Best Answer
• HERE IS A PUBLISHED SHEET that shows the solution described below.
I used a helper column for this particular solution. It can be done in a single column, but that results in a bit of a monster formula that can be tough to troubleshoot if anything does happen to break or needs tweaking.
I called the helper column End and the result is in the [End Time] column.
End:
=IF(VALUE(LEFT([Start Time]@row, FIND(":", [Start Time]@row) - 1)) <> 12, IF(CONTAINS("p", [Start Time]@row), 12), IF(CONTAINS("a", [Start Time]@row), -12)) + VALUE(LEFT([Start Time]@row, FIND(":", [Start Time]@row) - 1)) + (VALUE(MID([Start Time]@row, FIND(":", [Start Time]@row) + 1, 2)) / 60) + (Lunch@row / 60) + 8
[End Time]:
=MOD(INT(End@row), 12) + ":" + IF((End@row - INT(End@row)) * 60 < 10, "0") + (End@row - INT(End@row)) * 60 + IF(End@row >= 12, "pm", "am")
• There isn't a direct way to calculate time in Smartsheet yet, but I do believe it is being worked on.
In the meantime, there are a few different options for some workarounds.
How is the data being entered?
Are you using 12 or 24 hour time?
I see you entered colons in your post. Is that going to be consistent?
Would lunches always be set durations such as 30 or 45 minutes, or is it possible to take something different such as 32 or 48 minutes?
Is it always going to be the same amount of working hours (8) minus the lunch duration or do other numbers of working hours need to be accounted for?
Will it overlap midnight such as clocking in at 8pm, taking a 30 minute lunch, and clocking out at 4:30am?
I know it seems like a lot of questions (because it is), but I like to keep things as simple as possible. Some solutions require much more complexity than others, so getting all of the details
worked out first allows us to cut out as much of that complexity as we can.
• How is the data being entered? Manual entry by employee
Are you using 12 or 24 hour time? 12 hour time
I see you entered colons in your post. Is that going to be consistent? Yes
Would lunches always be set durations such as 30 or 45 minutes, or is it possible to take something different such as 32 or 48 minutes? They would always be 30, 45, or 60 minutes
Is it always going to be the same amount of working hours (8) minus the lunch duration or do other numbers of working hours need to be accounted for? Yes, working 8 hours per day
Will it overlap midnight such as clocking in at 8pm, taking a 30 minute lunch, and clocking out at 4:30am? No
• Ok. Let me work something up, and I'll get back to you either with more questions or with a solution.
• HERE IS A PUBLISHED SHEET that shows the solution described below.
I used a helper column for this particular solution. It can be done in a single column, but that results in a bit of a monster formula that can be tough to troubleshoot if anything does happen to break or needs tweaking.
I called the helper column End and the result is in the [End Time] column.
End:
=IF(VALUE(LEFT([Start Time]@row, FIND(":", [Start Time]@row) - 1)) <> 12, IF(CONTAINS("p", [Start Time]@row), 12), IF(CONTAINS("a", [Start Time]@row), -12)) + VALUE(LEFT([Start Time]@row, FIND(":", [Start Time]@row) - 1)) + (VALUE(MID([Start Time]@row, FIND(":", [Start Time]@row) + 1, 2)) / 60) + (Lunch@row / 60) + 8
[End Time]:
=MOD(INT(End@row), 12) + ":" + IF((End@row - INT(End@row)) * 60 < 10, "0") + (End@row - INT(End@row)) * 60 + IF(End@row >= 12, "pm", "am")
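For anyone who finds the nested Smartsheet functions hard to follow, the same logic can be sketched in Python. This is a standalone illustration, not something that runs in Smartsheet; the 8-hour workday is hard-coded, matching the formulas above:

```python
def end_time(start, lunch_minutes, work_hours=8):
    """Compute a 12-hour end time from a 12-hour start such as '7:15am'."""
    hour_text, rest = start.split(":")
    hours = int(hour_text)
    minutes = int(rest[:2])
    meridiem = rest[2:].strip().lower()
    # Convert to 24-hour decimal hours (mirrors the helper "End" column)
    if meridiem.startswith("p") and hours != 12:
        hours += 12
    elif meridiem.startswith("a") and hours == 12:
        hours -= 12
    end = hours + minutes / 60 + lunch_minutes / 60 + work_hours
    # Format back to 12-hour time (mirrors the "[End Time]" formula)
    out_hour = int(end) % 12 or 12
    out_minutes = round((end - int(end)) * 60)
    suffix = "pm" if end >= 12 else "am"
    return f"{out_hour}:{out_minutes:02d}{suffix}"

print(end_time("7:15am", 45))   # → 4:00pm, as in the original question
print(end_time("12:00pm", 30))  # → 8:30pm
```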
• This works perfectly. Thank you!
• Excellent. Happy to help! 👍️
• Dear Paul,
Is there any simple way to calculate hours between two time stamps? I am trying to make a sheet to calculate overtime.
my team would insert the time they went in and the time that went out and the hours would be calculated based on the In and out.
Thank you
• @Hadrian Mansueto Take a look in THIS THREAD at the solutions there and let me know if you can find something that works for you. There are solutions scattered about, but I think all of them
should be copied over to page 3.
• Hi, looking at the sheet you linked, the end times do not add up correctly. I'm trying to use this example, but stuck at that point.
• For me it works well and helped me a lot.Thanks @Paul Newcome !!!
• Hi @Paul Newcome - In reviewing the smartsheet you linked to, it appears that something isn't calculating correctly. For example, the first row is showing a 30 minute lunch starting at 12:00 pm
ends at 8:30 pm. The formulas are more complex than my brain can handle, so I'm not sure how to play with it to try and adjust.
I also found this solution: https://community.smartsheet.com/discussion/76151/creating-an-end-time-based-on-an-amount-of-time-and-a-start-time
However, that one requires multiple helper fields and I liked the idea of only one extra field if that is possible.
I know it's been a long time since this was posted, and I appreciate any input/update!
• @carriemcintyre That is correct though. If you start at noon, work for 8 hours, and include an additional 30 minute lunch break, that would put your end time at 8:30pm.
• @Paul Newcome - oh duh! knew it was me. talk about cognitive bias, i was looking for a formula to calculate the number of minutes between two times without having to use 24 hour times, and so I
read this sheet as calculating the end time of lunch.
• @carriemcintyre There are a number of posts in the below thread that should help you out. It is easiest to calculate time differences when using 24 hour times, but you can use 12 hour times and
then use an additional helper column to convert it to 24 hour times on the "back-end". This is also explained and shown throughout the thread.
If the implementation supports an extended floating-point type with the properties, as specified by ISO/IEC 60559, of radix (b) of 2, storage width in bits (k) of 16, precision in bits (p) of 8, maximum exponent (emax) of 127, and exponent field width in bits (w) of 8, then the typedef-name std::bfloat16_t is defined in the header <stdfloat> and names such a type, the macro __STDCPP_BFLOAT16_T__ is defined, and the floating-point literal suffixes bf16 and BF16 are supported.
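The listed properties (16-bit storage, radix 2, 8 bits of precision, 8-bit exponent field) make bfloat16 a truncated IEEE 754 single-precision float: keeping the top 16 bits of a binary32 value preserves the sign, the full 8-bit exponent, and 7 stored fraction bits. A small sketch of that relationship, independent of any C++23 library support:

```python
import struct

def to_bfloat16_bits(value):
    """Truncate a binary32 float to its 16 bfloat16 bits
    (sign, 8 exponent bits, 7 stored fraction bits)."""
    (bits,) = struct.unpack("<I", struct.pack("<f", value))
    return bits >> 16  # truncation, no rounding

def from_bfloat16_bits(bits16):
    (value,) = struct.unpack("<f", struct.pack("<I", bits16 << 16))
    return value

# Only about 2-3 decimal digits survive the round trip
print(from_bfloat16_bits(to_bfloat16_bits(3.14159)))  # → 3.140625
```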
12.15 data.heap - Heap ¶
Module: data.heap ¶
A heap is a data container that allows efficient retrieval of the minimum or maximum entry. Unlike a <tree-map> (see Treemaps), which always keeps all entries in order, a heap only cares about the minimum or the maximum of the current set; the other entries are only partially ordered, and are reordered when the minimum/maximum entry is removed. Hence it is more efficient than a treemap if all you need is the minimum/maximum value. Besides, binary heaps can store entries in a packed, memory-efficient way.
Class: <binary-heap> ¶
{data.heap} An implementation of a binary heap. Internally it uses min-max heap, so that you can find both minimum and maximum value in O(1). Pushing a new value and popping the minimum/maximum
value are both O(log n).
It also stores its values in a flat vector, which is a lot more compact than a general tree structure that needs a few pointers per node. By default it uses a sparse vector for the backing storage, allowing virtually unlimited capacity (see Sparse vectors). But you can use an ordinary vector or a uniform vector as the backing storage instead.
A binary heap isn’t an MT-safe structure; you must put it in an atom or use mutexes if multiple threads can access it (see Synchronization primitives).
Function: make-binary-heap :key comparator storage key ¶
{data.heap} Creates and returns a new binary heap.
The comparator keyword argument specifies how to compare the entries. It must have a comparison procedure or an ordering predicate. The default is default-comparator. See Basic comparators, for the details of comparators.
The storage keyword argument gives an alternative backing storage. It must be either a vector, a uniform vector, or an instance of a sparse vector (see Sparse vectors). The default is an instance of <sparse-vector>. If you pass a vector or a uniform vector, it determines the maximum number of elements the heap can hold. The heap won’t extend the storage once it gets full.
The key keyword argument must be a procedure; it is applied on each entry before comparison. Using key procedure allows you to store auxiliary data other than the actual value to be compared. The
following example shows the entries are compared by their car’s:
(define *heap* (make-binary-heap :key car))
(binary-heap-push! *heap* (cons 1 'a))
(binary-heap-push! *heap* (cons 3 'b))
(binary-heap-push! *heap* (cons 1 'c))
(binary-heap-find-min *heap*) ⇒ (1 . c)
(binary-heap-find-max *heap*) ⇒ (3 . b)
Function: build-binary-heap storage :key comparator key num-entries ¶
{data.heap} Create a heap from the data in storage, and returns it. (Sometimes this operation is called heapify.) This allows you to create a heap without allocating a new storage. The comparator
and key arguments are the same as make-binary-heap.
Storage must be either a vector, a uniform vector, or an instance of a sparse vector. The storage is modified to satisfy the heap property, and will be used as the backing storage of the created
heap. Since the storage will be owned by the heap, you shouldn’t modify the storage later.
The storage is supposed to have keys from index 0 below num-entries. If num-entries is omitted or #f, the entire vector or uniform vector, or entries up to sparse-vector-num-entries on a sparse vector, is used.
Function: binary-heap-copy heap ¶
{data.heap} Copy the heap. The backing storage is also copied.
Function: binary-heap-clear! heap ¶
{data.heap} Empty the heap.
Function: binary-heap-num-entries heap ¶
{data.heap} Returns the current number of entries in the heap.
Function: binary-heap-empty? heap ¶
{data.heap} Returns #t if the heap is empty, #f otherwise.
Function: binary-heap-push! heap item ¶
{data.heap} Insert item into the heap. This is an O(log n) operation. If the heap is already full, an error is raised.
Function: binary-heap-find-min heap :optional fallback ¶
Function: binary-heap-find-max heap :optional fallback ¶
{data.heap} Returns the minimum and the maximum entry of the heap, respectively. The heap is left unmodified. This is an O(1) operation.
If the heap is empty, fallback is returned when it is provided, or an error is signaled.
Function: binary-heap-pop-min! heap ¶
Function: binary-heap-pop-max! heap ¶
{data.heap} Removes the minimum and maximum entry of the heap and returns it, respectively. This is an O(log n) operation. If the heap is empty, an error is signaled.
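As a point of comparison, the same push/pop-minimum semantics (though with a plain min-heap, not the min-max heap this module provides) can be sketched with Python's heapq module:

```python
import heapq

heap = []
for v in [5, 1, 9, 3]:
    heapq.heappush(heap, v)      # O(log n) insertion

print(heap[0])                   # O(1) peek at the minimum → 1
print(heapq.heappop(heap))       # O(log n) removal of the minimum → 1
print(heapq.heappop(heap))       # → 3
```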
The following procedures are not heap operations, but provided for the convenience.
Function: binary-heap-swap-min! heap item ¶
Function: binary-heap-swap-max! heap item ¶
{data.heap} These are operationally equivalent to the following, respectively:
(begin0 (binary-heap-pop-min! heap)
(binary-heap-push! heap item))
(begin0 (binary-heap-pop-max! heap)
(binary-heap-push! heap item))
However, those procedures are slightly more efficient, invoking the heap-property maintenance procedure only once per call.
Function: binary-heap-find pred heap :optional failure ¶
{data.heap} Returns an item in the heap that satisfies pred. If more than one item satisfies pred, any one of them can be returned. If no item satisfies pred, the thunk failure is called, whose default is (^[] #f). This is an O(n) operation.
Note: The argument order used to be (binary-heap-find heap pred) until 0.9.10. We changed it to align other *-find procedures. The old argument order still work for the backward compatibility,
but the new code should use the current order.
Function: binary-heap-remove! heap pred ¶
{data.heap} Remove all items in the heap that satisfy pred. This is an O(n) operation.
Function: binary-heap-delete! heap item ¶
{data.heap} Delete all items in the heap that are equal to item, in terms of the heap’s comparator and key procedure. This is an O(n) operation.
Note that the key procedure is applied to item as well before comparison.
CTL exam question
I am solving exam question 4 (d) from 2020.02.19.vrs.solutions.pdf (uni-kl.de).
Question : A F E G a
Exam Solution :
My Approach :
In this I solved the inner part E G a which is {s2, s3, s6} intuitively.
But for AF which is "always finally" I get it wrong as shown in solution (Exam).
My intuition regarding A F phi is that on all paths we eventually reach the states {s2, s3, s6}. The solution gives a counterexample of {s5, s4}, but from what I understood, if we are in s5, A F phi means we can always finally reach {s2, s3, s6} by taking the path s5 -> s4 -> s5 -> s2 (reached).
Could you please explain about A F phi i.e where I get it wrong ?
You are confusing quantification over paths with quantification over variables. The path quantifier A in AF phi asks whether phi eventually holds on all infinite outgoing paths. As the example solution explains, that is not the case here, since there is a path that violates this.
You believe that in s5 we can ALWAYS finally reach {s2,s3,s6}, which would be written as GF{s2,s3,s6}, or even EGF{s2,s3,s6}, but the exam problem asks for AF{s2,s3,s6}.
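The point can also be checked mechanically: AF phi is the least fixpoint that repeatedly adds states all of whose successors already satisfy the property. A sketch using only the transitions quoted in the question (s5 -> s4, s4 -> s5, s5 -> s2; the self-loop on s2 is an assumption for illustration):

```python
def af(states, succ, phi):
    """Least-fixpoint computation of AF(phi): the set of states from
    which every infinite outgoing path eventually reaches phi."""
    z = set(phi)
    changed = True
    while changed:
        changed = False
        for s in states:
            # add s once ALL of its successors are already known to reach phi
            if s not in z and succ[s] and all(t in z for t in succ[s]):
                z.add(s)
                changed = True
    return z

# Fragment of the exam's Kripke structure, using only the quoted transitions
states = {"s2", "s4", "s5"}
succ = {"s2": {"s2"}, "s4": {"s5"}, "s5": {"s4", "s2"}}
print(af(states, succ, {"s2"}))  # → {'s2'}: s4 and s5 can cycle forever, avoiding s2
```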
Teaching Tricky Trapezoids: Inclusive vs. Exclusive
Do you teach quadrilateral classification? If so, did you know there are THREE ways to define a trapezoid? Americans use either the inclusive or the exclusive definition depending on their
curriculum. To complicate matters even more, teachers who live outside the United States define trapezoids in a completely different way! Believe it or not, the British English definition is the
exact opposite of the two American definitions!
Which definition are you supposed to be teaching? If you’re not sure, it’s entirely possible that you’re teaching the wrong definition! But don’t feel bad if you discover this to be true because you
are not alone. In fact, until recently, I didn’t even know which definition was used by the Common Core State Standards!
Before we dig into this topic, you need to know which definition you’re currently teaching. To find out, answer the trapezoid question below before you read the rest of this post. Then read the
information under the 3 polygons that explains what your answer means.
What Your Answer Reveals
Because there are three ways to define a trapezoid, there are three correct answers to the question. Your response will reveal the definition you use to classify trapezoids.
• If you only chose polygon 3, you use the exclusive definition which states that a trapezoid has EXACTLY one pair of parallel sides. This is the definition that I learned, and it’s the one I
thought the Common Core used (but I was wrong).
• If you chose polygons 1 AND 3, you use the inclusive definition which states a trapezoid has AT LEAST one pair of parallel sides. Many educators favor this definition because the other quadrilateral definitions are inclusive. For example, a parallelogram is a 4-sided figure with both pairs of opposite sides parallel, which means that squares and rectangles are also parallelograms.
• If you only chose polygon 2, you’re using the British English classification system which states that a trapezoid is a quadrilateral with NO parallel sides. You teach your students that a
quadrilateral with one pair of parallel sides is a trapezium, not a trapezoid.
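The three definitions differ only in how many pairs of parallel sides qualify a quadrilateral. The contrast can be summarized in a short sketch (the function and its labels are illustrative, not from any curriculum):

```python
def is_trapezoid(parallel_pairs, definition):
    """Classify a quadrilateral by its number of parallel side pairs (0-2)."""
    if definition == "inclusive":   # at least one pair (used by the CCSS)
        return parallel_pairs >= 1
    if definition == "exclusive":   # exactly one pair
        return parallel_pairs == 1
    if definition == "british":     # a trapezoid has NO parallel sides
        return parallel_pairs == 0
    raise ValueError(f"unknown definition: {definition}")

# A parallelogram (2 pairs) is a trapezoid only under the inclusive definition
print(is_trapezoid(2, "inclusive"))  # → True
print(is_trapezoid(2, "exclusive"))  # → False
print(is_trapezoid(0, "british"))    # → True
```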
Which definition SHOULD you be teaching?
Now you know which definition you use to classify trapezoids, but is that the definition you’re supposed to be teaching? If you aren’t 100% sure, make a note to check on it. Until recently, I
thought the Common Core used the exclusive definition, but I discovered that the CCSS actually uses the inclusive definition! I posted a question on my Facebook page to find out which trapezoid
definition most teachers were using, and over 180 people responded. I was surprised to learn that most teachers who follow the CCSS teach the inclusive definition.
How to Teach Kids to Classify Tricky Trapezoids
I recommend that you find out which trapezoid definition you are expected to teach, and only teach that ONE definition. You could tell your students that they might learn a slightly different
definition at some point in the future, but if you go into too much detail, your students will end up more confused than ever.
After you know which definition you’re supposed to be teaching, how do you introduce it to your students and help them learn to classify trapezoids or trapeziums correctly?
I’ve found that the best way to help your kids teaching those tricky trapezoids is with a simple sorting activity. There are two versions of this activity, and it’s best to use both of them if
possible. The first is a printable, hands-on activity for math partners which is great for guided practice. The other is a Google Slides activity you can assign in Google Classroom for additional
practice or assessment. Both activities are included in the Sorting Tricky Trapezoids (or Trapeziums) freebie below. The directions in this post explain how to conduct the teacher-guided partner
activity; directions for using the Google Slides version are included in the freebie.
Trapezoid Sorting Guided Lesson Directions:
1. Begin the activity by introducing the characteristics of a trapezoid (or trapezium) according to the definition you are expected to teach.
2. Next, pair each student with a partner and give each pair one copy of the Quadrilaterals to Sort printable. Ask them to work together to cut out the polygons and stack them in a pile.
3. Explain that they will take turns sorting the quadrilaterals into one of two categories using the T-chart. Give each pair a copy of the T-chart or have one person in each pair draw the T-chart on
a dry erase board.
4. Before guiding them through the sorting activity, assign the roles of Partner A and Partner B in each pair. Then ask Partner A to select the first quadrilateral and place it in the correct column
on the T-chart. Partner A then explains the quadrilateral’s placement to Partner B who gives a thumbs up if he or she agrees. If Partner B does not agree, the two students should discuss the
proper placement of the quadrilateral and move it to the other column if needed.
5. Partner B then chooses one of the remaining quadrilaterals, places it on the chart, and explains its placement to Partner A. Partner A must approve the placement, or the two students discuss the
definition and placement before continuing.
6. Students continue to switch roles throughout the activity. If they aren’t able to agree on the placement of one of the quadrilaterals, they should set it aside for the time being.
7. As students are working, walk around and observe them to see if they are classifying the trapezoids correctly. Stop to help students who are confused or who can’t agree on the placement of one or
more quadrilaterals.
8. If you use Google Classroom, follow up with the Google Slides version of this activity.
Hands-on Activities for Classifying Quadrilaterals
Classify It! Exploring Quadrilaterals includes several introductory activities as well as a challenging game and two assessments.
One reason I wanted to bring the tricky trapezoid situation to your attention is that I’ve recently updated Classify It! Exploring Quadrilaterals to include all three definitions. There are now
THREE versions of the lesson materials within the product file.
No matter which definition you’re supposed to be teaching, Classify It! Exploring Quadrilaterals has you covered. You’ll find lessons, printables, task cards, answer keys, and assessments that are
aligned with the quadrilateral classification system used by your curriculum. Not only are these activities engaging and fun for kids, the lessons will help them nail those quadrilateral
classifications every time! If you don’t believe me, head over to see this product on TpT where you can read feedback from 400 teachers who have used Classify It! Exploring Quadrilaterals with
their students.
If you teach quadrilaterals and haven’t purchased this resource yet, take a few minutes to preview it on TpT. If you use it with your students, I think you’ll agree that Classify It is the most
effective and FUN way to foster a deep understanding of quadrilateral classification!
CPM Homework Help
On graph paper, draw any quadrilateral. Then enlarge (or reduce) it by each of the following ratios.
The quadrilateral below will be used to show you how to do this problem.
For your homework, use a different quadrilateral and follow the steps and hints shown here.
a. $\quad \frac{4}{1}$
To enlarge it by 4:1, each side must be four times as long.
In this case, each side of the parallelogram must be four units long.
b. $\quad \frac{7}{2}$
We can start by making this a ratio in the form of $\frac{x}{1}$, since our original quadrilateral has sides of length 1 unit. Thus we find the new ratio of $\frac{3.5}{1}$.
This means each side of the new quadrilateral must be 3.5 times longer than that of the original.
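Both parts amount to multiplying every side length by the given ratio. A small sketch of that arithmetic using exact fractions:

```python
from fractions import Fraction

def scale_sides(sides, ratio):
    """Scale every side length of a polygon by the given ratio."""
    return [side * ratio for side in sides]

unit_sides = [Fraction(1)] * 4                  # quadrilateral with 1-unit sides
print(scale_sides(unit_sides, Fraction(4, 1)))  # each side becomes 4
print(scale_sides(unit_sides, Fraction(7, 2)))  # each side becomes 7/2 = 3.5
```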
TSS and IF
Training Stress Score (TSS) definition
TSS is Training Stress Score: ROUVY uses several physiological metrics to quantify the training stress of a particular workout or portion of a workout. In the ROUVY software, TSS is an indicator of how hard a workout is.
The formula for TSS is:
TSS = [(s x NP x IF) / (FTP x 3,600)] x 100
Where “s” is workout duration in seconds, “NP” is normalized power (or pace in running), “IF” is intensity factor, “FTP ” is functional threshold power (or pace in running), and “3,600” is the number
of seconds in an hour.
NP, Normalized Power: an estimate of the power a rider could maintain for the same physiological “cost” if their power output were perfectly constant rather than variable. Roughly speaking, it is a weighted average power for a training session.
TSS points are non-transferable. This means that if you have 150 of the 160 TSS points needed for the current level and you finish an activity worth 100 TSS points, only 10 will be counted; the surplus does not carry over to the next level.
Intensity Factor (IF) definition
IF, Intensity Factor: the ratio of normalized power to the rider’s threshold power, IF = NP/FTP. Power (W) is estimated from the user's heart rate.
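The two definitions combine into one short calculation. A sketch (function and variable names are mine, not ROUVY's):

```python
def intensity_factor(np_watts, ftp_watts):
    """IF = NP / FTP."""
    return np_watts / ftp_watts

def training_stress_score(seconds, np_watts, ftp_watts):
    """TSS = (s * NP * IF) / (FTP * 3600) * 100."""
    if_value = intensity_factor(np_watts, ftp_watts)
    return (seconds * np_watts * if_value) / (ftp_watts * 3600) * 100

# One hour exactly at threshold (NP == FTP, so IF == 1) scores 100 TSS.
print(training_stress_score(3600, 250, 250))  # -> 100.0
```

The one-hour-at-threshold case is a handy sanity check: by construction the formula returns exactly 100 TSS regardless of the rider's FTP.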
Note: NP, IF and TSS are trademarks of Peaksware, LLC and are used with permission. Learn more at www.trainingpeaks.com.
For more help, please feel free to contact us.
Tampan, Paminggir people. Lampung region of Sumatra, Kota Agung district circa 1900, 45 x 36 cm. From the library of Darwin Sjamsudin, Jakarta. Photograph by D Dunlop.
Let us describe particle P within a frame of reference F, and recall that for WikiMechanics reference frames are modeled by some enormous set of quarks. Consider some subset of these quarks located
in the vicinity of P and generally call them the quarks surrounding P. We use $\mathbb{S}$ to represent this local environment, and employ the symbol $\mathbb{G}$ to note all of the other remaining
quarks in F. Then the frame of reference is given by the union
$\sf{F} = \mathbb{S} \cup \mathbb{G}$
We usually assume that the surroundings are just a small part of the frame. Then $\mathbb{G}$ will contain almost the same quarks as F, and $\mathbb{G}$ will have almost the same characteristics as
F. Let each of these sets of quarks be described by its wavevector $\overline{\kappa}$. Then
$\overline{ \kappa }^{ \sf{F}} = \overline{ \kappa }^{ \mathbb{S}} + \overline{ \kappa }^{ \mathbb{G}}$
We almost always assume that a frame of reference is large. Then since $\mathbb{G}$ is similar to F, we can reckon that $\overline{\kappa}^{\sf{F}}$ is big, and $\overline{\kappa}^{\mathbb{G}}$ too. We could say that $\mathbb{G}$ has gravitas. Definition: $\mathbb{G}$ is called the gravitational component of the frame of reference. And we say that gravitational effects are perfectly negligible if
$\overline{\kappa} ^ { \mathbb{G} } = (0, 0, 0)$
page revision: 237, last edited: 01 Aug 2022 23:21
Spatial Contact Force
Model spatial contact between two geometries
Simscape / Multibody / Forces and Torques
The Spatial Contact Force block models the contact between a pair of geometries in 3-D space. You can use the built-in penalty method or provide custom normal and friction force laws to model a contact.
Supported Geometries
The Spatial Contact Force block can model contacts between a variety of geometry pairs. You can use the geometries exported from the solid blocks in the Body Elements sublibrary or the geometries of
the point and surface blocks in the Curves and Surfaces sublibrary.
All the exported geometries are convex hull representations of the corresponding solids even though some of the solids may have concave shapes. The figure shows the true geometry and convex hull
representation of an L-shape solid.
Note that when computing inertial properties, the solid blocks use the true geometry.
The Spatial Contact Force block does not model contacts between certain geometry pairs. This table lists the supported pairs.
| | Convex Hull of Solid | Disk | Grid Surface | Infinite Plane | Point | Point Cloud |
|---|---|---|---|---|---|---|
| Convex Hull of Solid | Yes | Yes | No | Yes | Yes | Yes |
| Disk | Yes | No | No | Yes | No | No |
| Grid Surface | No | No | No | No | Yes | Yes |
| Infinite Plane | Yes | Yes | No | No | Yes | Yes |
| Point | Yes | No | Yes | Yes | No | No |
| Point Cloud | Yes | No | Yes | Yes | No | No |
Contact Forces
The image shows how the Spatial Contact Force block models a spatial contact problem between two 3-D geometries. In this case, the contact is between a blue base geometry and a red follower geometry
and there is one contact point.
During contact, each geometry has a contact frame. The contact frames are always coincident and located at the contact point. The xy-planes of the contact frames define the contact plane. The z
-direction of the contact frames is an outward normal vector for the base geometry, but an inward normal vector for the follower geometry. During continuous contact, the contact frames move around
the geometry as the contact point moves.
The block applies contact forces to the geometries at the origin of the contact frames in accordance with Newton's Third Law:
1. The normal force, ${f}_{n}$, which is aligned with the z-axis of the contact frame. This force pushes the geometries apart in order to reduce penetration.
2. The frictional force, ${f}_{f}$, which lies in the contact plane. This force opposes the relative tangential velocities between the geometries.
To specify a normal contact force, in the Normal Force section, set the Method parameter to Smooth Spring-Damper or Provided by Input. If you select Smooth Spring-Damper, the normal force is:
$f_n = s(d, w) \cdot \left(k \cdot d + b \cdot \dot{d}\right)$,
• ${f}_{n}$ is the normal force applied in equal-and-opposite fashion to each contacting geometry.
• $d$ is the penetration depth between two contacting geometries.
• $w$ is the transition region width specified in the block.
• $\dot{d}$ is the first time derivative of the penetration depth.
• $k$ is the normal-force stiffness specified in the block.
• $b$ is the normal-force damping specified in the block.
• $s\left(d,w\right)$ is the smoothing function.
The force law is smoothed near the onset of penetration. When d < w, the smoothing function increases continuously and monotonically over the interval [0, w]. The function is 0 when d = 0, the
function is 1 when d = w, and the function has zero derivative with respect to d at the endpoints of the interval.
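A minimal sketch of the force law as described, using the classic smoothstep polynomial for s(d, w) since it satisfies all three stated constraints (this is one function meeting the description, not necessarily MathWorks' exact implementation; the defaults mirror the block's parameter defaults):

```python
def smooth_spring_damper(d, d_dot, k=1e6, b=1e3, w=1e-4):
    """Normal contact force f_n = s(d, w) * (k*d + b*d_dot).

    s is 0 at d = 0, 1 at d >= w, with zero slope at both endpoints;
    the smoothstep polynomial 3t^2 - 2t^3 has exactly those properties.
    The result is clipped at zero so the force is never attractive.
    """
    if d <= 0:
        return 0.0                   # no penetration, no force
    t = min(d / w, 1.0)
    s = 3 * t**2 - 2 * t**3          # smoothstep: s(0)=0, s(1)=1, s'(0)=s'(1)=0
    return max(s * (k * d + b * d_dot), 0.0)
```

The clipping matters as the geometries separate: near the end of penetration the damping term can drive the raw spring-damper value negative, and the final `max(..., 0.0)` keeps the contact force repulsive, as the text below describes.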
To better detect contacts when the value of the Transition Region Width parameter is small, the Spatial Contact Force block supports optional zero-crossing detection. The zero-crossing events only
occur when the separation distance changes from positive or zero to negative and vice versa.
The zero-crossing detection of the Spatial Contact Force block is different than the zero-crossing detection of other Simulink^® blocks, such as From File and Integrator, because the force equation
of the Spatial Contact Force is continuous. For more information about zero-crossing detection in Simulink blocks, see Zero-Crossing Detection.
The Spatial Contact Force block clips the computed force to be always nonnegative. If the force law gives a negative force, the block applies zero force instead. This happens briefly as the
geometries are separating and penetration is about to end. At that point, $d$ is approaching zero and $\dot{d}$ is negative. This modification ensures that the contact normal force is always repulsive and
never attractive.
To specify a frictional force, in the Frictional Force section, set the Method parameter to Smooth Stick-Slip, Provided by Input, or None. If you select Smooth Stick-Slip, the frictional force is
always directly opposed to the direction of the relative velocity at the contact point and is related to the normal force through a coefficient of friction that varies depending on the magnitude of
the relative velocity:
$|{f}_{f}|=\mu \cdot |{f}_{n}|$,
• ${f}_{f}$ is the frictional force.
• ${f}_{n}$ is the normal force.
• $\mu$ is the effective coefficient of friction.
The effective coefficient of friction is a function of the values of the Coefficient of Static Friction, Coefficient of Dynamic Friction, and Critical Velocity parameters, and the magnitude of the
relative tangential velocity. At high relative velocities, the value of the effective coefficient of friction is close to that of the coefficient of dynamic friction. At the critical velocity, the
effective coefficient of friction achieves a maximum value that is equal to the coefficient of static friction. The graph shows the basic relationship in the typical case where ${\mu }_{static}$ > $
{\mu }_{dynamic}$. In this case, the model is able to approximate stiction with a higher effective coefficient of friction near small tangential velocities.
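MathWorks does not publish the exact blending law, but the description fixes three features: the effective coefficient starts at zero (so the friction force vanishes smoothly at rest), peaks at the static coefficient at the critical velocity, and decays toward the dynamic coefficient at high speed. A purely illustrative curve with those properties (not the actual Simscape formula; defaults mirror the block's parameter defaults):

```python
import math

def effective_mu(v_t, mu_static=0.5, mu_dynamic=0.3, v_crit=1e-3):
    """Illustrative velocity-dependent friction coefficient.

    Rises smoothly from 0 to mu_static over [0, v_crit], then relaxes
    toward mu_dynamic as the tangential speed grows. This reproduces
    the qualitative curve described in the text; the real blending
    function inside the block may differ.
    """
    v = abs(v_t)
    if v <= v_crit:
        t = v / v_crit
        return mu_static * (3 * t**2 - 2 * t**3)   # smooth rise to the peak
    # exponential relaxation from the peak down to the dynamic value
    return mu_dynamic + (mu_static - mu_dynamic) * math.exp(1.0 - v / v_crit)
```

With this coefficient in hand, the friction magnitude is simply `effective_mu(v_t) * abs(f_n)`, matching the relation $|f_f| = \mu \cdot |f_n|$ above.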
When modeling contacts that involve a point cloud, the Spatial Contact Force block calculates the contact quantities for each point and the output signals have the same order as the points specified
in the Point Cloud block. For the points that are not in contact with the other geometry, the measured values are zero. When using input forces, the size and order of the input signals must match the
size and order of the points specified in the Point Cloud block.
B — Base geometry
Geometry port associated with the base geometry.
F — Follower geometry
Geometry port associated with the follower geometry.
fn — Normal contact force magnitude, N
physical signal
Physical signal input port that accepts the normal contact force magnitude between the two geometries. The block clips negative values to zero.
When modeling contacts that involve a point cloud, the input signal must be a 1-by-N array, where N equals the number of points. Each column of the array specifies the normal contact force on one of
the points. The size and order of the input signals must match the size and order of the points specified in the Point Cloud block. The force is resolved in the contact frame of the point.
When modeling contacts between other types of geometries, the input signal must be a scalar that specifies the normal contact force.
To enable this port, in the Normal Force section, set Method to Provided by Input.
ffr — Friction force, N
physical signal
Physical signal input port that accepts the frictional force between the two geometries.
When modeling contacts that involve a point cloud, the input signal must be a 2-by-N matrix, where N equals the number of points. Each column of the matrix specifies the x-component and y-component
of the applied frictional force on one of the points. The size and order of the input signals must match the size and order of the points specified in the Point Cloud block. The force is resolved in
the contact frame of the point.
When modeling contacts between other types of geometries, the input signal must be a 2-by-1 array that specifies the x-component and y-component of the applied frictional force resolved in the
contact frame.
To enable this port, in the Frictional Force section, set Method to Provided by Input.
con — Contact signal, unitless
physical signal
Physical signal output port that outputs the contact status of the two geometries.
When modeling contacts that involve a point cloud, the output is a 1-by-N array, where N equals the number of points. Each column of the array indicates the status of the corresponding point. The
point contacts the other geometry if the value is 1. Otherwise, the value is 0.
When modeling contacts between other types of geometries, the output is a scalar that indicates whether the geometries are in contact. The geometries are in contact if the output is 1 or separated if
the output is 0.
To enable this port, in the Sensing section, select Contact Signal.
pen — Penetration depth, m
physical signal
Physical signal output port that outputs the penetration depth between the two geometries.
When modeling contacts that involve a point cloud, the output is a 1-by-N array, where N equals the number of points. Each column of the array indicates the status of the corresponding point. If the
point is penetrating the other geometry, the value is positive and equals the penetration depth. Otherwise, the value is 0.
When modeling contacts between other types of geometries, the output is a scalar that indicates whether the geometries are penetrating each other. If they are penetrating each other, the output is
positive. Otherwise, the output is 0.
To enable this port, in the Sensing section, select Penetration Depth.
sep — Separation distance, m
physical signal
Physical signal output port that outputs the separation distance between the two geometries.
When modeling contacts that involve a point cloud, the output is a 1-by-N array, where N equals the number of points. Each column of the array indicates the status of the corresponding point. If the
point is not penetrating the other geometry, the value is nonnegative and equals the minimum distance between the point and the other geometry. Otherwise, the output is negative and the absolute
value equals the penetration depth.
When modeling contacts between other types of geometries, the output is a scalar. If they are not penetrating each other, the output is nonnegative and equals the minimum distance between the two
geometries. Otherwise, the output is negative and the absolute value equals the penetration depth.
To enable this port, in the Sensing section, select Separation Distance.
fn — Normal contact force magnitude, N
physical signal
Physical signal output port that outputs the magnitude of the normal contact force between the two geometries. The output is always nonnegative.
When modeling contacts that involve a point cloud, the output is a 1-by-N array, where N equals the number of points. Each column of the array equals the magnitude of the normal contact force on the
corresponding point.
When modeling contacts between other types of geometries, the output is a scalar that equals the magnitude of the normal contact force.
To enable this port, in the Sensing section, select Normal Force Magnitude.
ffrm — Frictional force magnitude, N
physical signal
Physical signal output port that outputs the magnitude of the frictional contact force between the two geometries.
When modeling contacts that involve a point cloud, the output is a 1-by-N array, where N equals the number of points. Each column of the array equals the magnitude of the frictional force on the
corresponding point.
When modeling contacts between other types of geometries, the output is a scalar that equals the magnitude of the frictional force.
To enable this port, in the Sensing section, select Frictional Force Magnitude.
vn — Relative normal velocity, m/s
physical signal
Physical signal output port that provides the z-component of the relative velocity between the contact points of the base and follower geometries.
When modeling contacts that involve a point cloud, the output is a 1-by-N array, where N equals the number of points. Each column of the array equals the z-component of the relative velocity between
the corresponding point and the contact point on the other geometry. The relative velocity is resolved in the corresponding contact frame.
When modeling contacts between other types of geometries, the output is a scalar that equals the z-component of the relative velocity between the contact points of the two geometries. The output is
resolved in the contact frame.
To enable this port, in the Sensing section, select Relative Normal Velocity.
vt — Relative tangential velocity, m/s
physical signal
Physical signal output port that outputs the relative tangential velocity between the contact points of the base and follower geometries.
When modeling contacts that involve a point cloud, the output is a 2-by-N matrix, where N equals the number of points. Each column of the array equals the x-component and y-component of the relative
velocity between the corresponding point and the contact point on the other geometry. The relative velocity is resolved in the corresponding contact frame.
When modeling contacts between other types of geometries, the output is a 2-by-1 array that equals the x-component and y-component of the relative velocity between the contact points of the two
geometries. The output is resolved in the contact frame.
To enable this port, in the Sensing section, select Relative Tangential Velocity.
Contact Frame
Rb — Base rotation, unitless
physical signal
Physical signal port that outputs the rotation matrix of the contact frame with respect to the reference frame of the base geometry.
When modeling contacts that involve a point cloud, the output is a 3-D array that has N 3-by-3 matrices, where N equals the number of points in the point cloud. Each 3-by-3 matrix is a rotation
matrix that maps the vectors in the corresponding contact frame to vectors in the reference frame of the base geometry. If a point is not in contact with the other geometry, the corresponding matrix
is an identity matrix.
When modeling contacts between other types of geometries, the output is a 3-by-3 rotation matrix that maps the vectors in the contact frame to vectors in the reference frame of the base geometry.
To enable this port, in the Sensing > Contact Frame section, select Base Rotation.
pb — Base translation, unitless
physical signal
Physical signal port that outputs the location of the contact point with respect to the reference frame of the base geometry.
When modeling contacts that involve a point cloud, the output is a 3-by-N array, where N equals the number of points. Each column of the array indicates the coordinates of a point with respect to the
reference frame of the base geometry. If a point is not in contact with the other geometry, the coordinates are [0 0 0].
When modeling contacts between other types of geometries, the output is a 3-by-1 array that contains the coordinates of the contact point with respect to the reference frame of the base geometry.
To enable this port, in the Sensing > Contact Frame section, select Base Translation.
Rf — Follower rotation, unitless
physical signal
Physical signal port that outputs the rotation matrix of the contact frame with respect to the reference frame of the follower geometry.
When modeling contacts that involve a point cloud, the output is a 3-D array that has N 3-by-3 matrices, where N equals the number of points in the point cloud. Each 3-by-3 matrix is a rotation
matrix that maps the vectors in the corresponding contact frame to vectors in the reference frame of the follower geometry. If a point is not in contact with the other geometry, the corresponding
matrix is an identity matrix.
When modeling contacts between other types of geometries, the output is a 3-by-3 rotation matrix that maps the vectors in the contact frame to vectors in the reference frame of the follower geometry.
To enable this port, in the Sensing > Contact Frame section, select Follower Rotation.
pf — Follower translation, unitless
physical signal
Physical signal port that outputs the location of the contact point with respect to the reference frame of the follower geometry.
When modeling contacts that involve a point cloud, the output is a 3-by-N array, where N equals the number of points. Each column of the array indicates the coordinates of a point with respect to the
reference frame of the follower geometry. If a point is not in contact with the other geometry, the coordinates are [0 0 0].
When modeling contacts between other types of geometries, the output is a 3-by-1 array that contains the coordinates of the contact point with respect to the reference frame of the follower geometry.
To enable this port, in the Sensing > Contact Frame section, select Follower Translation.
Normal Force
Method — Method to specify normal contact force
Smooth Spring-Damper (default) | Provided by Input
Method to specify the normal contact force, specified as either Smooth Spring-Damper or Provided by Input.
Select Smooth Spring-Damper to use the modified spring-damper method to model the normal contact force, or select Provided by Input to input a custom force as the normal contact force.
Stiffness — Resistance of contact spring to geometric penetration
1e6 N/m (default) | scalar
Resistance of the contact spring to geometric penetration, specified as a scalar. The spring stiffness is constant during the contact. The larger the value of the spring stiffness, the harder the
contact between the geometries.
To enable this parameter, in the Normal Force section, set Method to Smooth Spring-Damper.
Damping — Resistance of contact damper to motion while geometries are penetrating
1e3 N/(m/s) (default) | scalar
Resistance of the contact damper to motion while the geometries are penetrating, specified as a scalar. The damping coefficient is a constant value that represents the lost energy from colliding
geometries. The larger the value of the damping coefficient, the more energy is lost when geometries collide and the faster the contact vibrations are dampened. Use a value of zero to model perfectly
elastic collisions, which conserve energy.
To enable this parameter, in the Normal Force section, set Method to Smooth Spring-Damper.
Transition Region Width — Region over which spring-damper force rises to its full value
1e-4 m (default) | scalar
Region over which the spring-damper force rises to its full value, specified as a scalar. The smaller the region, the sharper the onset of contact and the smaller the time step required for the
solver. Reducing the transition region improves model accuracy while expanding the transition region improves simulation speed.
To enable this parameter, in the Normal Force section, set Method to Smooth Spring-Damper.
Frictional Force
Method — Method to specify frictional force
Smooth Stick-Slip (default) | None | Provided by Input
Method to specify the frictional force, specified as Smooth Stick-Slip, None, or Provided by Input.
Select None to omit friction, Smooth Stick-Slip to use the modified stick-slip method to compute the frictional force, or Provided by Input to input a custom frictional force.
Coefficient of Static Friction — Ratio of magnitude of frictional force to the magnitude of normal force when tangential velocity is close to zero
0.5 (default) | nonnegative scalar
Ratio of the magnitude of the frictional force to the magnitude of the normal force when the tangential velocity is close to zero, specified as a nonnegative scalar.
This value is determined by the material properties of the contacting geometries. The value of this parameter is often less than one, although values greater than one are possible for high-friction
materials. In most cases, this value should be higher than the coefficient of dynamic friction.
To enable this parameter, in the Frictional Force section, set Method to Smooth Stick-Slip.
Coefficient of Dynamic Friction — Ratio of magnitude of frictional force to magnitude of normal force when tangential velocity is large
0.3 (default) | nonnegative scalar
Ratio of the magnitude of the frictional force to the magnitude of the normal force when the tangential velocity is large, specified as a nonnegative scalar.
This value is determined by the material properties of the contacting geometries. The value of this parameter is often less than one, although values greater than one are possible for high-friction
materials. In most cases, this value should be less than the coefficient of static friction.
To enable this parameter, in the Frictional Force section, set Method to Smooth Stick-Slip.
Critical Velocity — Velocity that determines blending between static and dynamic coefficients of friction
1e-3 m/s (default) | scalar
Velocity that determines the blending between the static and dynamic coefficients of friction, specified as a scalar.
When the critical velocity is equal to the magnitude of the tangential velocity, the effective coefficient of friction is equal to the value of the Coefficient of Static Friction parameter. As the
magnitude of the tangential velocity increases beyond the specified critical velocity, the effective coefficient of friction asymptotically approaches the value of the Coefficient of Dynamic Friction parameter.
To enable this parameter, in the Frictional Force section, set Method to Smooth Stick-Slip.
Detect Contact Start and End — Detect start and end of contacts as zero-crossing events
off (default) | on
Select to detect the start and end of each contact as zero-crossing events. The zero-crossing events occur when the separation distance changes from positive or zero to negative and vice versa.
Extended Capabilities
C/C++ Code Generation
Generate C and C++ code using Simulink® Coder™.
Version History
Introduced in R2019b
R2023b: Use input forces and measure contact quantities for point cloud and grid surface
You can now use input forces to model contacts that involve a point cloud or a grid surface and measure the corresponding contact quantities. When one of the geometries is a point cloud, the signals
of the input forces must be arrays that specify forces for every point in the cloud, and the measured contact quantities are arrays that contain information for every point in the cloud.
For the point cloud and grid surface contact pair, the Spatial Contact Force block supports zero-crossing detection. The zero-crossing events only occur when the separation distance changes from
positive or zero to negative and vice versa.
R2023b: Measure the penetration depth between two geometries
Measure the penetration depth between two contact geometries. The value equals the overlap distance when two geometries collide and equals zero when the geometries are not in contact.
R2023b: The Separation Distance parameter will be removed
The Separation Distance parameter will be removed in a future release. Use the Penetration Depth parameter instead.
R2023b: Behavior change for non-scalar outputs
An inconsistent dimensions error appears if you compile a model that has a Spatial Contact Force block that outputs a non-scalar signal directly to any Simscape block except the PS-Simulink Converter block.
To avoid the error, use a PS Signal Specification block to connect each non-scalar output signal of the Spatial Contact Force block with the corresponding Simscape blocks and explicitly specify the
dimension of the output signal.
Wondrous Tails Solver
Click to add / remove seals (0 / 9)
Shuffle Recommendation
Current journal chances
At least Chance
1 line
2 lines
3 lines
Strategy Performance and Comparison
Second chance points:
How to use
1. Click on the journal and add 7 seals.
2. The "shuffle: yes / no" verdicts will appear under "Shuffle Recommendation".
□ Follow the one that corresponds to the number of second chance points that you have or are planning to spend before you turn in your journal.
3. The default strategy (1 line maximum chance) is a good choice.
□ To see how it compares to other strategies, click on "Simulate" to calculate the expected chance of getting 1, 2 and 3 lines when following the recommendations of each strategy.
Choosing a strategy
Each strategy optimizes for a different objective.
• 1 line maximum chance: geared towards maximizing the chances of getting 1 line, disregarding what happens to the chances of getting 2 and 3 lines.
• 2 lines maximum chance: geared towards maximizing the chances of getting 2 lines, disregarding what happens to the chances of getting 1 and 3 lines.
• 1 and 2 lines tradeoff: geared towards balancing the chances of getting 1 and 2 lines. Journals with 2 lines are valued 4 times as much as journals with 1 line.
• 3 lines chance: geared towards maximizing the chances of getting 3 lines, disregarding what happens to the chances of getting 1 and 2 lines.
• Potential Lines: the strategy described here.
• Never Shuffle: always recommends not to shuffle.
The three meaningful choices here are "1 line maximum chance", "2 lines maximum chance" and "1 and 2 lines tradeoff". At 8-9 seals, switching between those 3 strategies involves trading up to ~6% of
your chances of getting at least 1 line for up to ~1% of your chances of getting at least 2 lines. The trade-off's worth depends on the nature of the rewards and how much you value them. Run a
simulation from an empty journal for details.
Getting 3 lines is a rare event. If you're feeling lucky or you like gambling feel free to use "Maximize 3 lines chance". "Potential Lines" is a popular strategy that is easy to mentally track. The
purpose of including it in the list is to reassure those who followed it at some point in the past that they're getting better results with the other strategies.
"Never Shuffle" is useful as a benchmark for the other strategies and its numbers should be similar to the ones in the "Chances without shuffling" section.
How it works
"Simulate" uses the specified second chance points to run 100,000 simulations for each strategy. Each simulation run starts from the specified journal. If fewer than 7 seals are present, random ones
will be added in each run until 7 is reached. At 7 seals, the recommendation of the strategy being simulated is applied until we run out of second chance points (or the strategy recommends "no
shuffle"). The results of the simulation describe the percentage of simulation runs where each strategy produced a journal with at least 1, 2 and 3 lines. If the number of seals in the specified
journal is 7, an additional column that shows its recommendations will be added. You can run the simulation from an empty board to see how well each strategy performs in general.
Maximum lines strategies
To understand how those strategies work, let's start with a simpler version. We're given a six-sided die and we're allowed to roll it up to $r$ times. Our goal is to get the highest die roll that we
possibly can. We can stop rolling at any point, but we can't go back to a previous roll. We're asked to come up with a strategy (reroll / don't reroll) that would give us the highest expected value
for the die. The maximum expected value can be calculated as follows:
\[ f(x, r) = \begin{cases} x, & \text{if $r = 0$} \\ max \begin{cases} \frac{\displaystyle\sum_{i=1}^n f(face_i, r - 1)}{n} \\ x \end{cases}, & \text{if $r \gt 0$} \end{cases} \] Where $x$ is the
current roll (1..6), $r$ is the remaining number of rerolls, $n$ is the number of faces (6) and $face_i$ is the value on the $i^{th}$ face (1, 2, 3, 4, 5, 6).
The "reroll / don't reroll" decision for a given state depends on which value of the arguments to $max$ is higher. If $x$ has the higher the value, then the better decision is "don't reroll". If the
other argument has the higher value, then the better decision is "reroll".
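The recursion is small enough to evaluate directly. A sketch that memoises f and also reports the optimal reroll decision:

```python
from functools import lru_cache

FACES = (1, 2, 3, 4, 5, 6)

@lru_cache(maxsize=None)
def value(x, r):
    """Maximum expected final value when holding roll x with r rerolls left."""
    if r == 0:
        return x
    # Expected value of giving up x and rolling again with r-1 rerolls left.
    reroll_ev = sum(value(f, r - 1) for f in FACES) / len(FACES)
    return max(reroll_ev, x)

def should_reroll(x, r):
    """Optimal decision: reroll iff the reroll branch beats keeping x."""
    if r == 0:
        return False
    return sum(value(f, r - 1) for f in FACES) / len(FACES) > x

def game_ev(r):
    """Expected value of the whole game: roll once, then use up to r rerolls."""
    return sum(value(f, r) for f in FACES) / len(FACES)

print(game_ev(0), game_ev(1))  # 3.5 with no rerolls, 4.25 with one reroll
```

Scaling the same recursion up to the journal problem only means replacing the six die faces with the 11440 seven-seal journals (that count is just the number of ways to place 7 seals on a 4×4 board, C(16, 7)) and scoring each "face" by the number of favorable placements of the last two seals.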
The Wondrous Tails shuffling problem can be reduced to the die rolling problem, except in this case we have 11440 faces (the number of different journals with 7 seals) and each face can have a value
between 0 and 72 (the number of ways we can place the last 2 seals to satisfy our objective). The 4 objectives we use here are "at least 1 line", "at least 2 lines", "1 and 2 lines tradeoff" and "3
lines". For the "1 and 2 lines tradeoff" objective / strategy, the value on each face / 7-seals-journal is $a + 4 * b$, where $a$ is the number of ways to get 1 line with the last 2 seals and $b$ is
the number of ways to get 2 lines with the last 2 seals.
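As a quick sanity check on the counts quoted above — assuming the standard 4×4 (16-cell) Wondrous Tails grid, which is our reading of the game and not stated explicitly in this post — the face count falls out of a binomial coefficient:

```haskell
-- Binomial coefficient, used to check the face count: a 4x4 journal has
-- 16 cells, so there are C(16,7) = 11440 distinct 7-seal journals. The
-- last 2 seals then land in the 9 remaining cells: 9 * 8 = 72 ordered
-- placements, matching the 0..72 value range mentioned above.
choose :: Integer -> Integer -> Integer
choose n k = product [n - k + 1 .. n] `div` product [1 .. k]
```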
Lyaa: The developer of
calculator who was kind enough to let us reuse its graphics.
Macro that concatenates two cells, references a Date and then copy pastes value into first | Microsoft Community Hub
Forum Discussion
Would love some help writing this macro. i tried attaching a file with more explanations within the file (in yellow highlights & red text) if it helps make it clearer. I have two sheets. One tit...
thanks mathetes
the main reason for not using pivot tables here is because it wouldn't match the aesthetics i laid out.
however, it doesn't look like i can accomplish what i'm trying for without using them (at least to my knowledge)
i tried writing out what the formulas would do, but i can't get around it
is it possible to have a formula that does this:
if column N from 'Data' sheet equals the same month as today, then concatenate columns N & O from 'Data' sheet
but with that, it would have to do this search through all of column N without picking the same row twice
i think i might be able to manage from there
I added a formula to column L of the data tab to concatenate the month/day for today's month as an example of one way I believe what you're asking can be done.
• I took the formula that JMB17 had written and modified it so that the result of the formula is actually a full date in Excel's date format. As you'll see, the condition in the first part of the
IF function remains the same, but the rest, in my version, no longer uses concatenation; rather it uses the DATE function to take the numbers of day, month, and year, to produce the standard
numeric value that the date format then turns into a date. This is then consistent with your dates for holidays, paydays, etc.
Here's the new formula, as it appears in cell L2.
I still have to say, however, that it's not clear to me what you're doing with the rest of the workbook. But if you can make this work, go for it.
• wow, thanks JMB17 . beautiful formula.
thanks as well mathetes. for this column i'll probably end up keeping the format as '00 - 00' but i can use the knowledge from your formula as well.
now, i wanted to see if it was possible to have a formula that would list out all the dates that fall in the same month as Today. i figure from there i should be able to finish up this section
with index/match.
i'm trying to toy around with the below formulas, but i'm getting either #Value, #Calc or #Spill errors.
is this possible, or am i spinning my wheels?
As I've said before, I'm mystified by how you're approaching this, unable to figure out the goal.
I can, nevertheless, tell you that the SPILL error with FILTER means something is blocking the function from delivering all the results, not that there's something wrong with the formula.
After trying to delete rows and columns adjacent to your own formula, I took your formula and copied it over to a new sheet--you can see the result here.
Clearly some more refinement is needed to the criteria to be applied in the FILTER function.
i've attached an updated file.
in the Data sheet, range )32:V33 is what i'm trying to accomplish. it looks like the formulas work on the Data sheet, but they don't work when i try to transfer over onto the Money sheet.
On the Money sheet, i can't get column L to transfer over. the text doesn't appear even though the formulas should be correct
TI-Basic Developer
The ▶ Command
Converts an expression from one unit to another.
Menu Location
Press [2nd][▶] to enter ▶: this is
• [2nd][MODE] on a TI-89 or TI-89 Titanium
• [2nd][Y] on a TI-92, TI-92 Plus, or Voyage 200
This command works on all calculators.
2 bytes
The ▶ operator converts an expression to a different unit. Usually, this refers to the built-in units (such as _m (meters) or _mph (miles per hour)), which you can select from the UNITS menu.
To use it, you must first have the expression on the left in terms of some unit — this is done by multiplying it by that unit. For instance, "5 meters" is written as 5_m or 5*_m (where _m is the
unit). You can combine units as well: for instance, 5_m^2 (5 square meters) or 30_km/_hr (30 kilometers per hour).
To convert that into a different unit, type ▶ and then a different unit to convert to (again, you can combine units). For instance, to convert 5 square meters to acres, type 5_m^2▶_acre. (Note: the
result will always be expressed as a decimal)
You can't use ▶ to convert between units of temperature (degrees Celsius to degrees Fahrenheit, for instance), since the calculator isn't sure if you mean absolute temperature or a change in
temperature instead. Use the tmpCnv() and ΔtmpCnv() commands instead.
Advanced Uses
It's possible to define your own units as well: units are just any variable beginning with an underscore, and ▶ will perform just as well converting between those. There are two ways to go about it.
The first is to define your units in terms of existing ones: for instance, you might define a furlong (one-eighth of a mile) as follows:
The second method is to start with a unit or several units to keep undefined (for instance, _x). You can then define other units in terms of _x, and convert between them:
Units are treated just like variables, except that they're universal across folders: you can have only one instance of _x, and you can access it as _x no matter which folder you're in. You can use
this if you want to define a universal variable to access in any folder: for instance, if you define a program as _prgm(), you can run it with _prgm() from any folder.
Error Conditions
345 - Inconsistent units happens when converting between two units that measure different types of quantities (for instance, converting length to time).
Related Commands
See Also
Count To 500
This song was written and performed by t. Today Precise teaches kids to count to 500 using multiples of 10, and the sequence is repeated 3 times to a catchy tune to help children learn the numbers off by heart. A companion numbers chart (available in color or grayscale, in PDF and HTML versions) helps children count to 500 in increments of 5; the chart can be used as a teaching tool or hung on the wall as a reference, and also works as extra support for students struggling with number sequencing, for new students, or in SDC classes.
Related skip-counting practice: counting by 3s (3, 6, 9, 12, 15, 18, 21, 24, 27, 30, …), by 5s (5, 10, 15, 20, 25, 30, 35, 40, 45, 50, …), by 10s to 500, by 50s to 500, and by hundreds (100, 200, 300, 400, 500). Coin-counting puzzles cover pennies (counting by 1 to 10), nickels (by 5 to 50), dimes (by 10 to 100), quarters (by 25 to 250) and half dollars (by 50 to 500). Students cut and paste missing numbers on number lines, and children are asked to finish incomplete number charts by adding the omitted digits.
A full counting table from 1 to 500 can be seen and printed (as PDF or on paper). Do not print the plain list of numbers, as it may be over 100 pages long.
Copyright Conor McBride and Ross Paterson 2005
License BSD-style (see the LICENSE file in the distribution)
Maintainer libraries@haskell.org
Stability experimental
Portability portable
Safe Haskell Trustworthy
Language Haskell2010
Class of data structures that can be traversed from left to right, performing an action on each element. Instances are expected to satisfy the listed laws.
The Traversable class
class (Functor t, Foldable t) => Traversable t where Source #
Functors representing data structures that can be transformed to structures of the same shape by performing an Applicative (or, therefore, Monad) action on each element from left to right.
A more detailed description of what same shape means, the various methods, how traversals are constructed, and example advanced use-cases can be found in the Overview section of Data.Traversable.
For the class laws see the Laws section of Data.Traversable.
traverse :: Applicative f => (a -> f b) -> t a -> f (t b) Source #
Map each element of a structure to an action, evaluate these actions from left to right, and collect the results. For a version that ignores the results see traverse_.
Basic usage:
In the first two examples we show each evaluated action mapping to the output structure.
>>> traverse Just [1,2,3,4]
Just [1,2,3,4]
>>> traverse id [Right 1, Right 2, Right 3, Right 4]
Right [1,2,3,4]
In the next examples, we show that Nothing and Left values short circuit the created structure.
>>> traverse (const Nothing) [1,2,3,4]
Nothing
>>> traverse (\x -> if odd x then Just x else Nothing) [1,2,3,4]
Nothing
>>> traverse id [Right 1, Right 2, Right 3, Right 4, Left 0]
Left 0
sequenceA :: Applicative f => t (f a) -> f (t a) Source #
Evaluate each action in the structure from left to right, and collect the results. For a version that ignores the results see sequenceA_.
Basic usage:
For the first two examples we show sequenceA fully evaluating a structure and collecting the results.
>>> sequenceA [Just 1, Just 2, Just 3]
Just [1,2,3]
>>> sequenceA [Right 1, Right 2, Right 3]
Right [1,2,3]
The next two examples show that Nothing and Left values will short circuit the resulting structure if present in the input. For more context, check the Traversable instances for Either and Maybe.
>>> sequenceA [Just 1, Just 2, Just 3, Nothing]
Nothing
>>> sequenceA [Right 1, Right 2, Right 3, Left 4]
Left 4
mapM :: Monad m => (a -> m b) -> t a -> m (t b) Source #
Map each element of a structure to a monadic action, evaluate these actions from left to right, and collect the results. For a version that ignores the results see mapM_.
mapM is literally a traverse with a type signature restricted to Monad. Its implementation may be more efficient due to additional power of Monad.
sequence :: Monad m => t (m a) -> m (t a) Source #
Evaluate each monadic action in the structure from left to right, and collect the results. For a version that ignores the results see sequence_.
Basic usage:
The first two examples are instances where the input and output of sequence are isomorphic.
>>> sequence $ Right [1,2,3,4]
[Right 1,Right 2,Right 3,Right 4]
>>> sequence $ [Right 1,Right 2,Right 3,Right 4]
Right [1,2,3,4]
The following examples demonstrate short circuit behavior for sequence.
>>> sequence $ Left [1,2,3,4]
Left [1,2,3,4]
>>> sequence $ [Left 0, Right 1,Right 2,Right 3,Right 4]
Left 0
Traversable ZipList Source # Since: base-4.9.0.0
Defined in Data.Traversable
Traversable Complex Source # Since: base-4.9.0.0
Defined in Data.Complex
Traversable Identity Source # Since: base-4.9.0.0
Defined in Data.Traversable
Traversable First Source # Since: base-4.8.0.0
Defined in Data.Traversable
Traversable Last Source # Since: base-4.8.0.0
Defined in Data.Traversable
Traversable Down Source # Since: base-4.12.0.0
Defined in Data.Traversable
Traversable First Source # Since: base-4.9.0.0
Defined in Data.Semigroup
Traversable Last Source # Since: base-4.9.0.0
Defined in Data.Semigroup
Traversable Max Source # Since: base-4.9.0.0
Defined in Data.Semigroup
Traversable Min Source # Since: base-4.9.0.0
Defined in Data.Semigroup
Traversable Dual Source # Since: base-4.8.0.0
Defined in Data.Traversable
Traversable Product Source # Since: base-4.8.0.0
Defined in Data.Traversable
Traversable Sum Source # Since: base-4.8.0.0
Defined in Data.Traversable
Traversable Par1 Source # Since: base-4.9.0.0
Defined in Data.Traversable
Traversable NonEmpty Source # Since: base-4.9.0.0
Defined in Data.Traversable
Traversable Maybe Source # Since: base-2.1
Defined in Data.Traversable
Traversable Solo Source # Since: base-4.15
Defined in Data.Traversable
Traversable [] Source # Since: base-2.1
Defined in Data.Traversable
Traversable (Either a) Source # Since: base-4.7.0.0
Defined in Data.Traversable
Traversable (Proxy :: Type -> Type) Source # Since: base-4.7.0.0
Defined in Data.Traversable
Traversable (Arg a) Source # Since: base-4.9.0.0
Defined in Data.Semigroup
Ix i => Traversable (Array i) Source # Since: base-2.1
Defined in Data.Traversable
Traversable (U1 :: Type -> Type) Source # Since: base-4.9.0.0
Defined in Data.Traversable
Traversable (UAddr :: Type -> Type) Source # Since: base-4.9.0.0
Defined in Data.Traversable
Traversable (UChar :: Type -> Type) Source # Since: base-4.9.0.0
Defined in Data.Traversable
Traversable (UDouble :: Type -> Type) Source # Since: base-4.9.0.0
Defined in Data.Traversable
Traversable (UFloat :: Type -> Type) Source # Since: base-4.9.0.0
Defined in Data.Traversable
Traversable (UInt :: Type -> Type) Source # Since: base-4.9.0.0
Defined in Data.Traversable
Traversable (UWord :: Type -> Type) Source # Since: base-4.9.0.0
Defined in Data.Traversable
Traversable (V1 :: Type -> Type) Source # Since: base-4.9.0.0
Defined in Data.Traversable
Traversable ((,) a) Source # Since: base-4.7.0.0
Defined in Data.Traversable
Traversable (Const m :: Type -> Type) Source # Since: base-4.7.0.0
Defined in Data.Traversable
Traversable f => Traversable (Ap f) Source # Since: base-4.12.0.0
Defined in Data.Traversable
Traversable f => Traversable (Alt f) Source # Since: base-4.12.0.0
Defined in Data.Traversable
Traversable f => Traversable (Rec1 f) Source # Since: base-4.9.0.0
Defined in Data.Traversable
(Traversable f, Traversable g) => Traversable (Product f g) Source # Since: base-4.9.0.0
Defined in Data.Functor.Product
(Traversable f, Traversable g) => Traversable (Sum f g) Source # Since: base-4.9.0.0
Defined in Data.Functor.Sum
(Traversable f, Traversable g) => Traversable (f :*: g) Source # Since: base-4.9.0.0
Defined in Data.Traversable
(Traversable f, Traversable g) => Traversable (f :+: g) Source # Since: base-4.9.0.0
Defined in Data.Traversable
Traversable (K1 i c :: Type -> Type) Source # Since: base-4.9.0.0
Defined in Data.Traversable
(Traversable f, Traversable g) => Traversable (Compose f g) Source # Since: base-4.9.0.0
Defined in Data.Functor.Compose
(Traversable f, Traversable g) => Traversable (f :.: g) Source # Since: base-4.9.0.0
Defined in Data.Traversable
Traversable f => Traversable (M1 i c f) Source # Since: base-4.9.0.0
Defined in Data.Traversable
Utility functions
for :: (Traversable t, Applicative f) => t a -> (a -> f b) -> f (t b) Source #
for is traverse with its arguments flipped. For a version that ignores the results see for_.
forM :: (Traversable t, Monad m) => t a -> (a -> m b) -> m (t b) Source #
forM is mapM with its arguments flipped. For a version that ignores the results see forM_.
mapAccumL :: forall t s a b. Traversable t => (s -> a -> (s, b)) -> s -> t a -> (s, t b) Source #
The mapAccumL function behaves like a combination of fmap and foldl; it applies a function to each element of a structure, passing an accumulating parameter from left to right, and returning a final
value of this accumulator together with the new structure.
Basic usage:
>>> mapAccumL (\a b -> (a + b, a)) 0 [1..10]
(55,[0,1,3,6,10,15,21,28,36,45])
>>> mapAccumL (\a b -> (a <> show b, a)) "0" [1..5]
("012345",["0","01","012","0123","01234"])
mapAccumR :: forall t s a b. Traversable t => (s -> a -> (s, b)) -> s -> t a -> (s, t b) Source #
The mapAccumR function behaves like a combination of fmap and foldr; it applies a function to each element of a structure, passing an accumulating parameter from right to left, and returning a final
value of this accumulator together with the new structure.
Basic usage:
>>> mapAccumR (\a b -> (a + b, a)) 0 [1..10]
(55,[54,52,49,45,40,34,27,19,10,0])
>>> mapAccumR (\a b -> (a <> show b, a)) "0" [1..5]
("054321",["05432","0543","054","05","0"])
General definitions for superclass methods
Traversable structures support element-wise sequencing of Applicative effects (thus also Monad effects) to construct new structures of the same shape as the input.
To illustrate what is meant by same shape, if the input structure is [a], each output structure is a list [b] of the same length as the input. If the input is a Tree a, each output Tree b has the
same graph of intermediate nodes and leaves. Similarly, if the input is a 2-tuple (x, a), each output is a 2-tuple (x, b), and so forth.
It is in fact possible to decompose a traversable structure t a into its shape (a.k.a. spine) of type t () and its element list [a]. The original structure can be faithfully reconstructed from its
spine and element list.
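A minimal sketch of this decomposition (the helper names are ours, not part of the library): fmap recovers the spine, a fold recovers the element list, and mapAccumL threads the list back through the spine to reconstruct the original.

```haskell
import Data.Traversable (mapAccumL)

-- Split a structure into its shape (spine) and its element list.
decompose :: Traversable t => t a -> (t (), [a])
decompose t = (fmap (const ()) t, foldr (:) [] t)

-- Refill a spine from an element list; mapAccumL threads the remaining
-- elements as state. Assumes the list has exactly one element per slot.
recombine :: Traversable t => t () -> [a] -> t a
recombine spine xs = snd (mapAccumL (\(y:ys) () -> (ys, y)) xs spine)
```

For any traversable `t`, `uncurry recombine (decompose t)` rebuilds `t` faithfully.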
The implementation of a Traversable instance for a given structure follows naturally from its type; see the Construction section for details. Instances must satisfy the laws listed in the Laws
section. The diverse uses of Traversable structures result from the many possible choices of Applicative effects. See the Advanced Traversals section for some examples.
Every Traversable structure is both a Functor and Foldable because it is possible to implement the requisite instances in terms of traverse by using fmapDefault for fmap and foldMapDefault for
foldMap. Direct fine-tuned implementations of these superclass methods can in some cases be more efficient.
The traverse and mapM methods
For an Applicative functor f and a Traversable functor t, the type signatures of traverse and fmap are rather similar:
fmap :: (a -> f b) -> t a -> t (f b)
traverse :: (a -> f b) -> t a -> f (t b)
The key difference is that fmap produces a structure whose elements (of type f b) are individual effects, while traverse produces an aggregate effect yielding structures of type t b.
For example, when f is the IO monad, and t is List, fmap yields a list of IO actions, whereas traverse constructs an IO action that evaluates to a list of the return values of the individual actions
performed left-to-right.
traverse :: (a -> IO b) -> [a] -> IO [b]
The mapM function is a specialisation of traverse to the case when f is a Monad. For monads, mapM is more idiomatic than traverse. The two are otherwise generally identical (though mapM may be
specifically optimised for monads, and could be more efficient than using the more general traverse).
traverse :: (Applicative f, Traversable t) => (a -> f b) -> t a -> f (t b)
mapM :: (Monad m, Traversable t) => (a -> m b) -> t a -> m (t b)
When the traversable term is a simple variable or expression, and the monadic action to run is a non-trivial do block, it can be more natural to write the action last. This idiom is supported by for
and forM, which are the flipped versions of traverse and mapM, respectively.
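For instance, with Maybe as the Applicative, the flipped form keeps a multi-line body in reading order (a small sketch; halved is our own name, not a library function):

```haskell
import Data.Traversable (for)

-- for puts the structure first and the (possibly long) action last.
halved :: Maybe [Int]
halved = for [2, 4, 6] $ \n ->
  if even n
    then Just (n `div` 2)
    else Nothing  -- any odd element would short-circuit the whole traversal
-- halved == Just [1,2,3]
```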
Their Foldable analogues: just the effects
The traverse and mapM methods have analogues in the Data.Foldable module. These are traverse_ and mapM_, and their flipped variants for_ and forM_, respectively. The result type is f (), they don't
return an updated structure, and can be used to sequence effects over all the elements of a Traversable (any Foldable) structure just for their side-effects.
If the Traversable structure is empty, the result is pure (). When effects short-circuit, the f () result may, for example, be Nothing if f is Maybe, or Left e when it is Either e.
It is perhaps worth noting that Maybe is not only a potential Applicative functor for the return value of the first argument of traverse, but is also itself a Traversable structure with either zero
or one element. A convenient idiom for conditionally executing an action just for its effects on a Just value, and doing nothing otherwise is:
-- action :: Monad m => a -> m ()
-- mvalue :: Maybe a
mapM_ action mvalue -- :: m ()
which is more concise than:
maybe (return ()) action mvalue
The mapM_ idiom works verbatim if the type of mvalue is later refactored from Maybe a to Either e a (assuming it remains OK to silently do nothing in the Left case).
Result multiplicity
When traverse or mapM is applied to an empty structure ts (one for which null ts is True) the return value is pure ts regardless of the provided function g :: a -> f b. It is not possible to apply
the function when no values of type a are available, but its type determines the relevant instance of pure.
null ts ==> traverse g ts == pure ts
Otherwise, when ts is non-empty and at least one value of type b results from each f a, the structures t b have the same shape (list length, graph of tree nodes, ...) as the input structure t a, but
the slots previously occupied by elements of type a now hold elements of type b.
A single traversal may produce one, zero or many such structures. The zero case happens when one of the effects f a sequenced as part of the traversal yields no replacement values. Otherwise, the many case happens when one of the sequenced effects yields multiple values.
The traverse function does not perform selective filtering of slots in the output structure as with e.g. mapMaybe.
>>> let incOdd n = if odd n then Just $ n + 1 else Nothing
>>> mapMaybe incOdd [1, 2, 3]
[2,4]
>>> traverse incOdd [1, 3, 5]
Just [2,4,6]
>>> traverse incOdd [1, 2, 3]
Nothing
In the above examples, with Maybe as the Applicative f, we see that the number of t b structures produced by traverse may differ from one: it is zero when the result short-circuits to Nothing. The
same can happen when f is List and the result is [], or f is Either e and the result is Left (x :: e), or perhaps the empty value of some Alternative functor.
When f is e.g. List, and the map g :: a -> [b] returns more than one value for some inputs a (and at least one for all a), the result of mapM g ts will contain multiple structures of the same shape
as ts:
length (mapM g ts) == product (fmap (length . g) ts)
For example:
>>> length $ mapM (\n -> [1..n]) [1..6]
720
>>> product $ length . (\n -> [1..n]) <$> [1..6]
720
In other words, a traversal with a function g :: a -> [b], over an input structure t a, yields a list [t b], whose length is the product of the lengths of the lists that g returns for each element of
the input structure! The individual elements a of the structure are replaced by each element of g a in turn:
>>> mapM (\n -> [1..n]) $ Just 3
[Just 1,Just 2,Just 3]
>>> mapM (\n -> [1..n]) [1..3]
[[1,1,1],[1,1,2],[1,1,3],[1,2,1],[1,2,2],[1,2,3]]
If any element of the structure t a is mapped by g to an empty list, then the entire aggregate result is empty, because no value is available to fill one of the slots of the output structure:
>>> mapM (\n -> [1..n]) $ [0..6] -- [1..0] is empty
[]
The sequenceA and sequence methods
The sequenceA and sequence methods are useful when what you have is a container of pending applicative or monadic effects, and you want to combine them into a single effect that produces zero or more
containers with the computed values.
sequenceA :: (Applicative f, Traversable t) => t (f a) -> f (t a)
sequence :: (Monad m, Traversable t) => t (m a) -> m (t a)
sequenceA = traverse id -- default definition
sequence = sequenceA -- default definition
When the monad m is IO, applying sequence to a list of IO actions, performs each in turn, returning a list of the results:
sequence [putStr "Hello ", putStrLn "World!"]
= (\a b -> [a,b]) <$> putStr "Hello " <*> putStrLn "World!"
= do u1 <- putStr "Hello "
u2 <- putStrLn "World!"
return [u1, u2] -- In this case [(), ()]
For sequenceA, the non-deterministic behaviour of List is most easily seen in the case of a list of lists (of elements of some common fixed type). The result is a cross-product of all the sublists:
>>> sequenceA [[0, 1, 2], [30, 40], [500]]
[[0,30,500],[0,40,500],[1,30,500],[1,40,500],[2,30,500],[2,40,500]]
Because the input list has three (sublist) elements, the result is a list of triples (same shape).
Care with default method implementations
The traverse method has a default implementation in terms of sequenceA:
traverse g = sequenceA . fmap g
but relying on this default implementation is not recommended: it requires that the structure already be independently a Functor. The definition of sequenceA in terms of traverse id is much simpler than traverse expressed via a composition of sequenceA and fmap. Instances should generally implement traverse explicitly. It may in some cases also make sense to implement a specialised mapM.
Because fmapDefault is defined in terms of traverse (whose default definition in terms of sequenceA uses fmap), you must not use fmapDefault to define the Functor instance if the Traversable instance
directly defines only sequenceA.
Monadic short circuits
When the monad m is Either or Maybe (more generally any MonadPlus), the effect in question is to short-circuit the result on encountering Left or Nothing (more generally mzero).
>>> sequence [Just 1,Just 2,Just 3]
Just [1,2,3]
>>> sequence [Just 1,Nothing,Just 3]
Nothing
>>> sequence [Right 1,Right 2,Right 3]
Right [1,2,3]
>>> sequence [Right 1,Left "sorry",Right 3]
Left "sorry"
The result of sequence is all-or-nothing: either structures of exactly the same shape as the input, or none at all. The sequence function does not perform selective filtering as with e.g. catMaybes or rights:
>>> catMaybes [Just 1,Nothing,Just 3]
[1,3]
>>> rights [Right 1,Left "sorry",Right 3]
[1,3]
Example binary tree instance
The definition of a Traversable instance for a binary tree is rather similar to the corresponding instance of Functor, given the data type:
data Tree a = Empty | Leaf a | Node (Tree a) a (Tree a)
a canonical Functor instance would be
instance Functor Tree where
fmap g Empty = Empty
fmap g (Leaf x) = Leaf (g x)
fmap g (Node l k r) = Node (fmap g l) (g k) (fmap g r)
a canonical Traversable instance would be
instance Traversable Tree where
traverse g Empty = pure Empty
traverse g (Leaf x) = Leaf <$> g x
traverse g (Node l k r) = Node <$> traverse g l <*> g k <*> traverse g r
This definition works for any g :: a -> f b, with f an Applicative functor, as the laws for (<*>) imply the requisite associativity.
We can add an explicit non-default mapM if desired:
mapM g Empty = return Empty
mapM g (Leaf x) = Leaf <$> g x
mapM g (Node l k r) = do
ml <- mapM g l
mk <- g k
mr <- mapM g r
return $ Node ml mk mr
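To see the instance in action, here is a self-contained sketch (repeating the Tree type from above; the halveEven validator is a hypothetical helper): traversing with a Maybe-returning function either rebuilds the whole tree or fails as a whole.

```haskell
import Data.Traversable (fmapDefault, foldMapDefault)

data Tree a = Empty | Leaf a | Node (Tree a) a (Tree a) deriving (Show, Eq)

instance Functor Tree where fmap = fmapDefault
instance Foldable Tree where foldMap = foldMapDefault
instance Traversable Tree where
  traverse _ Empty        = pure Empty
  traverse g (Leaf x)     = Leaf <$> g x
  traverse g (Node l k r) = Node <$> traverse g l <*> g k <*> traverse g r

-- Hypothetical helper: halve an element if it is even, fail otherwise.
halveEven :: Int -> Maybe Int
halveEven x = if even x then Just (x `div` 2) else Nothing
```

If every element passes, the rebuilt tree has the original shape; a single failure makes the whole result Nothing.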
See Construction below for a more detailed exploration of the general case, but as mentioned in Overview above, instance definitions are typically rather simple, all the interesting behaviour is a
result of an interesting choice of Applicative functor for a traversal.
Pre-order and post-order tree traversal
It is perhaps worth noting that the traversal defined above gives an in-order sequencing of the elements. If instead you want either pre-order (parent first, then child nodes) or post-order (child
nodes first, then parent) sequencing, you can define the instance accordingly:
inOrderNode :: Tree a -> a -> Tree a -> Tree a
inOrderNode l x r = Node l x r
preOrderNode :: a -> Tree a -> Tree a -> Tree a
preOrderNode x l r = Node l x r
postOrderNode :: Tree a -> Tree a -> a -> Tree a
postOrderNode l r x = Node l x r
-- Traversable instance with in-order traversal
instance Traversable Tree where
traverse g t = case t of
Empty -> pure Empty
Leaf x -> Leaf <$> g x
Node l x r -> inOrderNode <$> traverse g l <*> g x <*> traverse g r
-- Traversable instance with pre-order traversal
instance Traversable Tree where
traverse g t = case t of
Empty -> pure Empty
Leaf x -> Leaf <$> g x
Node l x r -> preOrderNode <$> g x <*> traverse g l <*> traverse g r
-- Traversable instance with post-order traversal
instance Traversable Tree where
traverse g t = case t of
Empty -> pure Empty
Leaf x -> Leaf <$> g x
Node l x r -> postOrderNode <$> traverse g l <*> traverse g r <*> g x
Since the same underlying Tree structure is used in all three cases, it is possible to use newtype wrappers to make all three traversal orders available at the same time! The user need only wrap the root of the tree
in the appropriate newtype for the desired traversal order. The associated instance definitions are shown below (see coercion if unfamiliar with the use of coerce in the sample code):
{-# LANGUAGE ScopedTypeVariables, TypeApplications #-}
-- Default in-order traversal
import Data.Coerce (coerce)
import Data.Traversable
data Tree a = Empty | Leaf a | Node (Tree a) a (Tree a)
instance Functor Tree where fmap = fmapDefault
instance Foldable Tree where foldMap = foldMapDefault
instance Traversable Tree where
traverse _ Empty = pure Empty
traverse g (Leaf a) = Leaf <$> g a
traverse g (Node l a r) = Node <$> traverse g l <*> g a <*> traverse g r
-- Optional pre-order traversal
newtype PreOrderTree a = PreOrderTree (Tree a)
instance Functor PreOrderTree where fmap = fmapDefault
instance Foldable PreOrderTree where foldMap = foldMapDefault
instance Traversable PreOrderTree where
traverse _ (PreOrderTree Empty) = pure $ preOrderEmpty
traverse g (PreOrderTree (Leaf x)) = preOrderLeaf <$> g x
traverse g (PreOrderTree (Node l x r)) = preOrderNode
<$> g x
<*> traverse g (coerce l)
<*> traverse g (coerce r)
preOrderEmpty :: forall a. PreOrderTree a
preOrderEmpty = coerce (Empty @a)
preOrderLeaf :: forall a. a -> PreOrderTree a
preOrderLeaf = coerce (Leaf @a)
preOrderNode :: a -> PreOrderTree a -> PreOrderTree a -> PreOrderTree a
preOrderNode x l r = coerce (Node (coerce l) x (coerce r))
-- Optional post-order traversal
newtype PostOrderTree a = PostOrderTree (Tree a)
instance Functor PostOrderTree where fmap = fmapDefault
instance Foldable PostOrderTree where foldMap = foldMapDefault
instance Traversable PostOrderTree where
traverse _ (PostOrderTree Empty) = pure postOrderEmpty
traverse g (PostOrderTree (Leaf x)) = postOrderLeaf <$> g x
traverse g (PostOrderTree (Node l x r)) = postOrderNode
<$> traverse g (coerce l)
<*> traverse g (coerce r)
<*> g x
postOrderEmpty :: forall a. PostOrderTree a
postOrderEmpty = coerce (Empty @a)
postOrderLeaf :: forall a. a -> PostOrderTree a
postOrderLeaf = coerce (Leaf @a)
postOrderNode :: PostOrderTree a -> PostOrderTree a -> a -> PostOrderTree a
postOrderNode l r x = coerce (Node (coerce l) x (coerce r))
With the above, given a sample tree:
inOrder :: Tree Int
inOrder = Node (Node (Leaf 10) 3 (Leaf 20)) 5 (Leaf 42)
we have:
import Data.Foldable (toList)
print $ toList inOrder
-- [10,3,20,5,42]
print $ toList (coerce inOrder :: PreOrderTree Int)
-- [5,3,10,20,42]
print $ toList (coerce inOrder :: PostOrderTree Int)
-- [10,20,3,42,5]
You would typically define instances for additional common type classes, such as Eq, Ord, Show, etc.
Making construction intuitive
In order to be able to reason about how a given type of Applicative effects will be sequenced through a general Traversable structure by its traverse and related methods, it is helpful to look
more closely at how a general traverse method is implemented. We'll look at how general traversals are constructed primarily with a view to being able to predict their behaviour as a user, even if
you're not defining your own Traversable instances.
Traversable structures t a are assembled incrementally from their constituent parts, perhaps by prepending or appending individual elements of type a, or, more generally, by recursively combining
smaller composite traversable building blocks that contain multiple such elements.
As in the tree example above, the components being combined are typically pieced together by a suitable constructor, i.e. a function taking two or more arguments that returns a composite value.
The traverse method enriches simple incremental construction with threading of Applicative effects of some function g :: a -> f b.
The basic building blocks we'll use to model the construction of traverse are a hypothetical set of elementary functions, some of which may have direct analogues in specific Traversable structures.
For example, for lists the (:) constructor is an analogue of prepend, or of the more general combine.
empty :: t a -- build an empty container
singleton :: a -> t a -- build a one-element container
prepend :: a -> t a -> t a -- extend by prepending a new initial element
append :: t a -> a -> t a -- extend by appending a new final element
combine :: a1 -> a2 -> ... -> an -> t a -- combine multiple inputs
• An empty structure has no elements of type a, so there's nothing to which g can be applied, but since we need an output of type f (t b), we just use the pure instance of f to wrap an empty of
type t b:
traverse _ (empty :: t a) = pure (empty :: t b)
With the List monad, empty is [], while with Maybe it is Nothing. With Either e a we have an empty case for each value of e:
traverse _ (Left e :: Either e a) = pure $ (Left e :: Either e b)
• A singleton structure has just one element of type a, and traverse can take that a, apply g :: a -> f b getting an f b, then fmap singleton over that, getting an f (t b) as required:
traverse g (singleton a) = fmap singleton $ g a
Note that if f is List and g returns multiple values the result will be a list of multiple t b singletons!
Since Maybe and Either are either empty or singletons, we have
traverse _ Nothing = pure Nothing
traverse g (Just a) = Just <$> g a
traverse _ (Left e) = pure (Left e)
traverse g (Right a) = Right <$> g a
For List, empty is [] and singleton is (:[]), so we have:
traverse _ [] = pure []
traverse g [a] = fmap (:[]) (g a)
= (:) <$> (g a) <*> traverse g []
= liftA2 (:) (g a) (traverse g [])
• When the structure is built by adding one more element via prepend or append, traversal amounts to:
traverse g (prepend a t0) = prepend <$> (g a) <*> traverse g t0
= liftA2 prepend (g a) (traverse g t0)
traverse g (append t0 a) = append <$> traverse g t0 <*> g a
= liftA2 append (traverse g t0) (g a)
The origin of the combinatorial product when f is List should now be apparent: when traverse g t0 has n elements and g a has m elements, the non-deterministic Applicative instance of List will
produce a result with m * n elements.
• When combining larger building blocks, we again use (<*>) to combine the traversals of the components. With bare elements a mapped to f b via g, and composite traversable sub-structures
transformed via traverse g:
traverse g (combine a1 a2 ... an) =
    combine <$> t1 <*> t2 <*> ... <*> tn
  where
    t1 = g a1          -- if a1 fills a slot of type @a@
       = traverse g a1 -- if a1 is a traversable substructure
    ... ditto for the remaining constructor arguments ...
The above definitions sequence the Applicative effects of f in the expected order while producing results of the expected shape t.
For lists this becomes:
traverse g [] = pure []
traverse g (x:xs) = liftA2 (:) (g x) (traverse g xs)
The actual definition of traverse for lists is an equivalent right fold in order to facilitate list fusion.
traverse g = foldr (\x r -> liftA2 (:) (g x) r) (pure [])
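As a quick sanity check of the list case (safeDiv is a hypothetical helper): a traversal with a Maybe-returning function rebuilds the whole list or fails as a whole, and the foldr formulation above agrees with the library's traverse.

```haskell
import Control.Applicative (liftA2)

-- Hypothetical helper: division that fails on a zero divisor.
safeDiv :: Int -> Int -> Maybe Int
safeDiv _ 0 = Nothing
safeDiv n d = Just (n `div` d)

-- The foldr formulation of list traversal, as given above.
traverseList :: Applicative f => (a -> f b) -> [a] -> f [b]
traverseList g = foldr (\x r -> liftA2 (:) (g x) r) (pure [])
```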
Advanced traversals
In the sections below we'll examine some advanced choices of Applicative effects that give rise to very different transformations of Traversable structures.
These examples cover the implementations of fmapDefault, foldMapDefault, mapAccumL and mapAccumR functions illustrating the use of Identity, Const and stateful Applicative effects. The ZipList
example illustrates the use of a less-well known Applicative instance for lists.
This is optional material, which is not essential to a basic understanding of Traversable structures. If this is your first encounter with Traversable structures, you can come back to these at a
later date.
Some of the examples make use of an advanced Haskell feature, namely newtype coercion. This is done for two reasons:
• Use of coerce makes it possible to avoid cluttering the code with functions that wrap and unwrap newtype terms, which at runtime are indistinguishable from the underlying value. Coercion is
particularly convenient when one would have to otherwise apply multiple newtype constructors to function arguments, and then peel off multiple layers of same from the function output.
• Use of coerce can produce more efficient code, by reusing the original value, rather than allocating space for a wrapped clone.
If you're not familiar with coerce, don't worry, it is just a shorthand that, e.g., given:
newtype Foo a = MkFoo { getFoo :: a }
newtype Bar a = MkBar { getBar :: a }
newtype Baz a = MkBaz { getBaz :: a }
f :: Baz Int -> Bar (Foo String)
makes it possible to write:
x :: Int -> String
x = coerce f
instead of
x = getFoo . getBar . f . MkBaz
Identity: the fmapDefault function
The simplest Applicative functor is Identity, which just wraps and unwraps pure values and function application. This allows us to define fmapDefault:
{-# LANGUAGE ScopedTypeVariables, TypeApplications #-}
import Data.Coerce (coerce)
import Data.Functor.Identity (Identity(..))
fmapDefault :: forall t a b. Traversable t => (a -> b) -> t a -> t b
fmapDefault = coerce (traverse @t @Identity @a @b)
The use of coercion avoids the need to explicitly wrap and unwrap terms via Identity and runIdentity.
As noted in Overview, fmapDefault can only be used to define the requisite Functor instance of a Traversable structure when the traverse method is explicitly implemented. An infinite loop would
result if in addition traverse were defined in terms of sequenceA and fmap.
State: the mapAccumL, mapAccumR functions
Applicative functors that thread a changing state through a computation are an interesting use-case for traverse. The mapAccumL and mapAccumR functions in this module are each defined in terms of
such traversals.
We first define a simplified (not a monad transformer) version of State that threads a state s through a chain of computations left to right. Its (<*>) operator passes the input state first to its
left argument, and then the resulting state is passed to its right argument, which returns the final state.
newtype StateL s a = StateL { runStateL :: s -> (s, a) }
instance Functor (StateL s) where
fmap f (StateL kx) = StateL $ \ s ->
let (s', x) = kx s in (s', f x)
instance Applicative (StateL s) where
pure a = StateL $ \s -> (s, a)
(StateL kf) <*> (StateL kx) = StateL $ \ s ->
let { (s', f) = kf s
; (s'', x) = kx s' } in (s'', f x)
liftA2 f (StateL kx) (StateL ky) = StateL $ \ s ->
let { (s', x) = kx s
; (s'', y) = ky s' } in (s'', f x y)
With StateL, we can define mapAccumL as follows:
{-# LANGUAGE ScopedTypeVariables, TypeApplications #-}
mapAccumL :: forall t s a b. Traversable t
=> (s -> a -> (s, b)) -> s -> t a -> (s, t b)
mapAccumL g s ts = coerce (traverse @t @(StateL s) @a @b) (flip g) ts s
The use of coercion avoids the need to explicitly wrap and unwrap newtype terms.
The type of flip g is coercible to a -> StateL s b, which makes it suitable for use with traverse. As part of the Applicative construction of StateL (t b) the state updates will thread left-to-right
along the sequence of elements of t a.
While mapAccumR has a type signature identical to mapAccumL, it differs in the expected order of evaluation of effects, which must take place right-to-left.
For this we need a variant control structure StateR, which threads the state right-to-left, by passing the input state to its right argument and then using the resulting state as an input to its left argument:
newtype StateR s a = StateR { runStateR :: s -> (s, a) }
instance Functor (StateR s) where
fmap f (StateR kx) = StateR $ \s ->
let (s', x) = kx s in (s', f x)
instance Applicative (StateR s) where
pure a = StateR $ \s -> (s, a)
(StateR kf) <*> (StateR kx) = StateR $ \ s ->
let { (s', x) = kx s
; (s'', f) = kf s' } in (s'', f x)
liftA2 f (StateR kx) (StateR ky) = StateR $ \ s ->
let { (s', y) = ky s
; (s'', x) = kx s' } in (s'', f x y)
With StateR, we can define mapAccumR as follows:
{-# LANGUAGE ScopedTypeVariables, TypeApplications #-}
mapAccumR :: forall t s a b. Traversable t
=> (s -> a -> (s, b)) -> s -> t a -> (s, t b)
mapAccumR g s0 ts = coerce (traverse @t @(StateR s) @a @b) (flip g) ts s0
The use of coercion avoids the need to explicitly wrap and unwrap newtype terms.
Various stateful traversals can be constructed from mapAccumL and mapAccumR for suitable choices of g, or built directly along similar lines.
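A quick illustration with the library versions (the accumulator function here is an arbitrary choice): mapAccumL exposes, at each position, the running total of everything to its left, while mapAccumR exposes the total of everything to its right.

```haskell
import Data.Traversable (mapAccumL, mapAccumR)

-- Left-to-right: each output element is the sum of the elements before it.
prefixSums :: (Int, [Int])
prefixSums = mapAccumL (\acc x -> (acc + x, acc)) 0 [1,2,3,4]
-- (10, [0,1,3,6])

-- Right-to-left: each output element is the sum of the elements after it.
suffixSums :: (Int, [Int])
suffixSums = mapAccumR (\acc x -> (acc + x, acc)) 0 [1,2,3,4]
-- (10, [9,7,4,0])
```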
Const: the foldMapDefault function
The Const Functor enables applications of traverse that summarise the input structure to an output value without constructing any output values of the same type or shape.
As noted above, the Foldable superclass constraint is justified by the fact that it is possible to construct foldMap, foldr, etc., from traverse. The technique used is useful in its own right, and is
explored below.
A key feature of folds is that they can reduce the input structure to a summary value. Often neither the input structure nor a mutated clone is needed once the fold is computed, and through list
fusion the input may not even have been memory resident in its entirety at the same time.
The traverse method does not at first seem to be a suitable building block for folds, because its return value f (t b) appears to retain mutated copies of the input structure. But the presence of t b
in the type signature need not mean that terms of type t b are actually embedded in f (t b). The simplest way to elide the excess terms is by basing the Applicative functor used with traverse on Const.
Not only does Const a b hold just an a value, with the b parameter merely a phantom type, but when m has a Monoid instance, Const m is an Applicative functor:
import Data.Coerce (coerce)
newtype Const a b = Const { getConst :: a } deriving (Eq, Ord, Show) -- etc.
instance Functor (Const m) where fmap = const coerce
instance Monoid m => Applicative (Const m) where
pure _ = Const mempty
(<*>) = coerce (mappend :: m -> m -> m)
liftA2 _ = coerce (mappend :: m -> m -> m)
The use of coercion avoids the need to explicitly wrap and unwrap newtype terms.
We can therefore define a specialisation of traverse:
{-# LANGUAGE ScopedTypeVariables, TypeApplications #-}
traverseC :: forall t a m. (Monoid m, Traversable t)
=> (a -> Const m ()) -> t a -> Const m (t ())
traverseC = traverse @t @(Const m) @a @()
For which the Applicative construction of traverse leads to:
null ts ==> traverseC g ts = Const mempty
traverseC g (prepend x xs) = Const (getConst (g x) <> getConst (traverseC g xs))
In other words, this makes it possible to define:
{-# LANGUAGE ScopedTypeVariables, TypeApplications #-}
foldMapDefault :: forall t a m. (Monoid m, Traversable t) => (a -> m) -> t a -> m
foldMapDefault = coerce (traverse @t @(Const m) @a @())
Which is sufficient to define a Foldable superclass instance:
instance Traversable t => Foldable t where foldMap = foldMapDefault
The use of coercion avoids the need to explicitly wrap and unwrap newtype terms.
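The same Const-based trick can also be used directly, without the default machinery; the sketch below (sumViaTraverse is a hypothetical name) folds a structure to a sum purely through traverse:

```haskell
import Data.Functor.Const (Const(..))
import Data.Monoid (Sum(..))

-- Summing through traverse: the Const applicative throws away the rebuilt
-- structure and keeps only the accumulated monoid value.
sumViaTraverse :: Traversable t => t Int -> Int
sumViaTraverse = getSum . getConst . traverse (Const . Sum)
```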
It may however be instructive to also directly define candidate default implementations of foldr and foldl', which take a bit more machinery to construct:
{-# LANGUAGE ScopedTypeVariables, TypeApplications #-}
import Data.Coerce (coerce)
import Data.Functor.Const (Const(..))
import Data.Semigroup (Dual(..), Endo(..))
import GHC.Exts (oneShot)
foldrDefault :: forall t a b. Traversable t
=> (a -> b -> b) -> b -> t a -> b
foldrDefault f z = \t ->
coerce (traverse @t @(Const (Endo b)) @a @()) f t z
foldlDefault' :: forall t a b. Traversable t => (b -> a -> b) -> b -> t a -> b
foldlDefault' f z = \t ->
    coerce (traverse @t @(Const (Dual (Endo b))) @a @()) f' t z
  where
    f' :: a -> b -> b
    f' a = oneShot $ \ b -> b `seq` f b a
In the above we're using the Endo b Monoid and its Dual to compose a sequence of b -> b accumulator updates in either left-to-right or right-to-left order.
The use of seq in the definition of foldlDefault' ensures strictness in the accumulator.
The use of coercion avoids the need to explicitly wrap and unwrap newtype terms.
The oneShot function gives a hint to the compiler that aids in correct optimisation of lambda terms that fire at most once (for each element a) and so should not try to pre-compute and re-use
subexpressions that pay off only on repeated execution. Otherwise, it is just the identity function.
ZipList: transposing lists of lists
As a warm-up for looking at the ZipList Applicative functor, we'll first look at a simpler analogue. First define a fixed width 2-element Vec2 type, whose Applicative instance combines a pair of
functions with a pair of values by applying each function to the corresponding value slot:
data Vec2 a = Vec2 a a
instance Functor Vec2 where
fmap f (Vec2 a b) = Vec2 (f a) (f b)
instance Applicative Vec2 where
pure x = Vec2 x x
liftA2 f (Vec2 a b) (Vec2 p q) = Vec2 (f a p) (f b q)
instance Foldable Vec2 where
foldr f z (Vec2 a b) = f a (f b z)
foldMap f (Vec2 a b) = f a <> f b
instance Traversable Vec2 where
traverse f (Vec2 a b) = Vec2 <$> f a <*> f b
Along with a similar definition for fixed width 3-element vectors:
data Vec3 a = Vec3 a a a
instance Functor Vec3 where
fmap f (Vec3 x y z) = Vec3 (f x) (f y) (f z)
instance Applicative Vec3 where
pure x = Vec3 x x x
liftA2 f (Vec3 p q r) (Vec3 x y z) = Vec3 (f p x) (f q y) (f r z)
instance Foldable Vec3 where
foldr f z (Vec3 a b c) = f a (f b (f c z))
foldMap f (Vec3 a b c) = f a <> f b <> f c
instance Traversable Vec3 where
traverse f (Vec3 a b c) = Vec3 <$> f a <*> f b <*> f c
With the above definitions, sequenceA (same as traverse id) acts as a matrix transpose operation on Vec2 (Vec3 Int) producing a corresponding Vec3 (Vec2 Int):
Let t = Vec2 (Vec3 1 2 3) (Vec3 4 5 6) be our Traversable structure, and g = id :: Vec3 Int -> Vec3 Int be the function used to traverse t. We then have:
traverse g t = Vec2 <$> (Vec3 1 2 3) <*> (Vec3 4 5 6)
= Vec3 (Vec2 1 4) (Vec2 2 5) (Vec2 3 6)
This construction can be generalised from fixed width vectors to variable length lists via ZipList. This gives a transpose operation that works well for lists of equal length. If some of the lists
are longer than others, they're truncated to the length of the shortest list.
We've already looked at the standard Applicative instance of List for which applying m functions f1, f2, ..., fm to n input values a1, a2, ..., an produces m * n outputs:
>>> :set -XTupleSections
>>> [("f1",), ("f2",), ("f3",)] <*> [1,2]
[("f1",1),("f1",2),("f2",1),("f2",2),("f3",1),("f3",2)]
There are however two more common ways to turn lists into Applicative control structures. The first is via Const [a], since lists are monoids under concatenation, and we've already seen that Const m
is an Applicative functor when m is a Monoid. The second is based on zipWith, and is called ZipList:
{-# LANGUAGE GeneralizedNewtypeDeriving #-}
newtype ZipList a = ZipList { getZipList :: [a] }
deriving (Show, Eq, ..., Functor)
instance Applicative ZipList where
liftA2 f (ZipList xs) (ZipList ys) = ZipList $ zipWith f xs ys
pure x = ZipList (repeat x)
The liftA2 definition is clear enough: instead of applying f to each pair (x, y) drawn independently from the xs and ys, only corresponding pairs at each index in the two lists are used.
The definition of pure may look surprising, but it is needed to ensure that the instance is lawful:
liftA2 f (pure x) ys == fmap (f x) ys
Since ys can have any length, we need to provide an infinite supply of x values in pure x in order to have a value to pair with each element y.
When ZipList is the Applicative functor used in the construction of a traversal, a ZipList holding a partially built structure with m elements is combined with a component holding n elements via
zipWith, resulting in min m n outputs!
Therefore traverse with g :: a -> ZipList b will produce a ZipList of t b structures whose element count is the minimum length of the ZipLists g a with a ranging over the elements of t. When t is
empty, the length is infinite (as expected for a minimum of an empty set).
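This zipping behaviour is easy to observe directly (pairUp is a hypothetical name for the traversal specialised to lists of lists):

```haskell
import Control.Applicative (ZipList(..))

-- Traversing with ZipList zips pointwise; the output length is the
-- minimum of the inner list lengths.
pairUp :: [[a]] -> [[a]]
pairUp = getZipList . traverse ZipList
```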
If the structure t holds values of type ZipList a, we can use the identity function id :: ZipList a -> ZipList a for the first argument of traverse:
traverse (id :: ZipList a -> ZipList a) :: t (ZipList a) -> ZipList (t a)
The number of elements in the output ZipList will be the length of the shortest ZipList element of t. Each output t a will have the same shape as the input t (ZipList a), i.e. will share its number
of elements.
If we think of the elements of t (ZipList a) as its rows, and the elements of each individual ZipList as the columns of that row, we see that our traversal implements a transpose operation swapping
the rows and columns of t, after first truncating all the rows to the column count of the shortest one.
Since in fact traverse id is just sequenceA, the above boils down to a rather concise definition of transpose, with coercion used to implicitly wrap and unwrap the ZipList newtype as needed, giving a
function that operates on a list of lists:
{-# LANGUAGE ScopedTypeVariables #-}
import Control.Applicative (ZipList(..))
import Data.Coerce (coerce)
transpose :: forall a. [[a]] -> [[a]]
transpose = coerce (sequenceA :: [ZipList a] -> ZipList [a])
>>> transpose [[1,2,3],[4..],[7..]]
[[1,4,7],[2,5,8],[3,6,9]]
The use of coercion avoids the need to explicitly wrap and unwrap ZipList terms.
A definition of traverse must satisfy the following laws:
t . traverse f = traverse (t . f) for every applicative transformation t
traverse Identity = Identity
traverse (Compose . fmap g . f) = Compose . fmap (traverse g) . traverse f
A definition of sequenceA must satisfy the following laws:
t . sequenceA = sequenceA . fmap t for every applicative transformation t
sequenceA . fmap Identity = Identity
sequenceA . fmap Compose = Compose . fmap sequenceA . sequenceA
where an applicative transformation is a function
t :: (Applicative f, Applicative g) => f a -> g a
preserving the Applicative operations, i.e.
t (pure x) = pure x
t (f <*> x) = t f <*> t x
and the identity functor Identity and composition functors Compose are from Data.Functor.Identity and Data.Functor.Compose.
A result of the naturality law is a purity law for traverse
traverse pure = pure
(The naturality law is implied by parametricity and thus so is the purity law [1, p15].)
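For a concrete instance of the naturality law, maybeToList is an applicative transformation from Maybe to lists (it preserves pure and (<*>)), so the two sides below must coincide:

```haskell
import Data.Maybe (maybeToList)

-- Naturality: t . traverse f = traverse (t . f), here with t = maybeToList.
lhs, rhs :: [[Int]]
lhs = maybeToList (traverse Just [1,2,3])
rhs = traverse (maybeToList . Just) [1,2,3]
```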
The superclass instances should satisfy the following:
• In the Functor instance, fmap should be equivalent to traversal with the identity applicative functor (fmapDefault).
• In the Foldable instance, foldMap should be equivalent to traversal with a constant applicative functor (foldMapDefault).
Note: the Functor superclass means that (in GHC) Traversable structures cannot impose any constraints on the element type. A Haskell implementation that supports constrained functors could make it
possible to define constrained Traversable structures.
See also
• [1] "The Essence of the Iterator Pattern", by Jeremy Gibbons and Bruno Oliveira, in Mathematically-Structured Functional Programming, 2006, online at http://www.cs.ox.ac.uk/people/jeremy.gibbons/
• "Applicative Programming with Effects", by Conor McBride and Ross Paterson, Journal of Functional Programming 18:1 (2008) 1-13, online at http://www.soi.city.ac.uk/~ross/papers/Applicative.html.
• "An Investigation of the Laws of Traversals", by Mauro Jaskelioff and Ondrej Rypacek, in Mathematically-Structured Functional Programming, 2012, online at http://arxiv.org/pdf/1202.2919.
Solutions To Deitel's C How To Program Exercises
Below are links to the solutions to Deitel's C How to Program (7th Edition) exercises that have been posted so far. This page is updated as more solutions are posted. You are strongly advised
to try to solve each problem yourself before checking the solution. Note that there are multiple ways of solving a problem; you may have another (maybe even better) way of solving it. In such
cases, please let us know. Only answers to programming exercises are provided here; answers to self-review exercises, multiple-choice questions and theoretical questions are
not provided. You can test run the programs online for free at ideone.com.
Chapter Two
Exercise 2.16 – C program to perform simple arithmetic
Exercise 2.17 – C program to print values with printf
Exercise 2.18 – C program to compare two integer numbers
Exercise 2.19 – Larger and smaller value
Exercise 2.20 – Diameter, circumference and area of a circle
Exercise 2.21 – Shapes with asterisks
Exercise 2.23 – Largest and smallest integers
Exercise 2.24 – Odd or even
Exercise 2.25 – Program to print your initials
Exercise 2.26 – Multiples
Exercise 2.27 – Checkerboard pattern of asterisks
Exercise 2.29 – Integer value of a character
Exercise 2.30 – Separating digits in an integer
Exercise 2.31 – Table of squares and cubes
Exercise 2.32 – Body mass index calculator
Exercise 2.33 – Car-pool savings calculator
Homework 5: Lazy Evaluation
CSci 2041: Advanced Programming Principles
This homework will highlight lazy evaluation. Note: Part 3 is the most
substantial part of the homework.
To start, create a directory in your personal repository named `Hwk_05`.
You must place all files for this homework in this directory.
## Part 1: Evaluating expressions by hand
As demonstrated in class, and in the document "Expression Evaluation
Examples" (`expr_eval.pdf` in `Resources` on GitHub), we have
evaluated expressions by hand, step by step, to understand the
different ways in which call by value semantics, call by name
semantics, and lazy evaluation work.
### Question 1
Consider the following definitions:
sum [] = 0
sum (x::xs) = x + sum xs
take 0 lst = [ ]
take n [ ] = [ ]
take n (x::xs) = x::take (n-1) xs
evens_from 0 v = [ ]
evens_from n v = v+v :: evens_from (n-1) (v+1)
Now, create a file in your homework directory called `question_1.txt` with
the following:
– Your name
– Your Internet ID
You will now evaluate the following expression by hand:
sum (take 3 (evens_from 5 1))
Similarly to the examples in `expr_eval.pdf`, you must evaluate the
expression three times using three different semantics:
1. Call by value
2. Call by name
3. Lazy evaluation (also known as “call by need”)
Label each of your evaluations clearly by evaluation strategy employed, perhaps
as in this example:
Please write plain text. We also will not accept handwritten work.
### Question 2
Recall these definitions for `foldl` and `foldr`, and the functions for
folding `and` over a boolean list. (Note: We removed the underscores from the
names as they appeared on the S4.1 slides.)
foldr f [] v = v
foldr f (x::xs) v = f x (foldr f xs v)
foldl f v [] = v
foldl f v (x::xs) = foldl f (f v x) xs
and b1 b2 = if b1 then b2 else false
andl l = foldl and true l
andr l = foldr and l true
Now, create a file in your homework directory called `question_2.txt` with
the following:
– Your name
– Your Internet ID
#### Let’s evaluate by hand
You will now evaluate these two expressions:
andl (true::false::true::true::[])
andr (true::false::true::true::[])
You must evaluate the expressions using only two different semantics:
1. Call by value
2. Call by name
Label each of your evaluations clearly by evaluation strategy employed, perhaps
as in this example:
andr – CALL BY VALUE
andr – CALL BY NAME
andl – CALL BY VALUE
andl – CALL BY NAME
A couple of notes 🎵:
+ Write lists in their basic form using the cons (`::`) and nil (`[]`)
operators instead of using semicolons to separate list values between square
brackets.
+ We don't consider lazy evaluation for this question. Why not? (Try it and
compare with your other results.)
#### Which is better?
After you have completed all four evaluations (call by value and call by name
for both `andr` and `andl`), state which evaluation is most efficient and
briefly explain why.
#### Reminder
As before, please write plain text. We also will not accept handwritten work.
## Part 2: Efficiently computing the conjunction of a list of Boolean values
Create an OCaml file in your homework directory named `lazy_and.ml`.
Write a function named `ands` with type `bool list -> bool` which computes
the same result as the `andl` and `andr` functions described in the
problem above, except with the following quality:
> Your function should avoid examining the entire list unless absolutely
necessary, hence the "lazy." (Hint: We saw this behavior in one of the by-hand
evaluations above.) Stated explicitly: If the function encounters a `false`
value then it can terminate and return `false`.
## Part 3: Implementing Streams in OCaml
As demonstrated in class (`lazy.ml` in `Sample Programs` on GitHub) we
developed a type constructor `stream` which can create "lazy" streams and use
lazy evaluation techniques within OCaml (which itself is a strict/eager
language).
Start this part by creating a file named `streams.ml` in your homework
directory and copying the following code (including comments) into it:
(* The code below is from Professor Eric Van Wyk. *)
(* Types and functions for lazy values *)
type 'a lazee = 'a hidden ref
and 'a hidden = Value of 'a
| Thunk of (unit -> 'a)
let delay (unit_to_x: unit -> 'a) : 'a lazee = ref (Thunk unit_to_x)
let force (l: 'a lazee) : unit = match !l with
| Value _ -> ()
| Thunk f -> l := Value (f ())
let rec demand (l: 'a lazee) : 'a =
force l;
match !l with
| Value v -> v
| Thunk f -> raise (Failure "this should not happen")
(* Streams, using lazy values *)
type 'a stream = Cons of 'a * 'a stream lazee
(* The code below is from YOUR NAME HERE *)
Change `YOUR NAME HERE` to your name.
***Proper attribution:** Always clearly mark all parts of your files that you
did not write, as in the code above, and attribute them to their author (your
instructor, for example). Then indicate where your work starts in the file by
adding your name and a comment to this effect, such as in the code above.*
You will now define a collection of stream values, and functions over streams.
### 3.1 `cubes_from`
Define a function `cubes_from` with type `int -> int stream` that creates
a stream of the cubes of numbers starting with the cube of the input value.
For example, `cubes_from 5` should return a stream that contains the values
125, 216, 343, …

Demonstrate to yourself that it works by using `take` to generate a finite
number of cubes.
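One way `cubes_from` could look (a sketch built directly on the `Cons` and `delay` definitions above):

```ocaml
let rec cubes_from (n : int) : int stream =
  Cons (n * n * n, delay (fun () -> cubes_from (n + 1)))
```

Assuming the `take` function from `lazy.ml`, `take 3 (cubes_from 5)` should then give `[125; 216; 343]`.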
### 3.2 `cubes_from_zip`
Copy the `zip` function from the `lazy.ml` file from class to your current
file (with proper attribution), then define a function `cubes_from_zip` with
the same type and behavior as `cubes_from` above, but which uses the `zip`
function (maybe more than once) to construct the stream of cubes.

You may copy and use more functions besides `zip` from the `lazy.ml` file
from class, but you must use proper attribution.
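For illustration, one possible composition (a sketch; it assumes `zip : ('a -> 'b -> 'c) -> 'a stream -> 'b stream -> 'c stream` and `from : int -> int stream` copied from `lazy.ml`, whose actual signatures in the class file may differ):

```ocaml
(* n³ = n * (n * n): zip the naturals with the stream of their squares. *)
let cubes_from_zip (n : int) : int stream =
  let squares = zip (fun a b -> a * b) (from n) (from n) in
  zip (fun a b -> a * b) (from n) squares
```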
### 3.3 `cubes_from_map`
Now copy the `map` function from the `lazy.ml` file from class to your
current file (with proper attribution), and define a function named
`cubes_from_map` with the same type and behavior as `cubes_from` above, but
which uses `map`.
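A possible sketch, assuming `map : ('a -> 'b) -> 'a stream -> 'b stream` and `from` taken from `lazy.ml`:

```ocaml
let cubes_from_map (n : int) : int stream =
  map (fun v -> v * v * v) (from n)
```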
### 3.4 `drop`
Define a function named `drop` with type `int -> 'a stream -> 'a stream`.
This function should discard zero or more elements from the stream; the
first argument indicates how many to discard ("drop").

*(Notice the type difference between `drop` and `take` (from `lazy.ml`).
This is because `take` removes finitely many elements from a stream, so it
returns them as a list.)*
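A sketch of one possible `drop`, using `demand` from the code above:

```ocaml
let rec drop (n : int) (s : 'a stream) : 'a stream =
  if n <= 0 then s
  else match s with
       | Cons (_, rest) -> drop (n - 1) (demand rest)
```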
### 3.5 `drop_until`
Write a function named `drop_until` with type
`('a -> bool) -> 'a stream -> 'a stream`. This function should discard as
many consecutive initial elements as possible from the stream, and output the
remaining stream. The first argument (a function) indicates whether to keep an
element (`true` means keep, `false` means drop).

Stated explicitly, this function drops elements from the input stream until it
finds one for which the function returns `true`. It then outputs the
remaining part of the stream (including the `true` element it found).

Example (using `head` and `nats` from `lazy.ml`):

```ocaml
head (drop_until (fun v -> v > 35) nats)
```

The above expression should give `36`.
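One shape this function could take (a sketch):

```ocaml
let rec drop_until (p : 'a -> bool) (s : 'a stream) : 'a stream =
  match s with
  | Cons (h, rest) -> if p h then s else drop_until p (demand rest)
```

Note that the returned stream still begins with the element for which `p` returned `true`, matching the example above.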
### 3.6 `foldr`, `and_fold`, and `sum_positive_prefix`
Recall `foldr` from Part 1 above. For this question, you will define a new
`foldr` function which uses streams instead of lists. Here are some hints:

+ Because streams have no "empty stream" or "nil" constructor, there is no
  "base" value passed to your new `foldr`; it takes only two arguments.
+ Your new `foldr`'s type must include the `lazee` type somewhere, so it
  can give a value without examining the entire stream.

For full points, you must do the following:

1. Write the `foldr` function with type annotations, including both input
   types and the output type.
2. Write an OCaml comment above your `foldr` function explaining `foldr`'s
   type and briefly how your function works.

#### `and_fold`
Next, define a function named `and_fold` with type `bool stream -> bool`.
It attempts to determine if all elements in the stream are `true`. If it
encounters a `false` element, it should return `false`.

This function should simply pass some kind of "and" function, and the stream,
to your new `foldr`.
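To make the hints concrete, here is one possible shape (a sketch; your type annotations and names may differ): the folding function receives the folded "rest" as a `lazee`, so it can decide not to demand it.

```ocaml
(* foldr : ('a -> 'b lazee -> 'b) -> 'a stream -> 'b
   f gets the head and a *lazy* fold of the tail; demanding the lazy
   value recurses one step further into the stream. *)
let rec foldr (f : 'a -> 'b lazee -> 'b) (s : 'a stream) : 'b =
  match s with
  | Cons (h, rest) -> f h (delay (fun () -> foldr f (demand rest)))

(* Short-circuits: && never demands the lazy rest once b is false. *)
let and_fold (s : bool stream) : bool =
  foldr (fun b rest -> b && demand rest) s
```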
#### `sum_positive_prefix`
Also define a function named `sum_positive_prefix`. It should use your
`foldr` to add up all the positive integers in an `int stream` which appear
**before** any non-positive number.
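Assuming a stream `foldr` of type `('a -> 'b lazee -> 'b) -> 'a stream -> 'b` as hinted above, one possible sketch:

```ocaml
let sum_positive_prefix (s : int stream) : int =
  foldr (fun n rest -> if n > 0 then n + demand rest else 0) s
```

The lazy `rest` is only demanded while the elements stay positive, so the fold stops at the first non-positive element.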
#### A concrete example
To help us think about the above functions, consider the following stream:

```ocaml
let ns : int stream = zip ( - ) (from 1000) (cubes_from 1)
```

Notice that `ns` is an `int stream` computed by using subtraction to zip
together two streams: the first stream is all positive numbers starting from
1000, and the second stream is cubes starting from 1. In particular, this
stream has some positive numbers at the beginning, but then the numbers in the
stream become negative because the second stream grows faster than the first.

For example, evaluating `take 15 ns` yields this:

```ocaml
[999; 993; 975; 939; 879; 789; 663; 495; 279; 9; -321; -717; -1185; -1731; -2361]
```
With `ns` in mind, consider these two streams:

```ocaml
let are_positive ns = map (fun n -> n > 0) ns
let ns_positive : bool stream = are_positive ns
```

Here, `ns_positive` is a `bool stream` whose initial values are `true`,
followed by an unbounded number of `false` elements. Thus,

```ocaml
and_fold ns_positive
```

will evaluate to `false`, but

```ocaml
and_fold (are_positive (from 1))
```

will not terminate normally.

Similarly, `sum_positive_prefix ns` will evaluate to `7020`, but
`sum_positive_prefix (from 1)` will not terminate normally.
### 3.7 The Sieve of Eratosthenes, `sieve`
For this final question, we ask you to write a function which computes the
[Sieve of Eratosthenes](https://en.wikipedia.org/wiki/Sieve_of_Eratosthenes).
Here is some background:

#### The Sieve
We can compute prime numbers by starting with the sequence (a stream) of
natural numbers beginning at 2 and using the following process:

1. Consider the first number in the sequence as a prime number.
2. Discard all multiples of the above number from the remaining sequence.
3. For the resulting remaining sequence, go to step (1).
To see this process in action, consider this prefix of the natural numbers
starting at 2:

    2 3 4 5 6 7 8 9 10 11 …

Starting at step (1), we consider 2 as prime; for step (2) we discard all
multiples of 2 from the remaining sequence. Notice that because 3 is not a
multiple of 2, we won't discard it:

    3 5 7 9 11 …

We then essentially "cons" 2 onto the resulting sequence beginning at 3, which
gives us this:

    Cons (2, 3 5 7 9 11 … )

For step (3), we take the resulting sequence beginning at 3 as our new sequence
and go back to step (1) to repeat the process, which gets us this:

    Cons (2, Cons (3, 5 7 11 … ))

You can see that we will repeat again, with 5 as the new prime and the
remaining sequence beginning at 7. This process repeats for each prime number.
#### The `sieve` function
Implement the above process as a function named `sieve`, which has type
`int stream -> int stream`. Here are some hints:

+ With your `sieve` and the `from` function from class, you should be able
  to define the stream of prime numbers as `let primes = sieve (from 2)`.
  Thus, `take 10 primes` should evaluate to
  `[2; 3; 5; 7; 11; 13; 17; 19; 23; 29]`.
+ You might consider a helper `sift` function which implements the discarding
  step described above. For example, it could have type
  `int -> int stream -> int stream` and would output a stream which does not
  contain any multiples of its input `int` argument. For example, using the
  `from` function from `lazy.ml`, you could have `sift 2 (from 3)` output
  the stream `3 5 7 9 11 …` as we saw above.
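Putting the hints together, one possible sketch (the `sift` helper follows the hint above; other decompositions are equally valid):

```ocaml
(* sift n s: the stream s with all multiples of n removed. *)
let rec sift (n : int) (s : int stream) : int stream =
  match s with
  | Cons (h, rest) ->
    if h mod n = 0 then sift n (demand rest)
    else Cons (h, delay (fun () -> sift n (demand rest)))

(* The head is prime; sieve the tail with its multiples removed. *)
let rec sieve (s : int stream) : int stream =
  match s with
  | Cons (h, rest) -> Cons (h, delay (fun () -> sieve (sift h (demand rest))))
```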
## Turning in your work
Push these files to your `Hwk_05` GitHub directory by the due date:

- `question_1.txt`
- `question_2.txt`
- `lazy_and.ml`
- `streams.ml`

Remember: if you use any other functions from `lazy.ml` that we developed in
class, then clearly denote them with proper attribution in your solution.

As in Homework 4, part of your score will be based on your code quality. (See
"Writing transparent code" in Homework 4.)
The Great Design
What’s the Truth About Pi?
3,800 years ago, pi's value was identified as different from the Archimedean value used today. Why? Is there an error in Archimedes' development of pi? See the mathematically-based answer to that
question and more, including:
• What value of Pi did the ancients use 3,800 years ago?
• What has been hidden about the value of pi?
• What if the true value of pi is not 3.141…?
• Finally: the answer to the age-old question of why a negative number times a negative number equals a positive number
1.0 History of Pi
2.0 What is Pi – Really
3.0 Distance, Velocity, Time
4.0 Using Pi = 3.0
5.0 Using Pi = 3.141592654
6.0 Practical Proof that Pi is 3.0
7.0 Experiments – Proof That Pi is 3.0
7.1 Experiment 1: Surface Area-to-Weight Measurement – Square
7.2 Experiment 2: Surface Area-to-Weight Measurement – Triangle
7.3 Experiment 3: Circle Inscribed in a Square
7.4 Experiment 4: Sphere Inscribed in a Cube
7.5 Experiment 4: Procedures
7.6 Experiment 5: Cylinder Inscribed in a Rectangle
7.7 Experiment 6: Coffee Cup Measurements
8.0 Bending the Rule
9.0 Conclusion
10.0 Multiplication of Positive & Negative Numbers
10.1 Analysis of Multiplication Process
10.2 Examples 29A – 29F
10.3 Division Analysis
10.4 Summary
Get a Sneak Peek of Volume 1
Is Archimedes' method for calculating Pi accurate?
Examine it and see for yourself
Look at the margin of error. Archimedes' value for pi (3.14) is an approximation - not an exact value. Would you accept an approximation or errors for your bank account balance? Then, why do you
accept it for pi? What else may be wrong? Read the book to find out.
Real Answers You Can Prove
Is the unit circle the key to accurately calculating pi?
Read the book to see how the simple activity of a vehicle driving around a racetrack provides a clear illustration of the connection between the unit circle and pi.
Do your own simple experiments. Prove the value of pi to yourself!
Carry out your own simple experiments to accurately calculate the value of pi – using everyday materials found around your house, classroom and office.
Why is a negative number x a negative number a positive?
Finally – a REAL answer to this question! We already know that it’s true. Now see WHY it’s true with simple mathematical proof that you can test for yourself.
Data Science - Department of Mathematics - TUM
Research Group Data Science
Our research group works towards a mathematical understanding and a mathematics-driven development of data science methods connected to a variety of applications. This is done using different
mathematical tools taken from many fields of mathematics, such as Optimization, Numerical Analysis, Harmonic Analysis, Probability and Statistics, Optimal Transport theory, and the Theory of PDEs.
The research group is organized into three subgroups: the Applied Data Science group (Prof. Donna Ankerst), the Optimization and Data Analysis group (Prof. Felix Krahmer) and the Applied Numerical
Analysis group (Prof. Massimo Fornasier). In addition, the Mathematical Imaging & Data Analysis group (Dr. Frank Filbir) at Helmholtz Munich is associated with our research group.
Our research covers a variety of topics, ranging from theoretical foundations to algorithms and applications. An overview of some of the topics we have recently been working on
can be found below:
Neural Networks
In order to model neural networks and interpret their training in mathematical terms, a connection between Deep Learning and Optimal Control theory can be established. Particularly structured neural
networks can be described by an ordinary differential equation or even a partial differential one, and their training can be recast differently using well-known and established results such as the
Pontryagin Maximum Principle. Also, their expressivity can be studied from this perspective to shed some light on some interesting behaviors of NN such as their surprising ability to generalize.
Clinical risk prediction, validation and communication
Clinical risk prediction models have become a mainstay in practice, with methodologic research now focused on maximizing their power through optimal handling of missing data on the development end
and personalized interfaces on the user end. Multiple external validation has also witnessed a rise with new objectives to separate transportability from reproducibility. Research in this group
focuses on the development of statistical methodology and visualization for state-of-the-art approaches to clinical risk prediction, validation and communication.
Uncertainty Quantification for High-Dimensional Inverse Problems
In many high-dimensional Inverse Problems, such as the retrieval of a Magnetic Resonance Image (MRI), the estimators are non-linear and given as solutions of optimization routines. The goal of this
research path is to characterize how accurate these estimators are by using some probabilistic tools, a technique that is known as Uncertainty Quantification. This is particularly important in
safety-critical applications such as autonomous driving or medical imaging. The results obtained in this group established UQ for certain high-dimensional problems which, in turn, allow for the
evaluation of the accuracy of fast methods for MRI reconstruction.
Compressed Sensing
When sending a signal or measuring an object, usually, only a transformed version of the original signal is obtained. In some cases this is desired (e.g., to reduce the file size when storing photos
or sending messages), in other cases it can not be avoided (e.g., when using a MRI machine).
The underlying problem is recovering the large original signal from a rather small measurement. In the standard case of a linear system, this corresponds to solving an underdetermined system Ax = y,
where the dimension of the unknown signal vector x is far larger than the dimension of the measurement vector y, a challenge that cannot be solved in general.
In our compressed sensing projects, we work on various applications with a wide range of different measurement operators. Our main goal is developing efficient recovery algorithms with comparably
small restrictions on the signal and the underlying measurement operator.
Approximation Theory for Wasserstein Sobolev Spaces
The research focus lies on a mathematical profound and robust approximation of functions, such as the Wasserstein distance, in Wasserstein Sobolev spaces. Thereby, those approximations shall be /
computable/, and may indeed be obtained by deep neural networks.
Unlimited Sampling
A major result in the field of Digital Signal Processing is the
Nyquist-Shannon Theorem, which allows for the reconstruction of a
time-continuous signal from its samples. However, circuits used for its
realization may suffer from a severe Dynamic Range (DR) bottleneck that
leads to data loss during the acquisition process.
Because of that, state-of-the-art approaches to the reconstruction problem are decoupled on their hardware and software sides. Our research group is working on both of them: on the folding of the
samples inside the circuit's DR, avoiding data loss, and on the unfolding problem in reconstruction.
Consensus-based Optimization
The global minimization of a potentially nonconvex nonsmooth cost function living in a high-dimensional space is a long-standing problem in applied mathematics.
Consensus-based optimization (CBO) is a multi-particle derivative-free optimization method that can provably find global minimizers of such functions with probabilistic convergence guarantees.
The algorithm is in the spirit of metaheuristics with a working principle inspired by consensus dynamics and opinion formation.
More rigorously, CBO methods use an ensemble of finitely many interacting particles, which is formally described by a system of discretized stochastic differential equations, with the aim of
exploring the energy landscape of the objective and by collaboration forming a global consensus about the location of the global minimizer.
At our chair we focus on different facets related to obtaining a deeper understanding of CBO as well as its relation with well-known methods such as Simulated Annealing, Particle Swarm Optimization
and Stochastic Gradient Descent.
Amongst others this includes rigorous convergence analyses of the method on both the plane and on hypersurfaces, which involves technical tools from stochastic analysis and the analysis of partial
differential equations.
In addition, we work on efficient implementations and improvements to the dynamics in order to make the method accessible to a broader audience, in particular since CBO has proved successful in
various real-world-inspired applications in data science, signal processing and machine learning.
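As a rough illustration of the working principle, the following toy sketch implements an Euler-Maruyama discretization of a CBO-like particle system in NumPy. All names, parameter values, and the weight-stabilization trick are illustrative choices of ours, not the group's actual implementation:

```python
import numpy as np

def cbo_minimize(f, x0, steps=500, lam=1.0, sigma=0.7, alpha=30.0, dt=0.01, seed=0):
    """Toy consensus-based optimization: particles drift toward a weighted
    consensus point and explore with noise scaled by their distance to it."""
    rng = np.random.default_rng(seed)
    X = x0.copy()
    for _ in range(steps):
        fx = f(X)
        w = np.exp(-alpha * (fx - fx.min()))        # stabilized Gibbs-type weights
        m = (w[:, None] * X).sum(axis=0) / w.sum()  # consensus point
        diff = X - m
        X = X - lam * diff * dt + sigma * diff * rng.normal(size=X.shape) * np.sqrt(dt)
    return m

# Toy objective: shifted sphere function with global minimizer at (1, 1).
f = lambda X: ((X - 1.0) ** 2).sum(axis=1)
start = np.random.default_rng(1).uniform(-3, 3, size=(50, 2))
m = cbo_minimize(f, start)
```

Since the weights concentrate on the best-performing particles, the consensus point typically ends up far closer to the minimizer than the initial ensemble average, though convergence guarantees are probabilistic.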
Quantization of Signals and Images
One of the issues addressed by the group concerns the quantization of images and signals. Quantization refers to the process of mapping "large" and unlimited sets into a "small" and finite set. These
maps are very useful from an engineering perspective for so-called analog-to-digital converters, but they are also much exploited in imaging. The group currently works by studying signals defined on
manifolds with the aim of printing images on them. Halftoning techniques are also exploited to achieve this, whereby an attempt is made to represent continuous images in sequences of dots so that the
image is indistinguishable when viewed from a certain distance.
Signatures of Paths
This line of research is related to the Signature, which is a mathematical tool to encode multidimensional processes or time series. It takes elements in the tensor space and consists of iterated
integrals against the process. In this way, it is possible to encode essential characteristics of the underlying process, which can be used, for example, as features for machine learning models. In
this context the group investigates the behavior of the Signature with respect to different sampling techniques and approximations of the paths.
Sum of Squares Calculator
Enter numbers separated by commas or spaces.
Understanding the Sum of Squares Calculator
The Sum of Squares Calculator is an essential tool used in descriptive statistics. It helps you calculate the sum of squares for a given set of numbers. This calculation is crucial to determine how
much variance or spread is present in data points.
Application of the Sum of Squares Calculator
In statistics, the sum of squares is used to measure the dispersion within a dataset. This calculator can be employed in a variety of fields, such as finance, research, engineering, and social
sciences, to assess data variability and to perform further statistical analysis, such as variance and standard deviation.
How the Sum of Squares Calculator Benefits You
Using the Sum of Squares Calculator simplifies the process of finding the sum of squares in your dataset. Instead of manually computing each step, you just need to enter your numbers and let the
calculator provide the result. This saves time and reduces the likelihood of errors, making it easier for you to analyze your data efficiently.
How the Calculation is Done
The sum of squares is derived by calculating the mean of a set of numbers, finding the difference between each number and the mean, squaring these differences, and then summing these squared
differences. This process gives a clear measure of the spread of the numbers in your dataset.
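The same computation can be written in a few lines of code (a sketch of the formula described here, not the calculator's actual implementation):

```python
def sum_of_squares(data):
    """SS = sum of (x_i - mean)**2, the squared deviations from the mean."""
    mean = sum(data) / len(data)
    return sum((x - mean) ** 2 for x in data)

# For [2, 4, 4, 4, 5, 5, 7, 9] the mean is 5, so SS = 9+1+1+1+0+0+4+16 = 32.
print(sum_of_squares([2, 4, 4, 4, 5, 5, 7, 9]))  # 32.0
```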
Relevance and Further Application
Understanding the sum of squares is fundamental in statistics as it forms the basis for other important calculations, such as variance and standard deviation. By using this calculator, you can ensure
accurate and quick results, enabling you to focus on more complex analysis or interpretation of your data.
What is the Sum of Squares?
The sum of squares is a statistical measure used to quantify the amount of variance or dispersion in a set of data values. It is the sum of the squared differences between each data point and the
mean of the dataset.
How does the Sum of Squares Calculator work?
The calculator takes a set of numbers as input, calculates the mean of these numbers, finds the difference between each number and the mean, squares these differences, and then sums them up to give
the sum of squares.
Why is the sum of squares important in statistical analysis?
The sum of squares is foundational in statistical analysis because it provides a measure of the spread or variability of a dataset. This measure is crucial when calculating variance, standard
deviation, and conducting hypothesis tests.
Can I use the Sum of Squares Calculator for any type of data?
You can use the Sum of Squares Calculator for any quantitative data set. It is particularly useful in fields like finance, research, engineering, and social sciences to assess variability within
How do I interpret the result from the Sum of Squares Calculator?
A higher sum of squares indicates greater variability or dispersion within the dataset, while a lower value suggests less variability. This helps you understand how spread out the data points are
around the mean.
Does the calculator handle large datasets?
Yes, the calculator can handle relatively large datasets; however, performance might vary depending on your browser and device capabilities. For extremely large datasets, it might be more efficient
to use statistical software or programming languages.
Is there any difference between the sum of squares and the variance?
Yes. While the sum of squares measures the total squared deviation from the mean, variance is the average of these squared deviations. Variance is calculated by dividing the sum of squares by the
number of data points minus one (for a sample) or by the number of data points (for a population).
What is the formula used in the Sum of Squares Calculator?
The formula is: Sum of Squares = Σ(x_i − mean)², where x_i represents each individual data point in the dataset, and mean is the average of the dataset.
Can this calculator help in calculating standard deviation?
Yes. The sum of squares is a preliminary step in calculating the variance, and the standard deviation is the square root of the variance. So, by using this calculator, you can quickly move forward in
calculating the standard deviation of your dataset.
Is this calculator useful for datasets with outliers?
While the sum of squares takes into account all data points, including outliers, it can sometimes be influenced heavily by them. In cases where outliers may distort the results, additional
statistical tools and methods might be necessary to analyze the data accurately.
Inspiring Drawing Tutorials
Draw A Circle And A Dot Without Lifting Pen
Draw A Circle And A Dot Without Lifting Pen - Can you draw these shapes without lifting your pencil off the paper or going over any line you have already drawn? Try to make a circle with a dot in
the middle without raising your pencil. And yes, it is possible! Can you draw the images without taking your pencil off the page? There's no sound and the video quality is a tad dated, but you
still get the gist. Follow the first 5 steps and it should look👀 like this!
You will need a pencil and a ton of paper! You have to think outside the box for this one. Put your pen in the middle. Let it go (the fold) and finish the circle (without letting the pen/pencil off
the paper). Go around the circle again🔃, do another fold and then go on the fold. Draw a circle with a dot in the middle without lifting your pen up.
In this version you are required to click on the dots to show the route of the pencil. One of the most used exercises for practicing writing or drawing without lifting the pen is drawing a circle
in a box. Find a piece of paper. (I used scrap paper.) Draw a circle; don't lift your pencil. Draw a circle around the dot without lifting the pen. Go off the fold to start🏃🏻 another circle!
Nearly at the corner of the fold, draw 3/4 of a circle⚪️. It's a bit like the nine dots puzzle, but here I didn't find any working solution. Before you start drawing, which ones do you think you
will be able to draw without lifting your pencil or pen, and not going over any line you have already drawn?
You need a pencil and paper. How to draw a circle with a dot in the middle without lifting your pen. Fun for all the family!
Draw a dot inside a circle without lifting the pen. Can you draw the figure without lifting your pen off the paper or going over the same line twice? 90% of people fail to answer.
Being able to use both sides of your brain at the same time can have an impact on reading comprehension, memory, spatial awareness and more. To draw this without lifting the pen and without tracing
the same line more than once, try your hand at solving fun puzzles with me.
This is a computer version of the classic pencil and paper puzzles in which the objective is to trace the diagram without taking the pencil off the paper and without going over the same line twice.
Can you draw a circle with a dot in the centre without lifting your pen?
Watch the full video to know the trick. Now we have to modify our question a little: how to draw a circle outside a dot without lifting the pen. In this version you are required to click on the dots
to show the route of the pencil.
We can convert the shape into a graph by assigning vertices to intersections and assigning edges to the lines between the vertices; the figure can be drawn in one stroke exactly when this graph has
an Eulerian path.
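That graph view gives a concrete test: a figure is drawable in one stroke exactly when its graph is connected and has at most two odd-degree vertices (Euler's condition). A small sketch of that check (the edge-list representation is our own choice):

```python
from collections import defaultdict

def one_stroke_drawable(edges):
    """True if the multigraph given as (u, v) edge pairs has an Eulerian path:
    connected (ignoring isolated vertices) and 0 or 2 odd-degree vertices."""
    deg = defaultdict(int)
    adj = defaultdict(set)
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
        adj[u].add(v)
        adj[v].add(u)
    if not deg:
        return True
    # Depth-first search to check that all drawn lines are connected.
    seen, stack = set(), [next(iter(deg))]
    while stack:
        u = stack.pop()
        if u in seen:
            continue
        seen.add(u)
        stack.extend(adj[u])
    if seen != set(deg):
        return False
    odd = sum(1 for v in deg if deg[v] % 2 == 1)
    return odd in (0, 2)

# A circle with a separate dot is two disconnected pieces, so as a bare graph
# it fails the test -- hence the paper-fold trick described on this page.
```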
Tutorial on drawing a circle with a dot inside without lifting your pen. Hi friends, here I show you how to make it. Try to make a circle with a dot in the middle without raising your pencil; and
yes, it is possible! Go off the fold to start🏃🏻 another circle!
How To Draw A Circle With A Dot In The Middle Without Lifting Your Pen.
Find a piece of paper, follow the first 5 steps, and it should look👀 like this!
Fold The Corner And Put Your Pen On It.
Can you draw the figure without lifting your pen off the paper or going over the same line twice?
Without Lifting The Pencil.
Check out this video to learn how to draw a circle with a dot in the middle without lifting your pen.
Calculated Charge State Distributions and Anisotropies following the β-Decay of 6He
Publication Type
6He, 6Li+, Anisotropy, Beta, Decay, Helium
According to the standard model the beta-decay of 6He is a pure Gamow-Teller transition. The aim of this thesis is to provide theoretical support in the search for new physics beyond the standard
model by examining the angular distribution of beta particles following decay. The simple structure of 6He, along with its ability to undergo beta-decay into 6Li+, makes it an ideal candidate for
studying the weak force. Due to the sudden increase in nuclear charge from Z = 2 to Z = 3, and the recoil momentum of the daughter nucleus resulting from the emitted leptons, this decay causes an
electronic rearrangement of the final 6Li+ ion. The method of calculation involves expanding the initial state of 6He in terms of a complete set of final states of 6Li+. Correlated nonrelativistic
Hylleraas-like wave functions were used to create a pseudospectrum which spans both the bound and continuum states of 6Li+. With the use of the sudden approximation and the Taylor series expansion of
the recoil momentum operator R = e^(iK·r), transition probabilities from an initial helium state to each of the final lithium ion pseudostates were calculated by applying Born's rule. Recoil terms up
to O(K^2) were kept. Stieltjes imaging techniques were used to arrange the transition probabilities into bins according to the energy of each lithium pseudostate. The calculations were performed for
the singlet 1S, triplet 2S, and triplet 2P (M_L = 0 and M_L = ±1) initial states of 6He and were compared with experimental results. These calculated charge state probability coefficients can be used
by experimentalists to give a more accurate calculation of the correlation coefficient. Because the probability coefficients depend on M_L, the angular distribution becomes anisotropic; this angular
anisotropy was also calculated.
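A toy numerical analogue of this bookkeeping (random vectors stand in for the Hylleraas states; nothing here is the thesis's actual basis): expand a normalized initial state over a complete orthonormal set of final states, and Born's rule gives transition probabilities that sum to unity, which is the consistency check a complete pseudospectrum must satisfy.

```python
import numpy as np

rng = np.random.default_rng(0)

# toy "pseudospectrum": an orthonormal basis of final states (columns of Q)
dim = 8
Q, _ = np.linalg.qr(rng.normal(size=(dim, dim)))

# toy initial state, normalized to unit norm
psi = rng.normal(size=dim)
psi /= np.linalg.norm(psi)

# Born's rule: P_k = |<f_k | psi>|^2 for each final pseudostate f_k
probs = np.abs(Q.T @ psi) ** 2

print(probs.sum())  # completeness: the probabilities sum to 1
```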
Recommended Citation
Sculhoff, Eva E., "Calculated Charge State Distributions and Anisotropies following the β-Decay of 6He" (2022). Electronic Theses and Dissertations. 8782.
New routine NLTE15µmCool-E v1.0 for calculating the non-local thermodynamic equilibrium (non-LTE) CO2 15µm cooling in general circulation models (GCMs) of Earth's atmosphere
Articles | Volume 17, issue 13
© Author(s) 2024. This work is distributed under the Creative Commons Attribution 4.0 License.
We present a new routine for calculating the non-local thermodynamic equilibrium (non-LTE) 15µm CO[2] cooling–heating of mesosphere and lower thermosphere in general circulation models. It uses the
optimized models of the non-LTE in CO[2] for day and night conditions and delivers cooling–heating with an error not exceeding 1Kd^−1 even for strong temperature disturbances. The routine uses the
accelerated lambda iteration and opacity distribution function techniques for the exact solution of the non-LTE problem and is about 1000 times faster than the standard matrix and line-by-line
solution. It has an interface for feedbacks from the model and is ready for implementation. It may use any quenching rate coefficient of the CO[2](ν[2])+O(^3P) reaction, handles large variations in O
(^3P), and allows the user to vary the number of vibrational levels and bands to find a balance between the calculation speed and accuracy. The suggested routine can handle the broad variation in
CO[2] both below and above the current volume mixing ratio, up to 4000ppmv. This allows the use of this routine for modeling Earth's ancient atmospheres and the climate changes caused by increasing CO[2].
Received: 07 Jun 2023 – Discussion started: 31 Jul 2023 – Revised: 20 Mar 2024 – Accepted: 20 Apr 2024 – Published: 11 Jul 2024
The infrared radiative cooling of atmosphere is an important component of its energy budget. It requires special techniques for reliable estimation. For the local thermodynamic equilibrium (LTE),
when the molecular emissions of the atmospheric unit volume are described by the Planck function for local temperature, this estimation is confined to the solution of the radiative transfer equation
in broad spectral regions occupied by molecular bands. In the middle and upper atmosphere, the breakdown of LTE (non-LTE) requires finding the sources of the non-equilibrium molecular emissions,
which are obtained by the solution of the non-LTE problem. This makes the non-LTE cooling calculation significantly more time-consuming. Today, the common opinion is that it is impossible to use exact
methods for calculating the radiative cooling in general circulation models (GCMs); see, for instance, López-Puertas et al. (2024), who promote this point of view. Various parameterizations have been
developed for quickly calculating this cooling in GCMs; see Fomichev et al. (1998), Fomichev (2009), and Feofilov and Kutepov (2012) for reviews of works on parameterizing the non-LTE cooling of
Earth’s middle atmosphere in the 15µm CO[2], 9.6µm O[3] and rotational H[2]O bands. These algorithms, however, lack the accuracy needed for current GCMs.
Infrared emission in the 15µm CO[2] band is the main cooling mechanism of middle and upper atmospheres of Earth, Venus and Mars (e.g., Goody and Yung, 1995; Sharma and Wintersteiner, 1990; Pollock
et al., 1993; Bougher et al., 1994; López-Puertas and Taylor, 2001; Feofilov and Kutepov, 2012). On Earth, the magnitude of the mesosphere and lower-thermosphere (MLT) cooling affects both the
mesospheric temperature and the mesospheric height: the stronger the cooling, the colder and higher the mesopause (Bougher et al., 1994).
The goal of this work is to present a new routine for calculating the non-LTE 15µm CO[2] cooling of Earth’s MLT, which utilizes exact radiative transfer and non-LTE problem solution techniques and
is fast enough to be applied in GCMs. The new routine can handle the broad variation in CO[2] both below and above the current volume mixing ratio, up to 4000ppmv. This allows the use of this
routine for modeling Earth’s ancient atmospheres and the climate changes caused by increasing CO[2].
The routine we present is the optimized version of the research ALI-ARMS (Accelerated Lambda Iteration for Atmospheric Radiation and Molecular Spectra) model and code (Kutepov et al., 1998; Feofilov
and Kutepov, 2012). In this code, atmospheric cooling is the by-product of the non-LTE problem solution. For nearly 20 years, the earlier version of this routine has been successfully applied (
Hartogh et al., 2005; Medvedev et al., 2015) in the GCM of Martian atmosphere.
In this paper, we demonstrate the performance of the routine for calculation of the 15µm CO[2] cooling of Earth’s MLT. However, generally this is a universal code, which potentially may be applied
to modeling the non-LTE cooling in any molecular band or bands of several molecules in any planetary atmosphere. We successfully tested it by calculating the non-LTE cooling of Earth's MLT in the 6.3µm
H[2]O band (Feofilov et al., 2009) and 9.6 and 4.7µm O[3] bands (Manuilova et al., 1998) as well as the cooling of Titan atmosphere in the 6.7 and 3.3µm CH[4] bands (Kutepov et al., 2013a; Feofilov
et al., 2016). This potentially allows the application of this algorithm for calculating the radiative cooling in GCMs of various atmospheres; among them, the atmospheres of water-rich exoplanets
(e.g., Valencia et al., 2007; Acuña et al., 2021) and gas giants.
In the next section, we outline techniques currently applied in GCMs for calculating the non-LTE CO[2] 15µm cooling. In Sect. 3, we briefly discuss the method and techniques applied for calculating
this cooling in our new routine. In Sect. 4, we present the model of non-LTE in CO[2] used in the routine and the routine computational performance. Section 5 discusses the accuracy of the new
routine. The Conclusions (Sect. 6) summarize the results of our study. Appendix A contains technical details of the code and recommendations for its implementation and usage in GCMs.
2 The non-LTE radiative cooling of atmosphere and its calculations in general circulation models
The energy loss of the atmospheric unit volume due to infrared radiation is calculated as the radiative flux divergence taken with the opposite sign:
$\begin{array}{}\text{(1)}& \mathbf{h}=-\frac{\mathrm{1}}{\mathrm{4}\mathit{\pi }}\int \mathrm{d}\mathit{\omega }\int \mathrm{d}\mathit{\nu }\,\frac{\mathrm{d}{I}_{\mathit{\nu }\mathit{\omega }}}{\mathrm{d}s},\end{array}$
where I[νω] is the intensity of radiation at the frequency ν along the ray ω, and s is the distance along the ray.
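In a plane-parallel discretization, this energy loss reduces to differencing the net radiative flux over altitude; a schematic sketch (the altitude grid and flux profile below are invented for illustration):

```python
import numpy as np

z = np.linspace(0.0, 100e3, 101)          # altitude grid, m
F = 240.0 * (1.0 - np.exp(-z / 8e3))      # invented net upward flux, W m^-2

# radiative heating per unit volume: h = -dF/dz (W m^-3);
# flux growing with altitude means the gas loses energy (h < 0, cooling)
h = -np.gradient(F, z)
```

Because the invented flux increases monotonically with altitude, every layer here is a net emitter and h is negative throughout the column.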
With the local thermodynamic equilibrium (LTE), the discretization of the integral radiative transfer equation (RTE) (e.g., Goody, 1964; Goody and Yung, 1995) leads to a simple linear algebra
operation for calculating the LTE 15µm band cooling:
$\begin{array}{}\text{(2)}& \mathbf{h}=\mathbf{W}\,\mathbf{B},\end{array}$
where h is the vector of cooling in N[D] grid points; B is the vector of the Planck function for a local T; and W is an N[D]×N[D] matrix, which accounts for radiative transfer in a number of 15µm CO
[2] bands contributing to the total cooling.
Extension of GCMs to the mesosphere and thermosphere required accounting for the non-LTE for calculating the CO[2] 15µm cooling. The standard way of solving the non-LTE problem (e.g., Curtis and
Goody, 1956; Goody, 1964; Goody and Yung, 1995) requires inverting the $({N}_{\mathrm{L}}\cdot {N}_{\mathrm{D}})\times ({N}_{\mathrm{L}}\cdot {N}_{\mathrm{D}})$ matrix, where N[L] is the number
of CO[2] vibrational levels included in the model. Equation (2) in this case remains unchanged; however, the matrix W is rebuilt to account for differences between the Planck function and the non-LTE
source functions in each band resulting from the non-LTE problem solution. This makes the calculation of the non-LTE cooling dramatically costlier (see Sect. 3 for more details).
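The cost difference is easy to see numerically. A back-of-the-envelope sketch (the grid sizes and iteration count are illustrative, and the cubic flop count is the textbook O(n³) estimate for dense inversion, not a measured figure for any of the codes discussed here):

```python
import numpy as np

N_D = 80   # altitude grid points (illustrative)
N_L = 28   # vibrational levels (the daytime model size quoted in Sect. 4.1)

# LTE case, Eq. (2): one N_D x N_D matrix-vector product gives the
# cooling at all altitudes at once (toy W and B values)
W = np.random.default_rng(1).normal(size=(N_D, N_D))
B = np.ones(N_D)
h = W @ B

# standard non-LTE: invert one (N_L*N_D) x (N_L*N_D) matrix, ~O(n^3) flops
n = N_L * N_D
flops_matrix = n ** 3

# iterative (LI/ALI-style) scheme: ~I iterations, each inverting
# N_D small matrices of size N_L x N_L
I = 30
flops_iter = I * N_D * N_L ** 3

print(flops_matrix / flops_iter)   # ratio in favor of the iterative scheme
```

Even with a generous iteration count, the single large inversion costs orders of magnitude more operations than the per-altitude small inversions, which is the structural reason the ALI approach described below pays off.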
Fomichev et al. (1998) and Fomichev (2009) (see references therein) discussed in detail various routines that had been suggested in previous studies for calculating the 15µm cooling in GCMs while
accounting for the non-LTE. The direct matrix solution of the simplified non-LTE problem in CO[2] (often with an approximate radiative transfer treatment) or using pre-calculated W matrices for a
limited number of atmospheric situations for further interpolation of its elements helped to reduce computational time at the expense of calculation accuracy.
A new way of calculating the non-LTE 15µm CO[2] cooling in GCMs was suggested by Kutepov (1978). Relying on results of Kutepov and Shved (1978), who showed that the fundamental 15µm CO[2] band
01101→00001 (see below, Fig. 1) of the main CO[2] isotope dominates the cooling above about 85km, Kutepov (1978) derived the recursive expression for h coming from the analytical solution of the
first-order differential equation for the non-LTE cooling h in the fundamental band. This expression directly accounted for the CO[2](ν[2])+O(^3P) quenching rate coefficient k and the O(^3P) density.
It was derived using the “second-order escape probability” approach (Frisch and Frisch, 1975; Frisch, 2022) for the approximate solution of the Wiener–Hopf-type integral radiative transfer equation
in the semi-infinite atmosphere. The algorithm of Kutepov (1978) – see its refined version by Kutepov and Fomichev (1993) – calculates h upward in the non-LTE layers using the LTE h or the non-LTE h
obtained using other techniques as a lower boundary condition. Fomichev et al. (1993) linked this algorithm to the matrix routine for calculating h in the LTE layers developed by Akmaev and Shved (
1982). Later Fomichev et al. (1998) modified the routine of Fomichev et al. (1993) by adding the interpolation of the W matrices for the CO[2] within 150–720ppmv using tables of pre-calculated
elements, and they described in detail the structure of the revised matrix W. These authors also extended the routine altitude range to layers above 110km. Cooling in this region is calculated from
the simple balance equation for the first excited vibrational level of the main CO[2] isotope while accounting for absorption of the radiative flux from below, cooling to space and collisional
quenching. For smooth temperature profiles, Fomichev et al. (1998) reported maximal cooling calculation errors of less than 2 and up to 5Kd^−1, for 360 and 720ppmv CO[2], respectively. In this
paper we will call this routine F98.
Basic features of F98, namely (a) a broad altitude range covered, (b) straightforward accounting for the non-LTE, and (c) high computational efficiency, attracted many users. For more than 2 decades,
the F98 routine has been the most widely used algorithm for calculating the 15µm CO[2] cooling in GCMs of mesosphere and thermosphere; see, for instance, Eckermann (2023) for its latest application.
However, as we show below, the F98 errors are large for non-smooth temperature profiles reaching 20–25Kd^−1 in the mesopause region. On the other hand, even very minor variation in the CO[2]
cooling may have significant impact on the GCM results in MLT. Kutepov et al. (2013b) showed (their Fig. 18.2) that variation in the CO[2] cooling of ∼1–3Kd^−1 in the Leibniz Institute Middle
Atmosphere (LIMA) model (Berger, 2008) in the mesopause region caused significant warming of up to 5–6K (about 105km) and cooling of up to −10K (below 105km) at latitudes between 90°S and 40°N
for July 2005. This and other tests lead to the conclusion (Uwe Berger, personal communication, 2010) that the accuracy of cooling–heating rate calculations in GCMs “should not exceed 1Kd^−1 for
any temperature distribution”.
Fortunately, this accuracy requirement overlapped in time with the dramatic progress in the non-LTE radiative transfer calculations. This allowed the development of a new routine, NLTE15µmCool-E
(hereafter KF23 for brevity within this paper), for calculating the non-LTE 15µm CO[2] cooling in GCMs of Earth's atmosphere; the routine exploits new exact algorithms for solving the non-LTE
problem and, therefore, fits enhanced accuracy requirements. At the same time, it is fast enough to be used in GCMs.
KF23 is the optimized version of our basic ALI-ARMS model and code (Kutepov et al., 1998; Feofilov and Kutepov, 2012), which utilizes two advanced techniques: (1) the ALI technique for the solution
of the non-LTE problem and (2) the opacity distribution function (ODF) technique for optimizing the radiative transfer calculations. In this section, we outline these techniques and the current
status and latest applications of ALI-ARMS code.
3 Method and techniques applied in a new routine for calculating the non-LTE cooling
3.1 Solution of the non-LTE problem
The non-LTE problem has two primary constituents: (1) the statistical equilibrium equations (SEEs), which express the equality of the total population and de-population rates for each molecular
level, and (2) the radiative transfer equation (RTE), which relates the radiation field to the populations of levels, at all altitudes in the atmosphere (Hubeny and Mihalas, 2015). Hence, the system
of equations for the level populations is non-local (and non-linear). The most obvious way of dealing with this situation is to iterate between the SEEs and RTE. This process, traditionally called
“lambda iteration” (LI), has been investigated in the astronomical context since the 1920s (Unsöld, 1938). It inverts N[D] matrices N[L]×N[L] at each iteration step. This simple approach has been
applied in Earth's and planetary atmosphere radiative transfer; see, for instance Appleby (1990) and Wintersteiner et al. (1992). If the optical depths are large (as is the case for the CO[2] 15µm
band), the algorithm converges slowly. Kutepov et al. (1998) studied several LI schemes and showed that for the CO[2] non-LTE problem, the number of iterations I[LI] for these algorithms may reach ∼
200 even for a moderate convergence criterion of $\mathrm{1}\times {\mathrm{10}}^{-\mathrm{3}}$. This slow convergence is caused by the photons trapped in the cores of the most optically thick lines and by the
strong non-linearity of SEEs related to quasi-resonant exchange of vibrational energy by the molecular collisions.
An alternative way of dealing with the non-LTE is a joint treatment of SEEs and RTE, when RTE is discretized with respect to the optical depth or altitude grid to get a matrix representation of
radiative terms in SEEs. This approach, known in the atmospheric science as the Curtis matrix (CM) technique (e.g., Goody, 1964; Goody and Yung, 1995; López-Puertas and Taylor, 2001), leads to a
matrix of dimensions $({N}_{\mathrm{L}}\cdot {N}_{\mathrm{D}})\times ({N}_{\mathrm{L}}\cdot {N}_{\mathrm{D}})$. In stellar atmosphere studies, the generalized version of this technique is known
as the Rybicki method (Mihalas, 1978). The time required for the solution of the non-LTE problem using the CM technique is controlled by the number of operations for matrix inversion. The advantage
of the classic matrix method lies in the simultaneous determination of all populations at all altitudes instead of the sequential evaluation of populations step by step at each altitude using the
radiative field from the previous iteration. Therefore, “matrix iteration” usually converges better than lambda iterations. However, the convergence of both algorithms depends strongly on how the
local non-linearity is treated; see the next section. To construct an adequate model, one must account for a large number of excited levels of various molecular species plus use a detailed model of
atmospheric stratification. As a result both N[L] and N[D] can become very large. The dimensions of primary matrices are reduced by introducing various assumptions (for instance, the LTE assumption
for rotational sublevels as well as, for example, LTE in the groups of vibrational levels closely spaced in energy; see also the discussion of GRANADA – Generic RAdiative traNsfer AnD non-LTE
population Algorithm – code in Sect. 3.4). Nevertheless, usually the time to solve the non-LTE problem using the CM method significantly exceeds the time needed when an LI algorithm is applied to the
same problem (see Sect. 5 for more details).
In the 1990s, stellar astrophysicists developed a family of powerful techniques that utilize lambda iteration with an approximate (or accelerated) lambda operator (see Rybicki and Hummer, 1991;
Rybicki and Hummer, 1992; and references therein). In these so-called ALI techniques (for accelerated lambda iteration), the integral lambda operator, which links the radiation intensity at a given
point with the source function at all points, is approximated by a local (or nearly local) operator. With a local operator, the largest matrices again, as in the LI case, have dimensions N[L]×N[L].
However, in this case, the convergence is rapid, since most of the transfer in cores of the lines (described by the local part of lambda operators) cancels out analytically and only the difference
between exact and approximate radiative terms in the SEEs is treated iteratively. Kutepov et al. (1998) showed that for the CO[2] non-LTE problem I[ALI]≪I[LI] (see Sect. 5 for more details).
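The speed-up from ALI can be demonstrated on a toy two-level-atom problem (the lambda operator below is a synthetic exponential kernel, not a real transfer solution; ε, the grid size, and the tolerance are invented). Plain LI iterates S = (1−ε)ΛS + εB as is, while ALI moves the diagonal, local part of Λ to the left-hand side and treats only the non-local remainder iteratively:

```python
import numpy as np

def make_lambda(n=40, width=2.0, scale=0.99):
    """Toy lambda operator: row-normalized exponential kernel, scaled so
    each row sums to 0.99 (photons almost completely trapped)."""
    i = np.arange(n)
    K = np.exp(-width * np.abs(i[:, None] - i[None, :]))
    return scale * K / K.sum(axis=1, keepdims=True)

def iterate(Lam, eps, B, accelerated, tol=1e-3, max_iter=10000):
    """Solve S = (1 - eps) * Lam @ S + eps * B by plain lambda iteration,
    or by ALI with the diagonal approximate operator Lam* = diag(Lam)."""
    S = B.copy()
    diag = np.diag(Lam)
    for it in range(1, max_iter + 1):
        J = Lam @ S
        if accelerated:
            # local part handled implicitly; only the remainder lags behind
            S_new = (eps * B + (1 - eps) * (J - diag * S)) / (1 - (1 - eps) * diag)
        else:
            S_new = eps * B + (1 - eps) * J
        change = np.max(np.abs(S_new - S) / np.abs(S_new))
        S = S_new
        if change < tol:
            break
    return S, it

Lam = make_lambda()
B = np.ones(40)      # constant Planck function, arbitrary units
eps = 1e-3           # collisional destruction probability

S_li, n_li = iterate(Lam, eps, B, accelerated=False)
S_ali, n_ali = iterate(Lam, eps, B, accelerated=True)
S_direct = np.linalg.solve(np.eye(40) - (1 - eps) * Lam, eps * B)

print(n_li, n_ali)
```

With row sums of 0.99 the plain-LI iteration matrix has spectral radius close to 1, which is exactly the photon-trapping regime described above; subtracting the diagonal operator removes most of that stiffness, so the accelerated run stops after far fewer iterations while both converge toward the direct matrix solution.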
3.2 Treating the strong local non-linearity caused by intensive VV exchange
Strong local non-linearity of the non-LTE radiative transfer problem in molecular bands caused by intensive near-resonant exchange of vibrational energy between molecules was studied by Kutepov
et al. (1998). They showed that this non-linearity causes a dramatic deceleration of the convergence. Various schemes of additional “internal” iterations (without recalculating radiative excitation
rates) aimed at adjusting populations of levels coupled by strong inter- and intramolecular vibrational–vibrational (VV) exchange did not bring any help. To accelerate the convergence, Kutepov et al.
(1998) suggested “decoupling”, which utilizes the Avrett (1966) approach of treating the “source function equality in the line multiplets”. The SEE terms, which describe the VV coupling, depend on the
products ${n}_{v}\,{n}_{{v}^{\prime }}$, where n[v] is the population of vibrational level v of one molecular species, whereas ${n}_{{v}^{\prime }}$ is the population of level v^′ of the same or
another molecular species. In the iteration process, one needs to present these terms as ${n}_{v}\,{n}_{{v}^{\prime }}^{†}$, where † denotes the population of the level with the lower degree of
excitation, which is taken from the previous iteration. Kutepov et al. (1998) showed that this stops “the propagation of errors” by iterations and guarantees the fastest convergence. This decoupling
requires only a slight modification of the matrices to be inverted (without additional linearization of the non-LTE problem and, therefore, additional programming efforts). It provides, however, the
same acceleration of convergence as the application of the Newton–Raphson method for the solution of a system of non-linear equations (Gusev and Kutepov, 2003).
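A toy illustration of the decoupling idea (the rate coefficients below are invented numbers, not any real VT/VV rates): a product term n1·n2 makes each balance equation non-linear, but freezing the partner population at its previous-iteration value makes every step linear in its own unknown, and the iteration still converges to the solution of the original non-linear system.

```python
# toy balance equations with a VV-like product coupling:
#   a1 - b1*n1 - c*n1*n2 = 0
#   a2 - b2*n2 + c*n1*n2 = 0
a1, b1, a2, b2, c = 1.0, 1.0, 1.0, 2.0, 0.1

n1, n2 = 1.0, 1.0
for _ in range(50):
    # "decoupling": the partner population comes from the previous step,
    # so each equation is solved as a linear one for its own unknown
    n1_new = a1 / (b1 + c * n2)
    n2_new = a2 / (b2 - c * n1)
    n1, n2 = n1_new, n2_new

# residuals of the original non-linear system at the converged point
r1 = a1 - b1 * n1 - c * n1 * n2
r2 = a2 - b2 * n2 + c * n1 * n2
print(r1, r2)
```

Because the frozen factor changes only slightly between iterations, the map contracts strongly here and the residuals of the full non-linear system drop to machine precision within a few dozen steps.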
3.3 The radiative transfer in the molecular bands
With line-by-line (LBL) calculations, a very large number of frequency points to be accounted for significantly decelerate calculations of the 15µm CO[2] radiative fluxes. In LTE the reduction in
frequency points is usually achieved by utilizing the so-called CKD (for correlated k distribution) method, which is based on grouping the gaseous spectral transmittances in accordance with the
absorption coefficient k. The accuracy of this approach is better than 1%; see, for instance, Fu and Liou (1992). However, the k correlation is not applicable under the non-LTE conditions because
the vibrational level populations involved in the k distributions are unknown and depend themselves on the solution of the radiative transfer equation.
To overcome this problem, stellar astrophysicists developed the ODF technique. In this approach, they treat the non-LTE radiative transfer in “super-lines” associated with multiplets of very large
line numbers (e.g., Hubeny and Lanz, 1995; Hubeny and Mihalas, 2015). They re-sample the normalized absorption and emission cross sections of super-lines, consisting of hundreds or thousands of
lines, to yield a monotonic function of frequency that can be represented by relatively small numbers of frequency points. Though the idea is like k correlation, these normalized absorption and
emission profiles do not depend on the total populations of upper and lower “super-levels” but only on the relative population of sublevels that are closely spaced in energy within each super-level,
which are supposed to be in LTE.
Feofilov and Kutepov (2012) described the adaptation of the ODF technique to the solution of the CO[2] non-LTE problem. They treated each CO[2] band branch as a super-line. Therefore, each CO[2] band
was presented by only three lines for perpendicular bands and two lines for parallel bands. This way of treating the radiative transfer in the molecular band is about 50–100 times faster than the
classic LBL approach. Whereas in the LBL approach the radiation transfer equation is solved for each of N[F] frequency grid points within each rotational–vibrational line, in the ODF techniques the
same number of frequency grid points is applied only to each super-line. Thus, the acceleration factor is approximately equal to the number of rotational–vibrational lines in the branch. As Feofilov
and Kutepov (2012) show, the ODF approach introduces very small errors into the 15µm cooling. For current CO[2] density (400ppm in the lower atmosphere), these errors do not exceed 0.3Kd^−1 in a
broad range of temperature variations; see Fig. 18 of Feofilov and Kutepov (2012). They increase roughly linearly with the CO[2] increase.
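The re-sampling idea can be sketched for a synthetic "branch" of overlapping Lorentzian lines (line positions, widths, and absorber amounts below are invented): sorting the cross section into a monotonic function lets a handful of representative points reproduce the band-mean transmission of the full fine grid.

```python
import numpy as np

# synthetic branch: 60 overlapping Lorentzian lines on a fine frequency grid
nu = np.linspace(0.0, 60.0, 30001)
gamma = 0.07
k = sum(1.0 / (1.0 + ((nu - c) / gamma) ** 2) for c in np.arange(0.5, 60.0, 1.0))

def band_mean_transmission(kappa, weights, u):
    """Mean of exp(-k u) over the band for absorber amount u."""
    return float(np.sum(weights * np.exp(-kappa * u)))

# line-by-line reference: every frequency point carries equal weight
w_lbl = np.full(nu.size, 1.0 / nu.size)

# ODF-style re-sampling: sort the cross section into a monotonic function
# and represent it by 16 equal-weight bins (bin-mean k, bin weight)
bins = np.array_split(np.sort(k), 16)
k_odf = np.array([b.mean() for b in bins])
w_odf = np.array([b.size for b in bins]) / nu.size

diffs = [abs(band_mean_transmission(k, w_lbl, u) -
             band_mean_transmission(k_odf, w_odf, u))
         for u in (0.1, 1.0, 10.0)]
print(max(diffs))
```

Sixteen resampled points stand in for 30001 frequency points, mirroring how a whole band branch collapses to a few super-line quadrature points in the ODF treatment; the transmission error stays small because sorting preserves the distribution of the cross section.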
3.4 From matrix and LI to ALI techniques
Since the 1960s, the Curtis matrix algorithms, with the rare exceptions for LI mentioned above in Sect. 3.1, have been dominating the solution of the non-LTE problems in Earth's and planetary
atmospheres, including the studies of the 15µm CO[2] cooling (see, for instance, the work by Zhu, 1990, who developed Curtis matrix parameterization of CO[2] cooling of MLT, which was very advanced
for that time). Numerous non-LTE studies of the research group from the Institute of Astrophysics of Andalusia, Granada, applied the GRANADA (Generic RAdiative traNsfer AnD non-LTE population
Algorithm) code described by Funke et al. (2012). The core of it is a standard CM algorithm, which the authors in this and their earlier publications prefer to call MCM (for modified Curtis
matrix). The cited paper discusses various ways of splitting large matrices into blocks, solving the non-LTE problem for selected sub-sets of levels and iterating to get the solution for all
vibrational levels. In stellar astrophysics (Mihalas, 1978), this approach is known as the “generalized equivalent two-level approach for multi-level problems”. For many years it has seen little use
because of convergence problems. The GRANADA code also includes LI but not the ALI technique, although the transformation of LI into ALI requires a minimum of programming efforts but speeds up the
convergence for optically thick problems at least 10-fold (e.g., Rybicki and Hummer, 1991; Kutepov et al., 1998). Additionally, Funke et al. (2012) neither compared the computational performance of
MCM and LI algorithms nor described the handling of a strong local non-linearity caused by VV coupling.
3.5 The ALI-ARMS code
Kutepov et al. (1991, 1997) successfully applied the ALI technique to study the non-LTE emissions of molecular gases in planetary atmospheres (the 4.3µm CO[2] band in the Martian atmosphere and the
4.7µm CO band in Earth's atmosphere, respectively). Kutepov et al. (1998) and Gusev and Kutepov (2003) described in detail the adaptation of the ALI code developed by Rybicki and Hummer (1991) for
stellar atmospheres to the solution of the non-LTE problem for molecular bands of planetary atmospheres. They studied the performance of the new ALI-ARMS code and demonstrated computational
superiority of ALI-ARMS compared to various LI and CM/MCM techniques. The ALI-ARMS code and its applications were described by Feofilov and Kutepov (2012). Later it was applied to study the
rotational non-LTE in the CO[2] 4.3µm band in Martian atmosphere, observed by the Planetary Fourier Spectrometer (PFS) in the Mars Express mission (Kutepov et al., 2017, and references therein); to
study self-consistent two-channel CO[2] and temperature retrievals from the limb radiance measured by the SABER (Sounding of the Atmosphere using Broadband Emission Radiometry) instrument on board
the Thermosphere Ionosphere Mesosphere Energetics and Dynamics (TIMED) Mission (Rezac et al., 2015); and to explain the SABER nighttime CO[2] 4.3µm limb emission enhancement caused by a recently
discovered new channel of energy transfer from OH(ν) to CO[2] and to simultaneous retrievals of O(^3P) and total OH densities in the nighttime MLT (Panka et al., 2017, 2018, 2020). Earlier, the
ALI-ARMS code was used (Kutepov et al., 2006) to pinpoint an important missing process of strong VV coupling between the isotopes in the CO[2] non-LTE model of the SABER operational algorithm and to
model the H[2]O 6.3µm emission and the H[2]O density retrievals in MLT from the SABER 6.3µm limb radiances (Feofilov et al., 2009). As we show below in Sects. 4 and 5, the ALI-ARMS code also
provides an efficient way of calculating the 15µm CO[2] cooling–heating in the GCMs.
4 New routine for the CO[2] cooling calculations
4.1 The CO[2] non-LTE day- and nighttime models
To optimize the cooling calculations, we used as a reference our working line-by-line non-LTE model in CO[2], which comprises 60 vibration levels of five CO[2] isotopic species and two levels of N[2]
, O[2] and O(^3P). In Fig. 1, we show the lower levels of this model (up to 5000cm^−1). The set of collisional rate coefficients for the vibrational–translational (VT) and VV exchange that we apply
is described by Shved et al. (1998) and is similar to rates used by López-Puertas and Taylor (2001). However, it relies on different scaling rules based on first-order perturbation theory. Compared
to our extended line-by-line model (Feofilov and Kutepov, 2012), which includes a total of about 350 vibrational levels of seven CO[2] isotopes and over 200000 rotational–vibrational lines, the CO
[2] cooling of the 60-level model differs by less than 0.05 and 0.5Kd^−1 for the CO[2] mixing ratios of 400 and 4000ppmv, respectively, for both daytime and nighttime conditions and for any
temperature profile.
In the next steps, we gradually reduced the number of levels and bands in the model to optimize the calculations, keeping the cooling rate errors smaller than 1Kd^−1 compared to the reference model.
Figure 1 caption: Molecules and vibrational levels are given in the HITRAN notation: 626 corresponds to ^16O^12C^16O, 44 to N[2], 66 to O[2] and 6 to O(^3P). D marks levels added in the daytime model.
In Table 1, we show vibrational levels included in the optimized day- and nighttime models. The isotopes in the table are marked using the lower digit of the atomic weight: 626 corresponds to ^16O^12
C^16O, 636 corresponds to ^16O^13C^16O and so on. We account for 28 and 18 vibrational levels in the day- and nighttime models, respectively, which include the levels of the four most abundant CO[2]
isotopes and N[2] and O[2] levels (two and one for each in the daytime and nighttime, respectively). We also include O(^3P), but we do not include O(^1D) because its effect on the total CO[2]
cooling–heating is negligible for both nighttime and daytime conditions. The model uses the CO[2] spectroscopic information for all transitions available from HITRAN2016 (Gordon et al., 2017) for the
levels listed in Table 1. In Table 2, we provide the numbers of bands, band branches and lines used for daytime and nighttime calculations.
4.2 Computational performance and comparison with other algorithms
In the astronomical context, detailed analysis of the operation numbers and times needed for the solution of the non-LTE problem is a must because of the complexity of the problem (Hubeny and Mihalas
, 2015). Even though the numbers of levels and lines involved in the non-LTE problem of planetary atmospheres are smaller, speed is a crucial parameter for the GCMs, so we perform the same type of
analysis in this section below.
We compared three algorithms of the non-LTE problem solution, namely LI, ALI and the matrix method (hereafter MM); see Sect. 2 for details. Although the LI technique is much more computationally
expensive compared to ALI algorithms (Kutepov et al., 1998; Gusev and Kutepov, 2003), one cannot say a priori the same about the MM approach applied to a problem with a reduced number of levels
because of the low number of iterations it usually requires, and this required testing. We generated the matrices of the MM algorithm from the matrix presentations of lambda operators, as discussed
by Kutepov et al. (1998). We used the discontinuous finite element (DFE) algorithm as the most efficient way of solving the radiative transfer equation (Gusev and Kutepov, 2003; Hubeny and Mihalas,
2015), and we compared the LBL and ODF techniques (see Sect. 3.3). Below in this section, we discuss the performance of all algorithms only with the non-linearity caused by near-resonance VV energy
exchanges resolved as outlined in Sect. 3.2. Without this, the number of iterations of all considered algorithms would be a few times higher.
For each of the three techniques, we checked the numbers of operations and times needed for each single iteration and then accounted for a number of iterations and compared total numbers of
operations and times for the entire non-LTE problem solution.
We found that the time required for each iteration of the algorithms we studied is dominated by three components: (1) time for auxiliary operations T[Aux] (such as the filling of large arrays like
vectors or matrices to be inverted); (2) time for solving the radiative transfer equation (T[Rad]) and forming the radiative rate terms in the SEEs; and (3) time for matrix inversions (T[Inv]). The
cooling itself is the by-product of the non-LTE problem solution and is estimated nearly instantaneously as in Kutepov et al. (1998):
$\begin{array}{}\text{(3)}& \mathbit{h}=\sum _{\mathrm{b}}\left[{n}_{\mathrm{lo}}{B}_{\mathrm{lo},\mathrm{up}}\stackrel{\mathrm{‾}}{{J}_{\mathrm{lo},\mathrm{up}}}-\left({A}_{\mathrm{up},\mathrm{lo}}+{B}_{\mathrm{up},\mathrm{lo}}\stackrel{\mathrm{‾}}{{J}_{\mathrm{lo},\mathrm{up}}}\right){n}_{\mathrm{up}}\right]\left({E}_{\mathrm{up}}-{E}_{\mathrm{lo}}\right),\end{array}$
where B[lo,up] and A[up,lo] are the band Einstein coefficients and $\stackrel{\mathrm{‾}}{{J}_{\mathrm{lo},\mathrm{up}}}$ is the mean intensity in the band, which enters radiative rate coefficients
of SEE matrices, whereas n[lo]/E[lo] and n[up]/E[up] are the populations/energies of lower and upper vibrational levels in the band, respectively. The sum in Eq. (3) applies to all CO[2] transitions
in the model.
T[Aux], T[Rad], etc. are given in seconds.
In Table 2, we present a summary of our study. The table gives the main parameters of the non-LTE models for day and night conditions described in Sect. 4.1, operation numbers, and the times in
seconds (measured with the help of a timing routine within the code), which are required for each calculation part. We performed this study on two different machines, with x86_64 Intel and Intel Xeon
Gold processors operating at 2.2 and 2.5GHz, respectively. We compiled the ALI-ARMS code for the 64bit architecture with a standard GNU Compiler Collection (GCC) compiler and ran it on a single processor. We provide the results only for the 2.2GHz Intel processor; the times on the second processor are roughly 1.4 times shorter.
Compared to the reference code, we ran the routine using the convergence criterion $\mathrm{1.0}×{\mathrm{10}}^{-\mathrm{2}}$ instead of $\mathrm{1.0}×{\mathrm{10}}^{-\mathrm{4}}$. This allowed a
reduction in the number of iterations by a factor of about 2 without sacrificing accuracy.
Similarly to the study by Hubeny and Mihalas (2015), we found that the time T required for any procedure like the radiative transfer equation solution or matrix inversion may be presented as
$\begin{array}{}\text{(4)}& T=CN,\end{array}$
where N is the number of operations. N is defined by the mathematical nature of the problem and the algorithm applied for its solution. C may, however, depend on many other factors like the quality
of programming, language used, operational system, interpreter, computer architecture and performance.
We found that the number of operations for the solution of the radiative transfer equation N[Rad] in the case of the non-overlapping lines may be approximated by the expression
$\begin{array}{}\text{(5)}& {N}_{\mathrm{Rad}}\simeq {N}_{\mathrm{D}}×{N}_{\mathrm{RT}}×{N}_{\mathrm{F}}×{N}_{\mathrm{A}}\end{array}$
and is the same for all algorithms compared. Here N[D] is the number of altitude grid points, N[RT] is the total number of lines (or band branches in the ODF case), and N[F] and N[A] are the numbers of points in the frequency and angle integrals used, respectively. The coefficient C in Eq. (4), which links radiative transfer operation numbers and corresponding times, was found to be $\simeq \mathrm{1.0}×{\mathrm{10}}^{-\mathrm{8}}$s.
We found that with the LI/ALI algorithms, the number of auxiliary operations N[Aux] is well approximated by the following expression:
$\begin{array}{}\text{(6)}& {N}_{\mathrm{Aux}}^{\mathrm{LI}/\mathrm{ALI}}\simeq {N}_{\mathrm{L}}^{\mathrm{2}}×{N}_{\mathrm{D}}.\end{array}$
This expression gives the number of terms to be filled in the block-diagonal matrix comprising N[D] blocks N[L]×N[L], where N[L] is the number of vibrational levels. In the case of the LI/ALI
techniques, these are the N[D] matrices generated and inverted one after another at each iteration step. The coefficient C in Eq. (4), which links auxiliary operations and corresponding times, is $C\simeq \mathrm{1.7}×{\mathrm{10}}^{-\mathrm{7}}$s.
In the case of the MM technique, the matrix to be generated at each iteration is much larger; namely, it has the size $\left({N}_{\mathrm{L}}×{N}_{\mathrm{D}}\right)×\left({N}_{\mathrm{L}}×{N}_{\mathrm{D}}\right)$ and consists of N[L] fully filled diagonal blocks N[D]×N[D], which represent non-local radiative terms, whereas the collisional terms (the same as in the LI/ALI case) are now spread over the non-diagonal parts of this large matrix. We found that when we present the number of operations to fill this matrix as
$\begin{array}{}\text{(7)}& {N}_{\mathrm{Aux}}^{\mathrm{MM}}\simeq {N}_{\mathrm{D}}^{\mathrm{2}}×{N}_{\mathrm{L}},\end{array}$
then approximately the same coefficient $C\simeq \mathrm{1.7}×{\mathrm{10}}^{-\mathrm{7}}$s links this number with the time needed for its filling.
The number of operations needed for matrix inversion N[Inv] is approximately N^3, where N is the matrix dimension. Thus, we have the following expressions:
$\begin{array}{}\text{(8)}& {N}_{\mathrm{Inv}}^{\mathrm{MM}}\simeq \left({N}_{\mathrm{L}}×{N}_{\mathrm{D}}{\right)}^{\mathrm{3}}\end{array}$
for the MM algorithm and
$\begin{array}{}\text{(9)}& {N}_{\mathrm{Inv}}^{\mathrm{LI}/\mathrm{ALI}}\simeq {N}_{\mathrm{L}}^{\mathrm{3}}×{N}_{\mathrm{D}}\end{array}$
for the LI/ALI techniques. In the latter case the number of operations is ${N}_{\mathrm{D}}^{\mathrm{2}}$ times lower, since only N[D] matrices N[L]×N[L] are inverted one after another at each
iteration. This is the great advantage of these techniques compared to MM, where the entire huge matrix needs to be inverted at once, since it has non-zero elements outside the diagonal blocks.
In the ALI-ARMS code we use ludcmp (lower–upper decomposition), lubksb (back substitution) and mprove (iterative improvement) as matrix inversion routines (Press et al., 2002). We found that, for
these routines applied to the non-LTE problems studied here, the coefficient between the number of operations and time for matrix inversion depends on the matrix dimension N and may be approximated
by the following expression:
$\begin{array}{}\text{(10)}& C=\mathrm{1.0}×{\mathrm{10}}^{-\mathrm{7}}\cdot \left(\mathrm{0.04}+\mathrm{1.2}/N\right).\end{array}$
One may see in the upper part of Table 2 that, for night conditions, the application of the MM technique causes the matrix inversion to be the most time-consuming calculation part at each iteration,
although the number of iterations N[Iter]=2 is low. Applying LI–LBL techniques provides a strong reduction in the matrix inversion time per iteration but gives only a moderate reduction in the total
time (by a factor of ∼5) compared to MM–LBL due to the large number of iterations (60). The number of iterations for the LI/ALI techniques depends slightly on the atmospheric pressure and temperature distribution; the numbers given in the table are mean values over a few hundred runs for different atmospheric conditions. Applying ALI instead of LI significantly reduces the number of iterations (5 instead of 60), providing additional acceleration by a factor of ≳10. We note that for LI/ALI–LBL, the most time-consuming part of each iteration is now the radiative transfer solution, which is more than 15 times slower than the other two parts of the calculation combined. The ODF technique reduces T[Rad] by a factor of 50–60. This provides total additional
acceleration by a factor of ≳10. The last column in the table gives the acceleration factor $K={T}_{\mathrm{tot}}/{T}_{\mathrm{tot},\text{ALI–ODF}}$, which shows how much faster the ALI–ODF
combination works compared to other techniques: it is about 900 and 160 times faster than the MM–LBL and LI–LBL techniques, respectively.
The lower part of Table 2 shows the number of operations and times for various parts of calculations for the daytime non-LTE model described in Sect. 4.1. Compared to the nighttime, daytime
calculations require about 2.5 times more time due to an increased number of vibrational levels and bands accounted for. Nevertheless, the main points discussed above in this section for the
nighttime runs remain valid for the daytime: (a) the main decelerating factor for the MM technique is the matrix inversion, notwithstanding the low number of iterations; (b) the LI technique,
although it reduces the matrix inversion time by a factor of ≳7000, provides only a moderate decrease in total time (by a factor of ∼10) because of the large number of iterations; (c) the ALI
technique is more than 100 times faster than MM, with the slowest part of calculations being the LBL solution of RTE; and (d) the ODF provides acceleration of radiative transfer calculations by a
factor of 50. Finally, the ALI–ODF technique appears to be over 1000 times faster than the MM–LBL approach.
We measured the time the F98 routine requires on the x86_64 Intel 2.2GHz processor and found it to be around $\mathrm{3}×{\mathrm{10}}^{-\mathrm{4}}$s. This means that for the nighttime, KF23 is about
300 times slower than F98. In the daytime, when the solar heating parameterization of Ogibalov and Fomichev (2003) is accounted for, our version of F98 requires about 30% more time. Still, it
remains about 600 times faster than the daytime KF23 routine. In the next section we discuss in detail the accuracy of both routines.
5 Accuracy of cooling–heating rate calculations
To estimate the calculation errors, we compared the outputs of our new KF23 routine and the F98 routine for the non-LTE CO[2] cooling calculations with our non-LTE reference model, discussed in
Sect. 4.1. In these tests, we used the CO[2] volume mixing ratio (VMR) profiles with 400ppmv in their “well-mixed” part. We also tested the same profiles multiplied by factors of 2, 4 and 10. For
the CO[2](ν[2])+O(^3P) quenching rate, we used the temperature-dependent coefficient $k=\mathrm{3.0}×{\mathrm{10}}^{-\mathrm{12}}$s^−1cm^3×$\sqrt{\left(T/\mathrm{300}\right)}$ (see Sect. 5.4 for
more details). As we described before in Sect. 4.1, the vibrational levels and bands accounted for in KF23 keep their accuracy of ∼1Kd^−1 for any temperature profile, including those strongly
disturbed by various tidal and gravity waves. Here, we show the cooling rates calculated using both routines compared to the cooling rates obtained using the reference model only for “wavy”
temperature profiles. For mean profiles with a smooth structure, the errors of KF23 were about 0.1–0.3Kd^−1 (for 400ppmv of CO[2]). For the F98 routine, the errors for smooth profiles were around
1–3Kd^−1, confirming the results of Fomichev et al. (1998).
5.1 The nighttime cooling–heating rates
In Fig. 2, we show five typical temperature profiles, which demonstrate the superposition of different meso-scale waves. These profiles, as well as corresponding pressure, O(^3P) and CO[2]
distributions (shown in Fig. 3), and other constituents from the Whole Atmosphere Community Climate Model Version 6 (WACCM6) (Gettelman et al., 2019) runs were kindly provided by Daniel Marsh
(personal communication, 2022). For the 15µm cooling calculations, this model uses the F98 parameterization. We show below the calculation results for these atmospheric model inputs because WACCM is
widely used by the model community. Generally, any pressure/temperature profiles disturbed by strong waves give similar results, as we observed in our tests. These may be p–T distributions generated
by modern GCMs, like in our case here; those retrieved from ground-based or space observations; or artificial wavy p–T distributions.
Figure 4 shows the CO[2] 15µm cooling rates for the new KF23 routine, for the F98 parameterization, and for our reference non-LTE model for temperatures in Fig. 2 and the 400ppmv CO[2] profiles.
One may see in this figure that the new routine errors do not exceed 0.5Kd^−1. On the other hand, the F98 routine errors reach up to 13Kd^−1. The altitude range where the F98 routine demonstrates significant errors is broad, starting just above 60km.
In Fig. 5 we show the same as in Fig. 4 – cooling rates and their differences – but for twice the CO[2], i.e., 800ppmv in the well-mixed range. We note that both the maximal absolute values of the cooling rates and the errors of the new and the F98 routines are roughly twice as high as those in Fig. 4. For the new routine they do not exceed 1Kd^−1, whereas for the F98 routine they reach up to 23Kd^−1.
Finally, in Fig. 6, we show in panel (a) the cooling rates produced by the reference model and by the KF23 routine for the CO[2] VMR of 4000ppmv, which is 10 times higher than the reference one. The
new routine errors are shown in panel (b). The F98 routine was not tested for these inputs, since it was not designed to work with CO[2] VMRs higher than 720ppmv. One may see in this figure that the absolute cooling rates (maximal values) are approximately 10 times higher than those for 400ppmv of CO[2]. The same is roughly true for the new routine errors, which now reach values of up to 8Kd^−1 in the upper part of the tested region.
In Fig. 7, for night, we show the contributions of the major and minor CO[2] isotopes included in the model (see Sect. 4.1), for the temperature profile at 33.5°N and for 400 and 1600ppmv of CO[2]. One may see in this figure that, for 400ppmv of CO[2], this contribution does not exceed ∼2 and ∼1Kd^−1 for the 626 hot bands and for all minor isotope bands, respectively. The effect of the hot bands and minor species increases with the CO[2] density; see the right panel of Fig. 7, particularly for altitudes affected by waves.
As we mentioned in Sect. 4.1, we use all CO[2] bands available in HITRAN2016 for the night set of levels in Table 1. This minimizes errors compared to reference calculations to ≤1Kd^−1 for 400ppmv
of CO[2]. The routine allows using fewer levels and bands to accelerate calculations, but at the expense of increased errors (see Appendix A for more details). For instance, excluding (see Fig. 1) the weak first hot bands, (10001,02201,10002)→01101, and second hot bands, (11101,03301,11102)→(10001,02201,10002), of the 626 and 636 isotopes halves the total number of bands. Our tests show that in this case the routine works only about 10% faster (see also Table 2); however, the maximal cooling rate error for 400ppmv increases to up to 3Kd^−1.
5.2 The daytime cooling–heating rates
In the daytime, the near-infrared heating due to the absorption of solar radiation in the CO[2] bands around 2.0–4.3µm represents a small but non-negligible reduction in the total CO[2] cooling (up
to 1–2Kd^−1 for the current CO[2]). The complex mechanisms of the absorbed solar energy assimilation into heat have been investigated in detail in a number of studies summarized by López-Puertas
and Taylor (2001). Ogibalov and Fomichev (2003) studied this heating for smooth temperatures for various CO[2] densities and solar zenith angles (SZAs) and suggested the use of a lookup table, which
allows a quick estimate of this heating in GCMs. Due to its reasonable accuracy (∼0.5Kd^−1 for current CO[2]), this table has been used as a daytime supplement to the F98 nighttime cooling
parameterization. Unfortunately, with increasing CO[2] density, the errors in this table increase rapidly above ∼70km: for 720ppmv CO[2] it underestimates the heating around the mesopause by more
than 50% (see Fig. 5 of Ogibalov and Fomichev, 2003, for daily averaged heating).
Figure 8 shows the heating of the atmosphere produced by our new routine due to the daytime absorption of solar radiation in the CO[2] bands at 2.0–4.3µm for SZA=45° at 33.5°N. To prevent increasing daytime errors due to an inadequate treatment of solar radiation absorption and assimilation, KF23 utilizes, in the daytime, an extended non-LTE model which, compared to the
nighttime, includes 10 more CO[2] vibrational levels (see Table 1) and a comprehensive system of radiative and collisional VT and VV energy exchanges as described by Shved et al. (1998) and Ogibalov
et al. (1998). A higher number of vibrational levels and more than twice the number of bands lead to a 2.5-fold longer time for the daytime cooling–heating calculation (see Table 2). However, the
daytime errors are of the same order of magnitude (less than 1Kd^−1 for 400ppmv) as those for the nighttime (Figs. 4–6) even for strongly perturbed temperatures and increased CO[2]. We do not
present these comparisons here. As in the nighttime case, removing half of the bands (various hot and combinational bands) in the daytime model gives only about a 10% speed gain; however, for 400ppmv, the maximal errors in cooling may reach 4Kd^−1.
5.3 Cooling–heating rates for the diurnal tides at the Equator
In Fig. 9, we compare the cooling rates obtained with the KF23 routine and with the F98 parameterization with those produced by the reference model. For these tests, we used the temperature–pressure
distributions affected by the diurnal tides at the Equator; p, T and the atmospheric constituent densities as well as local zenith angles for local times at 1.0°N and 12.0°E (15 March 2019)
correspond to the WACCM-X (Whole Atmosphere Community Climate Model with thermosphere and ionosphere extension) equatorial simulations constrained by meteorological analyses of the NASA Goddard Earth
Observing System version 5 (GEOS-5) below the stratopause, as discussed in Yudin et al. (2020, 2022). The temperature distributions at various local times are shown in panel (a). Panel (b) presents
the CO[2] 15µm cooling rates calculated using the reference model (thick solid lines) and the F98 routine (thin solid lines with diamonds). We do not show here the cooling rates obtained using the
KF23 routine because the difference between them and the reference data does not exceed 1Kd^−1; see Fig. 9c. Figure 9d shows the differences between the cooling produced by the F98 routine and the
reference calculations. In contrast to Fig. 9c, these differences exceed 20Kd^−1 for some temperature profiles.
One may see in Fig. 9 that the accuracy of the F98 routine improves above 100km and below 80km. Above 100km, the F98 parameterization uses a recursive expression, the accuracy of which increases
with height (Kutepov, 1978). Below 80km, the F98 routine is based on the LTE matrix algorithm for cooling calculations. This algorithm accounts for the radiative interaction of the neighboring
levels and the escape of radiation upward and, therefore, provides good accuracy of the cooling in this optically thick layer. In the layer between 80 and 100km, the F98 routine merges the cooling
values calculated by the two methods outlined above. This merging works reasonably well for smooth temperature distributions tested by Fomichev et al. (1998), but it fails with wavy temperatures. One
also needs to keep in mind that this layer is the transition region between the LTE and non-LTE state of the CO[2](ν[2]) vibrations, where the physics of formation of the non-equilibrium vibrational
distribution must be considered in all aspects (e.g., Kutepov et al., 2006). Compared to the KF23 routine, which rigorously models the non-LTE, the F98 routine fails in this situation, which Fomichev
et al. (1993) warned about when they presented the first version of this parameterization.
5.4 The CO[2](ν[2])+O(^3P) quenching rate coefficient
The results above in Sect. 5.1–5.3 were obtained for the temperature-dependent CO[2](ν[2])+O(^3P) quenching rate coefficient $k=\mathrm{3.0}×{\mathrm{10}}^{-\mathrm{12}}$s^−1cm${}^{\mathrm{3}}×\
sqrt{\left(T/\mathrm{300}\right)}$. The multiplier in this expression is the median value of the rate coefficient from the range of (1.5–6.0)$×{\mathrm{10}}^{-\mathrm{12}}$s^−1cm^3, which spans from the low laboratory values to the high values obtained from space observations of the CO[2] 15µm emission; see, for example, Feofilov et al. (2012) and references therein. This value is currently
accepted for usage in the GCMs for calculation of the 15µm cooling. The dependence of MLT cooling on k has been investigated in many previous works summarized by López-Puertas and Taylor (2001, and
also references therein). It is known that the maximum value of cooling, which is usually reached at the altitudes of 100–140km, is roughly proportional to the k value used in calculations. The KF23
routine works well for any k value from the range given above, and it also allows varying the temperature dependence of the rate coefficient (e.g., Castle et al., 2012); see Appendix A. For our
calculations, we used the O(^3P) densities shown in Fig. 3.
5.5 Upper and lower boundaries
The accuracy tests of the KF23 routine were performed with the upper boundary of the atmosphere at 130km. The routine may work with any upper boundary in the upper mesosphere and above. However, putting the upper boundary below ∼110km may cause increasing cooling errors due to the neglected contribution of the upper atmospheric layers. The lower boundary can be placed at any altitude below ∼50km, where all CO[2] 15µm bands are in LTE; this justifies the LTE lower boundary condition for the radiative transfer equation solution. However, it is not recommended to use the routine results below ∼20km because of increasing errors caused by not accounting for line overlapping in the current version of the ODF approach.
6 Conclusions
We present the new KF23 routine for calculating the non-LTE CO[2] 15µm radiative cooling–heating in the middle and upper atmosphere. The routine provides high-accuracy cooling rates above 20km in a
broad range of atmospheric input variations for any temperature distributions, including those disturbed by strong micro- and meso-scale structures, and is the optimized version of the ALI-ARMS
reference model and research code (Feofilov and Kutepov, 2012), which rigorously solves the non-LTE in CO[2], N[2] and O[2] coupled by intensive vibrational–vibrational energy exchanges. The routine
relies on advanced techniques of exact non-LTE problem solutions (ALI algorithm) and the molecular band radiative transfer treatment (ODF technique). Using these algorithms, we have sped up the
cooling rate calculation by about 1000 times compared to the standard matrix and line-by-line technique of the same non-LTE problem solution. We show that the maximum error in calculations does not
exceed 1Kd^−1 for the current atmospheric CO[2] density and the median value of CO[2](ν[2])+O(^3P) quenching rate coefficient. This accuracy is ensured by a relatively large number of CO[2] levels
and bands used in the KF23 routine. We also allow the user to choose between the accuracy and calculation speed by adding or removing certain bands and levels (see Appendix A).
The KF23 routine provides accurate cooling calculations in a vast range of the CO[2](ν[2])+O(^3P) quenching rate coefficient and O(^3P) variations. It also works well for very broad variations in the
CO[2] VMR, both below and above the current density, up to 4000ppmv. Consequently, this allows the application of this routine in models of Earth's ancient atmospheres and models of climate changes
caused by increasing CO[2].
Recently López-Puertas et al. (2024) presented an updated version of the F98 routine. Detailed analysis of this work is given by Kutepov (2023). The main improvement of the revised routine (hereafter
F24) compared to F98 is an extended range of CO[2] abundances: whereas the F98 routine covered the range of CO[2] concentrations with tropospheric values from 150 to 720ppm, F24 goes up to 3000ppm
of tropospheric CO[2]. Another minor improvement is the finer altitude grid of the revised parameterization. The authors show numerous tests of the routine accuracy but only for undisturbed individual
temperature distributions, for which its error does not exceed 0.5Kd^−1. They also tested the revised routine for the temperatures retrieved from MIPAS (Michelson Interferometer for Passive
Atmospheric Sounding) observations, which demonstrate large variability. These individual profiles are good inputs for the revised parameterization to show how it works for strongly disturbed
temperature profiles. However, the authors do not show these results. Instead, they present only zonal means of the differences. Obviously, this averaging washes out the errors obtained for
individual profiles, for which we observed (see Sect. 5) F98 parameterization errors of up to 25Kd^−1. Meanwhile, these large errors are generally concentrated in the altitude region around 90km,
exactly where the root mean square errors (RMSEs) of F24 in López-Puertas et al. (2024) are maximized, reaching 8–9Kd^−1. In Sect. 5 we explain why F98 works badly in this altitude region. These
large RMSEs allow us to conclude that the revised routine presented by López-Puertas et al. (2024) has the same problems as the F98 parameterization. We compared F24 with our routine for the same
wavy profiles using the collisional quenching rates we applied in this work for testing F98. We found that the maximal errors of F24 for these profiles are about 30% lower than those of F98. However, this improvement in the calculation accuracy of F24 was achieved at the cost of an approximately 2.4 times longer calculation time. López-Puertas et al. (2024) reported that F24 worked 6600 times faster than our procedure KF23; however, they compared the nighttime calculation time of KF23 taken from Table 2 of this paper with the calculation time of F24 obtained on a significantly faster processor.
We compared the performance of F24 and KF23 routines as described in Sect. 4.2. This comparison showed that F24 is only about 125 and 250 times faster than KF23 for night and day, respectively.
Appendix A: The NLTE15µmCool-E v1.0 routine (technical details)
The routine source code is written in C. The routine is available at https://doi.org/10.5281/zenodo.8005028 (Kutepov and Feofilov, 2023) and is ready for implementation into any general circulation
model usually written in Fortran through a small “wrapper”.
The routine has an interface that allows it to efficiently receive inputs from the model. These are the inputs required for the cooling calculations, such as pressure, temperature, CO[2], O(^3P) and
other atmospheric constituent densities. It returns the CO[2] 15µm radiative cooling–heating according to the altitude grid specified by the user. The routine works for day (SZA ≤110°) and night
(SZA>110°) conditions.
Following the discussion in Sect. 5, the routine may generally work with any upper and lower boundary. However, it is not recommended to put the upper boundary below ∼110km, since this causes increasing calculation errors due to the neglected upper atmospheric layers, or to place the lower boundary below ∼20km, because of increasing errors caused by not accounting for line overlapping in the current version of the ODF approach.
The module requires geometrical altitudes to calculate radiative transfer and an equidistant altitude grid, which guarantees the exact solution of the radiative transfer equation. The user may define
any grid step including very fine ones, which allows the resolution of micro-scale temperature disturbances. This is an advantage of our routine, since its calculation time only linearly depends on
the number of grid points N[D]. Compared to this, the calculation time of matrix algorithms is $\sim {N}_{\mathrm{D}}^{\mathrm{3}}$; see Eqs. (4)–(10). Nevertheless, for those who want to account for
the additional cooling effect of the micro-scale sub-grid disturbances, we recommend using its parameterization described in Kutepov et al. (2007) and Kutepov et al. (2013b), which may be easily
implemented in the routine.
The routine includes all inputs required for its proper performance, among them all collisional rate coefficient parameterizations as described by Shved et al. (1998) as well as the HITRAN2016
spectroscopic data for all bands available for the CO[2] level set in Table 1. The latter are presented as temperature-dependent A(T) and B(T) Einstein coefficients for each band branch calculated in
accordance with Kutepov et al. (1998) and Gusev and Kutepov (2003). We compared A(T) and B(T) with those calculated for two earlier HITRAN versions and found the differences to be less than
0.1%–0.2% for bands included in our CO[2] nighttime model. For some hot 15 and 4.3µm and combinational bands included in the daytime model, these differences are of the order of 0.5%, since data
for these bands slightly vary from one HITRAN version to another.
The routine also includes a detailed table of basic ODF for a band branch in a broad range of temperature and pressure variations. This ODF is re-scaled in a special way onto the ODF for each
individual band branch.
The supplied set of levels and spectral band information ensures the cooling–heating calculation errors for day and night to be below 1Kd^−1 for any temperature profile. For smooth temperature
profiles, the calculation errors are around 0.1–0.3Kd^−1 (for 400ppmv of CO[2]).
Finally, the routine allows the user to switch on and off the vibrational levels and/or bands used in the model. Removing or adding vibrational levels will also automatically remove (or add) the bands related to these levels. If the user's task can tolerate larger errors, the calculation speed can be increased at the cost of lower accuracy.
Over more than 20 years of collaboration between both authors on the development of the ALI-ARMS code, whose optimized version NLTE15µmCool-E is presented here, AK contributed most to the development of the
ALI and ODF methodologies; he also wrote the manuscript draft supported by AF. AF contributed most to the development of the ALI-ARMS physical model, designed and implemented the ODF-based routines
for radiative transfer treatment in the code, and performed all calculations presented in this paper as well as the detailed analysis of the routine computational performance.
The contact author has declared that neither of the authors has any competing interests.
Publisher’s note: Copernicus Publications remains neutral with regard to jurisdictional claims made in the text, published maps, institutional affiliations, or any other geographical representation
in this paper. While Copernicus Publications makes every effort to include appropriate place names, the final responsibility lies with the authors.
Alexander Kutepov and Artem Feofilov would like to express deep gratitude to the friends and colleagues who have made an invaluable contribution to the development of the ALI-ARMS code and promoted
its applications and who unfortunately have already left this world: David Hummer (†2015), who provided an advanced stellar atmosphere non-LTE code and invaluable support with its adaptation for
treating molecular bands in the planetary atmospheres; Gustav Shved (†2020), Rada Manuilova (†2021) and Valentine Yankovsky (†2021), who provided crucial contributions to the non-LTE physical
model development; Richard Goldberg (†2019), who stimulated the ALI-ARMS application to the analysis of the SABER observations; and Uwe Berger (†2019), who drew our attention to the need for
accurate radiative cooling calculation in GCMs and motivated development of the routine presented in this study. The authors also want to thank Rolf Kudritcki, who provided them with an opportunity
to work at Universitäts-Sternwarte München in the 1990–2000s and study advanced non-LTE techniques used in stellar atmosphere studies; Vladimir Ogibalov and Oleg Gusev, who provided significant
contributions to the code software development; and Ivan Hubeny, who introduced them to the ODF technique, which revolutionized the code performance. They are also thankful to Daniel Marsh and
Valery Yudin, who provided inputs for testing the new routine presented here.
The authors also are grateful to Emerson Damasceno de Oliveira and the anonymous reviewer for their thorough analysis of the manuscript and helpful comments and recommendations. We also thank
Ladislav Rezac for his interesting open discussion comment on the manuscript.
We also thank the journal technical staff (Melda Ohan and collaborators) for the manuscript copy-editing and typesetting, which significantly improved the text.
The work of Alexander Kutepov and Artem Feofilov in Germany was partly supported by the AFO-2000 (BMBF) and CAWSES (DFG) research programs. The work of Alexander Kutepov in the US was partly
supported by the NASA grants NNX15AN08G and NNX17AD38G and by the NSF grants AGS-1301762 and AGS-2125760. The work of Artem Feofilov in France was supported by the project “Towards a better
interpretation of atmospheric phenomena – 2016” of the French National Program LEFE/INSU.
This paper was edited by Volker Grewe and reviewed by Emerson Damasceno de Oliveira and one anonymous referee.
Download Time
I kept reading about all the fun everyone was having with the
thing, so I thought I'd give it a try. I thought it was pretty easy until I got to that
one problem
where they ask you to add up all the multiples of 3 or 5 that are less than 1000. I hear that people are using Python and letting the computer do the number crunching.
So, I think to myself, "Self, you took Pascal [S:about 20 years ago:S] [S:back when RAM was measured in single digits and you said 'bless you' if someone said, 'gigahertz.':S] in college and [S:
absolutely hated that freaking class:S] came out a better person because of it. This may be [S:something you don't have time for, I mean you have 5 kids. Wake the heck up:S] an enriching experience.
And besides...Pascal...Python, they both start with Ps [S:yeah, that's probably the only thing they have in common:S]. You got this, man. G'head, give it a try."
I go to the
site and read everything I can to figure out which version would best suit me. (There's a lot to choose from, you know.) I can't find anything anywhere. It's all so confusing. Please. Someone. Give
me a nudge in the right direction.
So I take a shot in the dark and go with v.2.7.
Anyway, this isn't really about Python. It's about the video I grabbed while downloading the program. I'm going to have my kids wrestle with it after break.
I may not ever find the solution to that really
difficult problem
about multiples of 3 and 5, but at least I got a lesson out of it.
6 comments:
I used to program, but it was a while ago. I don't know python (or any of the other languages people talk about these days). When I got to a problem in Project Euler that needed some serious
number crunching, I used Excel. I got though a number of problems that way.
You might be able to do the question you mentioned without any programming, though.
Great example. The progress bar brought up another idea. I've been thinking how in most Algebra I curricula I've seen functions are introduced but then sort of dropped with no real reason for
their use other than the f(x) notation. I've been thinking that introducing the need for composition of functions taking an idea from programming a progress bar might be useful.
As you know, much of what happens in code is transforming one piece of data to another using a series of logical blocks. In the case of the progress bar, you could model the entire thing using
mathematical functions.
Consider the mapping from the byte downloaded : total bytes ratio to the length of the progress bar. There is the decimal of that calculation, its transformation into a percentage, the mapping of that percentage onto a graphical bar whose length may not be a nice number, and maybe the transformation of the location of the bar to its location in the window. Each of these steps could be
modeled in a different mathematical function, whose composition would give the location of the end of the current progress.
(For a vertical progress bar which goes up as progress occurs, you could toss in the idea that for computer screens 0,0 is generally the left,top and y increases downward, so the progress needs a
function to flip it.)
None of these transformations is all that difficult in themselves, and having the graphical representation gives concreteness to the example.
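Pete's pipeline is easy to sketch as a chain of small functions. The specific numbers, bar width, and window height below are invented purely for illustration:

```python
# Each step of the progress bar as its own function; the final position
# is just their composition.
def ratio(bytes_done, bytes_total):      # bytes -> fraction complete
    return bytes_done / bytes_total

def percent(r):                          # fraction -> percentage
    return 100 * r

def bar_length(pct, bar_width=250):      # percentage -> pixels of bar drawn
    return pct / 100 * bar_width

def flip_y(y, window_height=300):        # screen y grows downward, so flip
    return window_height - y

done, total = 250_000, 1_000_000
end_of_bar = bar_length(percent(ratio(done, total)))
print(end_of_bar)           # 62.5
print(flip_y(end_of_bar))   # 237.5 (for a vertical, upward-growing bar)
```

Each function is trivial on its own; the classroom payoff is that the drawn position is the composition flip_y(bar_length(percent(ratio(done, total)))), a concrete reason for f(g(x)) notation.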
I wasn't getting the point - why would the progress bar be a good classroom topic? But I like what you suggest Pete. I want to focus on functions when I teach Intermediate Algebra next semester.
David, is this the sort of thing you had in mind, or do you have other directions you want to take this?
Hi there,
I maintain resources for getting started with Python especially for Education here http://brokenairplane.blogspot.com/p/programming-resources.html
You might want to start here http://brokenairplane.blogspot.com/2010/08/computer-science-programming-intro.html
You downloaded the correct Python especially if you decide to get into some cool stuff with VPython. Also Python and Pascal have many similarities so you should be fine.
I can show you how to do the multiples problem in Python but it sounded like you wanted to figure it out for yourself so let me know if you or your readers want any help with this or Programming
in the Classroom.
Great blog, I spent all morning checking out your posts, really enjoyed it. Keep up the good work!
I had my tongue firmly planted in cheek with regards to the multiples problem.
The question I had in mind for the download time was simply, "How long is it going to take?" which is the exact question I find myself asking every time I try to download something.
Thanks for the suggestion.
Hey David, I actually solved that problem using math alone. I did all the calc by hand with help of a four-function calc and some tricks about adding arithmetic series. The only other problem I
could do at a quick glance was the multiples-of-numbers-less-than-20 one.
I'm trying not to have to program to solve these other ones. Ambitious??
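For anyone who wants to check a hand calculation, the arithmetic-series trick from the last comment fits in a few lines of Python: sum the multiples of 3 and of 5, then subtract the multiples of 15, which got counted twice. (This is a sketch of the well-known approach, not anyone's posted solution, and it runs unchanged in the v2.7 mentioned above.)

```python
def sum_of_multiples_below(k, limit):
    # k + 2k + ... + nk = k * n * (n + 1) / 2  (arithmetic series)
    n = (limit - 1) // k
    return k * n * (n + 1) // 2

# Inclusion-exclusion: multiples of 15 appear in both the 3s and the 5s.
total = (sum_of_multiples_below(3, 1000)
         + sum_of_multiples_below(5, 1000)
         - sum_of_multiples_below(15, 1000))
print(total)  # 233168
```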
Integral of Sec x - Formula, Proof
Trigonometric functions play an essential role in many mathematical theories and applications. One of the essential trigonometric functions is the secant function, which is the reciprocal of the cosine function. The secant function is broadly used in math, physics, engineering, and various other domains. It is an essential tool for analyzing and solving problems related to oscillations, waves, and periodic functions.
The integral of sec x is an essential concept in calculus, the branch of mathematics concerned with rates of change and accumulation. It is used to evaluate the area under the curve of the secant function, which is used to describe the behavior of waves and oscillations. Moreover, the integral of sec x is used to solve a wide range of problems in calculus, such as finding the antiderivative of the secant function and evaluating definite integrals that involve the secant function.
In this blog article, we will study the integral of sec x in detail. We will discuss its characteristics, formula, and a proof of its derivation. We will also look at some examples of how to utilize
the integral of sec x in various fields, including physics, engineering, and mathematics. By getting a grasp of the integral of sec x and its uses, students and working professionals in these domains
can obtain a deeper understanding of the complicated phenomena they study and develop enhanced problem-solving skills.
Importance of the Integral of Sec x
The integral of sec x is an important mathematical concept with several applications in physics and calculus. It is used to calculate the area under the curve of the secant function, which is widely used in math and physics.
In calculus, the integral of sec x is utilized to figure out a broad array of challenges, including figuring out the antiderivative of the secant function and evaluating definite integrals that
involve the secant function. It is further used to calculate the derivatives of functions which include the secant function, such as the inverse hyperbolic secant function.
In physics, the secant function is used to model a broad range of physical phenomena, including the motion of objects in circular orbits and the behavior of waves. The integral of sec x is used to calculate the potential energy of objects in circular orbits and to analyze waves that involve changes in frequency or amplitude.
Formula for the Integral of Sec x
The formula for the integral of sec x is:
∫ sec x dx = ln |sec x + tan x| + C
At which point C is the constant of integration.
Proof of the Integral of Sec x
To prove the formula for the integral of sec x, we will use an approach called integration by substitution. The trick is to multiply and divide the integrand by (sec x + tan x):
∫ sec x dx = ∫ sec x (sec x + tan x) / (sec x + tan x) dx = ∫ (sec^2 x + sec x tan x) / (sec x + tan x) dx
Next, we will make the substitution u = sec x + tan x. Differentiating with respect to x, we get:
du/dx = sec x tan x + sec^2 x
so that du = (sec^2 x + sec x tan x) dx, which is exactly the numerator of the integrand. Substituting these expressions into the integral, we get:
∫ sec x dx = ∫ (1/u) du
Next, we can apply the formula for the integral of 1/u du, which is ln |u| + C, to obtain:
∫ sec x dx = ln |u| + C
Finally, substituting back in for u = sec x + tan x, we get the final formula for the integral of sec x:
∫ sec x dx = ln |sec x + tan x| + C
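As a quick sanity check of the result (not part of the proof itself), the derivative of ln |sec x + tan x| should equal sec x. A numerical comparison in Python, using a central finite difference at an arbitrary point:

```python
import math

def antiderivative(x):
    # ln |sec x + tan x|
    return math.log(abs(1 / math.cos(x) + math.tan(x)))

# Central-difference estimate of the derivative at x = 0.7
x, h = 0.7, 1e-6
numerical_derivative = (antiderivative(x + h) - antiderivative(x - h)) / (2 * h)
sec_x = 1 / math.cos(x)
print(abs(numerical_derivative - sec_x) < 1e-6)  # True
```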
Ultimately, the integral of sec x is an essential concept in calculus and physics. It is used to calculate the area under the curve of the secant function and is crucial for working out a wide array of problems in calculus and physics. The formula for the integral of sec x is ln |sec x + tan x| + C, and its derivation involves the use of integration by substitution and trigonometric identities.
Understanding the characteristics of the integral of sec x and how to use it to solve problems is essential for learners and professionals in fields such as engineering, physics, and mathematics. By mastering the integral of sec x, anyone can use it to solve problems and gain deeper insight into the complex workings of the world around us.
If you require help understanding the integral of sec x or any other math concept, consider contacting us at Grade Potential Tutoring. Our experienced teachers are available online or in person to give personalized and effective tutoring services to help you succeed. Connect with us today to schedule a tutoring lesson and take your math abilities to the next level.
How To Determine Emissivity From Transmission Percentage And Wavelength - Piping Technology System
Emissivity is a critical property in thermal radiation analysis, defining how efficiently a material emits infrared energy relative to a perfect blackbody at the same temperature. This parameter
plays a vital role in a wide range of industries, from material science to thermal imaging, and is essential for accurate heat transfer calculations. Determining the emissivity of a material involves
understanding its interaction with electromagnetic waves, particularly how much energy it transmits, absorbs, and reflects across different wavelengths.
In many practical applications, such as infrared thermography and spectroscopy, transmission percentage and wavelength data are commonly available, making it possible to calculate emissivity
indirectly. This article will provide a detailed guide on how to determine emissivity using transmission percentage and wavelength information, including the necessary theoretical background and
practical steps.
By understanding the relationship between transmission, reflection, and absorption, we can leverage Kirchhoff’s Law of Thermal Radiation to calculate emissivity with precision. Accurate emissivity
measurements are crucial for designing thermally efficient materials, improving industrial processes, and ensuring reliable thermal imaging in scientific and engineering fields.
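A minimal sketch of the indirect calculation the article describes. It assumes that, at a given wavelength, the fractions satisfy transmittance + reflectance + absorptance = 1, and that Kirchhoff's law (at thermal equilibrium) equates emissivity with absorptance. The function name and sample values are illustrative, not taken from any specific instrument or standard:

```python
def spectral_emissivity(transmittance, reflectance):
    """Emissivity at one wavelength from fractional transmittance and
    reflectance: absorptance = 1 - transmittance - reflectance, and
    Kirchhoff's law equates emissivity with absorptance."""
    absorptance = 1.0 - transmittance - reflectance
    if not (0.0 <= transmittance <= 1.0 and 0.0 <= reflectance <= 1.0
            and absorptance >= 0.0):
        raise ValueError("fractions must lie in [0, 1] and sum to at most 1")
    return absorptance

# e.g. a sample transmitting 20% and reflecting 10% at some wavelength
print(round(spectral_emissivity(0.20, 0.10), 6))  # 0.7
```

In practice both transmittance and reflectance vary with wavelength, so this calculation is repeated per spectral band rather than done once for the whole spectrum.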
Evaluating Machine Learning Models using Hyperparameter Tuning
This paper focuses on evaluating machine learning models based on hyperparameter tuning. Hyperparameter tuning is choosing a set of optimal hyperparameters for a learning algorithm. A hyperparameter is a model argument whose value is set before the learning process begins. The key to machine learning algorithms is hyperparameter tuning.
Hyperparameter types:
• K in K-NN
• Regularization constant, kernel type, and constants in SVMs
• Number of layers, number of units per layer, regularization in neural network
Generalization (test) error of learning algorithms has two main components:
• Bias: error due to simplifying model assumptions
• Variance: error due to randomness of the training set
The trade-off between these components is determined by the complexity of the model and the amount of training data. The optimal hyperparameters help to avoid under-fitting (training and test error are both high) and over-fitting (training error is low but test error is high).
Workflow: One of the core tasks of developing an ML model is to evaluate its performance. There are multiple stages in developing an ML model for use in software applications.
Figure 1: Workflow
Evaluation: Model evaluation and ongoing evaluation may use different metrics. For example, model evaluation may include accuracy or AUROC, and ongoing evaluation may include customer lifetime value. Also, the distribution of the data might change between the historical data and live data. One way to detect distribution drift is through continuous model monitoring.
Hyper-parameters: Model parameters are learned from data and hyper-parameters are tuned to get the best fit. Searching for the best hyper-parameter can be tedious, hence search algorithms like grid
search and random search are used.
Figure 2: Hyper-parameter tuning vs Model training
Model Evaluation
Evaluation Metrics: These are tied to ML tasks. There are different metrics for supervised algorithms (classification and regression) and unsupervised algorithms. For example, the performance of binary classification is measured using accuracy, AUROC, log-loss, and KS.
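To make two of those binary-classification metrics concrete, here is a from-scratch sketch of accuracy and log-loss on invented predictions (a real project would use a library such as scikit-learn rather than hand-rolling these):

```python
import math

y_true = [1, 0, 1, 1, 0, 0, 1, 0]                   # invented labels
y_prob = [0.9, 0.2, 0.7, 0.4, 0.1, 0.6, 0.8, 0.3]   # invented model scores

# Accuracy: fraction of correct labels at a 0.5 threshold
y_pred = [1 if p >= 0.5 else 0 for p in y_prob]
accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

# Log-loss (cross-entropy): penalizes confident wrong probabilities
log_loss = -sum(t * math.log(p) + (1 - t) * math.log(1 - p)
                for t, p in zip(y_true, y_prob)) / len(y_true)

print(accuracy)  # 0.75
print(round(log_loss, 3))
```

Note how the two disagree on which errors matter: accuracy only sees the thresholded labels, while log-loss punishes the 0.6 score on a true 0 more than a borderline 0.4 on a true 1.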
Evaluation Mechanism: Model selection refers to the process of selecting the right model that fits the data. This is done using test evaluation metrics. The results from the test data are passed
back to the hyper-parameter tuner to find the optimal hyperparameters.
Figure 3: Evaluation Mechanism
Hyperparameter Tuning
Hyperparameters: Vanilla linear regression does not have any hyperparameters. Variants of linear regression (ridge and lasso) have regularization as a hyperparameter. The decision tree has max depth
and min number of observations in leaf as hyperparameters.
Optimal Hyperparameters: Hyperparameters control the over-fitting and under-fitting of the model. Optimal hyperparameters often differ for different datasets. To get the best hyperparameters the
following steps are followed:
1. For each proposed hyperparameter setting the model is evaluated
2. The hyperparameters that give the best model are selected.
Hyperparameters Search: Grid search picks out a grid of hyperparameter values and evaluates all of them. Guesswork is necessary to specify the min and max values for each hyperparameter. Random
search evaluates a random sample of points from the search space; it is often more efficient than grid search. Smart hyperparameter tuning picks a few hyperparameter settings, evaluates the validation
metrics, adjusts the hyperparameters, and re-evaluates the validation metrics. Examples of smart hyperparameter tuning are Spearmint (hyperparameter optimization using Gaussian processes) and Hyperopt
(hyperparameter optimization using tree-based estimators).
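The difference between the two basic search strategies can be sketched in a few lines of plain Python (the score function below is a made-up stand-in for a validation metric; in practice it would be a cross-validated model score). The hard-coded ranges also illustrate the guesswork involved in choosing min and max values:

```python
import itertools
import math
import random

def score(lr, reg):
    # Hypothetical validation score, peaking at lr = 1e-2, reg = 1e-1.
    return -((math.log10(lr) + 2) ** 2 + (math.log10(reg) + 1) ** 2)

# Grid search: evaluate every combination on a fixed grid (3 x 3 = 9 runs).
grid = {"lr": [1e-4, 1e-3, 1e-2], "reg": [1e-3, 1e-1, 1e1]}
best_grid = max(itertools.product(grid["lr"], grid["reg"]),
                key=lambda p: score(*p))

# Random search: spend the same budget on points sampled from the ranges.
random.seed(0)
samples = [(10 ** random.uniform(-5, 0), 10 ** random.uniform(-4, 2))
           for _ in range(9)]
best_rand = max(samples, key=lambda p: score(*p))

print("grid search best  :", best_grid, score(*best_grid))
print("random search best:", best_rand, score(*best_rand))
```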
Find bar chart values
As a complete novice, I managed to create a bar chart of area measurements from two groups with the following code. Now I’d like to know the actual values (i.e., means) used for each bar and then
test for group differences. Any help or suggestions with these questions are greatly appreciated.
roi1 <- ggplot(SampleData, aes(Gp2, fill = Gp2))
roi1 + geom_bar()
roi1 + stat_summary_bin(aes(y = lh.area.frontal), fun.y = "mean", geom = "bar", show.legend = TRUE)
I'll go for something like
## some data
Data = data.frame(value = c(rnorm(10000),rnorm(10000,5)), groups = rep(c('a','b'), each = 10000))
## the mean values for each group
DataMeans <- sapply(split(Data$value, Data$groups), mean)
## checking that the output corresponds with the simulated data (mean of a = 0, mean of b = 5)
## output
> DataMeans
a b
0.004444622 5.013350561
Thanks for the quick response Fer,
This was extremely helpful. For calculating group means on multiple variables (e.g., 10+), do I have to list out each variable or is there a quicker way to specify multiple variables?
As an aside, what is the 'c' in the first line of code used for, what does it mean? I frequently see this in a lot of answers but have no idea what it means...
If with multiple variables you mean that your grouping variable has several different values, the split function will do the job for you:
## some data
Data = data.frame(value = c(rnorm(10000),rnorm(10000,5), rnorm(10000,10)), groups = rep(c('a','b','d'), each = 10000))
## the mean values for each group
DataMeans <- sapply(split(Data$value, Data$groups), mean)
## checking that the output corresponds with the simulated data (mean of a = 0, mean of b = 5)
## output
> DataMeans
a b d
0.00673895 4.99842741 9.98537344
But if you mean more variables (columns), and want the mean value for all combinations of the variables' values, then you need to create a list of the columns (in the argument called 'f'):
## some data
Data = data.frame(value = c(rnorm(10000),rnorm(10000,5), rnorm(10000,10)), groups = rep(c('a','b','d'), each = 10000), gender = rep(c('M','F'), each = 15000))
## the mean values for each group
DataMeans <- sapply(split(Data$value, f = list(Data$groups, Data$gender)), mean)
## checking that the output corresponds with the simulated data (mean of a = 0, mean of b = 5)
## output
> DataMeans
a.F b.F d.F a.M b.M d.M
NaN 4.995547802 10.007088587 0.003088001 4.982541806 NaN
As you can see, it reports NaN for the combinations that do not exist in the data set, so if you want only those that exist, then:
> DataMeans[is.finite(DataMeans)]
b.F d.F a.M b.M
4.995547802 10.007088587 0.003088001 4.982541806
Edit: the 'c' comes from 'concatenate'. That means exactly this: I am concatenating three randomly generated sets of 10000 values with a normal distribution but different means (0, 5 and 10), that is,
adding one after another. It is used for creating vectors. So, if you want to create a vector with the values 4, 6, 8 and 10, then you just type Vector <- c(4,6,8,10)
Thanks again for the helpful answers. As another concrete example, I have a dataset with variables (columns):
ID, Group, Sex, Age, ICV, lh.volume.1-- lh.volume.11
I want to calculate Group means of:
lh.volumes.1 to lh.volume.11
Then I want to test these means statistically. Right now there are 2 groups. So I plan to use t.test(variable~Group, dataset).
Regarding both of these goals, is there a way to choose multiple variables with a wildcard or regex? That way I can include all the lh.volumes at once?
Minor pedantic point: the c() function is actually named for “combine” since it “combines values into a vector or list” — same idea, slightly different vocab: c function - RDocumentation
You can find the documentation for a function in R by typing ?functionName or help("functionName") at the console.
You might be interested in this thread, which has lots of great resources for getting up to speed when you’re new to R: What's your favorite intro to R?
Thanks for the clarification and link. I will definitely check it out in the near future!
My biggest question at the moment is how to code a t-test of Group against 11 other variables. Do I have to code each t-test separately or is there a way to do this all at once?
Thanks for the point. I am sure I have read that 'concatenate' thing in some books a long time ago (and in the function help, they call '...' the objects to be 'concatenated'). That "combine" sounds a
bit weird to me: if I think of 'combining' A, B and C into a vector, the first thing that would come to mind would be the vector {ABC, ACB, BAC, BCA, CAB, CBA}.
But it is how it is
Interesting! For me, “concatenation” is dominated by associations with string concatenation so I’d think of “concatenating” A, B, and C as resulting in “ABC” (a vector of length 1). I agree that
“combine” isn’t entirely felicitous either, due to the obvious associations with combinatorics, as you point out. Naming things is hard! Thankfully, after you’ve had R in your brain for a good long
while you stop really thinking about what c() means other than what it does.
Since this is a substantially separate question from the one that started this thread, can you please post it as a new topic? If you want your new topic to be automatically linked to this one, click
the little New Topic button in the popup that appears.
(There’s also some helpful guidelines for posting questions here in the FAQs: FAQ - Posit Community)
Self-Adjoint Linear Operators over Complex Vector Spaces
Recall from the Self-Adjoint Linear Operators page that if $V$ is a finite-dimensional nonzero inner product space and if $T \in \mathcal L (V)$ then $T$ is said to be self-adjoint if $T = T^*$.
In the following proposition we will see that if $V$ is a complex inner product space and $<T(v), v> = 0$ for all $v \in V$, then $T$ is identically equal to the zero operator.
Proposition 1: If $V$ is a complex inner product space and $T \in \mathcal L (V)$ is such that $<T(v), v> = 0$ for all vectors $v \in V$, then $T = 0$.
• Proof: Let $V$ be a complex inner product space. Then for all $u, w \in V$ we can write $<T(u), w>$ as:
\quad <T(u), w> = \frac{<T(u+w), u + w> - <T(u - w), u - w>}{4} + \frac{<T(u + iw), u + iw> - <T(u - iw), u - iw>}{4} i
• Let $v_1 = u + w$, $v_2 = u - w$, $v_3 = u + iw$ and $v_4 = u - iw$. Then the equation above can be rewritten as:
\quad <T(u), w> = \frac{<T(v_1), v_1> - <T(v_2), v_2>}{4} + \frac{<T(v_3), v_3> - <T(v_4), v_4>}{4} i
• Now suppose that $<T(v), v> = 0$ for all vectors $v \in V$. Then the right-hand side of the equation above reduces to zero and $<T(u), w> = 0$ for all $u, w \in V$, which implies that $T = 0$. $\blacksquare$
With this proposition, we will see in the next corollary that if $V$ is a complex inner product space then $T$ is self-adjoint if and only if the inner product $<T(v), v>$ is real for all
vectors $v \in V$.
Corollary 1: If $V$ is a complex inner product space and $T \in \mathcal L (V)$ then $T$ is self-adjoint if and only if $<T(v), v> \in \mathbb{R}$ for all vectors $v \in V$.
• Proof: $\Leftarrow$ Let $V$ be a complex inner product space and suppose that $<T(v), v> \in \mathbb{R}$ for all vectors $v \in V$. Then $<T(v), v> = \overline{<T(v), v>}$ and so:
\quad 0 = <T(v), v> - \overline{<T(v), v>} \\ \quad 0 = <T(v), v> - <v, T(v)> \\ \quad 0 = <T(v), v> - <T^*(v), v> \\ \quad 0 = <(T - T^*)(v), v>
• Therefore $<(T - T^*)(v), v> = 0$ for all $v \in V$. By Proposition 1, this implies that $T - T^* = 0$ and so $T = T^*$, that is, $T$ is self-adjoint.
• $\Rightarrow$ Suppose that $T$ is self-adjoint. Then $T = T^*$. From above we still have that:
\quad <T(v), v> - \overline{<T(v), v>} = <(T - T^*)(v), v> \\ \quad <T(v), v> - \overline{<T(v), v>} = <0(v), v> \\ \quad <T(v), v> - \overline{<T(v), v>} = 0 \\
• Therefore $<T(v), v> = \overline{<T(v), v>}$ which implies that $<T(v), v> \in \mathbb{R}$ for all $v \in V$. $\blacksquare$
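As an informal numerical sanity check of Corollary 1 (an illustration added here, not part of the argument above), one can verify with NumPy that $<T(v), v>$ has vanishing imaginary part when the matrix representing $T$ is Hermitian, and generally does not otherwise:

```python
import numpy as np

rng = np.random.default_rng(1)

# A random complex matrix and its Hermitian (self-adjoint) part.
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
H = (A + A.conj().T) / 2  # satisfies H = H*

v = rng.normal(size=4) + 1j * rng.normal(size=4)

herm_val = np.vdot(v, H @ v)  # the inner product <H(v), v>, up to conjugation convention
gen_val = np.vdot(v, A @ v)

print("self-adjoint:", herm_val)  # imaginary part vanishes (up to rounding)
print("generic     :", gen_val)   # generally has a nonzero imaginary part
```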
Non-Gaussian limiting behavior of the percolation threshold in a large system
We study short-range percolation models. In a finite box we define the percolation threshold as a random variable obtained from a stochastic procedure used in actual numerical calculations, and study
the asymptotic behavior of these random variables as the size of the box goes to infinity. We formulate very general conditions under which in two dimensions rescaled threshold variables cannot
converge to a Gaussian and determine the asymptotic behavior of their second moments in terms of a widely used definition of correlation length. We also prove that in all dimensions the finite-volume
percolation thresholds converge in probability to the percolation threshold of the infinite system. The convergence result is obtained by estimating the rate of decay of the limiting distribution
function's tail in terms of the correlation length exponent ν. The proofs use exponential estimates of crossing probabilities. Substantial parts of the proofs apply in all dimensions.
Waiting time distribution - Physics of Risk
Last time we saw that interarrival (or, more generally, inter-event) times in the Poisson process follow an exponential distribution. Inter-event times tell us how much time has passed since the
last event, but we are often also interested in the time until the next event given that time \( T \) has passed since the previous event.
In terms of the original problem we could ask: what is the expected time until the next student arrives? Let us assume that \( 5 \) minutes have passed since the arrival of the last
student. Recall that \( 4 \) students arrive per hour (meaning \( 15 \) minutes between them on average). The intuitive, and wrong, answer would be \( 10 \) minutes.
Waiting time distribution of the Poisson process
Why \( 10 \) minutes is the wrong answer should be clear from the microscopic model of the Poisson process: there is a fixed probability for an event to occur during each time step. This probability
doesn't depend on anything but the time scale. This means that for the Poisson process:
$$p \left( \tau | T \right) = p( \tau ) = \lambda \exp\left(- \lambda \tau\right).$$
Implying that the expected time is \( \frac{1}{\lambda} \).
Another way to obtain the same intuition is to look at the survival function of the exponential distribution:
$$P(X > t) = \exp(-\lambda t).$$
If we are stating that until time \( T \) no event has happened, we are effectively ignoring all events that could have happened in that time. We are interested in survival from time \( T \) onwards,
so we simply rescale the survival function, so that it would be equal unity at time \( T \). In case of the exponential distribution:
$$P(X > t > T) = \exp(-\lambda t) \cdot \exp(\lambda T) = \exp\left[-\lambda \cdot (t-T)\right].$$
Recalling that we care about the distribution of \( \tau = t-T \), we see that it is distributed exactly the same as the inter-event time.
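This memoryless property is easy to check by simulation (a quick sketch using the numbers from above: a rate of 4 arrivals per hour and 5 minutes already waited):

```python
import random

random.seed(42)
lam = 4 / 60   # 4 arrivals per hour, expressed per minute
T = 5          # minutes already waited

# Draw many interarrival times, keep those exceeding T, and inspect the
# residual time tau = X - T among the survivors.
samples = [random.expovariate(lam) for _ in range(200_000)]
residuals = [x - T for x in samples if x > T]

print("mean interarrival time:", sum(samples) / len(samples))      # ~15 min
print("mean residual time    :", sum(residuals) / len(residuals))  # ~15 min
```

Both means come out near 15 minutes: having already waited 5 minutes does not shorten the expected remaining wait.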
Survival function of the uniform distribution
Survival function of the uniform distribution (let the interval of possible values be \( [ 0, 2 \langle\tau\rangle ] \)) is given by:
$$P(X > t) = 1 - \frac{t}{2\langle\tau\rangle} .$$
If we have waited for \( T \) until the event (let \( T < 2 \langle\tau\rangle \)), then the survival function:
$$P(X > t > T) = \frac{2\langle\tau\rangle - t}{2\langle\tau\rangle - T} = \frac{2\langle\tau\rangle - T - (t-T)}{2\langle\tau\rangle - T} .$$
Recalling that we care about the distribution of \( \tau = t-T \), we see that it is distributed almost the same as the inter-event time. The distribution remains uniform, but the
interval of possible values has shrunk to \( [ 0, 2 \langle\tau\rangle - T ] \).
Survival function of the normal distribution
Survival function of the normal distribution (let \( \sigma = 1 \)):
$$P(X > t) = \frac{1}{2} \left[ 1 - \operatorname{erf}\left(\frac{t - \langle\tau\rangle}{\sqrt{2}}\right) \right] .$$
Now the error function, \( \operatorname{erf} \), is a rather complicated thing. We won't be able to do the same analytical exploration we did with the exponential distribution or the uniform
distribution survival functions. But the core idea would be that \( P(X > T) \) is just a constant, which scales the survival function:
$$P(X > t > T) = \frac{1}{P(X > T)} \cdot P(X > t).$$
When we consider the survival of \( \tau = t - T \) we simply shift the survival function to the left. For the exponential distribution such a shift doesn't change anything, because the exponential function
has translation symmetry (it doesn't experience any ageing effects - the mean waiting time is the same as the mean interarrival time). Other distributions, such as the normal distribution, do not
have this symmetry and thus exhibit ageing effects - the mean waiting time is always smaller than the mean interarrival time.
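The ageing effect can also be demonstrated numerically (an illustrative sketch; the standard deviation of 3 for the normal case is an arbitrary choice, and \( T \) is kept well below the mean):

```python
import random
import statistics

random.seed(7)
mean_tau = 15.0  # mean interarrival time
T = 5.0          # time already waited (T < mean_tau)

def mean_residual(sampler, n=200_000):
    """Mean of tau = X - T among samples with X > T."""
    residuals = [x - T for x in (sampler() for _ in range(n)) if x > T]
    return statistics.fmean(residuals)

expo = lambda: random.expovariate(1 / mean_tau)
norm = lambda: random.gauss(mean_tau, 3.0)

print("exponential:", mean_residual(expo))  # ~15: no ageing
print("normal     :", mean_residual(norm))  # ~10: mean wait shrinks as T grows
```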
Interactive app
Use the app below to explore the waiting time distribution. You can choose time duration you have already waited for \( T \), the mean of the interarrival distribution \( \langle\tau\rangle \) and
the interarrival distribution itself. Note that for the normal and uniform distributions setting \( T > \langle\tau\rangle \) may not make much sense, but otherwise feel free to explore! Note that
the red curve corresponds to the interarrival time distribution (its PDF), while the blue curve represents the waiting time distribution (its PDF).
What is duality in Boolean?
Duality Principle: The dual of a Boolean expression can easily be obtained by interchanging sums and products and interchanging 0 and 1. The duality principle states that a Boolean identity remains
valid when both sides are replaced by their duals.
What is the principle of duality in set theory? Explain with an example.
If a statement is the same as its own dual, it is known as self-dual. Example: consider an equality of sets that contains intersections and
unions, and apply the complement operator to it; every set is then replaced by its complement.
What is the dual of A + [B + (AC)] + D?
To get the dual of the given expression, change each OR to an AND operator and each AND to an OR operator: the dual of A+[B+(AC)]+D is A[B(A+C)]D.
What is the dual of the equation 1 . 1 = 1?
(c) 1 . 1 = 1 : it is a true statement asserting that "true and true evaluates to true". (d) 0 + 0 = 0 : (d) is the dual of (c): it is a true statement asserting, correctly, that "false or false evaluates to
false". The statement is the full equation, including the = sign.
What does dual mean in math?
In mathematics, a duality translates concepts, theorems or mathematical structures into other concepts, theorems or structures, in a one-to-one fashion, often (but not always) by means of an
involution operation: if the dual of A is B, then the dual of B is A.
What is duality principle?
The principle of duality states that, starting with a Boolean relation, another Boolean relation can be derived by: 1. Changing each OR sign (+) to an AND sign (.). 2. Changing each AND sign (.) to an OR
sign (+). The principle of duality is used in Boolean algebra to derive the dual of a Boolean expression.
What is a dual function?
Dual means having two parts, functions, or aspects.
What is the dual of boolean expression?
The dual of a Boolean expression is the expression one obtains by interchanging addition and multiplication and interchanging 0’s and 1’s.
What is duality in math?
duality, in mathematics, principle whereby one true statement can be obtained from another by merely interchanging two words. It is a property belonging to the branch of algebra known as lattice
theory, which is involved with the concepts of order and structure common to different mathematical systems.
How is the duality principle used in Boolean algebra?
The duality principle (or principle of duality) is an important property used mainly in proving various theorems in Boolean algebra. The duality principle states that in a two-valued Boolean
algebra, the dual of an algebraic expression can be obtained by interchanging all the OR and AND operators and by replacing 1 with 0 and 0 with 1.
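The duality transformation and the duality principle can be checked mechanically with a small truth-table script (a sketch using Python's & and | operators on 0/1 values; the absorption law below is just one example identity):

```python
from itertools import product

def dual(expr):
    """Swap & with |, and 0 with 1 -- the duality transformation."""
    table = {"&": "|", "|": "&", "0": "1", "1": "0"}
    return "".join(table.get(ch, ch) for ch in expr)

def is_identity(lhs, rhs, names=("a", "b")):
    """Check lhs == rhs for every 0/1 assignment of the variables."""
    return all(eval(lhs, {}, env) == eval(rhs, {}, env)
               for bits in product((0, 1), repeat=len(names))
               for env in [dict(zip(names, bits))])

# The absorption law holds, and so does its dual, as the principle predicts.
print(is_identity("a | (a & b)", "a"))              # True
print(is_identity(dual("a | (a & b)"), dual("a")))  # True: a & (a | b) == a
```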
Which is the best description of the duality principle?
Duality Principle: The duality principle states that a Boolean identity remains valid when both sides are replaced by their duals. Some Boolean expressions and their corresponding duals are given
in the table below:
How can Boolean algebra be converted to another type of Operation?
According to this principle, if we have postulates or theorems of Boolean Algebra for one type of operation then that operation can be converted into another type of operation (i.e., AND can be
converted to OR and vice-versa) just by interchanging ‘0 with 1’, ‘1 with 0’, ‘ (+) sign with (.) sign’ and ‘ (.) sign with (+) sign’.
When are operators and variables of an equation called duals?
“If interchanging the operators and variables of an equation or function produces no change in the output of the equation, they are called ‘duals’.”
Conservation laws for one-layer shallow water wave systems
The problem of correspondence between symmetries and conservation laws for one-layer shallow water wave systems in plane flow, axisymmetric flow and dispersive waves is investigated from the
composite variational principle point of view, in development of the study [N.H. Ibragimov, A new conservation theorem, Journal of Mathematical Analysis and Applications, 333(1) (2007) 311-328]. This
method is devoted to construction of conservation laws of non-Lagrangian systems. Composite principle means that in addition to original variables of a given system, one should introduce a set of
adjoint variables in order to obtain a system of Euler-Lagrange equations for some variational functional. After studying Lie point and Lie-Bäcklund symmetries, we obtain new local and nonlocal
conservation laws. Nonlocal conservation laws comprise nonlocal variables defined by the adjoint equations to shallow water wave systems. In particular, we obtain infinite local conservation laws and
potential symmetries for the plane flow case.
• Conservation laws
• Shallow water wave systems
• Symmetry groups
On Optimal Linear Redistribution of VCG Payments in Assignment of Heterogeneous Objects
There are p heterogeneous objects to be assigned to n competing agents (n > p) each with unit demand. It is required to design a Groves mechanism for this assignment problem satisfying weak budget
balance, individual rationality, and minimizing the budget imbalance. This calls for designing an appropriate rebate function. Our main result is an impossibility theorem which rules out linear
rebate functions with non-zero efficiency in heterogeneous object assignment. Motivated by this theorem, we explore two approaches to get around this impossibility. In the first approach, we show
that linear rebate functions with non-zero efficiency are possible when the valuations for the objects are correlated. In the second approach, we show that rebate functions with non-zero efficiency are
possible if linearity is relaxed.
How to generate lower bounding constraints?
Awaiting user input
I would like to implement a branch-and-cut decomposition by minimizing a lower bounding variable z. The idea is to solve a relaxed version of my model (*) and then iteratively modify it by
integrating lower bounding constraints on z (**).
I have therefore looked into Model.cbCut(), but my problem is that when I solve the relaxation of my problem, I never enter the branch with where == GRB.Callback.MIPNODE. How can I implement this
feature?
Thank you.
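For readers unfamiliar with the scheme described above, the iterative idea (minimize a lower-bounding variable z, then repeatedly add cuts z >= f(xk) + f'(xk)(x - xk)) can be sketched on a toy one-dimensional problem. The snippet below is purely illustrative: it replaces the solver's master problem with a brute-force search over a grid, and the function f is a made-up stand-in.

```python
# Toy cutting-plane loop: minimize f(x) = (x - 3)^2 over [0, 10] through a
# lower-bounding variable z constrained by cuts z >= f(xk) + f'(xk) * (x - xk).
f = lambda x: (x - 3.0) ** 2
fprime = lambda x: 2.0 * (x - 3.0)

grid = [i / 100 for i in range(1001)]  # candidate x values in [0, 10]
cuts = []                              # (slope, intercept) pairs

x_k = 0.0
for _ in range(30):
    cuts.append((fprime(x_k), f(x_k) - fprime(x_k) * x_k))
    # "Master problem": minimize the piecewise-linear lower model over the grid.
    lower = lambda x: max(a * x + b for a, b in cuts)
    x_k = min(grid, key=lower)
    if f(x_k) - lower(x_k) < 1e-6:  # upper bound meets lower bound: stop
        break

print("x ~", x_k, " f(x) =", f(x_k))
```

In the Gurobi version, the master problem would be the MIP itself, and each cut would be added either with model.cbCut inside a callback or with addConstr between successive solves.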
• Hi Sophie,
Could you share a code snippet of how you are currently trying to implement the callback?
Best regards,
• Hi Sophie,
Maybe the Gurobi example callback.py is helpful to see how a callback is included.
Best regards,
• Hi Jaromil, Marika,
I don't have any difficulty including a callback. It actually enters the callback; what it never does is enter the where == GRB.Callback.MIPNODE branch. Anyway, here is my code snippet, but I
don't think the error comes from there:
def mycallback(model, where):
    if where == GRB.Callback.MIPNODE:
        status = model.cbGet(GRB.Callback.MIPNODE_STATUS)
        if status == GRB.OPTIMAL:
            rel = model.cbGetNodeRel(self.x_vars)
            fx = f(rel)
            model.cbCut(self.z >= fx)

model = createModel()
• Hi Sophie,
Could you also show the Gurobi log when you run your code? Are there any nodes in the B&B?
Best regards,
• Hi Jaromil,
Yes that's the thing. No nodes are displayed at all. The model finds an optimal solution and stops. Thank you for your follow up.
Gurobi 9.5.0 (linux64) logging started Thu Jun 9 17:52:36 2022
Set parameter LogFile to value "bb_cuts.log"
Gurobi Optimizer version 9.5.0 build v9.5.0rc5 (linux64)
Thread count: 40 physical cores, 80 logical processors, using up to 8 threads
Optimize a model with 7211 rows, 6998 columns and 2425072 nonzeros
Model fingerprint: 0xd77be502
Variable types: 6637 continuous, 361 integer (361 binary)
Coefficient statistics:
Matrix range [6e-04, 2e+01]
Objective range [1e+00, 1e+00]
Bounds range [1e+00, 2e+01]
RHS range [6e+01, 6e+01]
Warning: Completing partial solution with 361 unfixed non-continuous variables out of 361
User MIP start did not produce a new incumbent solution
User MIP start violates constraint GTV2_minConstr[0] by 56.000000000
Presolve removed 6635 rows and 693 columns (presolve time = 5s) ...
Presolve removed 6635 rows and 693 columns (presolve time = 10s) ...
Presolve removed 6635 rows and 693 columns
Presolve time: 11.93s
Presolved: 576 rows, 6305 columns, 2411802 nonzeros
Variable types: 6305 continuous, 0 integer (0 binary)
Root barrier log...
Ordering time: 0.00s
Barrier statistics:
AA' NZ : 4.133e+04
Factor NZ : 4.162e+04 (roughly 3 MB of memory)
Factor Ops : 8.004e+06 (less than 1 second per iteration)
Threads : 8
Objective Residual
Iter Primal Dual Primal Dual Compl Time
0 0.00000000e+00 -4.69047754e+02 7.60e+03 0.00e+00 5.47e-01 17s
1 0.00000000e+00 -3.58632418e+02 7.86e+02 9.71e-17 7.41e-02 17s
2 0.00000000e+00 -8.63793196e+01 6.15e+01 2.57e-16 9.07e-03 17s
3 0.00000000e+00 -1.54655594e+01 5.22e+00 3.40e-16 1.31e-03 17s
4 0.00000000e+00 -8.33221405e-01 1.49e-13 2.03e-16 6.32e-05 17s
5 0.00000000e+00 -8.33221405e-04 4.26e-14 2.86e-17 6.32e-08 17s
6 0.00000000e+00 -8.33221405e-07 5.51e-14 2.69e-20 6.32e-11 17s
7 0.00000000e+00 -8.33221405e-10 6.39e-14 3.63e-23 6.32e-14 17s
Barrier solved model in 7 iterations and 16.84 seconds (3.49 work units)
Optimal objective 0.00000000e+00
Root crossover log...
3 DPushes remaining with DInf 0.0000000e+00 17s
6308 PPushes remaining with PInf 0.0000000e+00 17s
0 PPushes remaining with PInf 0.0000000e+00 17s
Push phase complete: Pinf 0.0000000e+00, Dinf 0.0000000e+00 17s
Root simplex log...
Iteration Objective Primal Inf. Dual Inf. Time
6311 0.0000000e+00 0.000000e+00 0.000000e+00 17s
6311 0.0000000e+00 0.000000e+00 0.000000e+00 17s
Root relaxation: objective 0.000000e+00, 6311 iterations, 2.70 seconds (1.36 work units)
Nodes | Current Node | Objective Bounds | Work
Expl Unexpl | Obj Depth IntInf | Incumbent BestBd Gap | It/Node Time
* 0 0 0 0.0000000 0.00000 0.00% - 17s
Explored 1 nodes (6311 simplex iterations) in 17.26 seconds (3.66 work units)
Thread count was 8 (of 80 available processors)
Solution count 1: 0
Optimal solution found (tolerance 1.00e-04)
Best objective 0.000000000000e+00, best bound 0.000000000000e+00, gap 0.0000%
User-callback calls 499, time in user-callback 0.03 sec
Gurobi 9.5.0 (linux64) logging started Thu Jun 9 17:53:30 2022
Set parameter LogFile to value "bb_cuts.log"
Gurobi Optimizer version 9.5.0 build v9.5.0rc5 (linux64)
Thread count: 40 physical cores, 80 logical processors, using up to 8 threads
Optimize a model with 7211 rows, 6998 columns and 2425072 nonzeros
Model fingerprint: 0xd77be502
Variable types: 6637 continuous, 361 integer (361 binary)
Coefficient statistics:
Matrix range [6e-04, 2e+01]
Objective range [1e+00, 1e+00]
Bounds range [1e+00, 2e+01]
RHS range [6e+01, 6e+01]
Warning: Completing partial solution with 361 unfixed non-continuous variables out of 361
User MIP start did not produce a new incumbent solution
User MIP start violates constraint GTV2_minConstr[0] by 56.000000000
Presolve removed 6635 rows and 693 columns (presolve time = 5s) ...
Presolve removed 6635 rows and 693 columns
Presolve time: 8.27s
Presolved: 576 rows, 6305 columns, 2411802 nonzeros
Variable types: 6305 continuous, 0 integer (0 binary)
Root barrier log...
Ordering time: 0.00s
Barrier statistics:
AA' NZ : 4.133e+04
Factor NZ : 4.162e+04 (roughly 3 MB of memory)
Factor Ops : 8.004e+06 (less than 1 second per iteration)
Threads : 8
Objective Residual
Iter Primal Dual Primal Dual Compl Time
0 0.00000000e+00 -4.69047754e+02 7.60e+03 0.00e+00 5.47e-01 12s
1 0.00000000e+00 -3.58632418e+02 7.86e+02 9.71e-17 7.41e-02 12s
2 0.00000000e+00 -8.63793196e+01 6.15e+01 2.57e-16 9.07e-03 12s
3 0.00000000e+00 -1.54655594e+01 5.22e+00 3.40e-16 1.31e-03 12s
4 0.00000000e+00 -8.33221405e-01 1.49e-13 2.03e-16 6.32e-05 12s
5 0.00000000e+00 -8.33221405e-04 4.26e-14 2.86e-17 6.32e-08 12s
6 0.00000000e+00 -8.33221405e-07 5.51e-14 2.69e-20 6.32e-11 12s
7 0.00000000e+00 -8.33221405e-10 6.39e-14 3.63e-23 6.32e-14 12s
Barrier solved model in 7 iterations and 12.30 seconds (3.49 work units)
Optimal objective 0.00000000e+00
Root crossover log...
3 DPushes remaining with DInf 0.0000000e+00 12s
6308 PPushes remaining with PInf 0.0000000e+00 12s
0 PPushes remaining with PInf 0.0000000e+00 12s
Push phase complete: Pinf 0.0000000e+00, Dinf 0.0000000e+00 12s
Root simplex log...
Iteration Objective Primal Inf. Dual Inf. Time
6311 0.0000000e+00 0.000000e+00 0.000000e+00 12s
6311 0.0000000e+00 0.000000e+00 0.000000e+00 13s
Root relaxation: objective 0.000000e+00, 6311 iterations, 2.56 seconds (1.36 work units)
Nodes | Current Node | Objective Bounds | Work
Expl Unexpl | Obj Depth IntInf | Incumbent BestBd Gap | It/Node Time
* 0 0 0 0.0000000 0.00000 0.00% - 12s
Explored 1 nodes (6311 simplex iterations) in 12.74 seconds (3.66 work units)
Thread count was 8 (of 80 available processors)
Solution count 1: 0
Optimal solution found (tolerance 1.00e-04)
Best objective 0.000000000000e+00, best bound 0.000000000000e+00, gap 0.0000%
User-callback calls 444, time in user-callback 0.01 sec
• Hi Sophie,
Gurobi is able to get rid of all discrete variables in the presolve step.
Presolved: 576 rows, 6305 columns, 2411802 nonzeros
Variable types: 6305 continuous, 0 integer (0 binary)
Thus, your model is essentially an LP. Gurobi recognizes that and solves it as an LP. The logging is still MIP logging because the original problem is a MIP. You could try checking for the MIPSOL
callback instead of the MIPNODE callback. Alternatively, you could turn off Presolve.
Best regards,
• Hi Jaromil,
I tried both solutions you proposed and it does not change anything, it still solves the LP and stops there.
Using the MIPSOL callback, it complains that I am not in a MIPNODE callback when using cbGetNodeRel and cbCut:
Error code 10011: where != GRB.Callback.MIPNODE.
Setting Presolve=0 did not have any impact either; see the log below:
Gurobi 9.5.0 (linux64) logging started Fri Jun 10 14:49:01 2022
Set parameter LogFile to value "bb_cuts.log"
Gurobi Optimizer version 9.5.0 build v9.5.0rc5 (linux64)
Thread count: 40 physical cores, 80 logical processors, using up to 8 threads
Optimize a model with 7211 rows, 6998 columns and 2425072 nonzeros
Model fingerprint: 0xd77be502
Variable types: 6637 continuous, 361 integer (361 binary)
Coefficient statistics:
Matrix range [6e-04, 2e+01]
Objective range [1e+00, 1e+00]
Bounds range [1e+00, 2e+01]
RHS range [6e+01, 6e+01]
Warning: Completing partial solution with 361 unfixed non-continuous variables out of 361
User MIP start did not produce a new incumbent solution
User MIP start violates constraint GTV2_minConstr[0] by 56.000000000
Variable types: 6637 continuous, 361 integer (361 binary)
Root relaxation: objective 0.000000e+00, 913 iterations, 0.56 seconds (0.59 work units)
Nodes | Current Node | Objective Bounds | Work
Expl Unexpl | Obj Depth IntInf | Incumbent BestBd Gap | It/Node Time
H 0 0 0.0000000 0.00000 0.00% - 1s
0 0 0.00000 0 14 0.00000 0.00000 0.00% - 1s
Explored 1 nodes (913 simplex iterations) in 1.70 seconds (0.84 work units)
Thread count was 8 (of 80 available processors)
Solution count 1: 0
Optimal solution found (tolerance 1.00e-04)
Best objective 0.000000000000e+00, best bound 0.000000000000e+00, gap 0.0000%
User-callback calls 63, time in user-callback 0.00 sec
• Sure. Here is a link. Thank you for your help.
• The issue is indeed that your model is so easy for the solver that it terminates before reaching the first MIPNODE callback.
You could try taking the solution of the first solve, i.e., the solution point with objective value 0 that you get right now, and adding the first constraint \(z \geq f(x)\) to the model a priori.
Are you sure the model should be such that all discrete variables can be presolved away? Is it possible that the currently obtained solution is already the optimal one, i.e., the one you would get
from your algorithm?
• I see. Well, here is the thing: I previously tried a model that minimized a function \(f'\) directly, representing a linear combination of binary variables associated with additional flow
constraints. However, that old model was really hard to solve for bigger instances, and the MIP ran indefinitely without finding even a single feasible solution. This is why we thought we could
instead solve the relaxed problem (which is an easy problem for Gurobi) and then iteratively add constraints (optimality cuts) on this new variable \(z\), gradually tightened upward in the course
of the algorithm, approaching the objective function from below.
The currently obtained solution is optimal if we take into account only the continuous part of the problem, but it is definitely not the optimal one in terms of \(f\); of that I am sure. I guess
I am missing something here, but I don't know what.
Could you elaborate on what you mean when you write "add the first constraint to the model a priori"? I am not able to express my new function \(f\) in the model itself because it is not linear
anymore. In addition, if I use the solution found with objective 0 and add the constraint, it will be violated immediately and the MIP start won't be accepted.
Could you elaborate on what you mean when you write "add the first constraint to the model a priori"?
My suggestion would be that you solve your current model to optimality first. Then extract the optimal solution point values via accessing the X attribute. Once you have the X values of the
optimal solution point, you can apply your nonlinear function \(f\) to compute whatever you are computing. Next, you add the next constraint \(z \geq f(x)\) to your model and re-solve the model.
A possible Python code might look something like:
if model.Status == GRB.OPTIMAL:
    rel = model.getAttr("X", model.getVars())
    fx = f(rel)
    model.addConstr(z >= fx)
The above can be done in a loop until some termination criterion is met.
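The loop can be illustrated end-to-end with a tiny self-contained toy (no Gurobi involved; the convex \(f\), its derivative, and the grid-search "master solve" are hypothetical stand-ins chosen only to show the cut-accumulation mechanics of approaching the objective from below):

```python
# Toy sketch of the iterative lower-bounding loop: minimize a convex f(x)
# on an interval by accumulating cuts z >= f(xk) + f'(xk) * (x - xk) and
# re-solving a "master" problem. The master LP solve is replaced here by a
# grid search over the cut envelope, purely for illustration.

def f(x):
    return x * x

def fprime(x):
    return 2.0 * x

cuts = []  # each cut is (slope a, intercept b), meaning z >= a*x + b

def solve_master(lo=-1.0, hi=2.0, steps=2001):
    """Crude stand-in for the master LP: minimize the piecewise-linear
    lower envelope max_k(a_k * x + b_k) over a grid on [lo, hi]."""
    best_x, best_z = lo, float("inf")
    for i in range(steps):
        x = lo + (hi - lo) * i / (steps - 1)
        z = max(a * x + b for a, b in cuts)
        if z < best_z:
            best_x, best_z = x, z
    return best_x, best_z

x = 1.5  # initial point
for _ in range(30):
    a = fprime(x)
    cuts.append((a, f(x) - a * x))  # tangent cut, touches f at the current x
    x, z_lb = solve_master()
    if f(x) - z_lb < 1e-9:  # lower bound meets the true value: stop
        break
```

Each iteration the master solution is cut off by a new constraint `z >= f(x)` linearized at the current point, so `z_lb` tightens upward toward the true optimum, which is the behavior the lazy-constraint scheme above is meant to reproduce inside the solver.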
I am not able to express my new function in the model itself because it is not linear anymore.
Could you briefly explain what your \(f\) function does? From your code, I would guess that the result of \(\texttt{f(rel)}\) is just some real number. If this is true, then you can apply my
suggestion as described above.
In addition, if I use the solution found with objective 0 and add the constraint, it will be violated immediately and the MIP start won't be accepted.
This is correct, but the same would happen if you cut off the solution via lazy constraints. I think it might be worth a try, given the fact that your starting model seems to be easy to solve.