Yale Alumni Magazine - Dan Spielman
Infinite complexity
Sometimes, even mathematicians rely on intuition. Professor Daniel Spielman’s helped him solve an unsolvable problem.
Richard Panek teaches writing at Goddard College. He is the author of several books on science, most recently The Trouble with Gravity: Solving the Mystery beneath Our Feet.
One day in 2008, a visiting professor dropped by the office of mathematician Daniel Spielman ’92 and asked, as he usually did during the alternating semesters he was teaching at Yale, what Spielman
was working on.
Spielman told him. Gil Kalai, a mathematician himself then on leave from Hebrew University, said Spielman’s work reminded him of an old, enduring problem in theoretical physics. Kalai described it.
Spielman said he might be able to solve it.
You don’t understand, Kalai said. The question isn’t whether Daniel Spielman can solve the problem. The question isn’t even if anyone can solve the problem—because the consensus among theoretical
physicists had long ago coalesced around an answer of No. The problem has no solution.
Spielman listened politely. He was no theoretical physicist; he freely acknowledged his ignorance of the subject. What he was, instead, was a mathematician, and he had a mathematician’s hunch.
This past spring the National Academy of Sciences rewarded Spielman and two collaborators with the 2021 Michael and Sheila Held Prize, one of the highest honors in computer science. Spielman’s hunch
had paid off: one area of math can apply to disparate fields in ways you’d never anticipate.
“This happens a lot,” says Spielman, Sterling Professor of Computer Science and professor of statistics and data science. “There are all sorts of places where people have been burrowing really deeply
into tunnels in one area of mathematics, and at some point they come along and they hit another area of mathematics, and you discover connections.”
The tunnel that Spielman was digging when he had his fateful discussion with Kalai involved “networks”—social networks, computer networks, communication networks. In graphing a social network, for
instance, mathematicians will represent a single person as a dot, or a “node.” If they connect that node to another node, they draw a straight line, or an “edge.” Count the edges extending from any
single node and you know its number of connections. Mathematicians, Spielman says, “call this ‘degree.’ Physicists call it ‘valence.’ Other people call it ‘How many friends do you have?’”
If the network you’re studying is a small town, the graph might be relatively simple. But what if you’re graphing a shipping service’s possible routes? Plot the collection points (front porches,
outlet stores, drop boxes), then the airport hubs through which the packages pass, then the possible destinations (every street address and PO box in the United States and Canada). Then add up every
conceivable path a single package might take. The answer might not be an infinity, and the time an algorithm needs to plot the most efficient path might not be an eternity. But if you want to send a
birthday gift to Grandma that absolutely, positively has to be there overnight?
The possible solution that Spielman explored was “sparsification”—speeding up the algorithm by simplifying networks. “Which meant,” Spielman says, “dropping most connections”—or, in this metaphor, rerouting traffic onto only a few major roads.
As a practical matter, shipping companies wouldn’t survive without such shortcuts. Rather than figure out the most efficient path of each package in advance, they can leave the local logistics to
regional dispatch centers, which might be operating according to algorithms of their own.
For Spielman, however, the challenge wasn’t to deliver the goods. It was more abstract: take a virtually infinitely complex system and see how far he could sparsify it without sacrificing its
integrity. In 2008—around the time he was meeting with Kalai—he and his student Nikhil Srivastava ’10PhD (now at the University of California–Berkeley) were finishing “Graph Sparsification by
Effective Resistances,” which appeared in the SIAM Journal on Computing.
“We were able to show you can actually approximate any network,” Spielman says. “This was a sort of crazy result. It wasn’t necessarily practical”—it wouldn’t guarantee the most efficient route door
to door, so to speak—“but it was mathematically very intriguing.”
In using this word, Spielman is evoking the fundamental distinction in his field between math that is promising and math that is practical—between an algorithm that, after further burrowing, might
eventually solve real-life problems or intersect with another tunnel, and an algorithm that’s ready to use today.
Another tunnel is what his intuition whispered to him while Kalai was describing the long-standing problem that theoretical physicists believed had no solution. The Kadison-Singer problem—formulated
by mathematicians Richard Kadison and Isadore Singer in 1959—asks whether a mathematically complete description of a quantum subsystem might allow a mathematically complete description of the quantum
system as a whole.
Spielman himself struggles to find a layperson’s explanation. The best he can do, he says, is: the Kadison-Singer problem “asks whether a large number of measurements made on a part of a quantum
system uniquely determine the system as a whole.” (He also says, “If given a choice between attempting to explain the original KS problem and defending myself from a pride of hungry tigers, I’d give
myself better odds with the tigers.”)
One reason that theoretical physicists thought the answer must be No is that the core principle of quantum mechanics is uncertainty—for instance, the impossibility of simultaneously measuring a
particle’s position and velocity. You can’t use a subsystem of uncertainty to capture the system as a whole … can you?
Why not? thought Spielman. Wasn’t he doing the same thing, sort of? True, he wasn’t working in the quantum realm. But pruning a virtual infinity of choices in a system to describe a subsystem
couldn’t be all that different from using a subsystem to describe a system.
His confidence only grew when Kalai referred Spielman to a paper he thought might help. It included a statement about vectors (or, in their terminology, degrees) and matrices (systems) that Spielman
and his collaborators had already assumed was true. Tunnels were, indeed, converging.
“It looked like something we understood incredibly well,” Spielman says. Maybe he would never comprehend the implications for the quantum realm. “But at least this one version was something I could
understand. So it was at least clear that the problem was fundamental in many different areas.”
The decision was easy. We’ll work on this, he thought. It’ll go pretty fast.
Five years passed.
Five not altogether unpleasant years: in 2008, Spielman and his collaborator Shang-Hua Teng won the Gödel Prize; in 2010 he received the Nevanlinna Prize; in 2012 he joined the inaugural class of
Simons Investigators, a fellowship providing $660,000 in research funding over five years; in 2012, he received a MacArthur Fellowship. All these honors, though, recognized work that was receding
farther and farther into the past, while the Kadison-Singer problem threatened to swallow his (and his colleagues’) future.
Vacations became distractions. Holidays became hurdles. If his wife told him she was going out for drinks with some nodes of hers, Spielman would think, Good. I’ll stay home and work on the
Kadison-Singer problem, and I don’t have to worry about my wife having fun.
“It always felt like we were making progress,” he says. “We kept coming up with conjectures that were pretty and looked true.” Still, he couldn’t ignore the possibility that their tunnel was going in
circles, at least regarding the Kadison-Singer problem.
“Maybe we should find a way out,” Spielman finally decided. He and his collaborators, Adam Marcus (now at the École polytechnique fédérale de Lausanne in Switzerland) and Srivastava, collated the
hundreds of pages of emails they’d sent one another over the years and asked: Can we use them to do anything else that’s important?
Maybe, Spielman thought, they could use their math to find new applications for Ramanujan graphs—a sometime inspiration for Spielman’s past work, including his doctoral thesis at MIT. (He majored in
mathematics and computer science at Yale.) He knew that Ramanujan graphs were extremely efficient. “If you want to be able to transmit messages through a network, these are the networks you would
want,” Spielman says. “There’s no interference”—no bottlenecks where two messages block each other—“and there’s short paths between everything.” Srivastava soon located a paper that suggested a way
to generate Ramanujan graphs. It was still missing a crucial step—but Spielman’s team realized they could derive that step from the techniques they’d been developing on their own for the
Kadison-Singer problem.
“About a week later,” Spielman says, “we knew how to make Ramanujan graphs” using their own math. “It was a shockingly fast development. At which point I thought, ‘Okay, this is great. Even if we
haven’t solved the Kadison-Singer problem, we’ve developed all this new mathematics, and we’ve used it to do something, and maybe someone else will use it to solve the Kadison-Singer problem.”
Fortunately for Spielman and his two collaborators, nobody else did.
Instead, they did. “It was only by taking what we thought was a diversion,” Spielman says, “that a few months later we realized we actually had enough to solve the Kadison-Singer problem.”
To a layperson the word solve can be misleading. Spielman didn’t find the answer to the Kadison-Singer problem itself. Rather, he discovered that the answer to the question he had addressed in that
long-ago office meeting with Kalai—the answer to the question of whether a solution to the Kadison-Singer problem was even possible—the answer that physicists had come to believe was No, was Yes.
Which is to say, mathematically, Spielman’s result is promising.
The citation for the Held Prize recognizes both achievements: generating new constructions of Ramanujan graphs and showing that the Kadison-Singer problem is potentially solvable. But the Held Prize
citation also places their work within a specific context—one that a scientist harboring a mathematician’s hunch might especially appreciate: the collaboration “uncovered a deep new connection
between linear algebra, geometry of polynomials, and graph theory that has inspired the next generation of theoretical computer scientists.”
Deeper tunnels. More degrees. New nodes.
Since the paper went online in 2013, mathematicians in multiple fields have been trying to wrest something practical from the promise inherent in the Kadison-Singer problem. Spielman and his
collaborators have themselves written a paper, still undergoing peer review, that they think might advance that discussion.
In the meantime, the burrowing continues.
FM Synthesis
Introduction to Computer Music
Carnegie Mellon University
Download a PDF of this document
Frequency modulation (FM) is a synthesis technique based on the simple idea of periodic modulation of the signal frequency. That is, the frequency of a carrier sinusoid is modulated by a modulator
sinusoid. The peak frequency deviation, AKA depth of modulation, expresses the strength of the modulator’s effect on the carrier oscillator’s frequency. FM synthesis was invented by John Chowning
(1973), and became very popular due to its ease of implementation and computationally low cost, as well as its (somewhat surprisingly) powerful ability to create realistic and interesting sounds.
To begin with, let’s look at the equation for a simple frequency controlled sine oscillator. Often, this is written
\[y(t) = A \sin(2 \pi f t)\]
where \(f\) is the frequency in Hz. However, this only works for fixed frequency and amplitude. To deal with a time-varying frequency, we must integrate the frequency function \(f\) to determine the
accumulated phase at time \(t\):
\[y(t) = A(t) \sin \left(\int_{0}^{t} 2 \pi f(x) \, {\rm d}x\right)\]
Frequency modulation uses a rapidly changing function
\[f(t) = C + D \sin(2 \pi M t)\]
where \(C\) is the carrier, a frequency offset that is in many cases is the fundamental or “pitch”. \(D\) is the depth of modulation that controls the amount of frequency deviation (called
modulation), and \(M\) is the frequency of modulation in Hz. Plugging this into the integral equation above and simplifying gives the equation for FM:
\[y(t) = A \sin \left(2 \pi C t + \frac{D}{M} \sin (2 \pi M t)\right)\]
Note that this equation is not exactly right. We assume that the phase of the modulation does not matter. Thus, while the integral of the \(\sin\) function is actually \(-\cos\), we keep a \(\sin\) term because \(\sin\) is the same as \(\cos\) with a phase shift. In practice, the phase can make a subtle difference, but nearly everyone ignores it.
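To make the construction concrete, here is a minimal NumPy sketch (Python rather than the Nyquist code used later; the function name is mine) that synthesizes an FM signal by numerically integrating the instantaneous frequency, exactly as in the integral form above:

import numpy as np

def fm_signal(A, C, D, M, dur=1.0, sr=44100):
    # FM by direct integration: the instantaneous frequency
    # f(t) = C + D*sin(2*pi*M*t) is accumulated into a phase,
    # which then drives a single sine oscillator.
    t = np.arange(int(dur * sr)) / sr
    f_inst = C + D * np.sin(2 * np.pi * M * t)    # frequency in Hz at each sample
    phase = 2 * np.pi * np.cumsum(f_inst) / sr    # numeric integral of 2*pi*f(t)
    return A * np.sin(phase)

# D/M is the index of modulation; here I = 800/100 = 8.
y = fm_signal(A=1.0, C=100.0, D=800.0, M=100.0, dur=0.5)

Because the phase comes from integrating the frequency, the effective coefficient of the modulating sinusoid inside the carrier's argument is \(D/M\), the index of modulation introduced below.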
\(I = \frac{D}{M}\) is known as the index of modulation. When \(D \ne 0\), sidebands appear in the spectrum of the signal, above and below the carrier frequency \(C\), at multiples of \(\pm M\). In other words, we can write the set of frequency components as \(C \pm k M\), where \(k=0,1,2,\ldots\). The number of significant components increases with \(I\), the index of modulation.
Negative Frequencies
According to these formulas, some frequencies will be negative. This can be interpreted as merely a phase change: \(\sin(-x) = - \sin(x)\) or perhaps not even a phase change: \(\cos(-x) = \cos(x)\).
Since we tend to ignore phase, we can just ignore the sign of the frequency and consider negative frequencies to be positive. We sometimes say the negative frequencies “wrap around” (zero) to become
positive. The main caveat here is that when frequencies wrap around and add to positive frequencies of the same magnitude, the components may not add in phase. The complexity of all this tends to
give FM signals a complex behavior as the index of modulation increases, adding more and more components, both positive and negative.
Harmonic Ratio
The human ear is very sensitive to harmonic vs. inharmonic spectra. Perceptually, harmonic spectra are very distinctive because they give a strong sense of pitch. The harmonic ratio [Truax 1977] is
the ratio of the modulating frequency to the carrier frequency, such that \(H=\frac{M}{C}\). If \(H\) is a rational number, the spectrum is harmonic; if it is irrational, the spectrum is inharmonic.
Rational Harmonicity
If \(H=1\) the spectrum is harmonic and the carrier frequency is also the fundamental, i.e., \(F_0 = C\). To show this, remember that the frequencies will be \(C \pm k M\), where \(k=0,1,2,\ldots\), but if \(H=1\), then \(M=C\), so the frequencies are \(C \pm k C\), or simply \(k C\). This is the definition of a harmonic series: multiples of some fundamental frequency \(C\).
When \(H = \frac{1}{m}\), and \(m\) is a positive integer, \(C\) instead becomes the \(m\)’th component (harmonic) because the spacing between harmonics is \(M = C/m\), which is also the fundamental:
\(F_0 = M = C/m\).
With \(H=2\), we will get sidebands at \(C \pm 2 k C\) (where k=0,1,2,...), thus omitting all even harmonics - which is ideal for modeling a clarinet.
Irrational Harmonicity
If \(H\) is irrational, the negative frequencies that wrap around at 0 Hz tend to land between the positive frequency components, making the spectrum denser. With \(H=\frac{1}{m}\), where \(m\) is a positive irrational number, the partials will cluster more and more around \(C\) as \(m\) increases (because \(M\) will decrease, and so will the spacing between components), yielding sounds that have no distinct pitch and that can mimic drums and gongs.
FM Spectra & Bessel Functions
The sidebands introduced by FM are governed by Bessel functions of the first kind and \(n\)th order, denoted \(J_n(I)\), where \(I\) is the index of modulation. The Bessel functions determine the magnitudes and signs of the frequency components in the FM spectrum. These functions look a lot like damped sine waves, as can be seen in Figure 1.
Figure 1: Bessel functions of the first kind, orders 0 to 3.
A few insights as to how Bessel functions can help explain why FM synthesis sounds the way it does:
• \(J_0(I)\) decides the amplitude of the carrier.
• \(J_1(I)\) controls the first upper and lower sidebands.
• Generally, \(J_n(I)\) governs the amplitudes of the \(n\)th upper and lower sidebands.
• Higher-order Bessel functions start from zero more and more gradually, so higher-order sidebands only have significant energy when \(I\) is large.
• The spectral bandwidth increases with \(I\); the upper and lower sidebands grow toward higher and lower frequencies, respectively.
• As \(I\) increases, the energy of each sideband varies much like a damped sinusoid.
Index of Modulation
The index of modulation, \(I=\frac{D}{M}\), allows us to relate the depth of modulation, \(D\), the modulation frequency, \(M\), and the index of the Bessel functions. In practice, this means that if
we want a spectrum that has the energy of the Bessel functions at some index \(I\), with frequency components separated by \(M\), then we must choose the depth of modulation according to the relation
\(I=\frac{D}{M}\) [F. R. Moore 1990]. As a rule of thumb, the number of sidebands is roughly \(I + 1\). That is, if \(I = 10\) we get \(10 + 1 = 11\) sidebands above, and 11 sidebands
below the carrier frequency. In theory, there are infinitely many sidebands at \(C \pm k M\), where k=0,1,2,... if the modulation is non-zero, but the intensity of sidebands falls rapidly toward zero
as \(k\) increases, so this rule of thumb considers significant sidebands.
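As a quick numerical check of this rule of thumb (a sketch assuming SciPy is available; the 0.01 cutoff is an arbitrary choice):

import numpy as np
from scipy.special import jv    # Bessel function of the first kind, order n

I = 8.0                              # index of modulation
orders = np.arange(0, 20)
amps = np.abs(jv(orders, I))         # magnitude of the k-th sideband pair
print(int(np.sum(amps > 0.01)))      # number of significant orders, close to I + 1

For \(I = 8\) the count comes out a little above the \(I + 1\) estimate, which is the expected behavior of a rule of thumb: the exact number depends on how much sideband energy you consider significant.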
Nyquist & FM
In Nyquist we can use the built-in function
fmosc(pitch, modulation, table, phase)
for FM synthesis - about which the manual says: “Returns a sound which is table oscillated at pitch plus modulation for the duration of the sound modulation.” The table and phase parameters are
optional and often omitted: the default table is a sinusoid, and the initial phase generally does not change the resulting sound.
When you create an FM instrument, keep in mind exactly how the modulation parameter given to fmosc() relates to the frequency function \(f(t) = C + D \sin(2 \pi M t)\) behind the FM equation. Namely, modulation denotes the deviation term: \(modulation = D \sin(2 \pi M t)\).
For example, to produce a harmonic sound with about 10 harmonics and a fundamental of 100 Hz, we can choose \(C = M = 100\). Since the number of harmonics is 10, we need 9 sidebands, so \(I + 1 = 9\), or \(I = 8\). From \(I=\frac{D}{M}\), or \(D=I M\), we get \(D = 8 * 100 = 800\). Finally, we can write fmosc(hz-to-step(100), 800 * hzosc(100)).
Examples of FM Signals
Figures 2 & 3 show examples of FM signals. The X-axes on the plots represent time - here denoted in multiples of \(\pi\).
Figure 2: A = 1, C = 242 Hz, D = 2, M = 40 Hz.
Figure 3: A = 2, C = 210 Hz, D = 10, M = 35 Hz.
JCPC 2019 Discussion - Problem C - Flow Sources I
Iterate through the nodes, then for each node you need to check two cases:
- If node u has two edges of opposite direction, then the flow cannot come from one of these two edges, so you should mark all nodes that come through these two edges to u as bad.
- If node u has two edges of the same direction, then the flow should come to u through one of these two edges, so you should mark everything other than the nodes coming through these two edges as bad.
Now the remaining part is to figure out how to mark the bad nodes efficiently using DFS. Let me know if you need help.
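For what it's worth, here is a rough Python sketch of the marking primitive described above, assuming the network is a tree so that "everything that comes through an edge to u" is exactly the component on the far side of that edge (the names and representation are mine, not from the problem statement):

def mark_side(adj, start, banned, bad):
    # Mark every node reachable from `start` without crossing back
    # through `banned`, i.e., the side of the tree that reaches the
    # node `banned` via the edge (banned, start).
    stack = [start]
    seen = {start, banned}
    while stack:
        v = stack.pop()
        bad[v] = True
        for w in adj[v]:
            if w not in seen:
                seen.add(w)
                stack.append(w)

Each call costs time proportional to the size of that side, so some care (for instance, handling the "everything other than" case by complementing instead of re-traversing) is needed to keep the whole pass near-linear.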
Thank you very much this was helpful, I got AC.
For the most part, OptiStruct uses the same checks as HyperMesh. However, OptiStruct uses its own method of calculating Aspect Ratio, and it does not support 3D element checks.
Aspect Ratio
Ratio between the minimum and maximum side lengths.
3D elements are evaluated by treating each face of the element as a 2D element, finding the aspect ratio of each face, and then returning the most extreme aspect ratio found.
Chordal Deviation
Chordal deviation of an element is calculated as the largest distance between the centers of element edges and the associated surface. 2nd order elements return the same chordal deviation as 1st
order, when the corner nodes are used due to the expensive nature of the calculations.
Figure 1. Chordal Deviation
Interior Angles
Maximum and minimum values are evaluated independently for triangles and quadrilaterals.
Jacobian
Deviation of an element from its ideal or "perfect" shape, such as a triangle’s deviation from equilateral. The Jacobian value ranges from 0.0 to 1.0, where 1.0 represents a perfectly shaped element. The determinant of the Jacobian relates the local stretching of the parametric space which is required to fit it onto the global coordinate space.
HyperMesh CFD evaluates the determinant of the Jacobian matrix at each of the element’s integration points, also called Gauss points, or at the element’s corner nodes, and reports the ratio
between the smallest and the largest. In the case of Jacobian evaluation at the Gauss points, values of 0.7 and above are generally acceptable. You can select which method of evaluation to use,
Gauss point or corner node, from the Check Element settings.
Length (min)
Minimum element lengths are calculated using one of two methods:
□ The shortest edge of the element. This method is used for non-tetrahedral 3D elements.
□ The shortest distance from a corner node to its opposing edge (or face, in the case of tetra elements); referred to as "minimal normalized height".
Figure 2. Length (Min)
Skew
Skew of triangular elements is calculated by finding the minimum angle between the vector from each node to the opposing mid-side, and the vector between the two adjacent mid-sides at each node of the element.
Figure 3. Skew of Triangular Element
The minimum angle found is subtracted from ninety degrees and reported as its skew.
Warpage
Amount by which an element, or in the case of solid elements, an element face, deviates from being planar. Since three points define a plane, this check only applies to quads. The quad is divided into two trias along its diagonal, and the angle between the trias’ normals is measured.
Warpage of up to five degrees is generally acceptable.
Figure 4. Warpage
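As an illustration of the warpage calculation, here is a minimal NumPy sketch (the function name and the choice of diagonal are mine, not Altair's):

import numpy as np

def quad_warpage_deg(p0, p1, p2, p3):
    # Split the quad (corners ordered around the element) into two
    # trias along the p0-p2 diagonal and return the angle between
    # their normals, in degrees.
    n1 = np.cross(p1 - p0, p2 - p0)
    n2 = np.cross(p2 - p0, p3 - p0)
    c = np.dot(n1, n2) / (np.linalg.norm(n1) * np.linalg.norm(n2))
    return np.degrees(np.arccos(np.clip(c, -1.0, 1.0)))

# A unit quad with one corner lifted 0.05 out of plane:
pts = [np.array(p, float) for p in [(0, 0, 0), (1, 0, 0), (1, 1, 0.05), (0, 1, 0)]]
print(quad_warpage_deg(*pts))   # about 4 degrees, inside the 5-degree guideline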
4 Of The Most Common Encryption Methods
Trustless, borderless, immutable. Cryptocurrency as we know it wouldn't exist without cryptography. And one of its use cases is blockchain encryption methods.
Deciding on one is the first step toward the three goals of the blockchain trilemma: security, decentralization, and scalability. So far, there's no absolute solution that gets all three right, which is why there are so many blockchains, each with different encryption methods.
While they might sound abstract and technical, we use them all the time with crypto wallets and payments.
Quick Takes
• Encryption is the application of cryptography that protects blockchain data from other users.
• Cryptography involves both encryption and decryption (such as wallet private keys). Encryption doesn't necessarily enable decryption, which makes immutability possible (e.g., for transaction records).
• Common encryption methods are AES, RSA, Triple DES, and ECDSA/EdDSA.
• Cryptography is an obfuscation method. Other common related methods include hashing, ZK-proofs, stealth addresses, ring signatures, and mixnets.
Cryptography 101: What Is Encryption?
Encryption refers to computing methods that transform plain text into unreadable content or ciphertext. This "cipher" consists of a message, a code word (or key), and a pattern. This pattern can be
as complex as letter-position swaps, math equations, or substitution tables. The goal is to send the message from party A to B without revealing it to other parties.
It might sound simple because you only need to share the key with both users. That's what's called symmetric encryption: a single key shared with every user who is meant to receive the message.
The problem is: the more people have this key, the easier it is for the wrong person to find it. It's the opposite of decentralization:
• Decentralized means that you can't control a system from a single entity. You need to control at least half of all "nodes" (users, devices, keys...)
• Symmetric encryption means that, because all use the same key, one compromised node is enough to compromise everything.
Imagine you share a Metamask wallet with nine trusted people, so you all use the same private key/seed phrase. Even if you all use it responsibly, there's ten times more risk that someone outside the
group finds it. Either by accident, fraud, or blockchain's lack of privacy.
If you know the key, you can decrypt or revert the cipher.
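To make the shared-key idea concrete, here is a toy Python sketch, a repeating-key XOR cipher. It is trivially breakable and only illustrates that, in a symmetric scheme, one key performs both encryption and decryption:

from itertools import cycle

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # XOR is its own inverse, so the same call encrypts and decrypts.
    return bytes(b ^ k for b, k in zip(data, cycle(key)))

ciphertext = xor_cipher(b"send 2 BTC to wallet A", b"sharedkey")
print(xor_cipher(ciphertext, b"sharedkey"))   # b'send 2 BTC to wallet A'

Anyone who obtains "sharedkey" can read every message encrypted with it, which is exactly the scaling problem described above.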
To prevent this, there's a second type: asymmetric encryption. It uses a complex math function to generate two linked keys. It's slower than the alternative, and the logic behind it is beyond the scope of this post.
It looks like this:
Message + Key1 = Ciphertext
Ciphertext + Key2 = Message
Where Key1 is shared and Key2 is secret.
And it basically means:
• There's a public key that you share when sending the message and a private key you keep secret.
• The message recipient also has a public key to respond and a private key to decrypt your message.
• If you receive a message, you just need their public key and your private key to decipher it. If you send a message, the recipient needs your public key but not your private one.
• You cannot decipher messages only knowing both private keys (unless you spend ridiculous time and computer power on trial and error).
Similarly in crypto, you can see anyone's wallet and transactions. But you can't access or use those funds without the private key. It's as secure as your ability to keep your key unknown.
Top 4 Most Used Encryption Methods
Symmetric or asymmetric, every encryption method falls into one of those categories. The latter is the most common in crypto, although symmetrical encryption is also used depending on the purpose.
Here are the best four with examples:
RSA is an asymmetric encryption method invented by three MIT computer scientists: Ronald Rivest, Adi Shamir, and Leonard Adleman. It's based on complex mathematics, and while secure, it's slower than other methods. In a nutshell, it works like this:
• Suppose you want to send a message from User A to B, but an omnipresent User C can see all data sent in between. The goal is to keep the information confidential between A and B.
• The RSA algorithm generates a public and private key for each. User A can share its public key and also see B's public key. Neither of them ever knows nor shares each other's private keys.
• According to RSA, it's easy to multiply but hard to factorize large numbers. This creates a one-way equation that's very difficult to revert. Thus, users can derive public keys from private ones
without being reverted to private keys.
• First, the receiver user B generates his keys and shares the public one. User A uses User B's public key to encrypt his message. After sending it, User B will use the private key to decrypt User
A's message.
• User C can steal the ciphertext and public key. But because they don't have private ones, it's unreadable.
In asymmetric encryption, using the same public key to revert ciphertext results in decryption failure.
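A textbook-RSA sketch in Python makes the one-way math tangible. The primes here are tiny and there is no padding, so this is purely illustrative; never use it for real security:

p, q = 61, 53
n = p * q                      # 3233, the public modulus
phi = (p - 1) * (q - 1)        # 3120
e = 17                         # public exponent, coprime with phi
d = pow(e, -1, phi)            # private exponent: e*d = 1 (mod phi)

message = 1234                 # textbook RSA needs message < n
ciphertext = pow(message, e, n)    # encrypt with the public key (e, n)
print(pow(ciphertext, d, n))       # decrypt with the private key (d, n) -> 1234

Recovering d from (e, n) alone requires factoring n, which is easy for 3233 but infeasible when p and q are hundreds of digits long.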
RSA is used as the foundation of ECDSA, which is an improved and lighter version used for cryptocurrencies. Although a few use the original RSA (e.g., Hedera Hashgraph, HBAR), it helps understand the
next method:
The Elliptic Curve Digital Signature Algorithm (ECDSA) uses different math for the RSA logic. The function is harder to revert than RSA's factorization, and the values draw an elliptic curve when
expressed on the charts (hence the name). While more difficult, Elliptic Curve Cryptography (ECC) is a simpler equation, so you can achieve the same security with shorter keys.
ECDSA has the right balance of security and encryption speed. It's why it's used by Bitcoin, Ethereum, Ripple, Litecoin, BNB Coin, Cosmos, and a dozen others. Others like Cardano, Monero, and
Polkadot use the faster EdDSA variant (Edwards-curve Digital Signature Algorithm).
Triple DES
Triple DES (3DES) builds on one of the first-ever encryption standards: the 1970s Data Encryption Standard. DES is a symmetric cipher that operates on 64-bit blocks (with a 56-bit key) and was widely used until it was cracked in 1997. Instead of one-way functions, it converts text into 64-bit blocks and permutes the bits, a bit like swapping alphabet letters.
Triple DES uses the same standard, except it applies three times using a different key each time. This new DES exponentially increased cracking difficulty, but it also tripled the encryption time.
While many tech companies have used it, 3DES was cracked in 2016 before most cryptocurrencies appeared.
A faster alternative to 3DES was AES.
The Advanced Encryption Standard hasn't been cracked yet, as it's exponentially safer than the other two.
It's different from DES in that it uses 128-bit blocks instead of 64 (with keys of up to 256 bits), and instead of "Feistel Networks," it uses SPN (Substitution Permutation Networks). It's the standard used in Bitcoin Core to encrypt wallets (where the password is the decryption key).
AES is broadly used outside cryptos, such as in the NSA or the browser's HTTP protocol. If you want to see what it looks like, click on the padlock next to the search bar, then "Connection Is
Secure," and "Certificate is valid." Browser's AES uses two keys called "SHA-256 and SHA-1 fingerprints."
Common Encryption Alternatives in Crypto
Encryption is just one part of cryptography, which is one of the many obfuscation techniques. To encrypt is to make information unreadable. To obfuscate is to make the meaning or information intent
confusing or incoherent.
Blockchains use different obfuscation methods that can't be classified as encryption but fulfill similar goals. Here are some of them:
ZK Proofs
Zero-Knowledge proofs are essential in Ethereum to remain efficient while decentralized. Given a problem or puzzle, ZK-proofs allows one user to verify that they know the solution, without revealing
it to the user verifying it. This helps protect sensitive information like passwords, identities, or financial information.
For example, computers "know" when you entered the wrong or right password, even though the servers don't store any. That's because computers convert your input with a one-way function (irreversible)
into asymmetric ciphertext. This cipher preserves the same patterns from the original information, which means that by comparing both, you're going to find the same relationships.
Hashing
Encryption is a two-way method, using either the same key or a private one. Hashing is a one-way method that takes block information and converts it to a fixed string of numbers and letters. Entering the same data generates the same hash, but reverting a hash to the same data would require years of trial and error. Arguably impossible.
No matter the algorithm, every coin with a blockchain uses hashing. Bitcoin's is called SHA-256. Run any message through it and you get a fixed-length string; add just one space at the end of the message, and you get a completely different string.
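You can see this avalanche effect with a few lines of Python:

import hashlib

for msg in ("Trustless, borderless, immutable", "Trustless, borderless, immutable "):
    print(hashlib.sha256(msg.encode()).hexdigest())

Both digests are 64 hex characters long, but the single trailing space makes them completely unrelated.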
In order to add a new block to a cryptocurrency chain, this data block has to include the hash of the previous block.
Is Encryption Enough To Secure Blockchain?
It's hard to imagine a secure blockchain that doesn't use some encryption method. They're essential for secure decentralization, just as it is for Internet browsing. But it's not infallible.
Security in crypto involves more challenges beyond protecting information. Maybe one day, quantum computers will be able to crack most encryption methods within seconds. Maybe they don't scale well,
or they're too slow or expensive to be used.
EdDSA and ECDSA have worked wonderfully across different blockchains. But other problems threaten security such as centralization, faulty consensus models, and code vulnerabilities. With
cryptocurrencies, security and decentralization go hand in hand. That's why networks like Ethereum and its forks (Pulsechain) — while not the fastest — tend to be the most secure.
IFIC Literature Database
Abbas, G., Abyaneh, M. Z., Biswas, A., Gupta, S., Patra, M., Rajasekaran, G., et al. (2016). High scale mixing relations as a natural explanation for large neutrino mixing. Int. J. Mod. Phys. A
, 31(17), 1650095–47pp.
Araujo Filho, A. A., Hassanabadi, H., Reis, J. A. A. S., & Lisboa-Santos, L. (2024). Fermions with electric dipole moment in curved space-time. Int. J. Mod. Phys. A, 39(19n20), 2450078–16pp.
Baker, M. J., Bordes, J., Hong-Mo, C., & Tsun, T. S. (2013). A comprehensive mechanism reproducing the mass and mixing parameters of quarks and leptons. Int. J. Mod. Phys. A, 28(16),
Baker, M. J., Bordes, J., Hong-Mo, C., & Tsun, T. S. (2012). Developing the Framed Standard Model. Int. J. Mod. Phys. A, 27(17), 1250087–45pp.
Baker, M. J., Bordes, J., Hong-Mo, C., & Tsun, T. S. (2011). Mass Hierarchy, Mixing, CP-Violation And Higgs Decay – Or Why Rotation Is Good For Us. Int. J. Mod. Phys. A, 26(13), 2087–2124.
Bordes, J., Chan, H. M., & Tsou, S. T. (2023). A vacuum transition in the FSM with a possible new take on the horizon problem in cosmology. Int. J. Mod. Phys. A, 38(25), 2350124–32pp.
Bordes, J., Chan, H. M., & Tsou, S. T. (2023). Search for new physics in semileptonic decays of K and B as implied by the g-2 anomaly in FSM. Int. J. Mod. Phys. A, 38, 2350177–24pp.
Bordes, J., Chan, H. M., & Tsou, S. T. (2021). delta(CP) for leptons and a new take on CP physics with the FSM. Int. J. Mod. Phys. A, 36, 2150236–22pp.
Bordes, J., Chan, H. M., & Tsou, S. T. (2021). Unified FSM treatment of CP physics extended to hidden sector giving (i) delta(CP) for leptons as prediction, (ii) new hints on the material
content of the universe. Int. J. Mod. Phys. A, 36, 2150238–19pp.
Bordes, J., Chan, H. M., & Tsun, S. S. (2018). A closer study of the framed standard model yielding testable new physics plus a hidden sector with dark matter candidates. Int. J. Mod. Phys. A,
33(33), 1850195–75pp.
Texas Go Math Grade 5 Lesson 11.1 Answer Key Polygons
Refer to our Texas Go Math Grade 5 Answer Key Pdf to score good marks in the exams. Test yourself by practicing the problems from Texas Go Math Grade 5 Lesson 11.1 Answer Key Polygons.
Texas Go Math Grade 5 Lesson 11.1 Answer Key Polygons
Essential Question
How can you identify and classify polygons?
A polygon is a two-dimensional closed figure that has three or more straight sides. Any figure with straight edges, such as a triangle or rectangle, is a polygon. Figures that have any curved sides
or open sides are not classed as polygons
Polygons are classified according to their number of sides, according to their angles, and are also classified according to their equality or inequality of their angles and sides, that is, whether
they are regular or irregular
Unlock the Problem
The Castel del Monte in Apulia, Italy, was built more than 750 years ago. The fortress has one central building with eight surrounding towers. Which polygon do you see repeated in the structure? How
many sides, angles, and vertices does this polygon have?
A polygon is a closed plane figure formed by three or more line segments that meet at points called vertices. It is named by the number of sides and angles it has. To identify the repeated polygon in
the fortress, complete the tables below.
The “Octagon” is the repeated polygon in the Castel del Monte because
It has 8 sides, 8 angles, and 8 vertices.
Math Talk
Mathematical Processes
What pattern do you see among the number of sides, angles, and vertices a polygon has?
In any polygon, the number of sides, the number of angles, and the number of vertices are all equal.
Based on the number of sides,
Polygons are classified into 2 types. They are:
a. Regular Polygons
b. Irregular Polygons
In Regular Polygons,
All the sides lengths and angle measures are equal
In Irregular Polygons,
The side lengths and angle measures are not equal
Try This! Label the Venn diagram to classify the polygons in each group. Then draw a polygon that belongs only to each group.
Share and Show
Name each polygon. Then tell whether it is a regular polygon or not a regular polygon.
Question 1.
The given polygon is:
We know that,
In a Regular Polygon, the side lengths and the angle measures all must be equal
From the given figure,
We can observe that
All the side lengths are not equal
The parallel side lengths are equal
All the angle measures are equal
Hence, from the above,
We can conclude that
The given polygon is not a Regular Polygon
The name of the given Polygon is: Rectangle
Question 2.
The given polygon is:
We know that,
In a Regular Polygon, the side lengths and the angle measures all must be equal
From the given figure,
We can observe that
All the side lengths are equal
The parallel side lengths are equal
All the angle measures are equal
Hence, from the above,
We can conclude that
The given polygon is a Regular Polygon
The name of the given Polygon is: Octagon
Question 3.
The given polygon is:
We know that,
In a Regular Polygon, the side lengths and the angle measures all must be equal
From the given figure,
We can observe that
All the side lengths are equal
The parallel side lengths are equal
All the angle measures are equal
Hence, from the above,
We can conclude that
The given polygon is a Regular Polygon
The name of the given Polygon is: Hexagon
Question 4.
The given polygon is:
We know that,
In a Regular Polygon, the side lengths and the angle measures all must be equal
From the given figure,
We can observe that
All the side lengths are not equal
The parallel side lengths are equal
All the angle measures are equal
Hence, from the above,
We can conclude that
The given polygon is not a Regular Polygon
The name of the given Polygon is: Hexagon
Problem Solving
Question 5.
H.O.T. Compare the polygons shown in Exercises 3 and 4. Use Math Language to describe how they are alike and how they are different.
The figures that will be in Exercises 3 and 4 are:
From the given figures,
We can observe that
The shapes of the two figures are the same
The side lengths of the two figures are not the same
The angle measures of the two figures are not the same
One figure is a “Regular Polygon” whereas the other figure is an “Irregular Polygon”
Question 6.
Why do all regular pentagons have the same shape? Explain.
A regular pentagon has five sides of equal length and five equal interior angles (108° each). A polygon with five equal sides but unconstrained angles is only an equilateral pentagon; its angles can take a range of values, forming a whole family of different shapes. Because a regular pentagon fixes both the side lengths and the angle measures, it is unique up to similarity, so all regular pentagons have the same shape.
Problem Solving
For 7-8, use the Castel del Monte floor plan at the right.
Question 7.
Multi-Step Which polygons in the floor plan have four equal sides and four congruent angles? How many of these polygons are there?
The given floor plan is:
From the given floor plan,
We can observe that
The polygon in the given floor plan that has four sides and four angles congruent is: Octagon
The number of Octagons in the given floor plan are: 8 Octagons
Question 8.
Multi-Step Is there a quadrilateral in the floor plan that is not a regular polygon? Name the quadrilateral and tell how many of the quadrilaterals are in the floor plan.
The given floor plan is:
From the given floor plan,
We can observe that
There is a quadrilateral in the given floor plan that is not a regular polygon
The quadrilateral that is not a regular polygon in the given floor plan is: Trapezoid
The number of Trapezoids in the given floor plan are: 8
Question 9.
H.O.T. Look at the angles for all regular polygons. As the number of sides increases, do the measures of the angles increase or decrease? What pattern do you see?
The angle measures increase. Each interior angle of a regular polygon with n sides measures (n − 2) × 180° ÷ n, so the angles grow from 60° (triangle) to 90° (square) to 108° (pentagon) to 120° (hexagon), getting closer and closer to 180° as the number of sides increases.
Daily Assessment Task
Fill in the bubble completely to show your answer.
Question 10.
Marianna would like to create a mosaic using pieces that have only four angles. Which shapes could Marianna use?
(A) A and B
(B) B and C
(C) A and D
(D) B and D
It is given that
Marianna would like to create a mosaic using pieces that have only four angles
Hence, from the above,
We can conclude that
The shapes that Marianna could use are the two shapes with exactly four angles (the quadrilaterals among the given choices).
Question 11.
Lino found a small green piece of glass with this shape. What is the name of the shape? Is it regular?
(A) hexagon; regular
(B) octagon; regular
(C) hexagon; not regular
(D) octagon; not regular
It is given that
Lino found a small green piece of glass with this shape
The given small piece of glass is:
From the given figure,
We can observe that
There are 8 sides in the given small piece of glass
The shape is not regular
Hence, from the above,
We can conclude that
The shape of the small piece of glass is an octagon that is not regular, choice (D).
Question 12.
Multi-Step Lois began a mosaic using congruent regular triangles. She first placed a triangle as a centerpiece. She then placed three triangles so that one side of each triangle lined up with one
side of the original triangle. What shape was formed by the first four pieces?
(A) regular hexagon
(B) quadrilateral
(C) regular triangle
(D) nonagon
It is given that
Lois began a mosaic using congruent regular triangles. She first placed a triangle as a centerpiece. She then placed three triangles so that one side of each triangle lined up with one side of the
original triangle
Hence, from the above,
We can conclude that
The shape that was formed by the first four pieces is a regular triangle, choice (C).
Texas Test Prep
Question 13.
Which of the following is a regular hexagon?
We know that,
A “regular Polygon” is a polygon that has equal side lengths and equal angle measures
Hence, from the above,
We can conclude that
The regular hexagon from the following given figures is:
Texas Go Math Grade 5 Lesson 11.1 Homework and Practice Answer Key
Name each polygon. Then tell whether It is a regular polygon or not a regular polygon.
Question 1.
The given polygon is:
We know that,
In a Regular Polygon, the side lengths and the angle measures all must be equal
From the given figure,
We can observe that
All the side lengths are equal
The parallel side lengths are equal
All the angle measures are equal
Hence, from the above,
We can conclude that
The given polygon is a Regular Polygon
The name of the given Polygon is: Square
Question 2.
The given polygon is:
We know that,
In a Regular Polygon, the side lengths and the angle measures all must be equal
From the given figure,
We can observe that
All the side lengths are equal
All the angle measures are equal
Hence, from the above,
We can conclude that
The given polygon is a Regular Polygon
The name of the given Polygon is: Triangle
Question 3.
The given polygon is:
We know that,
In a Regular Polygon, the side lengths and the angle measures all must be equal
From the given figure,
We can observe that
All the side lengths are not equal
All the angle measures are not equal
Hence, from the above,
We can conclude that
The given polygon is not a Regular Polygon
The name of the given Polygon is: Pentagon
Question 4.
The given polygon is:
We know that,
In a Regular Polygon, the side lengths and the angle measures all must be equal
From the given figure,
We can observe that
All the side lengths are equal
All the angle measures are equal
Hence, from the above,
We can conclude that
The given polygon is a Regular Polygon
The name of the given Polygon is: Pentagon
Problem Solving
Question 5.
Chantal draws a polygon with seven congruent sides and angles. Name Chantai’s shape. Is the shape a regular polygon? Explain.
It is given that
Chantal draws a polygon with seven congruent sides and angles
According to the given information,
The shape drawn by Chantal is: Heptagon
Since there are congruent sides and angles, “Heptagon” is a regular polygon
Hence, from the above,
We can conclude that
The shape drawn by Chantal is: Heptagon
Since there are congruent sides and angles, “Heptagon” is a regular polygon
Question 6.
Name the shapes that make up the Texas state flag. Are any of the shapes regular polygons? Explain.
The given Texas state flag is:
From the given Texas state flag,
We can observe that
The flag is made up of 3 Rectangles
We know that,
In a Rectangle,
All the sides are not equal
The parallel sides are equal
All the angle measures are equal
There are not any shapes that are “Regular Polygons” in the Texas state flag
Hence, from the above,
We can conclude that
There are not any shapes that are “Regular Polygons” in the Texas state flag
Texas Test Prep
Lesson Check
Fill in the bubble completely to show your answer.
Question 7.
Which of the following is a regular octagon?
We know that,
In a regular Octagon,
All 8 sides are equal and all the angle measures are equal
Hence, from the above,
We can conclude that
The regular Octagon from the following given figures is:
Question 8.
Bradley uses only regular polygons in his shape mobile. Which polygon could NOT be in Bradley’s mobile?
It is given that
Bradley uses only regular polygons in his shape mobile
We know that,
A parallelogram is not a regular polygon because its sides and angles are not all congruent
Hence, from the above,
We can conclude that
The shape that could not be in Bradley’s mobile is the parallelogram.
Question 9.
How many regular triangles are in this figure?
(A) 0
(B) 8
(C) 4
(D) 6
The given figure is:
Hence, from the above,
We can conclude that
The number of triangles in the given figure is:
Question 10.
An artist made a pendant with the shape shown below. What is the name of the shape? Is it regular?
(A) hexagon; regular
(B) hexagon; not regular
(C) octagon; regular
(D) octagon; not regular
It is given that
An artist made a pendant with the shape shown below
The given shape is:
From the given shape,
We can observe that
There are 6 sides
All the sides and the angles are equal
Hence, from the above,
We can conclude that
The name of the given shape is hexagon, and it is regular, so the answer is (A).
Question 11.
Multi-Step Which are the next two polygons in the pattern?
(A) triangle; square
(B) quadrilateral; triangle
(C) quadrilateral; pentagon
(D) pentagon; triangle
The given pattern is:
From the above pattern,
We can observe that
The figures after the given pattern are: Quadrilateral, Pentagon
Hence, from the above,
We can conclude that
The next two polygons in the given pattern are a quadrilateral and a pentagon, choice (C).
Question 12.
Multi-Step How can you classify the sides and angles of a square?
(A) 2 pairs of congruent sides; 2 pairs of congruent angles
(B) 4 congruent angles; 4 congruent sides
(C) 0 congruent angles; 4 congruent sides
(D) 2 pairs of congruent sides; 1 pair of congruent angles
We know that,
In a square,
All the side lengths are equal
All the angle measures are equal
Hence, from the above,
We can conclude that
The sides and angles of a square are 4 congruent sides and 4 congruent angles, choice (B).
Simplifying Algebraic Fractions Using Properties of Exponents
Question Video: Simplifying Algebraic Fractions Using Properties of Exponents Mathematics • Second Year of Preparatory School
Simplify (x^(m + 9) y^(n + 4))/(x^(m + 3) y^n).
Video Transcript
Simplify x to the m plus nine times y to the n plus four over x to the m plus three times y to the n.
Here we notice that we have x and y in both the numerator and the denominator. Our first step in simplifying will be trying to get all of the x's and y's into the numerator. Remember that you can move a base raised to a certain exponent from the denominator to the numerator by taking its negative exponent.
x to the m plus nine was already there. And if we take x to the negative m plus three, we can bring that value to the numerator. y to the n plus four was already in the numerator. And we want to bring y to the n into the numerator. So we take its negative exponent, y to the negative n.
What is the mathematical operation happening here? It's multiplication. All of these powers are being multiplied together. And that means we need the exponent product rule. It tells us that x to the a power times x to the b power equals x to the a plus b power. When your bases are the same and you're multiplying two powers together, you do that by adding the two exponents.
We have two powers with the base of x. We keep the base of x, and then we add their exponents, m plus nine plus negative m plus three. Be careful with your negative there. You need to distribute that negative value across the m and the three. When we do that, we get m plus nine minus m minus three.
Combine like terms. Plus m minus m cancels out. Nine minus three equals six. x should be taken to the sixth power. We also have two powers with y as a base. We'll take y to the n plus four plus negative n, that is, n plus four minus n. Combining like terms, n minus n cancels out, leaving us with four. y is being taken to the fourth power.
There's one more important thing. Remember that we were multiplying the x's and the y's together, and that means the final simplification is x to the sixth times y to the fourth.
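A quick symbolic check of the result (a SymPy sketch; the letters m and n follow the statement above):

from sympy import symbols, powsimp

x, y, m, n = symbols("x y m n", positive=True)
expr = x**(m + 9) * y**(n + 4) / (x**(m + 3) * y**n)
print(powsimp(expr))   # x**6*y**4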
Solving Quadratics to Find the Interquartile Range
• Thread starter chwala
In summary, the conversation involved finding the interquartile range using the given equation and solving for the values of m. The interquartile range was found to be 1.830 to 3 decimal places.
Homework Statement
A continous random variable, ##X## has the following probability density function;
##f_{x} =
\dfrac{2}{25} (5-x), 0≤t≤5 \\
\\0 , Otherwise
Relevant Equations
understanding of probability distribution.
I do not have solution for this; looking forward to your insight.
$$F_X(m)=\int_0^m \dfrac{2}{25} (5-x)\,dx$$
... ending up with
$$\dfrac{2}{25} (5m-\dfrac{m^2}{2})=\dfrac{1}{4}$$ and
$$\dfrac{2}{25} (5m-\dfrac{m^2}{2})=\dfrac{3}{4}$$ we shall end up with two quadratic equations. Solving them gives us;
$$m=0.669$$ and
$$m=2.5$$
Therefore our interquartile range is given by;
$$IQR=2.5-0.669=1.831$$ to ##3## decimal places.
Science Advisor
Homework Helper
I think I would start with [tex]\frac{2}{25}\left(5m - \frac{m^2}{2}\right) = q \in [0,1][/tex] and rearrange it into [tex](m-5)^2 = 25(1 - q).[/tex] It is obvious that we want the negative root, so [tex]m = 5(1 - \sqrt{1 - q}).[/tex] Then the interquartile range is [tex]5\left(1 - \sqrt{\tfrac14}\right) - 5\left(1 - \sqrt{\tfrac34}\right) = \frac52 \left(\sqrt{3} - 1\right) \approx 1.830.[/tex]
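For anyone who wants to verify the arithmetic, a short SymPy check (a sketch; variable names are mine):

from sympy import symbols, solve, Rational

m = symbols("m")
cdf = Rational(2, 25) * (5*m - m**2 / 2)   # F(m), the CDF on [0, 5]

q1 = min(r for r in solve(cdf - Rational(1, 4), m) if 0 <= r <= 5)
q3 = min(r for r in solve(cdf - Rational(3, 4), m) if 0 <= r <= 5)
print(q3 - q1)   # -5/2 + 5*sqrt(3)/2, about 1.830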
FAQ: Solving Quadratics to Find the Interquartile Range
1. What is a quadratic equation?
A quadratic equation is a mathematical equation in the form of ax^2 + bx + c = 0, where a, b, and c are constants and x is the variable. It can have one, two, or no real solutions.
2. How do you solve a quadratic equation?
To solve a quadratic equation, you can use the quadratic formula x = (-b ± √(b^2 - 4ac)) / 2a or factor the equation into two binomials. You can also use a graphing calculator or complete the square
3. What is the interquartile range (IQR)?
The interquartile range is a measure of variability in a set of data. It is the difference between the third quartile (Q3) and the first quartile (Q1) of a data set. It represents the middle 50% of
the data.
4. How do you find the interquartile range for a set of data?
To find the interquartile range, first arrange the data in ascending order. Then, find the median of the data set. Next, find the median of the lower half of the data (Q1) and the median of the upper
half of the data (Q3). Finally, calculate the IQR by subtracting Q1 from Q3.
5. How can solving quadratics help find the interquartile range?
Solving quadratics can help find the interquartile range by providing the values of Q1 and Q3, which are needed to calculate the IQR. Quadratic equations can also be used to model data and make
predictions about the variability of the data set.
The investigation of a nonlinear differential equation using numerical methods
Christie, Alan M (1966) The investigation of a nonlinear differential equation using numerical methods. MSc(R) thesis, University of Glasgow.
Full text available as:
The equation investigated was [equation], the parameters a and c being varied. The boundary conditions imposed upon the equation were [equation], where t[m] was the position of the first maximum after the origin. It was most fully investigated for a > 0, this being the region in which the solutions were exponentially decaying. Although no analytic solution was discovered for the full equation, full solutions were found when a = 0. By suitable transformations the solution for c > 0 was [equation], where M, t[0], k and q were constants. For c < 0 the solution was [equation]. These, as might be expected, were periodic solutions. The four numerical methods used were (1) Finite Difference, (2) Step-by-Step, (3) Picard's, and (4) Perturbation. The first two were purely numeric and the second two semi-analytic. The Finite Difference technique was used to find the solution between the boundary values, and the Step-by-Step method was then used to integrate along the curve until the value of y dropped to 0.01. The initial conditions for this latter method were found from the Finite Difference solution. Picard's method and the perturbation method, which were used over the whole region, both gave solutions in terms of exponential series. This series was of the form [equation], where the A[rs]'s were constant coefficients and the two exponents were those of the linear solution [equation]. In all the methods except the Step-by-Step, the maximum had to be iterated onto by some means or another. In the Finite Difference method the second point was adjusted until this condition had been satisfied. In the two semi-analytic approaches, the coefficients were, in effect, altered to suit the condition. There was good agreement in results between the boundary conditions for all methods, but as might be expected for large values of c, the accuracy outside this region was not good when the numerical methods were compared with the semi-analytic. This was due to the fact that the semi-analytic solutions were essentially solutions expanded about a point. In comparing the two numerical solutions when the Finite Difference method was used over the whole region, there was good agreement.
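Since the thesis's equations are not reproduced in this record, the following sketch only illustrates the Finite Difference idea described above, applied to a stand-in nonlinear boundary value problem (y'' + c·y^3 = 0 with y(0) = 0, y(1) = 1 is an assumed example, not the thesis equation):

import numpy as np

c, n = 1.0, 100
h = 1.0 / n
y = np.linspace(0.0, 1.0, n + 1)  # initial guess satisfying the boundary conditions

# Newton iteration on the discretized system
# (y[i-1] - 2*y[i] + y[i+1])/h^2 + c*y[i]^3 = 0 at interior nodes.
for _ in range(50):
    F = (y[:-2] - 2 * y[1:-1] + y[2:]) / h**2 + c * y[1:-1] ** 3
    # The Jacobian is tridiagonal; build it densely for brevity.
    J = (np.diag(-2 / h**2 + 3 * c * y[1:-1] ** 2)
         + np.diag(np.full(n - 2, 1 / h**2), 1)
         + np.diag(np.full(n - 2, 1 / h**2), -1))
    dy = np.linalg.solve(J, -F)
    y[1:-1] += dy
    if np.max(np.abs(dy)) < 1e-12:
        break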
Item Type: Thesis (MSc(R))
Qualification Level: Masters
Additional Information: Adviser: Gilles
Keywords: Mathematics
Date of Award: 1966
Depositing User: Enlighten Team
Unique ID: glathesis:1966-73640
Copyright: Copyright of this thesis is held by the author.
Date Deposited: 14 Jun 2019 08:56
Last Modified: 14 Jun 2019 08:56
URI: https://theses.gla.ac.uk/id/eprint/73640
Frontiers | Cost effectiveness and optimal control analysis for bimodal pneumonia dynamics with the effect of children's breastfeeding
• Department of Mathematics, College of Natural and Computational Science, Wallaga University, Nekemte, Ethiopia
The global impact of exclusive versus inclusive nursing on particular infant mortalities and morbidities from birth to 6 months is examined in this study. Exclusive breastfeeding practices are especially crucial and effective in preventing illness outbreaks when there is no access to appropriate medications or vaccinations. Additionally, this study takes optimal control theory into account,
applying Pontryagin's Maximum Principle to a system of differential equations that describes a bimodal pneumonia transmission behavior in the susceptible compartment. The proposed pneumonia transmission model was then updated to include two control variables: preventing illness exposure in susceptible children through various preventative measures, and treating infected
children through antibiotics, hospital care, and other treatments. If the threshold number ℜ[0] is brought below one by increasing treatment and prevention rates, the disease will be wiped out of the population. However, when ℜ[0] is greater than one, the disease persists in the population, indicating that prevention and treatment rates are too low. To evaluate the
cost-effectiveness of all potential control techniques and their combinations, the incremental cost-effectiveness ratio (ICER) was determined. The simulation results of the identified model show that
the interventions of prevention and treatment scenarios were the most successful in eradicating the dynamics of the pneumonia disease's propagation during the epidemic, but they were ineffective from
a cost-saving perspective. Therefore, limiting pneumonia transmission to prevention alone during an outbreak is the most economical course of action.
1. Introduction
Infant (child) disability and death are the primary continuing public health issues worldwide. However, newborn (child) mortality and morbidity rates are greatly impacted by deaths caused by
infectious illnesses. Infectious diseases account for 7 out of 10 childhood deaths throughout the world. Pneumonia is one of the most common causes of death worldwide among acute respiratory infections,
accounting for 30% of all child fatalities. Ninety-five percent of cases of pneumonia occur in developing countries. As a result, infectious illnesses are more likely to kill newborn babies in these
countries [1, 2].
Among acute respiratory infection (ARI) diseases, pneumonia is the one that affects children's lungs. Approximately 740,180 children aged 0–5 years died because of pneumonia in 2019, accounting for 14% of all deaths of children below 5 years and 22% of deaths among children aged 1–5 years; deaths are higher in Asia and Africa [3]. Hence, among infectious diseases, pneumonia causes the most children's deaths worldwide [4]. Pneumonia can be caused by viruses, bacteria, or fungi; among these, bacterial pneumonia is the leading cause of death for children under 6 years of age. By immunizing against the disease, providing appropriate nutrition through exclusive breastfeeding (EBF), and reducing environmental risk factors, pneumonia can be avoided [5]. Additionally, using multiple control measures, such as
prevention, treatment, and reducing indoor air pollution, can halt the spread of pneumonia. The following research has been carried out to address non-exclusive EBF or a lack of EBF, one of the main
risk factors for infectious illnesses.
The first natural diet for infants is their mother's milk, which contains all the nutrients and energy required for a baby throughout the first 6 months of life [6]. According to WHO recommendations,
newborns should receive only breast milk for the first 6 months of their lives. Thereafter, additional (complementary) foods are allowed for 18 months or more, alongside continued breastfeeding. Hence, infants (children) can achieve good growth and development [7]. Therefore, for children in the first months of life up to 6 months, any additional food or liquid (even water) is not permitted except vitamins, mineral supplements, and medicine [7, 8].
Interventions with dual controls have been proposed to help reduce the mortality and disability rates among children due to infectious diseases. Breastfeeding is one of the most popular and
cost-effective strategies (interventions) for preventing pediatric pneumonia and all other causes of death [9–11]. Furthermore, the WHO, UNICEF, AAP, AAFP, and NNPE advocate starting breastfeeding
promptly within the first hour after birth and continuing to exclusively breastfeed with human milk for the following 6 months to reduce the baby (child) death and disability rate. Continual
breastfeeding with other appropriate foods will follow for the first 2 years of life to ensure that the children have healthy optimal growth and development [2].
Most studies confirm that over two-thirds of the deaths occurring globally in the first year of children's lives are associated with a lack of exclusive breastfeeding or inappropriate feeding practices [10]. Sub-optimal breastfeeding contributes to 18% of acute respiratory disease deaths among children under 5 years old in low-income countries [6].
Evidence suggests that if the EBF length is properly maintained, it can significantly increase immunity and lower the risk of death and disability from communicable and non-communicable diseases in
both the early and advanced phases [12, 13]. EBF throughout the first 6 months of a baby's (or child's) life can typically lower the likelihood of developing any infectious diseases [14]. For the
first 6 months of their lives, infants (children) who were nursed exclusively had a lower risk of contracting infectious diseases than those who were not [9, 15].
According to [10, 16], 1.24 million or 96% of child deaths occur during the first 6 months of life due to inappropriate EBF practices, and the mortality rate is higher in Africa and Asia.
Additionally, poor breastfeeding results in more than 236,000 child deaths annually in a select few nations, including Nigeria, China, Mexico, Indonesia, and India [17]. Furthermore, in low- and
middle-income nations, inadequate breastfeeding was found to be responsible for 18 and 30% of acute respiratory and diarrheal mortalities, respectively [18]. To reduce child mortality among children
under the age of five, the WHO advises that an EBF of 90% is needed globally. Furthermore, the Sustainable Development Goals (SDGs) plan envisaged an increase in EBF of 50% by 2025 [19, 20].
According to the study by [12, 20], raising the EBF rate in middle-income and developing nations to an ideal level can reduce infant mortality among children under the age of five by 13 to 15%.
Mathematical models are frequently used to (i) analyse the dynamics of the spread of infectious diseases like cholera, bronchiolitis, pneumonia, and others; (ii) employ a variety of control methods
to reduce or stop the spread of infectious diseases; and (iii) predict the effects of these diseases on people's lives, socio-economic systems, and national health programmes and policies. However,
none of the aforementioned studies take into account a mathematical model method to illustrate the transmission behavior of infectious diseases, particularly pneumonia.
Several mathematical modeling studies have been conducted to estimate the potential burden of the endemic and the various control approaches for the endemic disease of pneumonia in children. Tilahun
et al. [21] considered a non-linear deterministic model for the transmission of the pneumonia disease in a population of variable size, together with optimal control and cost-effectiveness measures.
Agusto et al. [22] studied the advantage of isolation strategies and quarantine effectiveness measures against outbreaks of disease in the absence of appropriate medicines or vaccines.
Swai et al. [23] formulated an optimal control of pneumonia transmission in two strains by incorporating drug resistance. Additionally, how measures such as vaccination, public awareness campaigns,
and therapy can reduce pneumonia transmission patterns should be considered. Tessema et al. [24] also developed a deterministic mathematical model of drug-resistant pneumonia with ideal preventive
measures and cost-effectiveness evaluations. Based on the simulation values of optimal controls for the proposed model, they concluded that the combination of prevention, treatment, and screening of
infectious persons is the most efficient and cost-effective way to remove pneumonia infections from the community. The diagnostic problem of distinguishing between bacterial and non-bacterial
pneumonia is the main reason antibiotics are used to treat pneumonia in children. Consequently, Wu et al. [25] present causal Bayesian networks (BNs) in their model as useful tools for resolving this
problem because they provide succinct maps of the probabilistic relationships between variables and produce results in a way that is understandable and justified by incorporating domain expert
knowledge and numerical data.
Kotola and Mekonnen [26] created a deterministic model to demonstrate the efficacy of interventions for pneumonia and meningitis co-infection and provide a reasoned recommendation to public health
officials, decision-makers in government policy, and programme implementers. Owing to their shared clinical characteristics and significant effects on human morbidity and mortality, pneumonia and
tuberculosis are two of the most frequent airborne infections. Therefore, in a community of populations with both diseases, co-infection of the two diseases becomes inevitable. Owing to a lack of
resources, the significant illness burden that these endemics together impose necessitates an efficient intervention to mitigate the impact. Thus, the authors in Gweryina et al. [27] use a pragmatic
approach to create an SEIR model for the co-dynamics of tuberculosis and pneumonia. Using a variety of parameters, Naveed et al. [28] investigated the dynamics of delayed pneumonia-like infectious
illnesses. Kassa et al. [29] and Rafiq et al. [30] offer a mathematical model of COVID-19 that includes bimodal virus transmission in a susceptible compartment.
Until now, only Legesse et al. [31] have formulated an S[1]S[2]CIR deterministic mathematical model grouping susceptible children into inclusively and exclusively breastfed classes, verifying that inclusively breastfed children are more exposed to pneumonia than those breastfed exclusively. However, they did not take optimal control analysis into account in their research.
Furthermore, no research has been carried out so far to assess the impact of EBF practice on child mortality rates and the efficacy of EBF practice in lowering pediatric mortality due to infectious
disease (pneumonia). With this as a backdrop, the study's objective is to apply mathematical models with optimal control and accessible methods to treat pneumonia in infants between the ages of 0 and
6 months who do not participate in EBF. By increasing the prevalence of EBF and stepping up efforts to reduce non-exclusive breastfeeding, the findings of this study will help in making decisions
that will reduce child mortality and impairment from pneumonia.
The article is organized as follows. The proposed model is formulated in the Construction of a Bimodal Pneumonia Model section and its analysis is presented in the Analyzing the Model Qualitatively
section. Stability analysis of the equilibria is then discussed in the Equilibrium Point Stability section. Extension of the proposed model into optimal control is presented in The Proposed Model
Under Optimal Control section. Numerical simulations are performed to support the analytical results discussed in the Analyzing the Model Qualitatively section and are presented in the Results and
Discussion. Cost-effective analysis is performed in the subsequent section followed, finally, by the Conclusion.
2. Construction of the bimodal pneumonia model
In this model, the overall population size N(t) is divided into five mutually exclusive compartments based on the disease condition of the population as a whole. Furthermore, the total population
size N(t) at any given time t is given by:
$$N(t)=S_I(t)+S_E(t)+E(t)+I(t)+R(t) \quad (1)$$
At any time instant t ∈ [0, ∞), the real valued differentiable state variables S[I](t),S[E](t),E(t),I(t), and R(t) represent the number of susceptible children that are not exclusively breastfed,
susceptible children that are exclusively breastfed, children exposed to the disease, children that are seriously infected, and children who have obtained temporary immunity from pneumonia,
respectively. This research assumes that the two susceptible classes S[I](t) and S[E](t) are recruited into the population at rates of Λ[1] and Λ[2], respectively. They acquire pneumonia infection
through effective contact with the infected humans I(t) or via inhalation of contaminated air droplets at a force of infection given by
$f_i=\frac{\beta_i I}{N}$, where i = 1, 2 and $\beta_j = k P_j$ for j = 1, 2 denotes the transmission rates. Here k stands for the number of contacts, and $P_j$ is the probability that a close contact between a susceptible human and an infected individual causes infection.
Humans exposed to pneumonia advance to the infected compartment I(t) at a rate γ. All sub-populations are reduced by a constant natural mortality rate μ. At the infected stage, α denotes the pneumonia-induced mortality rate, which applies only in the infected class, and σ denotes the rate at which children recover due to therapy or innate immunity. Those individuals that have recovered from pneumonia are assumed to have partial immunity and again become susceptible at a rate of δ.
This study also assumes that a child who has obtained partial immunity does not rejoin the exclusively breastfed class, because once an individual has been infected they cannot regain their original immunity [31]. Using the parameter values, basic model assumptions, and state variables described above, we have generated a schematic diagram (Figure 1), and the corresponding model equations are given by Equation (2).
$$\begin{aligned} \frac{dS_I}{dt} &= \Lambda_1+\delta R-f_1 S_I-\mu S_I \\ \frac{dS_E}{dt} &= \Lambda_2-f_2 S_E-\mu S_E \\ \frac{dE}{dt} &= f_1 S_I+f_2 S_E-(\gamma+\mu)E \\ \frac{dI}{dt} &= \gamma E-(\sigma+\alpha+\mu)I \\ \frac{dR}{dt} &= \sigma I-(\mu+\delta)R \end{aligned} \quad (2)$$
With the following initial conditions:
$$S_I(0)\ge 0,\; S_E(0)\ge 0,\; E(0)\ge 0,\; I(0)\ge 0,\; R(0)\ge 0 \quad (3)$$
FIGURE 1
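A minimal sketch of integrating system (2) numerically is given below. The parameter values are illustrative placeholders (not the paper's Table 1 values); the initial population sizes follow those used in the Results section:

import numpy as np
from scipy.integrate import solve_ivp

L1, L2 = 2.0, 5.0                       # recruitment rates Lambda_1, Lambda_2 (placeholders)
beta1, beta2 = 0.9, 0.3                 # transmission rates (placeholders)
mu, gamma = 0.05, 0.2                   # natural death and progression rates
sigma, alpha, delta = 0.1, 0.05, 0.02   # recovery, disease death, waning immunity

def rhs(t, x):
    SI, SE, E, I, R = x
    N = SI + SE + E + I + R
    f1, f2 = beta1 * I / N, beta2 * I / N   # forces of infection
    return [L1 + delta * R - f1 * SI - mu * SI,
            L2 - f2 * SE - mu * SE,
            f1 * SI + f2 * SE - (gamma + mu) * E,
            gamma * E - (sigma + alpha + mu) * I,
            sigma * I - (mu + delta) * R]

sol = solve_ivp(rhs, (0, 200), [40, 100, 50, 15, 1], dense_output=True)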
3. Analyzing the model qualitatively
This subsection explains the qualitative behavior of the model under consideration over the long run.
3.1. Positivity and boundedness of solution
To ensure that the positivity of solutions of the generated dynamical system (2) is both epidemiologically meaningful and theoretically well-posed, we must show that all the state variables of the dynamical system are non-negative.
Theorem 3.1. All the solutions of Equation (2) with the positive initial condition given on Equation (3) are non-negative.
Proof. From Equation (3), all the state variables are positive or zero at the initial time; let T > 0. To show the positivity of all the state variables, select any equation of system (2), say the first one, which can be written as
$$\frac{d}{dt}\left[e^{\int_0^T (f_1+\mu)\,dt'} S_I\right]=\left(e^{\int_0^T (f_1+\mu)\,dt'}\right)\left[\Lambda_1+\delta R\right] \quad (4)$$
where t′ ∈ [0, T] and each state variable is non-negative at t′.
Equation (4) is integrated with regard to time to produce
$$S_I(t)=k_1 S_I(0)+k_1\left[\int_0^T\left(e^{\int_0^T (f_1+\mu)\,dt'}\right)\left[\Lambda_1+\delta R\right]dt\right]\ge 0 \quad (5)$$
where $k_1=e^{-\int_0^T (f_1+\mu)\,dt'}$. From Equation (5), we observe that $S_I(t)$ is non-negative for all t > 0. In a similar fashion, one can show $S_E(t)\ge 0$, $E(t)\ge 0$, $I(t)\ge 0$ and $R(t)\ge 0$.
Theorem 3.2. The closed positive invariant set Ω is a biologically and mathematically well-posed region of the initial value problems defined on Equations (2), (3), where
$$\Omega=\left\{(S_I,S_E,E,I,R)\in\mathbb{R}_+^5 : 0<N(S_I,S_E,E,I,R)\le \frac{\Lambda_1+\Lambda_2}{\mu}\right\} \quad (6)$$
Proof. For convenience, we let S[1] = S[I], S[2] = S[E], r[1] = γ + μ, r[2] = σ + α + μ, r[3] = μ + δ throughout this study. Differentiating Equation (1) with respect to t gives
$$\frac{dN}{dt}=\Lambda_1+\Lambda_2-\mu N-\alpha I \quad (7)$$
In the absence of infection, Equation (7) reduces to
$$\frac{dN}{dt}\le \Lambda_1+\Lambda_2-\mu N. \quad (8)$$
Integrating both sides of Equation (8) with regard to t and taking the limit as t → ∞, we obtain
$$N(t)\le \frac{\Lambda_1+\Lambda_2}{\mu}-\frac{\Lambda_1+\Lambda_2-\mu N_0}{\mu}e^{-\mu t} \quad (9)$$
$$N(t)\le \frac{\Lambda_1+\Lambda_2}{\mu}. \quad (10)$$
Therefore, each solution of the initial value problems on Equations (2) and (3) remains in Equation (6) for all t > 0. This result can be summarized as the lemma below.
Lemma 3.1. Ω is a positively invariant region for the Equation (2) with initial condition Equation (3) in ${R}_{+}^{5}$.
3.2. Threshold parameter
Before calculating the expression for the threshold quantity ℜ[0], determine the pneumonia-free equilibrium of Equation (2). For this aim, equate the right-hand side of Equation (2) to zero and set E(t) = I(t) = R(t) = 0, with S_I(0) > 0 and S_E(0) > 0. Thus,
$$E_0=\left(\frac{\Lambda_1}{\mu},\frac{\Lambda_2}{\mu},0,0,0\right). \quad (11)$$
Hence, E[0] is the pneumonia-free equilibrium of Equation (2). Using the DFE we can compute the threshold number ℜ[0] following the work in Agusto [22], by the next generation matrix method. With F[1](t) = f[1]S[I] + f[2]S[E] and F[2](t) = γE, the transmission matrix is
$$F=\begin{bmatrix} 0 & \frac{\beta_1\Lambda_1+\beta_2\Lambda_2}{\Lambda_1+\Lambda_2} \\ \gamma & 0 \end{bmatrix} \quad (12)$$
and, with V[1] = r[1]E and V[2] = r[2]I, the transition matrix V is given by
$$V=\begin{bmatrix} r_1 & 0 \\ 0 & r_2 \end{bmatrix} \quad (13)$$
Hence, using the next generation matrix calculated from Equations (12), (13) we get
$$FV^{-1}=\frac{1}{r_1 r_2}\begin{bmatrix} 0 & r_1\frac{\beta_1\Lambda_1+\beta_2\Lambda_2}{\Lambda_1+\Lambda_2} \\ \gamma r_2 & 0 \end{bmatrix} \quad (14)$$
Now, the spectral radius of Equation (14) gives ℜ[0] of Equation (2), consistent with the appearance of ℜ[0]² in Equation (24):
$$\Re_0=\sqrt{\frac{\gamma(\beta_1\Lambda_1+\beta_2\Lambda_2)}{r_1 r_2(\Lambda_1+\Lambda_2)}} \quad (15)$$
The threshold number ℜ[0] determines whether pneumonia spreads within the population or fades out of it. If ℜ[0] < 1, the disease will fade out of the community; this reflects more exclusively breastfed children being added to the susceptible class, since exclusively breastfed individuals have high natural immunity and are less exposed to the disease. ℜ[0] > 1 indicates continued disease spread within the population.
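As a small illustration, the threshold in Equation (15) can be evaluated directly from the parameters (placeholder values again, not Table 1):

import math

L1, L2, beta1, beta2 = 2.0, 5.0, 0.9, 0.3
mu, gamma, sigma, alpha = 0.05, 0.2, 0.1, 0.05
r1, r2 = gamma + mu, sigma + alpha + mu

R0 = math.sqrt(gamma * (beta1 * L1 + beta2 * L2) / (r1 * r2 * (L1 + L2)))
print("disease fades out" if R0 < 1 else "disease persists", R0)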
3.3. Existence of the model's endemic equilibrium point
In this part, we examine the condition known as EE of Equation (2). The fundamental motivation for this equilibrium is that it is utilized to estimate how long pneumonia will continue to affect the
population. To identify the prerequisites for an equilibrium in which community pneumonia is endemic (that is, at least one of E* ≠ 0 or I* ≠ 0), denoted by $E_e=(S_I^*,S_E^*,E^*,I^*,R^*)$, equate each equation in Equation (2) to zero and express each state variable in terms of the force of infection at the steady state ($f_i^*$, where i = 1, 2):
$$f_1^*=\frac{\beta_1 I^*}{N^*},\qquad f_2^*=\frac{\beta_2 I^*}{N^*},$$
$$S_I^*=\frac{\Lambda_1 r_3+\delta\sigma I^*}{r_3\left(f_1^*+\mu\right)},\qquad S_E^*=\frac{\Lambda_2}{f_2^*+\mu},\qquad E^*=\frac{r_2 I^*}{\gamma},\qquad R^*=\frac{\sigma I^*}{r_3} \quad (16)$$
Therefore, the existence of E[e] in Equation (16) depends on ℜ[0], meaning that E[e] from Equation (2) exists if ℜ[0] > 1.
4. Equilibrium point stability analysis
The two equilibria of Equation (2) are shown in this subsection to have both local and global asymptotic stability. We employ the Jacobian matrices of system on Equation (2) at DFE and EE for local
stability and the Lyapunov function for the global stability of both equilibria to confirm this stability.
4.1. Local stability analyses
Theorem 4.1. The disease free-equilibrium point (E[0]), of Equation (2) corresponding to the considered model is locally asymptotically stable if ℜ[0] < 1 and not stable otherwise.
Proof. First, the Jacobian matrix of system (2) evaluated at E[0] is
$$J(E_0)=\begin{bmatrix} -\mu & 0 & 0 & -\frac{\beta_1\Lambda_1}{\Lambda_1+\Lambda_2} & \delta \\ 0 & -\mu & 0 & -\frac{\beta_2\Lambda_2}{\Lambda_1+\Lambda_2} & 0 \\ 0 & 0 & -r_1 & \frac{\beta_1\Lambda_1+\beta_2\Lambda_2}{\Lambda_1+\Lambda_2} & 0 \\ 0 & 0 & \gamma & -r_2 & 0 \\ 0 & 0 & 0 & \sigma & -r_3 \end{bmatrix} \quad (17)$$
The characteristic polynomial of Equation (17) becomes
$$\Psi(\lambda)=(\lambda+\mu)^2(\lambda+r_3)\left(\lambda^2+D_1\lambda+D_2\right) \quad (18)$$
The first three eigenvalues of Equation (18) are λ = −μ (a double root) and λ = −r[3], all negative. The remaining two eigenvalues are the roots of $\lambda^2+D_1\lambda+D_2=0$; by the Routh–Hurwitz criterion both have negative real parts provided $D_1>0$ and $D_2>0$, and these conditions hold whenever ℜ[0] < 1. Therefore, the DFE (E[0]) of Equation (2) is locally asymptotically stable (LAS) when ℜ[0] < 1.
Theorem 4.2. The disease endemic equilibrium point (E[e]), of Equation (2) is LAS in Ω if ℜ[0] > 1 and unstable otherwise.
Proof. To prove the local stability of E[e], first determine the Jacobian matrix J(E[e]) of system (2) at the endemic equilibrium, given as Equation (19):
$$J(E_e)=\begin{bmatrix} -(f_1^*+\mu) & 0 & 0 & -\frac{\beta_1 S_I^*}{N^*} & \delta \\ 0 & -(f_2^*+\mu) & 0 & -\frac{\beta_2 S_E^*}{N^*} & 0 \\ f_1^* & f_2^* & -r_1 & \frac{\beta_1 S_I^*+\beta_2 S_E^*}{N^*} & 0 \\ 0 & 0 & \gamma & -r_2 & 0 \\ 0 & 0 & 0 & \sigma & -r_3 \end{bmatrix} \quad (19)$$
The characteristic polynomial corresponding to Equation (19) is
$$\left(\lambda+(f_1^*+\mu)\right)\left(\lambda+(f_2^*+\mu)\right)(\lambda+r_1)(\lambda+r_2)(\lambda+r_3)=0 \quad (20)$$
The first three roots of Equation (20) are λ = −r[1] < 0, λ = −r[2] < 0, and λ = −r[3] < 0, and the remaining roots satisfy $\lambda^2+a_1\lambda+a_2=0$, where
$$a_1=f_1^*+f_2^*+2\mu \quad\text{and}\quad a_2=\left(f_1^*+\mu\right)\left(f_2^*+\mu\right)=f_1^* f_2^*+\mu\left(f_1^*+f_2^*\right)+\mu^2,$$
and $f_1^*, f_2^*$ are the forces of infection at the endemic equilibrium.
Since $\lambda^2+a_1\lambda+a_2$ has both roots with negative real parts if and only if $a_1, a_2>0$, and clearly $a_1, a_2>0$, the Routh–Hurwitz criteria imply that for ℜ[0] > 1 the endemic equilibrium (E[e]) is LAS.
4.2. Global stability analysis
In this section, we use LaSalle's invariant principle to analyse the global stability of both equilibria of Equation (2) by creating suitable Lyapunov functions.
Theorem 4.3. If ℜ[0] < 1, then the disease free-equilibrium (E[0]) of Equation (2) is GAS in Ω and unstable otherwise.
Proof. We first create a suitable Lyapunov function of the type
$$L(t)=k_1 E(t)+k_2 I(t) \quad (21)$$
where $k_i$, i = 1, 2 are positive real numbers to be chosen later. Differentiating Equation (21) along its trajectories with respect to t and simplifying yields
$$\frac{dL}{dt}=k_1\left(f_1 S_I+f_2 S_E-r_1 E\right)+k_2\left(\gamma E-r_2 I\right) \quad (22)$$
Now, choosing $k_1=\gamma$ and $k_2=r_1$, simplification of Equation (22) yields
$$\frac{dL}{dt}=\gamma\left(f_1 S_I+f_2 S_E\right)-r_1 r_2 I\le\left[\gamma\left(\frac{\beta_1\Lambda_1+\beta_2\Lambda_2}{\Lambda_1+\Lambda_2}\right)-r_1 r_2\right]I \quad (23)$$
Simplification and rearrangement of Equation (23) give
$$\frac{dL}{dt}\le -r_1 r_2\left(1-\Re_0^2\right)I \quad (24)$$
Thus, $\frac{dL}{dt}<0$ whenever ℜ[0] < 1. Additionally, $\frac{dL}{dt}=0$ if and only if E(t) = 0 and I(t) = 0. Hence, the largest compact invariant set $\left\{\left({S}_{I},{S}_{E},E,I,R\right)\in
\Omega :\frac{dL}{dt}=0\right\}$ is the singleton E[0], which is the disease-free equilibrium. Therefore, using LaSalle's invariant principle [32], we conclude that the point E[0] is globally
asymptotically stable in Ω if ℜ[0] < 1.
Theorem 4.4. The disease endemic equilibrium point (E[e]) of Equation (2) is GAS in the invariant region stated in Theorem 3.2 as Ω if ℜ[0] > 1.
Proof. To prove the global behavior of E[e], we systematically construct a Lyapunov function V following Legesse et al. [31]:
$$V(x_i)=\sum_{i=1}^{n}\left(x_i-x_i^*-x_i^*\ln\frac{x_i}{x_i^*}\right) \quad (25)$$
where $x_i$ represents the compartments in the model, i = 1, ..., 5, and $x_i^*$ is the endemic equilibrium point.
Then, differentiating V with respect to time t gives
$$\frac{dV}{dt}=\left(1-\frac{S_I^*}{S_I}\right)\frac{dS_I}{dt}+\left(1-\frac{S_E^*}{S_E}\right)\frac{dS_E}{dt}+\left(1-\frac{E^*}{E}\right)\frac{dE}{dt}+\left(1-\frac{I^*}{I}\right)\frac{dI}{dt}+\left(1-\frac{R^*}{R}\right)\frac{dR}{dt} \quad (26)$$
Next, substituting $\frac{d{S}_{I}}{dt},\frac{d{S}_{E}}{dt},\frac{dE}{dt},\frac{dI}{dt},\frac{dR}{dt}$ in Equation (26) using Equation (2) gives
$$\begin{aligned}\frac{dV}{dt}={}&\left(\frac{S_I-S_I^*}{S_I}\right)\left(\Lambda_1+\delta R-(f_1+\mu)(S_I-S_I^*)-(f_1+\mu)S_I^*\right)\\ &+\left(\frac{S_E-S_E^*}{S_E}\right)\left(\Lambda_2-(f_2+\mu)(S_E-S_E^*)-(f_2+\mu)S_E^*\right)\\ &+\left(\frac{E-E^*}{E}\right)\left(f_1 S_I+f_2 S_E-(\gamma+\mu)(E-E^*)-(\gamma+\mu)E^*\right)\\ &+\left(\frac{I-I^*}{I}\right)\left(\gamma E-(\sigma+\alpha+\mu)(I-I^*)-(\sigma+\alpha+\mu)I^*\right)\\ &+\left(\frac{R-R^*}{R}\right)\left(\sigma I-(\mu+\delta)(R-R^*)-(\mu+\delta)R^*\right)\end{aligned} \quad (27)$$
We can write $\frac{dV}{dt}=\Psi_1-\Psi_2$, where $\Psi_1$ collects the positive terms and $\Psi_2$ the negative terms of Equation (27). Thus, if $\Psi_1<\Psi_2$, then $\frac{dV}{dt}\le 0$; hence $\frac{dV}{dt}\le 0$ when ℜ[0] > 1. Clearly, $\frac{dV}{dt}=0$ if and only if $S_I=S_I^*$, $S_E=S_E^*$, $E=E^*$, $I=I^*$, and $R=R^*$. Therefore, the largest compact positive invariant set $\left\{(S_I,S_E,E,I,R)\in\Omega:\frac{dV}{dt}=0\right\}$ is the singleton E[e], the disease endemic equilibrium of Equation (2). By LaSalle's invariant principle, E[e] is GAS in the biologically feasible region when ℜ[0] > 1.
5. The proposed model under optimal control
This section focuses on applying optimal control techniques to the model in Equation (2). With these strategies, the disease in the community can be managed or reduced in a short amount of time. The pneumonia model is expanded to include the following two control variables:
u[1]: a prevention effort (e.g., a campaign) to prevent the spread of the disease among vulnerable people.
u[2]: a treatment effort to minimize infection or maximize recovery by treating infectious individuals.
After incorporating u[1] and u[2] in Equation (2), we obtain the following optimal control model Equation (28).
$$\begin{aligned}\frac{dS_I}{dt} &= \Lambda_1+\delta R-(1-u_1)f_1 S_I-\mu S_I \\ \frac{dS_E}{dt} &= \Lambda_2-(1-u_1)f_2 S_E-\mu S_E \\ \frac{dE}{dt} &= (1-u_1)(f_1 S_I+f_2 S_E)-(\gamma+\mu)E \\ \frac{dI}{dt} &= \gamma E-(\sigma+u_2)I-(\alpha+\mu)I \\ \frac{dR}{dt} &= (\sigma+u_2)I-(\mu+\delta)R \end{aligned} \quad (28)$$
To explore the optimal levels of the controls, the control set U is Lebesgue measurable and defined as U = {(u[1](t), u[2](t)) : 0 ≤ u[1] < 1, 0 ≤ u[2] < 1, 0 ≤ t ≤ T}, the set of admissible controls. Our goal is to find controls and associated states S[I], S[E], E, I, and R that minimize the objective functional J given below, subject to the epidemic system (28), while maintaining the lowest cost of control implementation:
$$J(u_1,u_2)=\min_{u_1,u_2}\int_0^{t_f}\left(b_1 E+b_2 I+\frac{1}{2}\sum_{i=1}^{2}w_i u_i^2\right)dt \quad (29)$$
subject to system (28) with initial conditions (3), where $b_1$ and $b_2$ are positive weight constants associated with the numbers of exposed and infected children, respectively, while $w_1$ and $w_2$ are positive constants representing the relative cost weights associated with the control measures $u_1$ and $u_2$. We assume costs are non-linear in nature; hence, the control variables in J appear in second-degree polynomial form [21, 23]. The main requirement is to reduce the numbers of exposed and infected children while maintaining a low cost. Thus, we seek optimal controls $(u_1^*,u_2^*)$ such that
$$J(u_1^*,u_2^*)=\min\left\{J(u_1,u_2):(u_1,u_2)\in U\right\}$$
where U = {(u[1], u[2]) : each u[i] is measurable with 0 ≤ u[i] < 1, i = 1, 2, for t ∈ [0, t[f]]}.
5.1. The Hamiltonian and optimality system
Here, applying Pontryagin's Maximum Principle [34], we can derive the necessary conditions that the optimal control solution must satisfy [35]. This principle converts the model Equations (28), (29) into the problem of minimizing a Hamiltonian H point-wise with respect to u[1] and u[2]. The Hamiltonian is defined as:
$$H=b_1 E+b_2 I+\frac{1}{2}w_1 u_1^2+\frac{1}{2}w_2 u_2^2+\lambda_1 g_1+\lambda_2 g_2+\lambda_3 g_3+\lambda_4 g_4+\lambda_5 g_5 \quad (30)$$
where $f(E,I,u_1,u_2,t)=b_1 E+b_2 I+\frac{1}{2}\sum_{i=1}^{2}w_i u_i^2$, the $g_i$ are the right-hand sides of system (28), and $\lambda_i$, i = 1, ..., 5 are the adjoint variable functions determined using Pontryagin's Maximum Principle [34]; see Swai et al. [23] for verification of the existence of the optimal control pairs.
Theorem 5.1. There exists adjoint variable λ[i], where i = 1, ..., 5 with transversality conditions λ[i](t[f]) = 0, i = 1, ..., 5 for an optimal control $\left({u}_{1}^{*},{u}_{2}^{*}\right)$ that
minimizes J(u[1], u[2]) such that:
where $X=(S_I,S_E,E,I,R)^T$ and $\lambda=(\lambda_1,\lambda_2,\lambda_3,\lambda_4,\lambda_5)^T$, with the transversality condition λ(T) = 0.
In a similar manner, we obtain the controls by solving $\frac{\partial H}{\partial u_i}=0$ at $u_i^*$, for i = 1, 2, in accordance with Pontryagin's [34] methodology. Solving $\frac{\partial H}{\partial u_2}=0$ gives the pointwise characterization
$$u_2^*=\begin{cases} u_2, & \text{if } 0<u_2<1 \\ 0, & \text{if } u_2\le 0 \\ 1, & \text{if } u_2\ge 1 \end{cases}$$
In compact notation, considering the bounds of the admissible control set, we have
$$u_1^*=\max\left\{0,\min\left\{1,\frac{\left[(\lambda_3-\lambda_1)\beta_1\Lambda_1+(\lambda_3-\lambda_2)\beta_2\Lambda_2\right]I}{w_1(\Lambda_1+\Lambda_2)}\right\}\right\},\qquad u_2^*=\max\left\{0,\min\left\{1,\frac{(\lambda_4-\lambda_5)I}{w_2}\right\}\right\} \quad (31)$$
The optimality system is obtained from the state system (28) together with the adjoint variables and the transversality condition in Theorem 5.1, including the characterized control set and initial conditions. Differentiating the Hamiltonian (30) with respect to each state variable (the extraction of the published equation is incomplete; the first three adjoint equations below are reconstructed from H) gives
$$\begin{aligned}\frac{d\lambda_1}{dt}&=(1-u_1)(\lambda_1-\lambda_3)\frac{\beta_1 I}{N}+\lambda_1\mu\\ \frac{d\lambda_2}{dt}&=(1-u_1)(\lambda_2-\lambda_3)\frac{\beta_2 I}{N}+\lambda_2\mu\\ \frac{d\lambda_3}{dt}&=-b_1+\lambda_3 r_1-\lambda_4\gamma\\ \frac{d\lambda_4}{dt}&=-b_2+(1-u_1)(\lambda_1-\lambda_3)\frac{\beta_1 S_I}{N}+(1-u_1)(\lambda_2-\lambda_3)\frac{\beta_2 S_E}{N}+\lambda_4(r_2+u_2)-\lambda_5(\sigma+u_2)\\ \frac{d\lambda_5}{dt}&=-\lambda_1\delta+\lambda_5(\mu+\delta)\end{aligned} \quad (32)$$
Therefore, using the optimality system (32), the optimal control can be calculated. The optimal problem attains a minimum at the controls $u_1^*$ and $u_2^*$, since the second derivatives of the Hamiltonian with respect to u[1] and u[2] (namely w[1] and w[2]) are positive.
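A schematic forward–backward sweep for this optimality system is sketched below. It is an illustration only: the adjoint right-hand sides follow the reconstructed Equation (32), the control updates follow Equation (31), and all parameter and weight values are placeholders rather than the paper's:

import numpy as np

# Placeholder parameters (not Table 1).
L1, L2, beta1, beta2 = 2.0, 5.0, 0.9, 0.3
mu, gamma, sigma, alpha, delta = 0.05, 0.2, 0.1, 0.05, 0.02
r1, r2, r3 = gamma + mu, sigma + alpha + mu, mu + delta
B1, B2, w1, w2 = 3.0, 3.0, 0.05, 0.03   # weights b_1, b_2, w_1, w_2

T, n = 100.0, 2000
dt = T / n
x = np.zeros((n + 1, 5)); lam = np.zeros((n + 1, 5))
x[0] = [40, 100, 50, 15, 1]
u1 = np.zeros(n + 1); u2 = np.zeros(n + 1)

def f(xk, u1k, u2k):
    SI, SE, E, I, R = xk
    N = xk.sum()
    f1, f2 = beta1 * I / N, beta2 * I / N
    return np.array([L1 + delta * R - (1 - u1k) * f1 * SI - mu * SI,
                     L2 - (1 - u1k) * f2 * SE - mu * SE,
                     (1 - u1k) * (f1 * SI + f2 * SE) - r1 * E,
                     gamma * E - (sigma + u2k) * I - (alpha + mu) * I,
                     (sigma + u2k) * I - r3 * R])

for _ in range(30):
    # Forward pass: state system (28), explicit Euler.
    for k in range(n):
        x[k + 1] = x[k] + dt * f(x[k], u1[k], u2[k])
    # Backward pass: adjoint system (32) with lam(T) = 0.
    lam[n] = 0.0
    for k in range(n, 0, -1):
        SI, SE, E, I, R = x[k]
        N = x[k].sum()
        l1, l2, l3, l4, l5 = lam[k]
        dlam = np.array([(1 - u1[k]) * (l1 - l3) * beta1 * I / N + l1 * mu,
                         (1 - u1[k]) * (l2 - l3) * beta2 * I / N + l2 * mu,
                         -B1 + l3 * r1 - l4 * gamma,
                         -B2 + (1 - u1[k]) * (l1 - l3) * beta1 * SI / N
                             + (1 - u1[k]) * (l2 - l3) * beta2 * SE / N
                             + l4 * (r2 + u2[k]) - l5 * (sigma + u2[k]),
                         -l1 * delta + l5 * r3])
        lam[k - 1] = lam[k] - dt * dlam
    # Control update from the characterization (31), with relaxation.
    I_t = x[:, 3]
    u1 = 0.5 * u1 + 0.5 * np.clip(((lam[:, 2] - lam[:, 0]) * beta1 * L1
                                   + (lam[:, 2] - lam[:, 1]) * beta2 * L2)
                                  * I_t / (w1 * (L1 + L2)), 0, 1)
    u2 = 0.5 * u2 + 0.5 * np.clip((lam[:, 3] - lam[:, 4]) * I_t / w2, 0, 1)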
6. Results and discussion
To analyse the dynamics of pneumonia disease with or without control measures, numerical simulations are performed on the suggested model and optimality system using the parameter values indicated in
Table 1. In addition, we assumed the initial population size to be S[I](0) = 40; S[E](0) = 100; E(0) = 50; I(0) = 15; and R(0) = 1 for the purpose of numerical simulation. The weight constant values
are chosen as b[1] = 3; b[2] = 3; w[1] = 0.05 and w[2] = 0.03. First, we simulate the pneumonia model for the case R[0] = 0.8513 < 1, which indicates that the pneumonia disease dies out from the
society. As a result, the pneumonia model's solution trajectory moves toward a disease-free equilibrium point. The disease-free equilibrium point is demonstrated to be locally asymptotically stable
as all the trajectories of the model converge to DFE, see Figure 2A. Next, we plotted the graphics for the case R[0] = 1.4232 > 1, which implies that the disease is endemic. In this case, the
solution curves are converging to the endemic equilibrium point, which verifies the linear stability of the EE point (see Figure 2B).
TABLE 1
FIGURE 2
Now, to extend the proposed model to optimal control, we focus on the parameter values and initial population which give R[0] = 1.4232 > 1. Since the disease is then still prevalent in society, adding control variables to the model is appropriate. Figures 3–6 demonstrate the impact of prevention and treatment on the dynamics of pneumonia.
FIGURE 3
Figure 3. (A) Dynamics of sub-populations for the DFE point. (B) The phase portrait for S[I](t) and S[E](t) vs. E(t).
The plot in Figure 3A illustrates that subpopulations converge to the DFE point, which indicates that pneumonia has been eliminated from the community. Moreover, it can be observed that the two
susceptible populations decrease while the exposed and infected children increase for a few years and decrease rapidly afterward to the DFE point. Figure 3B reveals that even if controls are applied,
non-exclusively breastfed children are more exposed to pneumonia than exclusively breastfed children. In general, from Figures 2A, 3A, we can easily see the impact of control variables on the
transmission dynamics of pneumonia.
6.1. Contingency plans
We utilized the following scenarios to assess how each control measure would affect the dynamics of pneumonia spread:
(i) Optimal use of prevention (u[1] only).
(ii) Optimal use of treatment (u[2] only).
(iii) Optimal use of prevention (u[1]) and treatment (u[2]) intervention.
6.1.1. Scenario A: control of pneumonia with prevention only
This scenario uses only one control measure, prevention (u[1]); the other control is set to zero. As clearly observed in Figures 4A, B, with the optimal use of a prevention strategy the two susceptible populations increase, and compared with the prevention-free case, fewer individuals are susceptible to the disease. Moreover, the total number of exposed humans decreases more with control than without, as depicted in Figure 4C. Since the number of infections averted from pneumonia by this strategy alone is small, additional intervention is required.
FIGURE 4
Figure 4. Simulation of the optimal model showing the effect of prevention on (A) non-exclusively breastfed individuals, (B) exclusively breastfed individuals, and (C) exposed individuals.
6.1.2. Scenario B: control of pneumonia with treatment only
Scenario B is shown in Figures 5A, B, which illustrate that treatment has a significant impact in reducing the number of children infected with pneumonia after 14 years. It can be noted that the number of infected individuals decreases only slightly at first, with the treatment becoming effective after some time; hence, more interventions are needed to eliminate the disease from the community.
FIGURE 5
Figure 5. Simulations showing (A) the optimal use of treatment only (u2) and (B) its control profile.
6.1.3. Scenario C: optimal use of the two controls
This strategy demonstrates the effect of the optimal use of prevention for susceptible humans and treatment for infectious humans to decrease the numbers of exposed and infected individuals in the society. Additionally, this intervention reduces the spread of the pneumonia dynamics governed by model (2) in the population. The numbers of exposed individuals and infectious individuals decrease more
rapidly when the two controls are in use compared with when no control or one control is used, as depicted in Figures 6A, B. Figure 6F reveals that the optimal use of prevention u[1](t) is kept at its maximum of 100% throughout the intervention period, meaning maximum prevention is applied to control pneumonia. Optimal use of treatment u[2](t) is kept at the maximum level for 48 days before dropping to its minimum at the final intervention time. Figures 6C–E reveal, respectively, that the sizes of S[I], S[E], and R increase compared with the no-control and one-control interventions. This confirms that the maximum number of children's pneumonia cases is averted through the intervention of the two controls.
FIGURE 6
Figure 6. Simulations demonstrating optimal use of prevention (u[1]) and treatment (u[2]) on (A) E (B) I (C) S[E] (D) S[I] (E) R and (F) its control profile.
7. Cost-effectiveness analysis
In this section, we present a cost-effectiveness analysis, which is used to weigh the benefits of a health intervention or strategy (for instance, treatment and prevention) against the strategy's costs [22]. The number of infections averted is given as the difference between the total number of infectious individuals without control and the total with control. Using the parameter values in Table 1 and the initial conditions of the state variables with the chosen weight constants, the ICER is determined for each intervention: prevention, treatment, and a combination of both. The prevention strategy includes vaccination (immunization), personal hygiene, avoiding exposure to people who are ill, covering a cough, and adequate nutrition (scenario A), while the treatment intervention involves antibiotics that stop the infection from progressing (these medicines are used to treat bacterial pneumonia), hospital treatment (for more severe cases), rest, etc. (scenario B). The combination of prevention and treatment constitutes scenario C. The comparison is obtained by balancing the changes between the costs and health outcomes of these intervention strategies, usually via the incremental cost-effectiveness ratio (ICER), which is described as:
$$\text{ICER}=\frac{\text{change in total costs between strategies}}{\text{change in health benefits between strategies}} \quad (33)$$
where the numerator of the ICER represents the difference in cost-benefit and the denominator measures the change in health benefit.
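A small sketch of the ICER bookkeeping in Equation (33) follows; the cost and infections-averted figures are invented for illustration and are not the values in Table 2:

# Strategies listed as (name, total cost, infections averted); values are made up.
strategies = [("A: prevention", 120.0, 800.0),
              ("B: treatment", 150.0, 830.0),
              ("C: both", 200.0, 900.0)]
strategies.sort(key=lambda s: s[2])   # ascending order of infections averted
prev_cost, prev_eff = 0.0, 0.0
for name, cost, eff in strategies:
    icer = (cost - prev_cost) / (eff - prev_eff)
    print(f"{name}: ICER = {icer:.4f}")
    prev_cost, prev_eff = cost, eff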
According to the simulation outcomes of the optimality system, the control scenarios are then ranked in ascending order of total number of infections averted, i.e., prevention of infections in
susceptible children using vaccines, personal hygiene and others (strategy A), treatment of infected individuals with antibiotics (strategy B), and a combination of prevention and treatment (strategy
C), as shown in Table 2.
TABLE 2
The ICER for each pair of competing strategies is obtained through this computation.
Comparing strategies A and B incrementally, the ICER calculation shows that ICER(B) > ICER(A): strategy A saves 0.7827 more than strategy B, making strategy B the more expensive option per infection averted. Hence, we excluded strategy B from the set of competing strategies and compared strategies A and C, as depicted in Table 3. From ICER(A) and ICER(C) in Table 3, strategy C costs 2.741 more than strategy A per infection averted, so we exclude strategy C as the more expensive option. Therefore, we conclude that strategy A is the cheapest of all compared strategies, meaning it is the most cost-effective pneumonia control intervention strategy.
TABLE 3
8. Conclusion
This study is concerned with the mathematical analysis of a pneumonia transmission model in the presence of naturally acquired immunity from effective exclusive breastfeeding and the lack of such immunity due to the loss of exclusive breastfeeding. This work also shows that if the threshold number is smaller than unity, then the pneumonia-free equilibrium point is both locally and globally asymptotically stable, which means pneumonia is wiped out of the community. If the threshold number is greater than unity, then an endemic equilibrium of the model occurs, which shows the persistence of the disease in the population.
To control pneumonia spread dynamics in a population, multiple time-dependent control variables, including prevention using vaccines, personal hygiene, etc., treatment of infectious humans using
antibiotics, hospital treatment, and rest are considered. An analysis of the optimal control model is carried out theoretically, and the model is simulated to determine the effects of combining the
two control intervention strategies on the spread dynamics of pneumonia in the community. It is shown that the number of infected children is minimized through prevention and treatment intervention
strategies. Throughout this work, based on the results in Table 3, we recommend the prevention of susceptible children from being exposed to the diseases using vaccination, public health education,
etc., to reduce new exposed cases and the number of infected children due to pneumonia in our society with the least cost.
In general, the earlier study on the dynamics of bimodal pneumonia [31] considered cleanliness as a method of preventing pneumonia in children under the age of five. In the present study, however, we extended the bimodal pneumonia model to optimal control using two time-dependent control measures, namely prevention and treatment. In addition, we analyzed the cost-effectiveness of the intervention strategies. The analysis reveals that prevention strategies are the most cost-effective way of eradicating pneumonia. Therefore, the present study is more effective and cost-efficient in preventing pneumonia transmission than the previous study.
Data availability statement
The original contributions presented in the study are included in the article/supplementary material, further inquiries can be directed to the corresponding author.
Author contributions
FL conceptualized, planned, and prepared the article as well as the figures. All authors contributed to the review, revision, and literature search for the final manuscript. The article's submission was reviewed and approved by all authors.
Conflict of interest
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Publisher's note
All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the
reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
1. Elyas L, Mekasha A, Admasie A, Assefa E. Exclusive breastfeeding practice and associated factors among mothers attending private pediatric and child clinics, Addis Ababa, Ethiopia: a
cross-sectional study. Int J Pediatr. (2017) 2017:8546192. doi: 10.1155/2017/8546192
2. Abdulla F, Hossain MM, Karimuzzaman M, Ali M, Rahman A. Likelihood of infectious diseases due to lack of exclusive breastfeeding among infants in Bangladesh. PLoS ONE. (2022) 17:e0263890. doi:
3. Morty RE. World health day observances in November 2021: advocating for adult and pediatric pneumonia, preterm birth, and chronic obstructive pulmonary disease. Am J Physiol Lung Cell Mol Physiol.
(2021) 321:L954L957. doi: 10.1152/ajplung.00423.2021
4. Otoo D, Opoku P, Charles S, Kingsley AP. Deterministic epidemic model for (SVCSyCAsyIR) pneumonia dynamics, with vaccination and temporal immunity. Infect Dis Model. (2020) 5:42–60. doi: 10.1016/
5. Kizito M, Tumwiine J, A. mathematical model of treatment and vaccination interventions of pneumococcal pneumonia infection dynamics. J Appl Mathem. (2018) 2018:2539465. doi: 10.1155/2018/2539465
6. Alebel A, Tesma C, Temesgen B, Ferede A, Kibret GD. Exclusive breastfeeding practice in Ethiopia and its association with antenatal care and institutional delivery: a systematic review and
meta-analysis. Int Breastfeed J. (2018) 13:1–12. doi: 10.1186/s13006-018-0173-x
7. Rajeshwari K, Bang A, Chaturvedi P, Kumar V, Yadav B, Bharadva K, et al. Infant and young child feeding guidelines: 2010. Indian Pediatr. (2010) 47:995–1004.
8. Arage G, Gedamu H. Exclusive breastfeeding practice and its associated factors among mothers of infants less than six months of age in Debre Tabor town, Northwest Ethiopia: a cross-sectional
study. Adv Public Health. (2016) 2016:3426249. doi: 10.1155/2016/3426249
9. Turin CG, Ochoa TJ. The role of maternal breast milk in preventing infantile diarrhea in the developing world. Curr Trop Med Rep. (2014) 1:97–105. doi: 10.1007/s40475-014-0015-x
10. Tewabe T, Mandesh A, Gualu T, Alem G, Mekuria G, Zeleke H. Exclusive breastfeeding practice and associated factors among mothers in Motta town, East Gojjam zone, Amhara Regional State, Ethiopia,
2015: a cross-sectional study. Int Breastfeed J. (2016) 12:1–7. doi: 10.1186/s13006-017-0103-3
11. Mgongo M, Mosha MV, Uriyo JG, Msuya SE, Stray-Pedersen B. Prevalence and predictors of exclusive breastfeeding among women in Kilimanjaro region, Northern Tanzania: a population based
cross-sectional study. Int Breastfeed J. (2013) 8:1–8. doi: 10.1186/1746-4358-8-12
12. Mogre V, Dery M, Gaa PK. Knowledge, attitudes and determinants of exclusive breastfeeding practice among Ghanaian rural lactating mothers. Int Breastfeed J. (2016) 11:1–8. doi: 10.1186/
13. Ajetunmobi OM, Whyte B, Chalmers J, Tappin DM, Wolfson L, Fleming M, et al. Breastfeeding is associated with reduced childhood hospitalization: evidence from a Scottish Birth Cohort (1997-2009).
J Pediatr. (2015) 166:620–5. doi: 10.1016/j.jpeds.2014.11.013
14. Martin CR, Ling PR, Blackburn GL. Review of infant feeding: key features of breast milk and infant formula. Nutrients. (2016) 8:279. doi: 10.3390/nu8050279
15. Arifeen S, Black RE, Antelman G, Baqui A, Caulfield L, Becker S. Exclusive breastfeeding reduces acute respiratory infection and diarrhea deaths among infants in Dhaka slums. Pediatrics. (2001)
108:e67–e67. doi: 10.1542/peds.108.4.e67
16. Sefene A, Birhanu D, Awoke W, Taye T. Determinants of exclusive breastfeeding practice among mothers of children age less than 6 month in Bahir Dar city administration, Northwest Ethiopia; a
community based cross-sectional survey. Sci J Clin Med. (2013) 2:153–9. doi: 10.11648/j.sjcm.20130206.12
17. Collective GB UNICEF. Nurturing the health and wealth of nations: the investment case for breastfeeding. Technical Report, World Health Organization. (2017).
18. Bhandari N, Chowdhury R. Infant and young child feeding. Proc Indian Nat Sci Acad. (2016) 82:1507–17. doi: 10.16943/ptinsa/2016/48883
19. Organization WH. Tracking universal health coverage: first global monitoring report. World Health Organization. (2015).
20. Sajjad S, Roshan R, Tanvir S. Impact of maternal education and source of knowledge on breast feeding practices in Rawalpindi city. MOJCRR. (2018) 1:212–42. doi: 10.15406/mojcrr.2018.01.00035
21. Tilahun GT, Makinde OD, Malonza D. Modelling and optimal control of pneumonia disease with cost-effective strategies. J Biol Dyn. (2017) 11:400–26. doi: 10.1080/17513758.2017.1337245
22. Agusto FB. Optimal isolation control strategies and cost-effectiveness analysis of a two-strain avian influenza model. Biosystems. (2013) 113:155–64. doi: 10.1016/j.biosystems.2013.06.004
23. Swai MC, Shaban N, Marijani T. Optimal control in two strain pneumonia transmission dynamics. J Appl Mathem. (2021) 2021:8835918. doi: 10.1155/2021/8835918
24. Tessema FS, Bole BK, Rao PK. Optimal control strategies and cost effectiveness analysis of Pneumonia disease with drug resistance. Int J Nonl Analy Appl. (2023) 14:903–17. doi: 10.22075/
25. Wu Y, Mascaro S, Bhuiyan M, Fathima P, Mace AO, Nicol MP, et al. Predicting the causative pathogen among children with pneumonia using a causal Bayesian network. PLoS Comput Biol. (2023)
19:e1010967. doi: 10.1371/journal.pcbi.1010967
26. Kotola BS, Mekonnen TT. Mathematical model analysis and numerical simulation for codynamics of meningitis and pneumonia infection with intervention. Sci Rep. (2022) 12:1–22. doi: 10.1038/
27. Gweryina RI, Madubueze CE, Bajiya VP, Esla FE. Modeling and analysis of tuberculosis and pneumonia co-infection dynamics with cost-effective strategies. Results Control Optimiz. (2023) 10:100210.
doi: 10.1016/j.rico.2023.100210
28. Naveed M, Baleanu D, Raza A, Rafiq M, Soori AH, Mohsin M. Modeling the transmission dynamics of delayed pneumonia-like diseases with a sensitivity of parameters. Adv Differ Equat. (2021)
2021:1–19. doi: 10.1186/s13662-021-03618-z
29. Kassa SM, Njagarah JB, Terefe YA. Analysis of the mitigation strategies for COVID-19: from mathematical modelling perspective. Chaos, Solitons Fractals. (2020) 138:109968. doi: 10.1016/
30. Rafiq M, Ali J, Riaz MB, Awrejcewicz J. Numerical analysis of a bi-modal COVID-19 sitr model. Alexandria Eng J. (2022) 61:227–35. doi: 10.1016/j.aej.2021.04.102
31. Legesse FM, Rao KP, Keno TD. Mathematical Modeling of a Bimodal Pneumonia Epidemic with Non-breastfeeding Class. Appl Math. (2023) 17:95–107. doi: 10.18576/amis/170111
32. Dano LB, Rao KP, Keno TD. Modeling the combined effect of hepatitis b infection and heavy alcohol consumption on the progression dynamics of liver cirrhosis. J Mathematics. (2022) 2022:6936396.
doi: 10.1155/2022/6936396
33. Otieno O, Joseph M, John O. “Mathematical Model for Pneumonia Dynamics among Children,” in The 2012 southern Africa mathematical sciences association conference (SAMSA 2012). (2012).
34. Pontryagin LS. Mathematical Theory of Optimal Processes. London: CRC press. (1987).
35. Lenhart S, Workman JT. Optimal Control Applied to Biological Models. New York: Chapman and Hall/CRC. (2007). doi: 10.1201/9781420011418
Keywords: inclusive and exclusive, cost-effectiveness, pneumonia, optimal control, S[1]S[2]EIR model, ICER, breastfeeding
Citation: Legesse FM, Rao KP and Keno TD (2023) Cost effectiveness and optimal control analysis for bimodal pneumonia dynamics with the effect of children's breastfeeding. Front. Appl. Math. Stat.
9:1224891. doi: 10.3389/fams.2023.1224891
Received: 18 May 2023; Accepted: 02 August 2023;
Published: 31 August 2023.
Copyright © 2023 Legesse, Rao and Keno. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other
forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice.
No use, distribution or reproduction is permitted which does not comply with these terms.
*Correspondence: Fekadu Mosisa Legesse, fekadumosisa22@gmail.com
Sparse Polynomial Arithmetic 3: Multiplication and Division
In our previous article we described a packed representation for sparse polynomials that is designed for scalability and high performance. The expand and divide commands in Maple 14 use this
representation internally to multiply and divide polynomials with integer coefficients, converting to and from Maple's generic data structure described here. In this post I want to show you how these
algorithms work and why they are fast. It's a critical stepping stone for our next topic, which is parallelization.
Consider the problem of multiplying two sparse polynomials f = f[1] + f[2] + ... + f[n] and g = g[1] + g[2] + ... + g[m] in distributed form. In general we must compute all products f[i] ⋅ g[j] and
combine like terms to produce the result. This typically involves sorting, although a hash table could be used instead.
The problem with sorting all the f[i] ⋅ g[j] is that memory is slow compared to cache. If we generate a large dataset and sort it we will inevitably be working in memory, not in cache. Likewise, if we insert all the terms into a hash table and too many of them are distinct, we will also run up against the memory bottleneck. Large hash tables are especially slow because of their inherent random access.
The challenge then is to combine all the terms f[i] ⋅ g[j] in cache and write only the final result to main memory. This is accomplished by an algorithm of Johnson (1974), sketched below. Johnson's algorithm requires f and g to be sorted. It uses a binary heap to merge the largest outstanding f[i] ⋅ g[j] in each iteration, producing a new term of the result each time. This requires space for only #f monomials and pointers into g.
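The following Python sketch illustrates the heap-based merge (an illustration of the idea only, not Maple's C implementation), with a polynomial stored as a list of (exponent, coefficient) pairs sorted by descending exponent:

import heapq

def mul(f, g):
    result = []
    # One heap entry per term of f: (-exponent, i, j) stands for f[i]*g[j].
    heap = [(-(f[i][0] + g[0][0]), i, 0) for i in range(len(f))]
    heapq.heapify(heap)
    while heap:
        e = -heap[0][0]
        c = 0
        # Combine all like terms with the current largest exponent.
        while heap and -heap[0][0] == e:
            _, i, j = heapq.heappop(heap)
            c += f[i][1] * g[j][1]
            if j + 1 < len(g):   # advance this row's pointer into g
                heapq.heappush(heap, (-(f[i][0] + g[j + 1][0]), i, j + 1))
        if c != 0:
            result.append((e, c))
    return result

# (x^2 + 2x + 1) * (x - 1) = x^3 + x^2 - x - 1
print(mul([(2, 1), (1, 2), (0, 1)], [(1, 1), (0, -1)]))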
This order of magnitude reduction in "working storage" means that Johnson's algorithm will almost always run in cache. Another useful property is that it generates terms of the product one at a time
in descending order, so we can divide polynomials with no intermediate result. Recall that the classical algorithm to divide f by g creates an intermediate polynomial p := f, computes a new term of
the quotient q[i] := lt(p)/lt(g), and subtracts p := p - q[i]g in each iteration. If the division is sparse then the polynomial p may be large and once again we are working in memory. There is also a
complexity problem: since each merge is O(#p) the resulting algorithm can be O(#q#g).
Johnson's division algorithm solves these problems by using a heap to merge Σ q[i] g. This requires only O(#q) working storage instead of O(#f + #q #g), and produces a complexity of O(#f + #q #g log
#q), which is optimal for #q < #g. Our work has produced a better complexity of O(#f + #q #g log min(#q,#g)), which is the cost to multiply the quotient and the divisor and subtract their product
from the dividend. It is this algorithm that has been added to Maple.
The new code is substantially faster when the divisor is small and the quotient is large, as is often the case for polynomial gcd problems where we compute g = gcd(a,b) and divide a/g and b/g. It is
also much faster when the problem is sparse. The division below runs about 250 times faster in Maple 14.
f := randpoly(x, degree=10^5, terms=1000):
g := randpoly(x, degree=10^5, terms=1000):
p := expand(f*g):
time(divide(p, f));
For more details see our paper on sparse polynomial division.
Linear Pair of Angles—Definition, Axiom, Examples - Grade Potential Santa Monica, CA
Linear Pair of AnglesDefinition, Axiom, Examples
The linear pair of angles is an important concept in geometry. With so many real-life uses, you'd be surprised how relevant this figure can be. Even if you believe it has no relevance to your life, we all must learn the ideas to nail those examinations in school.
To save you time and make this info readily accessible, here is a preliminary insight into the properties of a linear pair of angles, with images and examples to assist with your personal study sessions. We will also talk about some real-life and geometric applications.
What Is a Linear Pair of Angles?
Linearity, angles, and intersections are concepts that will keep coming up as you progress to more complex theorems and proofs in geometry. We will answer this question with a simple definition.
A linear pair of angles is the name given to two angles that are positioned on a straight line and whose angle measures sum to 180 degrees.
To put it simply, linear pairs of angles are two angles that lie on the same line and combine to form a straight line. The angles in a linear pair always form a straight angle, equal to 180 degrees.
It is essential to bear in mind that linear pairs are always adjacent angles. They share a common vertex and a common arm. This means they always lie on a straight line and are always supplementary angles.
It is important to make clear that, even though linear pairs are always adjacent angles, adjacent angles are not always linear pairs.
The Linear Pair Axiom
With the definition clarified, we will study the two axioms closely so you can completely grasp every example thrown at you.
Let's start with the definition of an axiom: a mathematical postulate or hypothesis that is accepted without proof because it is considered clear and self-evident. A linear pair of angles has two axioms connected with it.
The first axiom states that if a ray stands on a line, the adjacent angles formed create a straight angle; that is, they form a linear pair.
The second axiom states that if two angles form a linear pair, then the non-common arms of the two angles make a straight angle between them, i.e., they lie on a straight line.
Examples of Linear Pairs of Angles
To visualize these axioms better, here are some figure examples with their respective explanations.
Example One
As we can see in this instance, we have two angles that are adjacent to each other. As the figure shows, the adjacent angles form a linear pair because their measures total 180 degrees. They are supplementary because of that sum, and adjacent because they share a side and a common vertex.
Angle A: 75 degrees
Angle B: 105 degrees
Sum of Angles A and B: 75 + 105 = 180
Example Two
Here, two lines intersect, producing four angles. Not every pair of these angles forms a linear pair, but each angle and the one adjacent to it do.
∠A: 30 degrees
∠B: 150 degrees
∠C: 30 degrees
∠D: 150 degrees
In this example, the linear pairs are:
∠A and ∠B
∠B and ∠C
∠C and ∠D
∠D and ∠A
Example Three
This instance presents an intersection of three lines. Let's take note of the axiom and characteristics of linear pairs.
∠A: 150 degrees
∠B: 50 degrees
∠C: 160 degrees
None of the angle combinations sums to 180 degrees. As a consequence, we can conclude that this figure contains no linear pair unless we extend one of the lines.
Applications of Linear Pair of Angles
Now that we have learned what linear pairs are and have seen some examples, let's look at how this concept is used in geometry and the real world.
In Real-World Scenarios
There are several applications of linear pairs of angles in the real world. One example is architects, who apply these axioms in their daily work to determine whether two lines meet in a straight angle.
Construction and building professionals also rely on this subject to make their work easier. They use linear pairs of angles to ensure that two adjoining walls form a 90-degree angle with the ground.
Engineers also use linear pairs of angles regularly, for example when calculating the loads on beams and trusses.
In Geometry
Linear pairs of angles also play a role in geometry proofs. A common proof that uses linear pairs is the alternate interior angles theorem, which states that if two parallel lines are intersected by a transversal line, the alternate interior angles formed are congruent.
The proof that vertical angles are equal also depends on linear pairs of angles: each pair of adjacent angles is supplementary and sums to 180 degrees, so the opposite (vertical) angles must always be equal to each other. Because of these two rules, you only need to measure one angle to determine the measures of the rest, as the worked example below shows.
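For instance, with the numbers from Example Two above, the two linear pairs that contain ∠B pin down the vertical angle ∠C:
∠A + ∠B = 30 + 150 = 180 degrees (linear pair)
∠B + ∠C = 150 + 30 = 180 degrees (linear pair)
Since both sums equal 180 degrees, ∠A = 180 − ∠B = ∠C, which is why measuring just one angle at the intersection determines all the others.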
Linear pairs are also used in more advanced settings, such as working out the angles in polygons. It's important to grasp the fundamentals of linear pairs so you are prepared for more advanced geometry.
As we have seen, linear pairs of angles are a relatively simple concept with quite a few engaging uses. Next time you're out and about, see if you can spot some linear pairs! And if you're taking a geometry class, note how linear pairs can be helpful in proofs.
Improve Your Geometry Skills with Grade Potential
Geometry is fun and useful, especially if you are interested in architecture or construction.
That said, if you're having difficulty understanding linear pairs of angles (or any other topic in geometry), consider signing up for a tutoring session with Grade Potential. One of our expert teachers will help you understand the material and ace your next test.
B-spline Surfaces: Important Properties
Several important properties of B-spline surfaces are listed here. These properties can be proved easily by applying the same techniques used for Bézier curves. Please compare these important
properties with those of B-spline curves'. Please recall that the equation of a B-spline surface is the following:

p(u,v) = Σ_{i=0}^{m} Σ_{j=0}^{n} N[i,p](u) N[j,q](v) p[i,j]

where the degrees in the u- and v-directions are p and q, respectively, and there are m+1 rows and n+1 columns of control points.
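As an illustration (not part of the original notes; the function names and the plain recursive evaluation are our assumptions), the following Python sketch evaluates a surface point directly from this double sum, computing each N[i,p](u) with the standard Cox-de Boor recursion:

def bspline_basis(i, p, u, U):
    # Cox-de Boor recursion for N[i,p](u) on knot vector U.
    # Note: because of the half-open intervals, u at the very end of the
    # domain needs special-casing in production code.
    if p == 0:
        return 1.0 if U[i] <= u < U[i + 1] else 0.0
    left = right = 0.0
    if U[i + p] != U[i]:
        left = (u - U[i]) / (U[i + p] - U[i]) * bspline_basis(i, p - 1, u, U)
    if U[i + p + 1] != U[i + 1]:
        right = ((U[i + p + 1] - u) / (U[i + p + 1] - U[i + 1])
                 * bspline_basis(i + 1, p - 1, u, U))
    return left + right

def surface_point(P, p, q, U, V, u, v):
    # P: (m+1) x (n+1) grid of control points (tuples of coordinates);
    # U must contain m+p+2 knots and V must contain n+q+2 knots.
    m, n = len(P) - 1, len(P[0]) - 1
    Nu = [bspline_basis(i, p, u, U) for i in range(m + 1)]
    Nv = [bspline_basis(j, q, v, V) for j in range(n + 1)]
    dim = len(P[0][0])
    point = [0.0] * dim
    for i in range(m + 1):
        if Nu[i] == 0.0:
            continue              # local support: at most p+1 rows contribute
        for j in range(n + 1):
            w = Nu[i] * Nv[j]
            if w != 0.0:
                for d in range(dim):
                    point[d] += w * P[i][j][d]
    return point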
• Nonnegativity: N[i,p](u) N[j,q](v) is nonnegative for all p, q, i, j and u and v in the range of 0 and 1.
This is obvious.
• Partition of Unity: The sum of all N[i,p](u) N[j,q](v) is 1 for all u and v in the range of 0 and 1.
More precisely, this means for any pair of u and v in the range of 0 and 1, the following holds:

Σ_{i=0}^{m} Σ_{j=0}^{n} N[i,p](u) N[j,q](v) = 1
• Strong Convex Hull Property: if (u,v) is in [u[i],u[i+1]) x [v[j],v[j+1]), then p(u,v) lies in the convex hull defined by control points p[h,k], where i-p <= h <= i and j-q <= k <= j.
This strong convex hull property for B-spline surfaces follows directly from the strong convex hull property for B-spline curves. For the u-direction, if u is in [u[i],u[i+1]), then there are at
most p+1 non-zero basis functions, namely, N[i,p](u), N[i-1,p](u), ..., and N[i-p,p](u). Thus, only the control points on row i-p to row i have non-zero basis functions in the u-direction.
Similarly, if v is in [v[j],v[j+1]), there are at most q+1 non-zero basis functions on this knot span, namely N[j,q](v), N[j-1,q](v), ..., and N[j-q,q](v). Thus, only the control points on column
j-q to column j have non-zero basis functions in the v-direction. Combining these two facts, only the control points in the range of row i-p to row i and column j-q to column j have non-zero
basis functions. Since these basis functions are nonnegative and their sum is one (i.e., the partition of unity property), p(u,v) lies in the convex hull defined by these control points.
As a result, the surface patch defined on rectangle [u[i],u[i+1]) x [v[j],v[j+1]) lies completely in the same convex hull.
• Local Modification Scheme: N[i,p](u)N[j,q](v) is zero if (u,v) is outside of the rectangle [u[i],u[i+p+1]) x [v[j],v[j+q+1])
From the local modification scheme property, we know that in the u-direction N[i,p](u) is non-zero on [u[i],u[i+p+1]) and zero elsewhere. The local modification scheme property of B-spline
surfaces follows directly from the curve case. If control point p[3,2] is moved to a new location, the following figures show that only the neighboring area on the surface of the moved control
point changes shape and elsewhere is unchanged.
• p(u,v) is C^(p-s) (resp., C^(q-t)) continuous in the u (resp., v) direction if u (resp., v) is a knot of multiplicity s (resp., t).
• Affine Invariance
This means that to apply an affine transformation to a B-spline surface one can apply the transformation to all control points and the surface defined by the transformed control points is
identical to the one obtained by applying the same transformation to the surface's equation.
• Variation Diminishing Property:
No such thing exists for surfaces.
• If m = p, n = q, and U = {0, 0, . . . , 0, 1, 1, . . . , 1}, then a B-spline surface becomes a Bézier surface.
Election-Attack Complexity for More Natural Models
Election-Attack Complexity for More Natural Models
Zack Fitzsimmons
A dissertation submitted in partial fulfillment of the
requirements for the degree of
Doctor of Philosophy
in Computing and Information Sciences
Ph.D. Program in Computing and Information Sciences
B. Thomas Golisano College of Computing and
Information Sciences
Rochester Institute of Technology
Rochester, New York
Election-Attack Complexity for More Natural Models
Zack Fitzsimmons
Committee Approval:
We, the undersigned committee members, certify that we have advised and/or supervised the candidate on the work described in this dissertation. We further certify that we have reviewed the
dissertation manuscript and approve it in partial fulfillment of the requirements of the degree of Doctor of Philosophy in Computing and Information Sciences.
Dr. Edith Hemaspaandra, Dissertation Advisor Date
Dr. Ivona Bezáková, Dissertation Committee Member Date
Dr. Lane A. Hemaspaandra, Dissertation Committee Member Date
Dr. Robert Parody, Dissertation Committee Member Date
Dr. Richard Zanibbi, Dissertation Committee Member Date
Dr. James Vallino, Dissertation Chair Date
Certified by:
Election-Attack Complexity for More Natural Models
Zack Fitzsimmons

Abstract
Elections are arguably the best way that a group of agents with preferences over a set of choices can reach a decision. This can include political domains, as well as multiagent systems in
artificial-intelligence settings. It is well-known that every reasonable election system is manipulable, but determining whether such a manipulation exists may be computationally infeasible. We build
on an exciting line of research that considers the complexity of election-attack problems, which include voters misrepresenting their preferences (manipulation) and attacks on the structure of the
election itself (control). We must properly model such attacks and the preferences of the electorate to give us insight into the difficulty of election attacks in natural settings. This includes models
for how the voters can state their preferences, their structure, and new models for the election attack itself.
We study several different natural models on the structure of the voters. In the computational study of election attacks it is generally assumed that voters strictly rank all of the candidates from
most to least preferred. We consider the very natural model where voters are able to cast votes with ties, and the model where they additionally have a single-peaked structure. Specifically, we
explore how voters with varying amounts of ties and structure in their preferences affect the computational complexity of different election attacks and the complexity of determining whether a given
electorate is single-peaked.
For the representation of the voters, we consider how representing the voters succinctly affects the complexity of election attacks and discuss how approaches for the nonsuccinct case can be adapted.
Control and manipulation are two of the most commonly studied election-attack problems. We introduce a model of electoral control in the setting where some of the voters act strategically (i.e., are
manipulators), and consider both the case where the agent controlling the election and the manipulators share a goal, and the case where they have competing goals.
Acknowledgments

First and foremost I would like to thank my advisor Edith Hemaspaandra, who has been the source of endless advice, encouragement, and patience. Over the years she has become not just an advisor, but
a good friend. She has shaped me into the academic that I am today, and from her I have gained a deep appreciation for the structure of hard problems.
I would like to thank Lane A. Hemaspaandra for his guidance and helpful insight on research and on writing. I have really enjoyed collaborating on our joint papers.
Thank you to Pengcheng Shi for all of his valuable advice, and for helping to guide me through my graduate studies.
Thank you to Ivona Bezáková, Robert Parody, and Richard Zanibbi for serving on my dissertation committee and for all of the discussions that have helped me in my research and to improve this dissertation.
I would like to thank the Computer Science Department at the Rochester Institute of Technology for the opportunity to teach during my graduate studies, which helped me to develop as an educator.
For all of our interesting discussions that have made my time at RIT a great experience, I would also like to thank Zack Butler, Hadi Hosseini, Peizhao Hu, Matthew Le, Erika Mesh, David Narváez, Stanisław Radziszowski, and Carlos Rivero.
For the many miles we’ve both traveled, by train and by car, thank you to Kristen Perrella for all of her love and support throughout my graduate studies.
And finally, I would like to thank my parents Jack and Linda, my brother John, and my sister Heather, for everything they have done for me, and for always being available for advice, encouragement,
and support.
Contents

1 Introduction
  1.1 Background
  1.2 Computational Study of Election-Attack Problems
  1.3 More Natural Models
  1.4 List of Contributions
  1.5 List of Publications
2 Preliminaries
  2.1 Elections and Preferences
  2.2 Election Systems
    2.2.1 Scoring Rules
    2.2.2 Extensions for Votes with Ties
    2.2.3 Pairwise Rules
    2.2.4 Approval Voting
  2.3 Manipulative Attacks
    2.3.1 Manipulation
    2.3.2 Control
    2.3.3 Bribery
  2.4 Computational Complexity
3 Models for the Votes: Votes with Ties
  3.1 Introduction
  3.2 Complexity Goes Up
  3.3 Complexity Goes Down
  3.4 Complexity Remains the Same
    3.4.1 Majority-Graph Result
    3.4.2 Irrational-Voter Copeland Results
  3.5 Axioms
  3.6 Related Work
  3.7 Conclusions
4 Models for the Electorate: Single-Peakedness with Ties
  4.1 Introduction
    4.1.1 Single-Peakedness with Ties
  4.2 Models of Single-Peakedness for Votes with Ties
    4.2.1 Social-Choice Properties
  4.3 Detecting Single-Peakedness
    4.3.1 Possibly Single-Peaked Consistency
    4.3.2 Single-Plateaued Consistency
    4.3.3 Single-Peaked Consistency with Outside Options
    4.3.4 Single-Peaked Consistency
  4.4 Effect of Different Models on the Complexity of Manipulation
    4.4.1 Complexity Goes Up
    4.4.2 Complexity Remains the Same
  4.5 Conclusions
5 Models for the Representation: Succinct Elections
  5.1 Introduction
  5.2 Adapting Approaches
    5.2.1 Dynamic Programming
    5.2.2 Fixed Numbers of Candidates
  5.3 Kemeny Elections
  5.4 Conclusions
6 Models for the Attack: Control with Manipulation
  6.1 Introduction
  6.2 Specification of the Model
  6.3 Inheritance Results
  6.4 General Upper Bounds and Matching Lower Bounds
  6.5 Specific Systems
    6.5.1 Plurality
    6.5.2 Condorcet
    6.5.3 Approval
  6.6 Weighted Voters
  6.7 Related Work
  6.8 Conclusions
7 Conclusions
Chapter 1

Introduction
Elections are a flexible, widely-used framework for preference aggregation. They are used to seek fair outcomes in domains ranging from the human (often in political elections) to the electronic
(where they play a central role in multiagent systems, a subfield of artificial intelligence). Examples of such applications include a meta-search engine that combines the webpage rankings of several
search engines to return the best decision on a search [DKNS01], an agent-based movie recommender system [GMHS99], and aggregating information in human computation systems [MPC13]. There is a rich literature in economics and political science that studies the use of elections, and different properties that different election systems have (see, e.g., [Bla58, RO73]).
Let’s first discuss how an election is modeled. Below we have an election with the candidates a, b, c, and d, and the following four voters, where each voter strictly ranks all of the candidates from
most to least preferred.
• v1 voting (b > c > d > a).
• v2 voting (a > b > d > c).
• v3 voting (d > b > c > a).
• v4 voting (a > d > b > c).
One of the most commonly used ways of scoring an election is to award one point to each voter’s top preference, and the candidate(s) with the highest score win. When we score the election above in
this way a wins with a score of 2. However, notice that although a
is the most-preferred candidate for voters v2 and v4, a is the least-preferred candidate for voters v1 and v3. Consider instead a scoring rule where, with four candidates, each voter's most-preferred candidate receives 3 points, their second-preferred candidate receives 2 points, their third-preferred candidate receives 1 point, and their least-preferred candidate receives 0 points; then b wins with a score of 8. And we can see that b seems to better represent the preferences of the voters, since voter v1 gets her top choice, voters v2 and v3 get their second choice, and voter v4 gets her third choice.
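As a quick sanity check (an illustrative Python sketch, not part of the original text; the helper name positional_scores is ours), both scorings of this election can be computed mechanically:

def positional_scores(votes, vector):
    # Each vote lists candidates from most to least preferred;
    # a voter's i-th choice earns vector[i] points.
    scores = {}
    for vote in votes:
        for pos, cand in enumerate(vote):
            scores[cand] = scores.get(cand, 0) + vector[pos]
    return scores

votes = [list("bcda"), list("abdc"), list("dbca"), list("adbc")]  # v1..v4
print(positional_scores(votes, [1, 0, 0, 0]))  # top-choice points: a wins with 2
print(positional_scores(votes, [3, 2, 1, 0]))  # 3-2-1-0 points: b wins with 8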
It is natural to wonder whether there exists an election system that will always elect winners that properly represent the preferences of the electorate. Unfortunately, an important result by Arrow, referred to as Arrow's Impossibility Theorem [Arr50], states that given a very basic set of desirable properties, there exists no election system that satisfies all of them: there will always be tradeoffs. Since this early work by Arrow, there has been a long line of research in social-choice theory that studies different properties for election systems (see, e.g., [Smi73, You75, YL78, Tid87]).
Another very desirable property for an election system is strategyproofness, i.e., it is not possible for a voter to cast a strategic vote that results in a more-preferred outcome for that voter.
Consider the scenario where voter v4 has knowledge of the preferences of the other voters. Naturally, she wants a better outcome for herself, and can accomplish this by casting the strategic vote (d > a > c > b) so that d wins with a score of 8. So v4 gets a personally better outcome by voting strategically (by getting her second choice instead of her third choice), but notice that this is a worse outcome overall, since now two voters (v1 and v2) get their third choice.
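Continuing the illustrative sketch above (and reusing its positional_scores helper), v4's strategic opportunity can be found by brute force over all 4! ballots she could cast; this is exactly the single-manipulator question studied later in this thesis, here under the ⟨3, 2, 1, 0⟩ scoring of the example:

from itertools import permutations

def winners(votes, vector=(3, 2, 1, 0)):
    scores = positional_scores(votes, list(vector))
    top = max(scores.values())
    return {c for c, s in scores.items() if s == top}

others = [list("bcda"), list("abdc"), list("dbca")]  # votes of v1, v2, v3
sincere = list("adbc")                               # v4's true ranking
baseline = winners(others + [sincere])               # {'b'}

for ballot in permutations("abcd"):
    result = winners(others + [list(ballot)])
    # keep ballots whose every winner v4 ranks above her sincere outcome b
    if all(sincere.index(w) < sincere.index("b") for w in result):
        print("".join(ballot), "->", result)         # includes d > a > c > b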
Computational Study of Election-Attack Problems
Although every reasonable election system can be manipulated, it may be computationally infeasible to determine if a successful manipulation exists. Bartholdi, Tovey, and Trick [BTT89a] (see also Bartholdi and Orlin [BO91]) introduced this notion of examining the computational complexity of the manipulation problem, where an election system that is computationally hard to manipulate is said to be "resistant" to manipulation; this is an example of a computational property of an election system. This seminal work on the computational study of manipulation was very influential in the creation of the research area of computational social choice, which is an interdisciplinary field that looks at problems in social choice theory, such as elections, through a computational lens (see [BCE+16] for a general overview of the field). And the increasing use of elections in artificial intelligence applications encourages us to study these computational properties in the same way as the aforementioned
social-choice properties.
The models of manipulation, control, and bribery are the three most-commonly studied election attacks, and we describe each of these problems below.
The influential work by Bartholdi, Tovey, and Trick considered the case of manipulation for a single manipulative voter [BTT89a]. However, in real-world settings it is likely that several voters collude to manipulate the outcome of an election. Conitzer, Sandholm, and Lang extended the single-manipulator model to this coalitional case, where a coalition of manipulators wants to set their votes to ensure that a preferred candidate wins. They also considered the destructive case, where instead of wanting to ensure that a preferred candidate wins, the manipulators want to ensure that a despised candidate loses, and the case where voters can have associated weights (where a weight ω voter can be thought of as a coalition of ω unweighted voters with the same vote) [CSL07]. A large line of research followed which explores the computational complexity of manipulation for various election systems, and considers variants of the manipulation problem itself to better model natural scenarios that may occur.
Electoral control, introduced by Bartholdi, Tovey, and Trick, is the problem in which an election organizer (the chair) changes the structure of an election to ensure that a preferred candidate wins [BTT92]. Hemaspaandra, Hemaspaandra, and Rothe later introduced destructive control, where the chair wants to ensure that a despised candidate does not win [HHR07]. And like manipulation, control has also been studied for the case of weighted elections [FHH15].
Faliszewski, Hemaspaandra, and Hemaspaandra introduced the model of bribery, which is closely related to manipulation, but instead of asking if voters can cast strategic votes to ensure a preferred
outcome, bribery asks whether it is possible to change the votes of a subcollection of the voters to ensure a preferred outcome [FHH09]. Bribery is often motivated in a more positive way where the
cost of changing the votes of the selected voters represents the campaign costs of an election organizer that seeks to change their votes.
Overall, the study of the complexity of such election-attack problems has been one of the most important directions of research in computational social choice (see, e.g., [FHH10, FP10]).
More Natural Models
It is important that we study these election-attack problems under natural assumptions, since this may affect the computational complexity of a given attack. And the focus of this thesis is on
studying just that. We examine how different natural models for the votes, the electorate, the representation, and even the attack itself affect the computational complexity of election-attack problems.
A prominent direction of this thesis is the study of elections that allow voters to state ties between candidates in their preferences. Allowing votes with ties is very natural and such votes are
seen in real-world preference data (see, e.g., the datasets available on
PrefLib [MW13]), and election systems such as Kemeny [Kem59] and Schulze [Sch11] are
defined for votes with ties.
Another natural model that we look at, this time for the preferences of the electorate, is a restriction on preferences called single-peakedness [Bla48]. Restricting the preferences of an electorate
may sound counter-intuitive, and results such as Arrow’s theorem consider allowing unrestricted preferences to be a reasonable and desirable property. But single-peakedness is not a restriction in
the sense of preventing voters from voting their true preferences, but rather modeling how preferences are structured in certain natural scenarios. Intuitively, single-peaked preferences can be
thought of as modeling the preferences of an electorate with respect to a single issue, where there exists a one-dimensional ordering of the candidates (an axis) and candidates toward the leftmost and rightmost points on the axis represent the extremes of the issue.
In Chapter 4 we consider single-peaked preferences in the setting where voters can cast votes with ties. The standard model of single-peakedness due to Black [Bla48] is defined for such votes, but
other models have also been introduced that generalize single-peakedness in different ways for votes with ties. In addition to Black’s standard model, we consider the models of single-plateaued
preferences [Bla58], single-peaked preferences with outside options [Can04], and possibly single-peaked preferences [Lac14], and we compare how these models relate to one another for different types
of votes with ties.
Single-peaked electorates have desirable social-choice properties. For example, there are reasonable strategyproof election systems when the voters in an election are single-peaked (see, e.g.,
[Bar01]). So one could argue that we should use only strategyproof election systems when we have single-peaked electorates. However, as mentioned by Faliszewski et al. [FHHR11], it is not always the
case that one can choose the election system in a given situation. In addition, there may be other properties that one wants in an election system even more than strategyproofness, and furthermore
strategyproofness does not imply that different types of electoral control are not possible [FHHR11]. Thus we also study the complexity of election-attack problems for single-peaked votes. For
tie-free votes, it has been shown that different computational problems often become easier when the votes in an election are single-peaked [FHHR11, BBHH15].
We find that complexity results often transfer from the case of tie-free single-peaked votes to single-peaked votes with ties, when the standard model of single-peakedness is used. However, when the model of possibly single-peaked preferences is used we observe an anomalous increase in complexity. Most of these results previously appeared in Fitzsimmons [Fit15] and Fitzsimmons and Hemaspaandra [FH16c].
Since our results concern computational complexity, it is important that we consider the representation of our problems. In most of the computational study of elections, the voters in an election are
represented as a list of their individual votes. However, many voters may have the same vote and it is natural to represent them in a succinct way, where the voters are represented by the distinct
votes cast and their corresponding counts. Though this representation can be exponentially smaller, we find in Chapter 5 that the computational complexity of different election-attack problems rarely
increases from the nonsuccinct to the succinct case, which is in contrast to the case of unweighted to weighted voters. We explain this behavior by showing that several common proof techniques that
show that election-attack problems are in P can be adapted for the case of succinct votes. These results previously appeared in Fitzsimmons and Hemaspaandra [FH17] and in its corresponding technical
report [FH16b].
So far we have discussed different natural models that concern the voters in an election, but the model of the election-attack itself is important to consider as well.
List of Contributions
We briefly summarize the main contributions of this thesis below.
• We consider the natural model of allowing voters to state varying amounts of ties in their preferences, and examine how this can affect the computational complexity of election-attack problems.
(Chapter 3)
• We show that for natural election systems, allowing votes with ties can both increase and decrease the complexity of bribery, and we state a general result on the effect of votes with ties on the
complexity of control. (Chapter 3)
• We consider the four most natural models of single-peakedness for votes with ties and show that for each model it is in P to determine when a given collection of votes satisfies that model. (Chapter 4)
• We expand our results on the complexity of manipulation for votes with ties by con-sidering the complexity of single-peaked votes with ties, and find that the complexity can depend on the model
used. (Chapter 4)
• We consider how the succinct representation of the voters can affect the complexity of different election problems. Even though the succinct representation can be exponentially smaller than the nonsuccinct, we find that the complexity of election attacks (in the length of the input) rarely increases, and explain this behavior by showing how to adapt different techniques for showing election problems to be in P from the nonsuccinct to the succinct case. (Chapter 5)
• We find one natural case where the complexity increases when moving from the nonsuccinct to the succinct representation of the voters, namely the complexity of winner determination for Kemeny elections. (Chapter 5)
• We model the setting of control attacks on elections in which there are manipulators. We consider both the case where the chair and the manipulators have the same goal and where they have directly
conflicting goals. (Chapter 6)
• We show for the important election systems approval, Condorcet, and plurality that the complexity of control in the presence of manipulators can be much lower than those upper bounds, even falling
as low as polynomial time. (Chapter 6)
List of Publications
These are the publications that form much of the material in this thesis.
• Z. Fitzsimmons and E. Hemaspaandra. The Complexity of Succinct Elections. In Proceedings of the 31st AAAI Conference on Artificial Intelligence (Student Abstract), pages 4921–4922, February 2017.
• Z. Fitzsimmons, E. Hemaspaandra, and L. Hemaspaandra. Manipulation Complexity of Same-System Runoff Elections. Annals of Mathematics and Artificial Intelligence, 77(3–4): 159–189, 2016.
• Z. Fitzsimmons and E. Hemaspaandra. Modeling Single-Peakedness for Votes with Ties. In Proceedings of the 8th European Starting AI Researcher Symposium, pages 63–74, August 2016.
– Also appears in Workshop Notes of the 10th Workshop on Advances in Preference Handling, July 2016.
• Z. Fitzsimmons and E. Hemaspaandra. Complexity of Manipulative Actions When Voting with Ties. In Proceedings of the 4th International Conference on Algorithmic
Decision Theory, pages 103–119, September 2015.
– Also appears in Workshop Notes of the 6th International Workshop on Computational Social Choice, June 2016.
• Z. Fitzsimmons. Single-Peaked Consistency for Weak Orders Is Easy. In Proceedings
of the 15th Conference on Theoretical Aspects of Rationality and Knowledge, pages
127–140, June 2015.
• Z. Fitzsimmons. Realistic Assumptions for Attacks on Elections. In Proceedings of the
29th AAAI Conference on Artificial Intelligence (Doctoral Consortium), pages 4235–
4236, January 2015.
• Z. Fitzsimmons, E. Hemaspaandra, and L. Hemaspaandra. Control in the Presence of Manipulators: Cooperative and Competitive Cases. In Proceedings of the 23rd International Joint Conference on
Artificial Intelligence, pages 113–119, August 2013.
Chapter 2

Preliminaries
Elections and Preferences
An election is a pair (C, V) where C is a finite set of candidates and V is a finite collection of voters (a preference profile). Each voter in an election has a corresponding vote (preference order) over the set of candidates. In most of the computational study of elections, votes are assumed to be a total order, i.e., a strict ordering of the candidates from most to least preferred. Formally, a total order is a complete, reflexive, transitive, and antisymmetric binary relation. We use ">" to denote strict preference between two candidates, e.g., given the candidate set {a, b, c} a vote could be (b > a > c), which means that b is strictly preferred to a, b to c, and a to c.
In this thesis we also consider other types of preference orders, and it will be clear from context when we use each type of preference order.
In Chapters 3 and 4 we consider voters with varying amounts of ties in their preferences; the most general being weak orders. (See Example 2.1.1 for an example of each of the preference orders we consider.) A weak order is a total order without antisymmetry. So, a voter with weak-order preferences can state transitive indifference ("∼") among the candidates, in addition to strict preference. We use "∼" to denote the indifference relation. In general, a weak order can be viewed as a total order with ties. So we sometimes refer to weak orders as votes with ties, and informally refer to indifference as ties throughout this thesis.
We consider two restrictions to weak orders: top orders and bottom orders. A top order is a weak order with all tied candidates ranked last. Similarly, a bottom order is a weak order with all tied candidates ranked first. We also will sometimes discuss partial orders, where a partial order over a set of candidates is transitive, reflexive, and antisymmetric.
Example 2.1.1 Given the candidate set {a, b, c, d}, (a > b ∼ c > d) is a weak order, (a ∼ b > c > d) is a bottom order, (a > b > c ∼ d) is a top order, (a > b) is a partial order, and (a > b > c > d) is a total order. Notice that every weak order is also a partial order, every bottom order and every top order is also a weak order and partial order, and that every total order is also a top order, bottom order, weak order, and partial order.
Many of our results are for weighted elections, where each voter has an associated positive integral weight, and a voter with weight ω counts as a coalition of ω unweighted voters that all vote the same. Weighted elections are a very natural scenario for the real-world use of elections. For example, in an election among shareholders for a given company, the weight of a shareholder's vote may correspond to the number of shares that she holds.
Election Systems
An election system (an election rule), E, is a mapping from an election to a set W, referred to as the winner(s), where W can be any subset of the candidate set. This is referred to as the nonunique winner model. In the unique winner model the winner of an election must be a single candidate. When it is not otherwise specified we use the nonunique winner model.
The winner problem for an election system E is defined by the following decision problem.
Name: E-winner
Given: An election (C, V) and a candidate p ∈ C.
Question: Is p a winner of the election (C, V) using election system E?
One reasonable computational property that we can require from an election system is that it is computationally easy to determine the winner, i.e., the winner problem is in P. However, there exist election systems with desirable properties in social choice theory, e.g., the Kemeny rule [Kem59, KS60], whose winner problem is NP-hard [BTT89b] and thus not in P (unless P = NP). Note that with the exception of the results in Section 5.3, we consider election systems with polynomial-time winner problems.
We now discuss three important families of election systems, the first being scoring rules.
Scoring Rules
A scoring rule denotes a set of scoring vectors of the form ⟨s1, . . . , sm⟩, where for each i, si ∈ Q and si ≥ si+1, and when given an election with m candidates, uses the vector of length m, so that a candidate at position i in the preference order of a voter receives a score of si from that voter. The candidate(s) with the highest total score win. Four important scoring rules are plurality, veto, Borda, and t-Approval.
Plurality: with scoring vector ⟨1, 0, . . . , 0⟩.
Veto: with scoring vector ⟨1, . . . , 1, 0⟩.
Borda: with scoring vector ⟨m−1, m−2, . . . , 1, 0⟩.
t-Approval: with scoring vector ⟨1, . . . , 1, 0, . . . , 0⟩, with exactly t ones.
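For concreteness, the four families can be written down as explicit vectors (a small illustrative Python sketch; the function names are ours, not from the dissertation):

def plurality_vec(m):      return [1] + [0] * (m - 1)
def veto_vec(m):           return [1] * (m - 1) + [0]
def borda_vec(m):          return list(range(m - 1, -1, -1))
def t_approval_vec(m, t):  return [1] * t + [0] * (m - t)

assert borda_vec(4) == [3, 2, 1, 0]
assert t_approval_vec(5, 2) == [1, 1, 0, 0, 0]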
Extensions for Votes with Ties
To use a scoring rule to determine the outcome of an election containing votes with ties we must extend the definition of scoring rules given in the previous section. The scoring-rule extensions for weak orders defined below generalize the extensions introduced by Baumeister et al. [BFLR12] and by Narodytska and Walsh [NW14], which in turn generalize extensions used for the Borda count (see [Eme13] for a discussion of such extensions).
Write a preference order with ties as G1 > G2 > · · · > Gr, where each Gi is a set of tied candidates. For each set Gi, let ki = kG1k + · · · + kGi−1k be the number of candidates strictly preferred to every candidate in the set. See the caption of Table 2.1 for an example.
We now introduce the following scoring-rule extensions, which as stated above, generalize previously used scoring-rule extensions [BFLR12, NW14]. In Table 2.1 we present an example of each of these
extensions for Borda.
Min: Each candidate in Gi receives a score of s_(ki+kGik).
Max: Each candidate in Gi receives a score of s_(ki+1).
Round-down: Each candidate in Gi receives a score of s_(m−r+i).
Average: Each candidate in Gi receives a score of (s_(ki+1) + · · · + s_(ki+kGik)) / kGik.
Borda        score(a)  score(b)  score(c)  score(d)
Min             3         1         1         0
Max             3         2         2         0
Round-down      2         1         1         0
Average         3        1.5       1.5        0

Table 2.1: The score of each candidate for preference order (a > b ∼ c > d) using Borda with each of our scoring-rule extensions. We write this order as {a} > {b, c} > {d}, i.e., G1 = {a}, G2 = {b, c}, and G3 = {d}. Note that k1 = 0, k2 = 1, and k3 = 3.
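The four extensions are straightforward to compute from the group decomposition. The following Python sketch (ours, not from the dissertation; Python lists are 0-indexed, so the text's s_j is s[j-1]) reproduces the scores in Table 2.1:

def extension_scores(groups, s, mode):
    # groups: the weak order written as sets of tied candidates, best first.
    # s: a scoring vector of length m = total number of candidates.
    m, r = sum(len(G) for G in groups), len(groups)
    scores, k = {}, 0              # k plays the role of k_i for the current group
    for i, G in enumerate(groups):             # i here is the text's i minus 1
        if mode == "min":
            val = s[k + len(G) - 1]            # s_(k_i + |G_i|)
        elif mode == "max":
            val = s[k]                         # s_(k_i + 1)
        elif mode == "round-down":
            val = s[m - r + i]                 # s_(m - r + i)
        else:                                  # "average"
            val = sum(s[k:k + len(G)]) / len(G)
        for c in G:
            scores[c] = val
        k += len(G)
    return scores

groups = [{"a"}, {"b", "c"}, {"d"}]            # the order a > b ~ c > d
for mode in ("min", "max", "round-down", "average"):
    print(mode, extension_scores(groups, [3, 2, 1, 0], mode))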
The extensions previously introduced for top orders are realized by our definitions above, with our round-down and average extensions yielding the same scores for top orders as their round-down and average extensions. With the additional modification that sm = 0, our min scoring-rule extension yields the same scores for top orders as round-up in the work by Narodytska and Walsh [NW14].
Pairwise Rules
In addition to scoring rules, election systems can be defined by the pairwise majority elections between the candidates, i.e., for a pair of candidates a, b ∈ C, a beats b by majority if more than half of the voters state a > b.
One important example is the Copeland rule [Cop51], where given an election, each candidate receives one point for each pairwise majority election she wins and receives 0.5 points for each tie. The Copeland rule was later parameterized by Faliszewski et al. [FHHR09] as Copeland^α (where α is a rational number between 0 and 1), and instead of each candidate receiving 0.5 points for each tie, they receive α points. We mention here that it was somewhat recently discovered that an election system that is the same as Copeland^1 was proposed in the thirteenth century by Ramon Llull, a Catalan mystic and philosopher (see [HP01]). So, as is now common, we refer to Copeland^1 as Llull.
As with the case of scoring rules, we also consider the case of Copeland^α elections for votes with ties. We extend the definition for Copeland^α in the obvious way (i.e., a > b by majority if more voters state a > b than b > a), as was done for the case of top orders by Baumeister et al. [BFLR12] and Narodytska and Walsh [NW14].
When discussing elections defined by pairwise majority elections we sometimes refer to
the induced majority graph of an election. An election (C, V) can be represented by an
induced majority graph where each candidate in the election corresponds to a vertex in the graph, and for every pair of candidates a, b ∈ C the graph contains the edge a → b if a beats b by majority.
Example 2.2.1 Given the election (C, V) where C = {a, b, c, d} and the collection of votes, V, consists of one voter voting (a > b > c > d), one voter voting (c > b > a > d), and one voter voting (a > c > b > d), we have the following induced majority graph.
(Edges of the induced majority graph: a → b, a → c, a → d, b → d, c → b, and c → d.)
When discussing pairwise election systems it is important to mention the notion of the Condorcet winner of an election, i.e., the candidate that beats every other candidate pairwise [Con85]. Notice that a is the Condorcet winner in Example 2.2.1, since for each candidate b ∈ C − {a} there is the edge a → b in the induced majority graph. The Condorcet winner of an election seems like a good choice for an outcome that represents the electorate; unfortunately a Condorcet winner is not guaranteed to exist for a given election. And it is not difficult to construct an election that realizes this situation. In Example 2.2.2 we present such an example. There is also the notion of a weak Condorcet winner, which ties-or-beats every other candidate pairwise. So, in contrast to the case of the Condorcet winner, an election can have multiple weak Condorcet winners, though it is possible to have neither.
Example 2.2.2 Given the election (C, V) where C = {a, b, c, d} and the collection of votes, V, consists of one voter voting (b > a > c > d), one voter voting (c > b > a > d), and one voter voting (a > c > b > d), we have the following induced majority graph, which contains a cycle, and thus does not have a Condorcet winner or even a weak Condorcet winner.
(Edges of the induced majority graph: b → a, a → c, c → b, a → d, b → d, and c → d; the cycle a → c → b → a prevents a Condorcet winner.)
An election system that elects exactly the Condorcet winner when one exists is called Condorcet consistent (or weak Condorcet consistent for the case of electing exactly the weak Condorcet winner(s) when they exist). Copeland^α is an example of a Condorcet-consistent election system.
Approval Voting
Approval voting is an election system that is somewhat differently structured than the two families stated above. In approval voting, instead of each voter voting a total-order preference, each voter votes a 0-1 vector of length kCk and indicates approval (1) or disapproval (0) for each candidate, and the candidates with the most approvals win [BF83].
Manipulative Attacks
A major research direction discussed in this thesis is the computational study of how hard different manipulative attacks are for a given election system. In this section we define the three main
families of manipulative attacks: manipulation, bribery, and control. We mention here that in each of the manipulative attacks discussed we assume that the manipulative agent(s) (either the
manipulators, the chair, or the briber) has complete information of the preferences of the voters.1 [We briefly describe each general family of attacks and present a]
formal definition of each below. Note that we define each of these problems for the nonunique winner model (our standard model). For the unique winner model it will generally be enough to change
instances of “a winner” to “a unique winner” in the definitions.
In each of our manipulative attacks we formally define the constructive unweighted cases, where “constructive” means that the preferred outcome of the strategic agent(s) is to ensure that a preferred
candidate wins, and “unweighted” means that we are considering elections where voters do not have corresponding weights. In the corresponding destructive cases the preferred outcome of the strategic
agent(s) is to ensure that a despised candidate does not win, and in the corresponding weighted cases each voter has a corresponding weight.
The manipulation problem, first introduced by Bartholdi, Tovey, and Trick [BTT89a], asks when given an election, a manipulator, and a preferred candidate, if the manipulator can set her vote to ensure
that her preferred candidate wins. It is reasonable to assume that in an election, especially when there are many voters, that there are multiple manipulators who act as a coalition. Conitzer,
Sandholm, and Lang [CSL07] introduced the coalitional manipulation problem, which we formally define below.
Name: E-Constructive Unweighted Coalitional Manipulation (CUCM)
Given: An election (C, V), a collection of manipulative voters W, and a preferred candidate p ∈ C.
Question: Is there a way to set the votes of the manipulators such that p is a winner of the election (C, V ∪ W) using election system E?
We will also consider the cases of destructive unweighted coalitional manipulation (DUCM) and the cases of manipulation for weighted elections (CWCM and DWCM), which were both introduced by Conitzer,
Sandholm, and Lang [CSL07].
Electoral control is the problem of determining if it is possible for an election organizer with control over the structure of an election, whom we refer to as the election chair, to ensure a
preferred outcome. This preferred outcome can either be ensuring that a preferred candidate wins (the constructive case [BTT92]) or that a despised candidate loses (the destructive case [HHR07]).
The standard models of control can be split into two groups. The first contains the nonpartition control types, where the election chair can control the election by adding or deleting the voters or candidates. The second contains the partition control types, where either the candidates or voters are partitioned and subelections are held restricted to these partitions before a final runoff is held among the candidates that survive. In these cases we consider different models for what it means to survive: the ties-eliminate (TE) model and the ties-promote (TP) model. In the ties-eliminate (TE) model, only a unique winner of a subelection proceeds to the runoff (if multiple candidates tie as winners then no candidates proceed to the runoff), and in the ties-promote (TP) model all of the winning candidates in a subelection proceed to the runoff.²
We formally define the constructive versions of each of the standard control actions below.
Name: E-Constructive Control by Adding Candidates (CCAC)³
Given: An election (C, V), a set of unregistered candidates D, a preferred candidate p ∈ C, and a limit k ∈ N.
²Recent work by Hemaspaandra, Hemaspaandra, and Menton shows that in the nonunique winner model two pairs of the standard control models collapse: specifically, the models of destructive control by partitioning candidates and destructive control by runoff partitioning candidates, in each of the tie-handling models [HHM13].
Question: Does there exist a subset of the unregistered candidates D′ ⊆ D such that kD′k ≤ k and p is a winner of the election (C ∪ D′, V) using election system E?
Name: E-Constructive Control by Deleting Candidates (CCDC)
Given: An election (C, V), a preferred candidate p ∈ C, and a limit k ∈ N.
Question: Does there exist a subset of the candidates C′ ⊆ C such that kC′k ≤ k and p is a winner of the election (C − C′, V) using election system E?
Name: E-Constructive Control by Adding Voters (CCAV)
Given: An election (C, V), a collection of unregistered voters U, a preferred candidate p ∈ C, and a limit k ∈ N.
Question: Does there exist a subcollection of the unregistered voters U′ ⊆ U such that kU′k ≤ k and p is a winner of the election (C, V ∪ U′) using election system E?
Name: E-Constructive Control by Deleting Voters (CCDV)
Given: An election (C, V), a preferred candidate p ∈ C, and a limit k ∈ N.
Question: Does there exist a subcollection of the voters V′ ⊆ V such that kV′k ≤ k and p is a winner of the election (C, V − V′) using election system E?
Name: E-Constructive Control by Partitioning Candidates (CCPC)
Given: An election (C, V) and a preferred candidate p ∈ C.
Question: Does there exist a partition of the candidates (C1, C2) such that p is a winner of the runoff election between the winners of (C1, V) under the given tie-handling model and the candidates in C2, all using election system E?
Name: E-Constructive Control by Runoff Partitioning Candidates (CCRPC)
Given: An election (C, V) and a preferred candidate p∈C.
Question: Does there exist a partition of the candidates (C1, C2) such that p is a winner of
the runoff election that consists of the winners of (C1, V) and (C2, V) under the given
tie-handling model, all using election system E?
Name: E-Constructive Control by Partitioning Voters (CCPV)
Given: An election (C, V) and a preferred candidate p ∈ C.
Question: Does there exist a partition of the voters (V1, V2) such that p is a winner of
the runoff election that consists of the winners of (C, V1) and (C, V2) under the given
tie-handling model, all using election system E?
We will also consider the corresponding destructive and weighted cases of these control problems.
For the destructive cases, simply change each instance of "preferred candidate" to "despised candidate," and "p is a winner" to "p is not a winner."
For the weighted cases of partitioning and candidate control it is clear to see how weighted elections are taken into account. It is not as clear how to adapt the definitions of voter control, since we could interpret the limit (in either adding or deleting voters) as the number of voters to add/delete or the total vote weight to add/delete. We use the former, where the limit remains the number of voters, and this is precisely the definition used by Faliszewski, Hemaspaandra, and Hemaspaandra [FHH15].
Bribery is the problem of determining if it is possible to change the votes of a subcollection of the voters, within a given limit, to ensure that a preferred candidate wins [FHH09].
Name: E-Bribery
Given: A candidate set C, a collection of voters V, a preferred candidate p ∈ C, and a limit k ∈ N.
Question: Is there a way to change the votes of at most k of the voters in V so that p is a winner under election system E?
In the corresponding problem of weighted bribery where each voter has an associated positive integer weight, the limit still denotes the number of voters to bribe. Though none of our results use
prices, we mention here that bribery is also considered for the case where voters have an associated price, where the limit is then the total budget used [FHH09].
Computational Complexity
Our hardness results are shown via polynomial-time many-one reductions (≤p_m), and since all of the reductions in this thesis are polynomial-time many-one reductions, we often simply refer to them as reductions. (See [GJ79] for a general introduction to P, NP, and polynomial-time many-one reductions.)
As is standard in computational complexity theory, the election attacks defined above are decision problems, and our results generally concern whether a given election system is easy to attack, i.e.,
it is in P to determine if the attack is possible, or if it is difficult to attack, i.e., it is NP-hard to determine if the attack is possible. Most of our results that show problems to be NP-hard also
show them to be NP-complete, where membership in NP is generally trivial to show for these problems.
Our proofs of NP-hardness are all due to reductions from NP-complete problems such as Partition and Exact Cover by 3-Sets, which we define below. We have some results that use variants of these
problems, but we will introduce them locally to where they are used.
Name: Partition [Kar72]
Given: A nonempty set of positive integers {k1, . . . , kt} such that k1 + · · · + kt = 2K.
Question: Does there exist a subset A of {k1, . . . , kt} such that the elements of A sum to K?
Name: Exact Cover by 3-Sets [Kar72]
Given: A set B = {b1, . . . , b3k} and a collection S = {S1, . . . , Sn} of three-element subsets of B.
Question: Does there exist a subcollection S′ of S such that every element of B occurs in exactly one member of S′?
Our polynomial-time results range from simple greedy approaches to reductions to more complex algorithms such as network flow and matrix permutation problems, and such problems will be defined locally for the results that use them.
In Chapters 5 and 6 we discuss results that include completeness results for higher classes that are widely believed to be strictly larger than NP, most notably the classes Θ^p_2, Δ^p_2, NP^NP, coNP^NP, and coNP^(NP^NP) in the polynomial hierarchy. (We introduce these classes here, but Chapters 5 and 6 each contain further discussion of the classes that appear in their results.)
The polynomial hierarchy is defined by the classes Σ^p_k, Π^p_k, and Δ^p_k, where Δ^p_0 = Π^p_0 = Σ^p_0 = P and, for all k ≥ 0, Σ^p_{k+1} = NP^{Σ^p_k}, Π^p_{k+1} = coNP^{Σ^p_k}, and Δ^p_{k+1} = P^{Σ^p_k}, where C^D denotes the class of problems solvable by a C machine with access to an oracle for a set in D.
The class Θ^p_2 = P^{NP[log]} was first studied by Papadimitriou and Zachos [PZ83], and denotes the class of problems solvable by a P-machine that can ask O(log n) queries to an NP oracle. Hemachandra showed that this class is equivalent to P^{NP}_||, the class of problems solvable by a P-machine that can ask one round of parallel queries to an NP oracle [Hem89]. Note the following relationships between these classes:
P ⊆ NP ∩ coNP ⊆ NP ∪ coNP ⊆ Θ^p_2 ⊆ Δ^p_2 ⊆ Σ^p_2 ∩ Π^p_2 ⊆ Σ^p_2 ∪ Π^p_2 ⊆ Π^p_3.
There are far fewer completeness results for higher levels of the polynomial hierarchy than for NP (see [SU02a, SU02b]), and it is particularly interesting to find natural problems at such high levels
of complexity.
The computational complexity classes mentioned above concern the worst-case time complexity of a given problem. A given NP-hard problem may have many more easy instances than hard ones, and in practice a heuristic algorithm may be able to perform quite well. However, there are known theoretical limits to the performance of heuristics for NP-hard problems. A recent survey by Hemaspaandra and Williams [HW12] discusses such limitations and shows that, due to the work by Buhrman and Hitchcock [BH08] and Cai et al. [CCHO05], no polynomial-time heuristic algorithm can err on only subexponentially many instances, unless the polynomial hierarchy collapses.
Usually not all instances of a problem are equally likely, so the distribution of the instances must be taken into account to move beyond worst-case analysis to the average case. The notion of
average-case complexity due to Levin [Lev86] takes the distribution into account, but this is difficult to work with and few problems are complete for the class.
Some recent work on election attacks has studied the performance of heuristics to examine how hard these problems are in practice. Most of the empirical study of the hardness of election attacks
follows a similar design as the influential work by Walsh [Wal11] by examining the performance of algorithms for an election-attack problem for elections with votes sampled from either theoretical
distributions such as the impartial culture model [GK68] or from real-world data, e.g., the datasets available on PrefLib [MW13]. The analysis done in
these studies has generally consisted of descriptive statistics, e.g., the observed runtime of an algorithm for the instances in the experiment, and some patterns and trends can be discussed using
such data, which can be valuable to motivate theoretical study. For example, Walsh observed a smooth phase transition for manipulation of STV and veto [Wal11], and Mossel, Procaccia, and Rácz later
proved this for independent and identically distributed votes [MPR13].
Chapter 3
Models for the Votes: Votes with Ties
In this chapter we consider the effect of different generalizations of total-order preferences on the complexity of the election attacks of manipulation, bribery, and control. The majority of this
chapter will focus on preferences that allow voters to state ties among the candidates in addition to strict preference, but we will also consider the case where the preferences of the voters do not
need to be transitive, which is referred to as the case of irrational votes.
The computational study of the problems of manipulation, control, and bribery has largely been restricted to elections where the voters have tie-free votes. Recent work by Narodytska and Walsh [NW14]
(see also [ML15]) studies the computational complexity of the manipulation problem for top orders, i.e., votes where ties are allowed, but only among a voter’s least-preferred candidates and are
otherwise tie free. Some of our manipulation results can be seen as expanding on the manipulation results from Narodytska and Walsh [NW14] by solving open questions and considering more general types
of votes with ties.
It is important that we understand the complexity of election problems for elections that allow votes with ties, since in practical settings voters often have ties between some of the candidates. For
example, the online preference repository PrefLib contains several election
datasets containing votes with ties [MW13], and it is natural to allow agents to state votes with ties when they have utilities over a set of candidates.
Election systems in use are sometimes defined for votes with ties. For example, both the Kemeny rule [Kem59] and the Schulze rule [Sch11] are defined for votes that contain ties. Also, there exist
variants of the Borda count that are defined for top-order votes (see [Eme13]). As described in Section 2.2.2, when necessary, we extend the definitions of election systems to properly handle votes
with ties.
It is tempting to want to break ties in a voter’s preferences when they have votes with ties, but it is very natural for a voter to have equal preference among some of the candidates and so they
should not be forced to state a tie-free vote. We mention in passing the line of work on incomplete preferences that seeks to determine the “best extension” of a voter’s preferences and use this to
determine the winner of a given election (see, e.g., [XC11]).
For the computational study of manipulation, we are the first to consider orders that allow a voter to state ties at each position of her preference order, i.e., weak orders. In contrast to the work
by Narodytska and Walsh [NW14], we give an example of a natural case where manipulation becomes hard when given votes with ties, while it is in P for tie-free votes. And we are the first to study the
complexity of the standard models of control and bribery for votes with ties. However, we mention here that Baumeister et al. [BFLR12] consider a different version of bribery called extension bribery,
for top orders (there called top-truncated votes).
This chapter is organized as follows. Our results are distributed among three sections, each of which deals with a different behavior of votes with ties. In Section 3.2 we consider cases where the
complexity of an election attack can increase when moving from votes without ties to votes with ties. In Section 3.3 we present examples where the complexity decreases for votes with ties (and state
a general observation on control). And in Section 3.4 we present cases where the complexity remains the same, a general result on two-voter majority graphs, and discuss axioms that our scoring-rule
extensions defined in Section 2.2.2 satisfy. We discuss related work in Section 3.6 and our general conclusions and some directions for future work in Section 3.7.
We now present our results, many of which are about scoring rules. In Section 2.2.2 we presented the extensions min, max, round-down, and average to properly handle votes with ties, which extend
scoring rules to score “groups” of tied candidates instead of each candidate individually. To recall the definitions of each of our scoring-rule extensions, we repeat the example from Table 2.1.
Consider the candidate set {a, b, c, d} and the vote (a > b ∼ c > d). We show the score assigned to each candidate using Borda (⟨3, 2, 1, 0⟩) under each of our scoring-rule extensions.
Borda using min: score(a) = 3, score(b) = score(c) = 1, and score(d) = 0.
Borda using max: score(a) = 3, score(b) = score(c) = 2, and score(d) = 0.
Borda using round-down: score(a) = 2, score(b) = score(c) = 1, and score(d) = 0.
Borda using average: score(a) = 3, score(b) = score(c) = 1.5, and score(d) = 0.
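To make the extensions concrete in code, the following is a minimal sketch — our own illustration, not code from the thesis — that reproduces the example above. The list-of-groups vote encoding and the reading of round-down as aligning the tied groups with the bottom of the scoring vector are inferred from this example.

```python
# A sketch of the four scoring-rule extensions. A vote with ties is a list
# of groups of tied candidates, ordered from most to least preferred.

def extension_scores(groups, vector, mode):
    scores, pos = {}, 0  # pos = best position occupied by the current group
    for gi, group in enumerate(groups):
        if mode == "min":            # worst position the group occupies
            s = vector[pos + len(group) - 1]
        elif mode == "max":          # best position the group occupies
            s = vector[pos]
        elif mode == "average":      # mean over the occupied positions
            s = sum(vector[pos:pos + len(group)]) / len(group)
        elif mode == "round-down":   # align the groups with the bottom
            s = vector[len(vector) - len(groups) + gi]
        for c in group:
            scores[c] = s
        pos += len(group)
    return scores

borda = [3, 2, 1, 0]
vote = [["a"], ["b", "c"], ["d"]]    # the vote (a > b ~ c > d)
for mode in ("min", "max", "round-down", "average"):
    print(mode, extension_scores(vote, borda, mode))
# min: a=3, b=c=1, d=0;  max: a=3, b=c=2, d=0
# round-down: a=2, b=c=1, d=0;  average: a=3, b=c=1.5, d=0
```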
Complexity Goes Up
The related work on the complexity of manipulation for top orders [NW14] did not find a natural case where manipulation complexity increases when moving from total orders to top orders. We found such
cases for Constructive Weighted Coalitional Manipulation (CWCM) by considering votes that are single-peaked (in the possibly single-peaked model), so the proof of the following theorem is deferred to
Chapter 4, namely the proofs of Theorems 4.4.2, 4.4.3, and 4.4.4, where we discuss models of single-peakedness.
Theorem 3.2.1 The complexity of 3-candidate Borda CWCM for possibly single-peaked
preferences goes from P to NP-complete for top orders using max, round-down, and average.
We now present cases where we observe an increase in the computational complexity of control and of bribery when moving from tie-free votes to votes with ties.
Consider the complexity of Constructive Control by Adding Voters (CCAV), which asks if the election chair can ensure that a preferred candidate wins by adding voters to the election. (See Section
2.3.2 for a formal definition.) This problem is known to be in P for plurality for total orders [BTT92].
Theorem 3.2.2 [BTT92] Plurality CCAV for total orders is in P.
Plurality using max for bottom orders is essentially the same as approval voting (where each voter indicates either approval or disapproval of each candidate and the candidate(s) with the most
approvals win). For example, given the candidate set {a, b, c, d}, the approval vector that approves of a and c and the bottom order (a ∼ c > b > d) yield the same scores under approval and under
plurality using max, respectively. So the theorem below immediately follows from the proof of Theorem 4.43 from Hemaspaandra, Hemaspaandra, and Rothe [HHR07], which shows that CCAV for approval voting
is NP-complete, and so we see an increase in complexity with respect to the case of total orders.
Theorem 3.2.3 Plurality CCAV for bottom orders (and thus also for weak orders) using
max is NP-complete.
We now show that the case of plurality for bottom orders and weak orders using average is NP-complete.
Theorem 3.2.4 Plurality CCAV for bottom orders (and thus also for weak orders) using average is NP-complete.
Proof. Let B = {b1, . . . , b3k} and a collection S = {S1, . . . , Sn} of 3-element subsets of B, where each Sj = {bj1, bj2, bj3}, be an instance of Exact Cover by 3-Sets, which asks if
there exists a subcollection S′ of S such that each b ∈ B occurs in exactly one member of S′. Without loss of generality let k be divisible by 4 and let ℓ = 3k/4. We construct the
following instance of control by adding voters.
Let the candidate set C = {p} ∪ B. Let the addition limit be k. Let the collection of registered voters consist of the following (3k² + 9k)/4 + 1 voters. (When “· · ·” appears at
the end of a vote here, the remaining candidates from C are ranked lexicographically. For example, given the candidate set {a, b, c, d}, the vote (b > · · ·) denotes the vote (b > a > c > d).)
• For each i, 1 ≤ i ≤ ℓ, k + 3 voters voting (bi ∼ bi+ℓ ∼ bi+2ℓ ∼ bi+3ℓ > · · ·).
• One voter voting (p > · · ·).
Let the collection of unregistered voters consist of the following n voters.
• For each Sj ∈ S, one voter voting (p ∼ bj1 ∼ bj2 ∼ bj3 > · · ·).
Notice that from the registered voters, the score of each bi candidate is (k − 1)/4 greater
than the score of p. Thus the chair must add voters from the collection of unregistered voters so that no bi candidate receives more than 1/4 more points, while p must gain k/4 points.
Therefore the chair must add the voters that correspond to an exact cover. ❑
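The construction can be sketched in code as follows (our illustration of the proof, not an implementation from the thesis). Votes are represented only by their group of tied top candidates, since plurality using average scores a vote by splitting one point evenly among the top-tied candidates.

```python
# Build the plurality-using-average CCAV instance from an X3C instance.

def build_ccav_instance(k, S):
    """k: |B| = 3k, with k divisible by 4 (as assumed w.l.o.g.);
    S: list of 3-element index triples over range(3k)."""
    assert k % 4 == 0
    ell = 3 * k // 4
    registered = []
    for i in range(ell):  # ell blocks of k+3 voters each
        tied = [f"b{i}", f"b{i + ell}", f"b{i + 2 * ell}", f"b{i + 3 * ell}"]
        registered += [tied] * (k + 3)
    registered.append(["p"])  # one voter with p alone on top
    unregistered = [["p"] + [f"b{j}" for j in Sj] for Sj in S]
    return registered, unregistered, k  # addition limit k

def plurality_average_scores(votes):
    """Each top-tied candidate receives 1/len(top group) points per vote."""
    scores = {}
    for top in votes:
        for c in top:
            scores[c] = scores.get(c, 0.0) + 1.0 / len(top)
    return scores
```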
The complexity of bribery for plurality also goes from P for total orders to NP-complete for votes with ties.
Theorem 3.2.5 [FHH09] Unweighted bribery for plurality for total orders is in P.
The proof that bribery for plurality for bottom orders and weak orders using max is NP-complete immediately follows from the proof of Theorem 4.2 from Faliszewski, Hemaspaandra, and Hemaspaandra
[FHH09], which showed bribery for approval to be NP-complete.
Theorem 3.2.6 Unweighted bribery for plurality for bottom orders and weak orders using
max is NP-complete.
Complexity Goes Down
Narodytska and Walsh [NW14] show that the complexity of coalitional manipulation (weighted or unweighted) for Borda goes from NP-complete to P for top orders using min (which they refer to as round-up). This is because when using min an optimal manipulator vote is to put p first and have all other candidates tied for last.
In contrast, notice that the complexity of a (standard) control action cannot decrease when more lenient votes are allowed. This is because the votes that create hard instances of control are still
able to be cast when more general votes are possible. The election chair is not able to directly change votes, except in a somewhat restricted way in candidate-control cases, but it is easy to see
that this does not affect the statement below.¹
Observation 3.3.1 If a (standard) control problem is hard for a type of vote with ties, it
remains hard for votes that allow more ties.
What about bribery? Bribery can be viewed as a two-phase action consisting of control by deleting voters followed by manipulation. Hardness for a bribery problem is typically caused by hardness of
the corresponding deleting voters problem or the corresponding manipulation problem. If the deleting voters problem is hard, this problem remains hard for votes that allow ties, and it is likely that
the bribery problem remains hard as well. Our best chance of finding a bribery problem that is hard for total orders and easy for votes with ties is a problem whose manipulation problem is hard, but
whose deleting voters problem is easy. Such problems exist, e.g., all weighted m-candidate t-approval systems except plurality and triviality.²
Theorem 3.3.2 [FHH09] Weighted bribery for m-candidate t-approval for all t ≥ 2 and
m > t is NP-complete.
For m-candidate t-approval elections (except plurality and triviality) the corresponding weighted manipulation problem was shown to be NP-complete by Hemaspaandra and Hemaspaandra [HH07] and the
corresponding deleting-voters problem was shown to be in P by Faliszewski, Hemaspaandra, and Hemaspaandra [FHH15].
Theorem 3.3.3 Weighted bribery for m-candidate t-approval for weak orders and for top
orders using min is in P.
Proof. To perform an optimal bribery, we cannot simply perform an optimal deleting-voters action followed by an optimal manipulation action. For example, if the score of b is already
at most the score of p, it does not make sense to delete a voter with vote (b > p ∼ a). But in the case of bribery, we would change this voter to (p > a ∼ b), which could be advantageous. However, the
weighted constructive-control-by-deleting-voters (CCDV) algorithm from [FHH15] still basically works. Since m is constant, there are only a constant number of different votes possible. And
we can assume without loss of generality that we bribe only the heaviest voters of each vote type and that each bribed voter is bribed to put p first and have all other candidates tied for last. In
order to find out if there exists a successful bribery of k voters, we look at all the ways we can distribute this k among the different types of votes. We then manipulate the heaviest voters of each
type to put p first and have all other candidates tied for last, and see if that makes p a winner. ❑
¹A similar argument is used to explain the relationship between the easiness of control with respect to general and single-peaked votes (see footnote 14 of Brandt et al. [BBHH15]).
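A sketch of this algorithm (our reading of the proof, not code from the thesis). Since m is constant, there is only a constant number T of vote types, so trying every way to split the budget k over the types takes at most (k+1)^T winner tests. The winner test `wins_with` is left as an assumed callback: it should evaluate weighted m-candidate t-approval (using min) when the bribed voters vote (p > everyone else tied for last) and the rest keep their votes.

```python
from itertools import product

def exists_bribery(vote_types, k, wins_with):
    """vote_types: dict mapping a vote type to the list of weights of the
    voters casting it; k: bribery budget."""
    types = list(vote_types)
    heaviest_first = {t: sorted(vote_types[t], reverse=True) for t in types}
    ranges = [range(min(k, len(heaviest_first[t])) + 1) for t in types]
    for split in product(*ranges):   # how many voters of each type to bribe
        if sum(split) > k:
            continue
        bribed, remaining = [], {}
        for t, n in zip(types, split):
            bribed += heaviest_first[t][:n]   # bribe only the heaviest voters
            remaining[t] = heaviest_first[t][n:]
        if wins_with(bribed, remaining):
            return True
    return False
```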
Complexity Remains the Same
Narodytska and Walsh [NW14] show that 4-candidate Copeland^0.5 CWCM remains NP-complete for top orders. They conjecture that this is also the case for 3 candidates and point out that the reduction that shows this for total orders from Faliszewski, Hemaspaandra, and Schnoor
[FHS08] won’t work. We will prove their conjecture using the following variation of Partition, which we define as Partition′ and show to be NP-complete below.
Name: Partition′
Given: A nonempty set of positive even integers {k1, . . . , kt} and a positive even integer K̂.
Question: Does there exist a partition (A, B, C) of {k1, . . . , kt} such that ΣA = ΣB + K̂?
Theorem 3.4.1 Partition′ is NP-complete.
Proof. The construction here is similar to the first part of the reduction to a different
version of Partition from Faliszewski, Hemaspaandra, and Hemaspaandra [FHH09].
Given {k1, . . . , kt} such that k1 + · · · + kt = 2K, corresponding to an instance of Partition, we construct the following instance {k′1, . . . , k′t, ℓ′1, . . . , ℓ′t}, K̂ of Partition′. Let k′i = 4^(t+1)ki + 4^i, let ℓ′i = 4^i, and let K̂ = 4^(t+1)K + 4^1 + · · · + 4^t. (Note that in Faliszewski, Hemaspaandra, and
Hemaspaandra [FHH09] “3”s were used, but we use “4”s here so that when we add a subset of {k′1, . . . , k′t, ℓ′1, . . . , ℓ′t, K̂}, we never have carries in the last t + 1 digits base 4, and we set
the last digit to 0 to ensure that all numbers are even.)
If there exists a partition (A, B, C) of {k′1, . . . , k′t, ℓ′1, . . . , ℓ′t} such that ΣA = ΣB + K̂, then for all i, 1 ≤ i ≤ t, ⌊(ΣA)/4^i⌋ mod 4 = ⌊(ΣB + K̂)/4^i⌋ mod 4. Note that, since there are no carries in the last t + 1 digits base 4, ⌊(ΣA)/4^i⌋ mod 4 = ‖A ∩ {k′i, ℓ′i}‖ and ⌊(ΣB + K̂)/4^i⌋ mod 4 = ‖B ∩ {k′i, ℓ′i}‖ + 1. It follows that exactly one of k′i or ℓ′i is in A and neither is
in B. Since this is the case for every i, it follows that B = ∅. Now look at all ki such that
k′i is in A. That set will add up to K, and so our original Partition instance is a positive instance.
For the converse, it is immediate that a subset D of {k1, . . . , kt} that adds up to K can
be converted into a solution for our Partition′ instance, namely, by putting k′i in A for every
ki in D, putting ℓ′i in A for every ki not in D, letting B = ∅, and putting all other elements
of {k′1, . . . , k′t, ℓ′1, . . . , ℓ′t} in C. ❑
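The number construction can be sketched as follows (our illustration; the exponent convention follows the formulas as reconstructed above, so it should be checked against the original thesis):

```python
# Map a Partition instance to a Partition' instance: each k_i is shifted
# above the low base-4 digits, digit i (for i = 1..t) marks the pair
# (k'_i, l'_i), and digit 0 stays zero so that all numbers are even.

def partition_to_partition_prime(ks):
    """ks: positive integers k1..kt summing to 2K."""
    t = len(ks)
    K = sum(ks) // 2
    k_prime = [4 ** (t + 1) * ks[i] + 4 ** (i + 1) for i in range(t)]
    l_prime = [4 ** (i + 1) for i in range(t)]
    K_hat = 4 ** (t + 1) * K + sum(4 ** (i + 1) for i in range(t))
    return k_prime + l_prime, K_hat
```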
We can now use Partition′ to prove the following theorem, which in turn proves the
conjecture by Narodytska and Walsh [NW14].³
Theorem 3.4.2 3-candidate Copeland^α CWCM remains NP-complete for top orders, bottom orders, and weak orders, for all rational α ∈ [0, 1) in the nonunique winner case (our
standard model).
Proof. The proof for bottom-order votes follows from the proof of the case for total-order
votes due to Faliszewski, Hemaspaandra, and Schnoor [FHS08]. We prove the top-order case below, and it is easy to see that this proof also holds for weak orders.
Let {k1, . . . , kt}, K̂ be an instance of Partition′, which asks whether there exists a partition
(A, B, C) of {k1, . . . , kt} such that ΣA = ΣB + K̂.
Let k1, . . . , kt sum to 2K and without loss of generality assume that K̂ ≤ 2K. We now
construct an instance of CWCM. Let the candidate set C = {a, b, p} and let the preferred candidate be p. Let there be two nonmanipulators with the following weights and votes.
• One weight K + K̂/2 nonmanipulator voting (a > b > p).
• One weight K − K̂/2 nonmanipulator voting (b > a > p).
From the votes of the nonmanipulators, score(a) = 2, score(b) = 1, and score(p) = 0. In the induced majority graph, there is the edge a → b with weight K̂, the edge a → p with weight 2K, and the
edge b → p with weight 2K. Let there be t manipulators with weights
k1, . . . , kt.
Suppose that there exists a partition of {k1, . . . , kt} into (A, B, C) such that ΣA = ΣB + K̂. Then for each ki ∈ A, have the manipulator with weight ki vote (p > b > a), for
each ki ∈ B, have the manipulator with weight ki vote (p > a > b), and for each ki ∈ C, have
the manipulator with weight ki vote (p > a ∼ b). From the votes of the nonmanipulators
and manipulators, score(a) = score(b) = score(p) = 2α.
For the other direction, suppose that p can be made a winner. When all of the manipulators put p first then score(p) = 2α (the highest score that p can achieve). Since α < 1, the manipulators must
have voted such that a and b tie. This means that a subcollection of the manipulators with weight K voted (p > b > a), a subcollection with weight K − K̂
voted (p > a > b), and a subcollection with weight K̂ voted (p > a ∼ b). No other votes would cause b and a to tie. Notice that the weights of the manipulators in the three different subcollections form
a partition (A, B, C) of {k1, . . . , kt} such that
ΣA = ΣB + K̂. ❑
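As a numeric sanity check on the forward direction of this proof, the following sketch (our illustration) computes the weighted pairwise margins and confirms that the stated manipulator votes make all three pairwise contests end in ties. Votes are encoded as candidate-to-level maps (smaller level = more preferred); this encoding is ours.

```python
def pairwise_margins(weighted_votes, cands=("p", "a", "b")):
    margin = {(x, y): 0 for x in cands for y in cands if x != y}
    for w, r in weighted_votes:
        for x, y in margin:
            if r[x] < r[y]:
                margin[(x, y)] += w
    return {(x, y): margin[(x, y)] - margin[(y, x)] for (x, y) in margin}

def all_pairwise_ties(A, B, C, K_hat):
    """A, B, C: manipulator weights with sum(A) = sum(B) + K_hat."""
    K = (sum(A) + sum(B) + sum(C)) // 2
    votes = [(K + K_hat // 2, {"a": 0, "b": 1, "p": 2}),   # (a > b > p)
             (K - K_hat // 2, {"b": 0, "a": 1, "p": 2})]   # (b > a > p)
    votes += [(w, {"p": 0, "b": 1, "a": 2}) for w in A]    # (p > b > a)
    votes += [(w, {"p": 0, "a": 1, "b": 2}) for w in B]    # (p > a > b)
    votes += [(w, {"p": 0, "a": 1, "b": 1}) for w in C]    # (p > a ~ b)
    return all(m == 0 for m in pairwise_margins(votes).values())

# e.g., all_pairwise_ties([6], [2], [4], K_hat=4) returns True
```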
3-candidate Copeland^α CWCM is unusual in that the complexity can be different if we
look at the unique winner case instead of the nonunique winner case (our standard model). We prove that the only 3-candidate Copeland CWCM case that is hard for the unique winner model remains hard
using a very similar approach.
Theorem 3.4.3 3-candidate Copeland^0 CWCM remains NP-complete for top orders, bottom
orders, and weak orders, in the unique winner case.
Proof. The proof for bottom-order votes follows from the proof of the case for total-order
votes due to Faliszewski, Hemaspaandra, and Schnoor [FHS08]. We prove the top-order case below, and it is easy to see that this proof also holds for weak orders.
Let {k1, . . . , kt}, K̂ be an instance of Partition′, which asks whether there exists a partition
(A, B, C) of {k1, . . . , kt} such that ΣA = ΣB + K̂.
Let k1, . . . , kt sum to 2K and without loss of generality assume that K̂ ≤ 2K. We now
construct an instance of CWCM. Let the candidate set C = {a, b, p}. Let the preferred candidate be p ∈ C. Let there be two nonmanipulators with the following weights and votes.
• One weight K + K̂/2 nonmanipulator voting (a > p > b).
• One weight K − K̂/2 nonmanipulator voting (b > a > p).
From the votes of the nonmanipulators, score(a) = 2, score(b) = 0, and score(p) = 1. The induced majority graph contains the edge a → b with weight K̂, the edge a → p with weight 2K, and the edge p → b with
weight K̂. Let there be t manipulators, with weights k1, . . . , kt.
Suppose that there exists a partition of {k1, . . . , kt} into (A, B, C) such that ΣA = ΣB + K̂. Then for each ki ∈ A have the manipulator with weight ki vote (p > b > a), for
each ki ∈ B have the manipulator with weight ki vote (p > a > b), and for each ki ∈ C have
the manipulator with weight ki vote (p > a ∼ b). From the votes of the nonmanipulators
and manipulators, p ties with a, p defeats b, and a ties with b, and so score(p) = 1, score(a) = score(b) = 0, and p is the unique winner.
For the other direction, suppose that p can be made a unique winner. When all of the manipulators put p first then score(p) = 1. So the manipulators must have voted so that
a and b tie, since otherwise either a or b would tie with p and p would not be a unique winner. Therefore a subcollection of the manipulators with weight K voted (p > b > a), a subcollection with
weight K − K̂ voted (p > a > b), and a subcollection with weight K̂
voted (p > a ∼ b). No other votes would cause a and b to tie. ❑
The remaining 3-candidate Copeland^α CWCM cases remain in P when moving from total
orders to votes with ties. The theorem below follows using the same arguments as in the proof of the case without ties from Faliszewski, Hemaspaandra, and Schnoor [FHS08].
Theorem 3.4.4 3-candidate Copeland^α CWCM remains in P for top orders, bottom orders,
and weak orders, for α = 1 for the nonunique winner case and for all rational α ∈ (0, 1] in the unique winner case.
Majority-Graph Result
We now state a general theorem on two-voter majority graphs for votes with ties. See Brandt et al. [BHKS13] for related work on majority graphs constructed from a fixed number of voters with total
orders. Recall that a majority graph can be constructed from an election (C, V) by representing each candidate as a vertex in the graph and for every pair of candidates
a, b ∈ C the graph contains the edge a → b if a > b by majority.
Theorem 3.4.5 A majority graph can be induced by two weak orders if and only if it can
be induced by two total orders.
Proof. Given two weak orders v1 and v2 that describe preferences over a candidate set C,
we construct two total orders, v′1 and v′2, iteratively as follows.
For each pair of candidates a, b ∈ C and i ∈ {1, 2}, if a > b in vi then set a > b in v′i.
For each pair of candidates a, b ∈ C, if a > b in v1 (v2) and a ∼ b in v2 (v1) then the
majority graph induced by v1 and v2 contains the edge a → b. To ensure that the majority
graph induced by v′1 and v′2 contains the edge a → b we must set a > b in v′2 (v′1).
After performing the above steps there may still be a set of candidates C′ ⊆ C such that
v1 and v2 are indifferent between each pair of candidates in C′. For each pair of candidates a, b ∈ C′, a ∼ b in
v1 and v2, which implies the majority graph does not contain an edge
between a and b. To ensure that the majority graph induced by v′1 and v′2 does not contain an edge between a and b, without loss of generality set v′1 to strictly prefer the lexicographically
smaller to the lexicographically larger candidate, and the reverse in v′2.
The process described above constructs two orders v′1 and v′2 and ensures that the majority
graph induced by v1 and v2 is the same as the majority graph induced by v′1 and v′2. Since
for each pair of candidates a, b ∈ C and i ∈ {1, 2} we consider each possible case where a ∼ b
is in vi and set either a > b or b > a in the corresponding order v′i, it is clear that v′1 and v′2
are total orders. ❑
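The completion procedure from the proof can be sketched as follows (our illustration). Weak orders are encoded as candidate-to-level maps (smaller level = more preferred; equal level = tied), and each comparison applies the three rules in sequence: keep strict preferences, break a one-sided tie the way the other order decides, and break a two-sided tie lexicographically in v′1 and reverse-lexicographically in v′2.

```python
from functools import cmp_to_key

def complete_pair(v1, v2):
    def cmp1(a, b):
        if v1[a] != v1[b]:
            return v1[a] - v1[b]      # strict preference of v1 is kept
        if v2[a] != v2[b]:
            return v2[a] - v2[b]      # preserve the edge forced by v2
        return -1 if a < b else 1     # tied in both: lexicographic
    def cmp2(a, b):
        if v2[a] != v2[b]:
            return v2[a] - v2[b]
        if v1[a] != v1[b]:
            return v1[a] - v1[b]
        return 1 if a < b else -1     # tied in both: reverse of v'1
    cands = list(v1)
    return (sorted(cands, key=cmp_to_key(cmp1)),
            sorted(cands, key=cmp_to_key(cmp2)))

# e.g., complete_pair({"a": 0, "b": 0, "c": 1}, {"a": 0, "b": 1, "c": 1})
# returns (['a', 'b', 'c'], ['a', 'b', 'c']), preserving the edges
# a -> b, a -> c, and b -> c of the original majority graph.
```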
Observe that as a consequence of Theorem 3.4.5 we get a transfer of NP-hardness from total orders to weak orders for two manipulators when the result depends only on the induced majority graph. The
proofs for Copeland^α unweighted manipulation for two manipulators
for all rational α for total orders depend only on the induced majority graph [FHS08, FHS10], so we can state the following corollary to Theorem 3.4.5.
Corollary 3.4.6 Copeland^α unweighted manipulation for two manipulators for all rational
α ≠ 0.5 for weak orders is NP-complete.
Irrational-Voter Copeland Results
Another way to give more flexibility to voters is to let the voters state preferences that are not necessarily transitive, which are referred to as “irrational.” This simply means that for every
unordered pair a, b ∈ C of distinct candidates, the voter has a > b or b > a. For example, a voter’s preferences could be (a > b, b > c, c > a). As mentioned by Faliszewski et al. [FHHR09], a voter
is likely to have preferences that are not transitive when making a decision based on multiple criteria.
Additionally, the preferences of voters can include ties as well as irrationality, and we will also consider | {"url":"https://1library.net/document/wq2pg2y1-election-attack-complexity-for-more-natural-models.html","timestamp":"2024-11-04T15:25:13Z","content_type":"text/html","content_length":"227926","record_id":"<urn:uuid:99d49b65-a4da-49ab-b259-ea5b7e1910d1>","cc-path":"CC-MAIN-2024-46/segments/1730477027829.31/warc/CC-MAIN-20241104131715-20241104161715-00793.warc.gz"} |
Naming Polynomials Practice Sheet | Math = Love
Naming Polynomials Practice Sheet
I created this naming polynomials practice sheet to give my students some much-needed practice naming polynomials. We glued the finished practice sheet in our interactive notebooks.
We completed this practice sheet directly after our naming polynomials graphic organizer.
I like this practice sheet because after I make up the first few polynomials to name, students usually suggest their own polynomials for the rest of the class to name!
More Activities for Teaching Polynomials | {"url":"https://mathequalslove.net/naming-polynomials-practice-sheet/","timestamp":"2024-11-09T00:05:24Z","content_type":"text/html","content_length":"323706","record_id":"<urn:uuid:ee65ddbc-3817-4420-808d-bb695cef9f19>","cc-path":"CC-MAIN-2024-46/segments/1730477028106.80/warc/CC-MAIN-20241108231327-20241109021327-00530.warc.gz"} |
Girsanov and Feynman-Kac type transformations for symmetric Markov processes
Chen, Zhen-Qing and Zhang, Tusheng (2002) Girsanov and Feynman-Kac type transformations for symmetric Markov processes. Annales de l'Institut Henri Poincaré (B) Probability and Statistics, 38 (4).
pp. 475-505. ISSN 0246-0203
Studied in this paper is the transformation of an arbitrary symmetric Markov process X by multiplicative functionals which are the exponential of continuous additive functionals of X having zero
quadratic variations. We characterize the transformed semigroups by their associated quadratic forms. This is done by first identifying the symmetric Markov process under Girsanov transform, which
may be of independent interest, and then applying Feynman–Kac transform to the Girsanov transformed process. Stochastic analysis for discontinuous martingales is used in our approach.
{"url":"https://eprints.maths.manchester.ac.uk/566/","timestamp":"2024-11-11T05:16:34Z","content_type":"application/xhtml+xml","content_length":"24091","record_id":"<urn:uuid:1dd515f8-b70e-425e-816d-506d38f752e0>","cc-path":"CC-MAIN-2024-46/segments/1730477028216.19/warc/CC-MAIN-20241111024756-20241111054756-00787.warc.gz"}
Tian FANG, Feng LIU, Conggai LI, Fangjiong CHEN, Yanli XU, "L0-Norm Based Adaptive Equalization with PMSER Criterion for Underwater Acoustic Communications" in IEICE TRANSACTIONS on Fundamentals,
vol. E106-A, no. 6, pp. 947-951, June 2023, doi: 10.1587/transfun.2022EAL2069.
Abstract: Underwater acoustic channels (UWA) are usually sparse, which can be exploited for adaptive equalization to improve the system performance. For the shallow UWA channels, based on the
proportional minimum symbol error rate (PMSER) criterion, the adaptive equalization framework requires the sparsity selection. Since the sparsity of the L0 norm is stronger than that of the L1,
we choose it to achieve better convergence. However, because the L0 norm leads to NP-hard problems, it is difficult to find an efficient solution. In order to solve this problem, we choose the
Gaussian function to approximate the L0 norm. Simulation results show that the proposed scheme obtains better performance than the L1-based counterpart.
URL: https://global.ieice.org/en_transactions/fundamentals/10.1587/transfun.2022EAL2069/_p
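For illustration, the Gaussian approximation of the L0 norm that the abstract alludes to is, in the smoothed-L0 literature, typically of the following form; this is a generic sketch of that idea, not the paper's actual algorithm.

```python
import numpy as np

# Generic smoothed-L0 idea: ||x||_0 ~ sum_i (1 - exp(-x_i^2 / (2 sigma^2))).
# The approximation tends to the true L0 norm as sigma -> 0 and is
# differentiable, so it can drive a gradient-based adaptive update.
def smoothed_l0(x, sigma):
    x = np.asarray(x, dtype=float)
    return float(np.sum(1.0 - np.exp(-x ** 2 / (2.0 * sigma ** 2))))
```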
Frequently Asked Accenture Placement Papers with Solution Page 56 | {"url":"https://m4maths.com/frequently-asked-placement-questions.php?ISSOLVED=&page=56&LPP=10&SOURCE=Accenture&MYPUZZLE=&TOPIC=&SUB_TOPIC=","timestamp":"2024-11-06T01:22:13Z","content_type":"text/html","content_length":"84649","record_id":"<urn:uuid:bde2b61d-7280-4f7a-98a4-80362caf1644>","cc-path":"CC-MAIN-2024-46/segments/1730477027906.34/warc/CC-MAIN-20241106003436-20241106033436-00860.warc.gz"}
Investigation of the Interaction Between Process Signals and Modeled Thermomechanical Energy in Generating Gear Grinding
dCor [-] Distance correlation
Ft N Tangential force
F⃗ N Force vector
Mi [-] Mutual information
p(X) [-] Marginal probability of X (energy variable)
p(X,Y) [-] Joint probability function of X and Y
p(Y) [-] Marginal probability of Y (power variable)
Pc W Cutting power
Pp W Process power
Ps,i W Spindle idle power
Ps,t W Spindle total power
R [-] Ranks of X (energy variable)
r [-] Pearson coefficient
rs [-] Spearman’s rank correlation coefficient
S [-] Ranks of Y (power variable)
v⃗ m/s Speed vector
vc m/s Cutting speed
X [-] Energy variable
Y [-] Power variable
Grinding is a high-performance machining process typically applied at the end of a manufacturing process chain, due to its ability to meet surface quality and workpiece integrity requirements. In
gear manufacturing, generating gear grinding is a high-productivity variation of the grinding process. The high productivity results from the continuous process, where the workpiece is machined with
a worm-shaped grinding tool. Due to its abrasive characteristics and the resulting high degree of material deformation, this process requires a large amount of energy input per volume of
material removed (Ref. 23).
The energy generated during the grinding process can be categorized into thermal and mechanical energy. While the mechanical loads in the contact zone during the grinding process have a direct
influence on the residual stress state of the workpiece surface, the thermal loads account for most of the generated energy. Thermal energy can be dissipated to either the chip, the environment, the
cutting oil or the workpiece, which brings the risk of compromising the surface integrity through the occurrence of grinding burn. The extent to which process parameters can be manipulated without
causing part damage is not entirely understood. This constraint not only limits the productivity of the process but also often makes it rather iterative, as the surface characteristics must be tested
to assess whether the selected parameters are appropriate. Currently, suitable process parameters are defined by time-consuming trials or based on the operator’s experience.
To guarantee an adequate surface integrity outcome in the parts finished by grinding, several energy characterization models have been developed for the calculation of energy and heat characteristics
in the contact zone during grinding (Refs. 11, 14, 21). However, their direct application to generating gear grinding is often not viable, due to the complex contact conditions of the process
derived from its intricate kinematics. More recent models are also able to describe the energy generation in the contact zone during generating gear grinding (Refs. 12, 19). The calculation of energy
through such models offers a viable approach to understanding the energy generation in the contact zone, although it does not provide a means of directly accessing the influences of the energy on the
process itself.
The fundamental relationships between energy and power in terms of machining processes indicate power as a relevant parameter for energy assessment. Power is a measure that can be assessed in
real-time as a time-domain signal data extracted from the control of the grinding machine (Ref. 3). Moreover, the recording of spindle power signals during the process is a commonly found feature
among modern grinding machines, allowing for in-process monitoring, without the necessity of installation of additional sensors. Thus, examining the relationships between the calculated process
energy and the power signals on the spindles of the grinding machine may enable an indirect assessment of the heat in the contact zone, without the need for interactive evaluations. The approach
developed in this research aims to aid a further understanding of the correlations between the energy generated during material removal and the power signals from the machine control during
generating gear grinding. The approach is based on the development of a methodology for the investigation of correlations between machine spindle power signals and the calculated process energy in
the contact zone, during the contact between tool and workpiece in generating gear grinding.
Despite the challenges posed by a complete understanding of the energy generation during generating gear grinding, both analytical and empirical models can provide a quantification of this metric.
The energy generated in the contact zone can be correlated to the power in the spindle (Ref. 12), which in turn, can be measured by analyzing machine signals. However, since this relationship is not
inherently straightforward, a detailed study of both power signals and energy is required to identify effective correlations. Therefore, within this chapter the current state of the art on both of
those concepts is reviewed, focusing on methodologies for energy calculation and signal analysis in gear grinding. By investigating existing approaches and technologies, a foundation can be
established for proposing a method to analyze correlations between energy consumption and power signal characteristics during generating gear grinding.
As an abrasive process, the energy required in the grinding process is higher than in machining processes with a defined cutting edge (Ref. 20). This effect is derived from the large amount of
material deformation that occurs during the cut, from the material that is removed and the one that remains in the workpiece, as well as the friction between the work-piece surface and the grains in
the grinding wheel (Ref. 20). The total energy required to machine parts during grinding can be understood as a sum of process, machine and background energy (Ref. 11). The machine and background
energies correspond to the share that is required for the machine to operate (hydraulics, cooling system, lighting, etc.). Meanwhile, the process energy corresponds to the share that is actively
employed for material removal. For grinding, the process energy averages up to 20 percent of the total required energy and is typically considered to be equivalent to spindle energy (Ref. 1).
The understanding of the energy involved in machining processes is often approached by a correlation with machine power, based on the principle that power is the rate at which work is done or energy
is converted. Therefore, in cutting processes, energy is defined as the product of the distance to be traveled (cutting length) and the components of the resulting force acting in its direction,
while power is defined as the product of the speed components and the resulting force acting in their direction (Ref. 9). These relations are useful in manufacturing since they allow for a more
direct comprehension of the energy in terms of process parameters. In grinding, this association is given by Equation 1, in which the cutting power Pc is directly proportional to the tangential
force Ft. The cutting power is a share of the aforementioned process power (Equation 2) and, therefore, cannot be directly compared to the power which is observed in the machine spindle (Ref. 8).
Nevertheless, the association between those is possible through Equation 3, in which the process power Pp is defined as the spindle total power, subtracted by the idle power on the spindle—the power
which is necessary for the sole rotation of the spindle, without contact between tool and workpiece (Ref. 11).
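The referenced equations are not reproduced in this version of the text. Based on the definitions above and on the nomenclature (and on the general relation that power is the product of force and speed, P = F⃗ · v⃗), they can plausibly be reconstructed as follows; this is a hedged reconstruction rather than a quotation, and Equation 2 is paraphrased since the text only describes its role:

$$P_c = F_t \cdot v_c \qquad (1)$$

$$P_c \leq P_p \quad \text{(the cutting power is one share of the process power)} \qquad (2)$$

$$P_p = P_{s,t} - P_{s,i} \qquad (3)$$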
Fundamentally, power can be understood as the rate of energy consumption. In terms of the grinding process, up to 60–90 percent of the process energy can be converted into heat in the
workpiece, depending on factors such as process conditions, grain and wheel bonding type (Ref. 11). This conversion effect leads to a recurring issue during grinding processes, the incidence of
grinding burn, characterized as thermal damage in the workpiece surface. This incidence may lead to metallurgical phase transformations, tempering and possible rehardening of the surface layer (Ref.
15), as well as induction of residual stresses, which affects the fatigue strength of the material (Ref. 10). Since thermal damage like grinding burn concerns a wide combination of effects and
intensities, it cannot be detected throughout the process immediately, but rather through tests and examinations carried out on the finished part. According to Malkin, the threshold temperature for
the occurrence of grinding burn could be determined in terms of critical specific energy, which requires the definition of empirical coefficients based on the material pair (Ref. 15). Those findings
indicate that there are means for in-process identification and control of thermal damage. Previously, Rowe also developed a model for predicting grinding burn threshold, based on the heat flux
observed in the process, considering the energy partitioning between tool and workpiece (Ref. 21).
The detection of such thermal defects is further complicated due to the abrasive characteristics of the grinding process. Since the material removal is performed by grains with an undefined cutting
edge, the heat generation in the contact zone cannot be directly assessed. For this reason, many models have been developed to estimate both the energy and heat in the process during grinding. For
the case of generating gear grinding, this modeling is further hindered by the complex kinematic characteristics of the process, which leads to different contact conditions at each instant of the
process (Ref. 22). Reimann developed a thermomechanical energy description model for generating gear grinding, in which the specific grinding energy is calculated based on parameters such as the
cutting force, cutting speed, contact time, and contact zone area in the process, which are determined both analytically and empirically (Ref. 19). The model has been parameterized utilizing an
analogy trial replicating the effects of the contact between tool and workpiece at one specific point of the gear flank and validated by the inspection of the presence of grinding burn (Ref. 19).
Although the model provides a good description of the heat flux at one specific point by this approach, its application in different test cases is challenging due to its reliance on empirical factors
from temperature and force measurements during trials. Furthermore, Linke developed a model to describe the energy in conventional grinding based on the stages of chip formation—friction, plowing and
shearing, with a single-grain engagement approach (Ref. 11). The consideration of the contact length makes the model more adaptable to different process kinematics. However,
it is not directly applicable to generating gear grinding, as well as it does not consider the influence of different grain sizes or geometries, nor the influence of the simultaneous engagement of
multiple grains. Although both models can estimate the heat and energy in the contact zone, they either do not fully consider factors that are also relevant to the process or are not directly
applicable to generating gear grinding. Considering this, Löhrer combined the findings of both models into an approach that considers the influence of the grinding wheel topography on the energy
distribution at the flank during generating gear grinding (Ref. 12).
In the model developed by Löhrer, the energy calculation is done through the generating gear grinding simulation with the software GearGRIND (Ref. 12). With user input of the tool topography, process
parameters and workpiece properties, the software can apply the process kinematics to obtain the generated energy. This is achieved utilizing a penetration calculation in which the workpiece and tool
movements are discretized in cutting planes, to later be positioned with each other. Due to the process-specific kinematics, the tool profile penetrates the workpiece, as the cutting planes of the
tool body are projected into those of the workpiece. If there is an overlap, the common cutting surface is then determined and removed (Ref. 7). For the original simulation of the contact, neither
the grinding worm nor the gear rotated, but rather the gear was fixed in space while the tool followed a trochoidal motion, representing a combination of the gear and the tool motion which occurs in
the actual process. For the energy calculation developed by Löhrer, in addition to the original approach, the topography curves of the grinding worm are also projected into each tool profile (Ref.
12). Subsequently, the rotational motion of the worm is implemented by changing the position of each tool profile during the trochoidal motion, which allows it to accurately represent the contact
between the grains and workpiece surface through the rotational motion of the tool. Based on this foundation, the microinteraction characteristics of contact length lc, grain cross-section area Acu
and grain penetration depth hcu are calculated, as illustrated in the left side of Figure 1. In the schematic, the engagement path of a grain is represented by a line passing through the macro
contact geometry. Through the simulation, the contact characteristics are calculated along the entire engagement for all the grains in contact, in the entire gear grinding process (Ref. 12). Thereby,
it is possible to calculate the friction Efr, plowing Epl and shearing energy Esh, and with that the process energy Ew, by using the equations described in the right side of Figure 1.
Figure 1—Approach for Process Energy calculation by Löhrer (Ref. 12).
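The formulas in Figure 1 are not reproduced in the text here; the sketch below only illustrates the aggregation the description implies — summing the friction, plowing, and shearing contributions over all simulated grain engagements — using hypothetical per-engagement records (the field names are ours, not GearGRIND's).

```python
# The only modeling assumption encoded here is that the process energy Ew
# is the sum of the three chip-formation shares, as the text implies.
def process_energy(engagements):
    E_fr = sum(e["E_fr"] for e in engagements)  # friction share
    E_pl = sum(e["E_pl"] for e in engagements)  # plowing share
    E_sh = sum(e["E_sh"] for e in engagements)  # shearing share
    return {"E_fr": E_fr, "E_pl": E_pl, "E_sh": E_sh,
            "E_w": E_fr + E_pl + E_sh}
```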
In grinding, material removal takes place through the engagement of grinding wheel grains with the workpiece, resulting in a contact zone significantly smaller than in defined cutting-edge processes
like milling or turning (Ref. 2). Therefore, the assessment of process parameters using conventional measuring techniques is often not possible, due to the difficulty of accessing the actual contact
area. These difficulties underlie the efforts which have been made to apply indirect monitoring methods to the grinding process through signal analysis, as a means of capturing and interpreting the
dynamic behavior observed in the process. This chapter provides an overview of signal analysis, initially in the context of overall manufacturing processes, as well as specifically in grinding.
Within machining processes with a defined cutting edge, Tool Condition Monitoring (TCM) is a common application for signal analysis. For the process of gear hobbing, Hendricks investigated the
suitability of using acceleration sensors for predicting component quality regarding tool wear (Ref. 6). The goal of the approach was to derive measures for increasing the process stability from the
signal data. The evaluation of characteristic values extracted from the time and frequency spectrum of the signal allowed the recognition of patterns between acceleration signals and geometric
quality deviations in the hobbed parts. For the process of milling, Drouillet also investigated tool life predictions by studying the spindle power signals of the process (Ref. 5). In this approach,
the Root Mean Square (RMS) values of the signals in the time domain are evaluated by a neural network to predict the Remaining Useful Life (RUL) of the tool, presenting a strong correlation between
the predicted and true values of the RUL.
In abrasive processes such as grinding, the stochastic characteristics of the contact between the grains of undefined geometry and the surface of the workpiece bring further complexity into such
investigations. Pandiyan conducted a comprehensive review regarding monitoring of abrasive finishing process by using artificial intelligence. The review indicated that AE (Acoustic Emission) sensors
are the most commonly employed for abrasive processes, due to their sensitivity in the high-frequency range, where most of the microcutting components are dominant (Ref. 17). Further analysis
revealed that grinding burn, wheel conditioning and chatter vibration are common topics to be predicted when monitoring grinding processes (Ref. 17). Additionally, Mirifar developed an approach for
prediction of forces and surface roughness in grinding, through the analysis of AE sensors integrated into the grinding tool (Ref. 16). In the approach, the signals were initially pre-processed,
amplified and de-noised, and the peak values were used as input in feedforward neural network, which was able to predict the arithmetic mean roughness Ra and normal grinding forces FN with an
accuracy of 99 percent (Ref. 16).
To indirectly assess the process energy during generating grinding, the physical relationships between energy and power indicate the evaluation of power signals as a promising approach. On industrial
grinding machines, the recording of time-domain power signals of the machine spindles is a commonly incorporated factory feature. Therefore, the evaluation of such signals also brings the advantage
of not requiring the installation of external sensors such as accelerometers or AE sensors, with which the achievement of sensible results is dependent on the sensor positioning and distance to the
workpiece (Ref. 17), hence, possibly leading to incorrect readings.
As described in the previous chapters, the understanding of the relationships between energy in the contact zone and spindle power signals in generating gear grinding shows potential for optimizing
process parametrization. In that sense, the objective of this report is to develop a method for investigation of the correlations between process signal and calculated process energy in generating
gear grinding, see Figure 2.
To achieve the proposed objective, the approach is divided into four phases. In phase 1, experimental trials are conducted for generating gear grinding of a pinion shaft, to gather the machine power
signals. Those signals are then treated and analyzed in phase 2, where the main characteristic values which define the process are extracted, to be later compared with the process energy. This
calculation will be performed based on the model of Löhrer (Ref. 12) through the software GearGRIND. The model is described in phase 3. Finally, in phase 4, a method to investigate the correlations
between the power signals and process energy gathered in the previous phases is developed.
In the trials carried out for this study, generating gear grinding was applied to finish case-hardened pinion shafts made of 16MnCr(S)5, designed for use in transmission systems of electrical
vehicles. The trials were performed as a part of the Incubator Technology Chain project, in which product, process and quality were acquired for the entire manufacturing chain of the pinion shafts.
Before the generating gear grinding trials, the pinion shafts were prepared by a hobbing process with varying parameters. The variation of parameters during the gear preparation influences the
initial geometry in the grinding process, and therefore, is presented in this chapter.
The grinding trials were performed on a Klingelnberg VIPER 500 KW grinding machine. The pinion shaft was centered and clamped between tips, as shown in Figure 3. During the process, the actual value
of current in axes B (workpiece rotation), C (grinding worm rotation), X, Y and Z (grinding worm translation) was recorded. Because current and power are directly related if the voltage is constant,
and the detectable power signal of the grinding spindle has a lower resolution than the current signal, the spindle current has been measured. The signals were recorded throughout the entire grinding
process of each pinion shaft at a sampling rate of fs = 60 Hz. As a tool, a ceramic bonded grinding worm manufactured by Krebs & Riedel was used, with characteristics shown at the bottom left of
Figure 3. To prepare the grinding worm for the grinding process, its dressing was performed using a diamond disk dresser.
The design of the experiments is shown in Figure 4. As previously mentioned, before grinding the gears are prepared by a gear hobbing process (shown on the left side of Figure 4). The energy
calculation detailed in this research only takes the grinding process into account, however, the grinding stock in the first pass is defined by the parameters used during the gear preparation.
Therefore, the variation of the parameters in this stage must also be considered. For both the gear preparation and the grinding, a reference and a productive parameter set were applied. For the gear
preparation, the cutting speed vc was kept constant between both variations, while the feed was varied between fa = 1.5 / 2.0 mm and fa = 3.0 / 4.0 mm.
During grinding, material removal was performed through a five-stroke strategy, which divided the total grinding stock into five cuts, aiming to reduce the risk of grinding burn in the final part.
The first three strokes concern the roughing operation, while the finishing is performed in the next two, each with different parameters. Generally, the first stroke acts as an equalization pass, in
which there may not be full contact between the tool and the workpiece, to level the surface. With the model considered in this research, the energy of each stroke is calculated separately,
therefore, each stroke can be considered as a different input for the energy calculation. Across the grinding process, the cutting speed vc, infeed DS and axial feed fa were varied as shown in the
center and left of Figure 4. While the cutting speed varies between the reference and productive variations, the infeed and axial feed are varied between the roughing and finishing steps.
After the execution of the trials, the next step of the approach is evaluating the acquired signals, to extract from them characteristic values which can be compared to the process energy. For that
evaluation to occur, the signals from the entire process must first be evaluated in terms of the process strategy, to understand which sections of the signals are relevant for the analysis.
The raw signals obtained from the trials contain valuable information about the process, however, evaluating them directly presents challenges. Initially, the large amount of data not only makes the
evaluation complex but also poses limitations in terms of storage and processing. Additionally, the entire signal contains regions that are not representative of the process, when there is effective
contact between the tool and workpiece. Therefore, to facilitate the recognition of patterns in the signal, it is first necessary to extract values that can be associated with the energy.
The actual values of current in the entire process were extracted from the machine control. The first step to evaluate these results in terms of energy is to convert the measured current into power,
given its relationship to the voltage. The result of this conversion for all the C-axis (tool rotation) and X-axis (tool radial translation) is shown in Figure 5. On the left side of the figure, the
signals for the X- and C-axis are compared. On the signal recorded from the X-axis, the presence of peaks at the points where contact starts on each stroke could be identified. Those peaks were then
taken as a reference for the distinction of the beginning of each stroke, as shown by the vertical dashed lines. The end of the interval of each stroke, however, was taken as the moment when the
value of the beginning of the stroke was reached again (not depicted in the diagram). As a result, each stroke was then distinguishable as shown to the right of Figure 5.
Figure 5—Identification of the stroke intervals in the tool spindle signals.
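A simplified sketch (ours) of this stroke identification: locate the contact peaks in the X-axis power signal to mark stroke beginnings, then close each stroke where the signal returns to its starting level. The minimum peak spacing and prominence are illustrative placeholders, not values used in the study.

```python
import numpy as np
from scipy.signal import find_peaks

def find_strokes(p_x, fs=60, min_gap_s=5.0, prominence=None):
    p_x = np.asarray(p_x, dtype=float)
    peaks, _ = find_peaks(p_x, distance=int(min_gap_s * fs),
                          prominence=prominence)
    strokes = []
    for pk in peaks:
        level = p_x[pk - 1] if pk > 0 else p_x[0]  # level just before contact
        end = pk + 1
        while end < len(p_x) and p_x[end] > level:
            end += 1                               # stroke ends when the
        strokes.append((pk, end))                  # starting level is reached
    return strokes
```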
Typically, signal processing requires applying a filtering process to extract realistic values from the signals. However, the power signals during the trials were acquired at a frequency rate of fs =
60 Hz, significantly lower than common frequency rates in data acquisition (typically 60 kHz to 1 MHz for AE sensors, for example). Therefore, a further reduction of the signal is not necessary.
With the signal pre-processed by identifying the regions containing different strokes, it was then possible to characterize the signal by extracting time-domain characteristic values. For this
approach, Maximum (Max), Minimum (Min), Median (Med), Mean, Peak to Peak (P2P), RMS, Kurtosis (Kurt), K4, Skewness (Skew), Variance (Var) and Crest factor (Crest) were calculated for future
comparison to the calculated energy. In the center of Figure 6, a visual representation of a few of the characteristic values for the first stroke of eight repetition trials with the reference
parameters of grinding is displayed. In this case, the stroke occurred in the time interval between 36 and 48 seconds from the beginning of the process. Through the plot to the right of Figure 6, it
is possible to see the scatter between the calculated values, which indicates the reliability of the current measurements and suggests a viable source of data for further comparison with the
generated energy during the process. Therefore, the extracted values can be used as input for the investigation of correlations with the generated energy during the process. Each of those values was
then calculated for every stroke, and each of the parameter sets described in the section “Experimental Methodology” to later be used as an input in the approach developed in the section “Development
of an Approach for Investigation of the Correlation Between Process Signal Data and Process Energy.”
Figure 6—Approach for evaluation of power signals in the tool spindle axis.
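A sketch (our implementation) of the listed time-domain characteristic values for one stroke; taking "K4" to be the fourth cumulant is our assumption, since the text does not define it.

```python
import numpy as np
from scipy.stats import kurtosis, skew

def characteristic_values(x):
    x = np.asarray(x, dtype=float)
    rms = float(np.sqrt(np.mean(x ** 2)))
    var = float(np.var(x))
    excess_kurt = float(kurtosis(x))        # Fisher definition (excess)
    return {
        "Max": float(np.max(x)),
        "Min": float(np.min(x)),
        "Med": float(np.median(x)),
        "Mean": float(np.mean(x)),
        "P2P": float(np.ptp(x)),            # peak to peak
        "RMS": rms,
        "Kurt": excess_kurt,
        "K4": excess_kurt * var ** 2,       # 4th cumulant = excess kurtosis * sigma^4
        "Skew": float(skew(x)),
        "Var": var,
        "Crest": float(np.max(np.abs(x))) / rms,
    }
```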
With the power signals for the machine’s main spindles evaluated and characterized in the previous section, the next step of the approach is the calculation of the process energy in each setup of the
design of experiments. As mentioned in the previous chapters, this research is based on the model developed by Löhrer, which allows the calculation of the energy over each contact point between tool
and workpiece, in one axial position of the gear gap. The model is developed utilizing a penetration calculation considering measurements of the tool topography. The application of the model is
summarized on the left of Figure 7.
Figure 7—Application of the energy model developed by Löhrer (Ref. 13).
For applying the model for the process investigated in this approach, the first step is to characterize the grinding worm in terms of its tool topography. For this purpose, the topography of the
grinding worm is measured using a laser scanning microscope Keyence VKX-1000, following the same approach detailed by Löhrer (Ref. 12). Hence, a fraction of the worm with a large enough size to
provide a representative description of the topography—in terms of grain size and distribution—is scanned by the microscope. The resulting measurement is then evaluated using the software
MountainsMap, to extract several two-dimensional curves, contained in a plane parallel to the tool surface. To extract curves that may be accurately incorporated into the tool profiles that represent
the grinding worm in the simulation, each curve must be parallel to and equidistant from the others.
On the right side of Figure 7, it is possible to see the results which were achieved by Löhrer when applying the model to the generating gear grinding of a 20MnCr5 gear, with the properties and
parameters displayed to the right of the figure (Ref. 13). With these parameters, the influence of cutting speed and axial feed on the process energy per area Ew’ (shown in lightest color) was
investigated. The results are also compared to those obtained with the empirical model of Reimann (Shown in the darkest color), and for this reason, this analysis is limited to one point on the gear
pitch circle, as defined in the analogy trials developed by Reimann (Ref. 19). Within both results, it is possible to see a direct relationship between axial feed fa and energy, although no relevant
effect can be seen by the variation of cutting speed vc. According to Löhrer, this behavior likely comes from the fact that the vc is not considered for the construction of the macromovements in the
simulation with GearGRIND, and consequently, it is not considered for the calculation of the microinteraction characteristics (Ref. 12). Physically, this can also be explained by the fact that,
although cutting forces increase with an increase of the cutting speed, and thus the process power consumption, the contact time also decreases, which may become too short to dissipate the power into
the generated energy Ew (Ref. 13). Therefore, in a further analysis of the energy considering process signals, similar behavior can be expected, considering that the power is directly correlated to
both the forces and the speeds in the process, through Equation 1 and Equation 2.
Besides the calculation of the generated energy, a particular characteristic of the model developed by Löhrer is the consideration of the different energy shares coming from the distinct chip
formation mechanisms (friction, plowing and shearing energies) (Ref. 12). Given the different interactions between the grains and the workpiece during each stage of material removal, each of these
shares may represent a different effect on the thermal and mechanical loads of the cutting process. According to Malkin and an analysis performed by Löhrer, nearly all the friction energy Efr is
conducted to the workpiece as heat, while the shearing energy Esh presents the lowest conversion into heat to the workpiece of all three energy shares (Refs. 12, 14). Thus, if most of the generated
energy Ew corresponds to Esh, the majority of this energy will likely be applied to material removal, and not converted into heat to the workpiece. These interactions suggest the relevance of also
considering each different share of generated energy (Efr, Epl, Esh) as a different variable input in the approach for investigation of correlations between the calculated energies and characteristic
values extracted from the power signals, as described in the following chapter. Therefore, through the model developed by Löhrer, the process energy in terms of the energy shares must be calculated
for each stroke, and each set of process parameters described in the section “Experimental Methodology.” The achieved results will then be considered as input for the approach developed in the next chapter.
Once the values of the generated energy in each of the grinding strokes are obtained, as well as the characteristic values of the power signals from the respective trials, the next step is the
investigation of correlations between both results. As detailed in “Energy in Generating Gear Grinding,” in generating gear grinding, there is no clear analytical relationship between power signals
and energy in terms of process parameters. Therefore, the need arises for an alternative approach that recognizes correlations between variables obtained from the acquired signals and the calculated energy using statistical correlation techniques. The approach developed in this research seeks to study correlations based on the steps described in the previous chapters, as shown in Figure 8.
Figure 8—Approach for investigation of correlations between process energy and spindle power.
In the section “Treatment of Process Signals in Generating Gear Grinding,” the extraction of characteristic values from the spindle power signals was described, as shown in the upper left of Figure
8. Each set of process parameters and each grinding stroke result in a vector of characteristic values that will be used in the comparison. In the section “Description of Process Energy Calculation
Model,” the energy calculation through the model developed by Löhrer (Ref. 12) was described, as well as each energy share which will be evaluated in the comparison, as shown in the bottom left of
Figure 8. This chapter then details the development of the approach to compare how the variables extracted for each grinding stroke through the previous steps are correlated through all the trials
described in the section “Experimental Methodology.”
Considering the stochastic nature of tool and workpiece engagement in generating gear grinding, as well as the insufficient understanding of the influence of the power signals on the process energy Ew,
it is challenging to estimate the nature of the correlations that are expected to be found with this approach. Although linear correlations are easier to identify, nonlinear correlations may also be
present in the data, as well as multivariate correlations resulting from the combination of different variables. Therefore, an approach to this investigation must meet the requirements of being able
to recognize different kinds of relationships, be flexible regarding the assumptions that must be met by the data distribution and be prepared to consider the presence of outliers. Initially, this is
achieved by considering a combination of different correlation coefficients in the analysis. The coefficients will be calculated for each combination of variables obtained by comparing the signal
characteristic values and the calculated process energy shares. Since it is not relevant to investigate the correlation between the signal characteristic values among themselves, the coefficients
will be calculated for each combination of one energy variable and one power variable.
A common coefficient applied for correlation investigations is the Pearson correlation coefficient r (Equation 4) (Ref. 18), which is essentially the covariance between two variables divided by the product of their standard deviations. By calculating this coefficient, a value ranging from r = -1 (perfect negative linear relationship) to r = +1 (perfect positive linear
relationship) is obtained. For Pearson to be applicable, it is necessary that both variables are normally distributed, and the data is homoscedastic. Such requirements may not be met by all the
variable combinations that are evaluated in this approach. This, together with the fact that the coefficient is sensitive to the presence of outliers, means that the evaluation of other coefficients is also necessary.
When compared to Pearson, a second coefficient which is less sensitive to the presence of outliers, besides not requiring the normal distribution of the data due to its non-parametric nature, is the
Spearman’s Rank Correlation Coefficient rs (Equation 5) (Ref. 18). Through the calculation of this coefficient, it is possible to measure the strength and direction of a monotonic relationship
between two variables (whether linear or not), through a value between rs = -1 (perfect negative monotonic relationship) and rs = +1 (perfect positive monotonic relationship). The calculation is done by ranking the data points for each variable and then calculating Pearson’s correlation coefficient on these ranks.
Although both rs and r provide a useful overview of linear and monotonic relationships, for a more robust approach, the calculation of the Distance Correlation dCor (Equation 6) is also considered, as it is applicable to complex data where the relationship may not be apparent. The calculation of dCor allows for the detection of both linear and non-linear relationships between two or more variables, without any assumptions about the distribution or dimension of the data, by providing a value between dCor = 0 (independence) and dCor = 1 (perfect dependence). This coefficient is obtained by calculating centered pairwise distances between the data points and measuring the dependence between the resulting distance matrices.
Finally, to go beyond coefficients that only capture the strength and direction of a relationship, the Mutual Information Mi (Equation 7) is also included in the analysis (Ref. 4). The calculation of this metric quantifies the amount of information gained about one variable by observing another, effectively measuring the degree of mutual dependence between them. The calculation of Mutual Information involves the estimation of the probability distributions of each variable and of their joint distribution; therefore, it does not require any assumptions about the data distribution.
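To make the combined use of these metrics concrete, the sketch below computes all four measures for a single (power feature, energy share) pair. This is illustrative only: the library choices (scipy, scikit-learn), the hand-rolled distance correlation, and the synthetic data are our assumptions, not the implementation used in the research.

```python
# Illustrative computation of the four association measures described above
# for one power-signal feature and one energy share (synthetic data).
import numpy as np
from scipy import stats
from scipy.spatial.distance import pdist, squareform
from sklearn.feature_selection import mutual_info_regression

def distance_correlation(x, y):
    """Sample distance correlation: 0 = independence, 1 = perfect dependence."""
    a = squareform(pdist(x.reshape(-1, 1)))      # pairwise distances in x
    b = squareform(pdist(y.reshape(-1, 1)))
    A = a - a.mean(axis=0) - a.mean(axis=1)[:, None] + a.mean()   # double-center
    B = b - b.mean(axis=0) - b.mean(axis=1)[:, None] + b.mean()
    dcov2 = (A * B).mean()
    return np.sqrt(dcov2 / np.sqrt((A * A).mean() * (B * B).mean()))

rng = np.random.default_rng(1)
power_feature = rng.random(50)                            # e.g., peak power per stroke
energy_share = power_feature**2 + 0.05 * rng.random(50)   # e.g., shearing energy E_sh

r, _ = stats.pearsonr(power_feature, energy_share)        # linear strength/direction
rs, _ = stats.spearmanr(power_feature, energy_share)      # monotonic strength/direction
dcor = distance_correlation(power_feature, energy_share)  # any dependence, in [0, 1]
mi = mutual_info_regression(power_feature.reshape(-1, 1), energy_share)[0]
print(f"r={r:.2f}  rs={rs:.2f}  dCor={dcor:.2f}  MI={mi:.2f} nats")
```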
By calculating the correlation coefficients, an evaluation of the relationships between the characteristic values from measured power signals and calculated energy can be made. However, for this
evaluation to yield meaningful results, it is initially necessary to ensure that the correct assumptions about the data for the calculated coefficients are fulfilled. Secondly, it is necessary to
evaluate whether the results obtained from the calculations are relevant to the overall analysis. To achieve that, the approach shown on the left of Figure 9 is developed.
Figure 9—Approach for investigation of correlations between process energy and spindle power.
In step number 1 of the procedure, a preliminary exploration of the variables is performed, to guarantee that the assumptions made for the calculation of each correlation coefficient are valid. In
this step, the linearity, normality and homoscedasticity of the variables will be evaluated to validate the application of each coefficient. With that investigation, it is possible to understand
which correlation coefficients can be calculated in step 2 for each relationship between variables. The coefficients will be calculated for each variable combination, yielding a different strength of
correlation for each. To evaluate the relevance of the correlations between each variable combination, in step 3, the results will be visually inspected, as exemplified on the center and right sides
of Figure 8. Through the heat map shown in the center, it is possible to compare the strength of correlations between each combination, thus allowing us to quickly assess which combinations are
strongly (darkest color) or weakly (white) correlated. In this evaluation, the combinations between the total generated energy Ew—as well as the energy shares Efr, Epl, and Esh—and the characteristic
values extracted from the power signals, will be observed. Subsequently, through the selection of the most strongly correlated variable combinations, a visualization of the scatter plots of each
combination allows for the identification of which kind of relationship (positive or negative, strong or weak, linear or nonlinear) is found between them, if any. Thereby, the application of this
method is expected to reveal underlying patterns between power signals and process energy in generating gear grinding, and with that, bring a foundation for identifying the energy generation through
real-time measurements of spindle power signals.
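As an illustration of the step-3 inspection (with placeholder random data and assumed column names; the actual features and energy shares come from the steps described above), a heat map of this kind can be assembled as follows:

```python
# Sketch of the correlation heat map used for visual inspection in step 3.
# Data and column names are placeholders, not measured values.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
features = pd.DataFrame(rng.random((40, 3)), columns=["P_max", "P_mean", "P_int"])
energies = pd.DataFrame(rng.random((40, 4)), columns=["E_w", "E_fr", "E_pl", "E_sh"])

# one correlation value per (power feature, energy share) combination
corr = pd.DataFrame({e: [features[f].corr(energies[e]) for f in features]
                     for e in energies}, index=features.columns)

plt.imshow(corr.abs(), cmap="Greys", vmin=0, vmax=1)  # dark = strong, white = weak
plt.xticks(range(len(corr.columns)), corr.columns)
plt.yticks(range(len(corr.index)), corr.index)
plt.colorbar(label="|correlation|")
plt.show()
```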
The energy generation in generating gear grinding is a critical mechanism in terms of the surface integrity of the parts. The assessment of generated energy in the process remains challenging due to
the intricate characteristics of the process kinematics and grain engagement. To provide an improved understanding of the energy generated during generating gear grinding, Löhrer developed a model
that allows for the calculation of process energy Ew with consideration of the microinteraction characteristics of the grain engagement. The model can describe the energy generation along the entire
grinding process (Ref. 12); however, it does not provide the means for a direct real-time assessment of the conditions within the contact zone. This research takes advantage of the power signal
measurements obtained from a grinding machine during the process, to derive an approach for understanding the energy generation in the process utilizing a real-time assessment based on power signals.
The objective of this work was to develop an approach to investigate the interactions between process signals and calculated process energy in generating gear grinding. To achieve this objective, the
approach was based initially on the execution of experimental trials of generating gear grinding, to acquire the signals of machine spindle power during the process. The acquired signals were then
analyzed considering the process strategy, and the relevant characteristic values were extracted from them to allow a direct comparison with the calculated energy. Furthermore, the energy model
developed by Löhrer was described through the simulation in the software GearGRIND. Thus, the process energy Ew as well as the energy shares of friction energy Efr, plowing energy Epl and shearing
energy Esh were considered in the approach. With the analysis of power signals and description of the process energy, it was then possible to develop an approach for the investigation of correlations
between the two by applying statistical correlation coefficients.
The next step of the research is the application of the energy model developed by Löhrer to the process conditions in which the signals were extracted. Then, through the developed correlation
approach, it will be possible to understand the effects of the calculated energy on the real-time power signals, and subsequently establish a connection between them. That understanding will allow
the development of real-time process monitoring techniques, to assess the energy generation in generating gear grinding. With that, it will be possible to predict the occurrence of thermal damage
without the need for iterative steps during the process parametrization.
The authors gratefully acknowledge financial support by the German Research Foundation (DFG) for the achievement of the project results within the project: DFG EXC2023/1—B1.II.
This paper was first presented at the 65th Conference “Gear and Transmission Research” of the WZL on May 22–23, 2024. | {"url":"https://www.geartechnology.com/articles/30817-investigation-of-the-interaction-between-process-signals-and-modeled-thermomechanical-energy-in-generating-gear-grinding","timestamp":"2024-11-11T08:27:51Z","content_type":"text/html","content_length":"136905","record_id":"<urn:uuid:a444c9ec-d1d7-4de4-ba19-acfbc340f0aa>","cc-path":"CC-MAIN-2024-46/segments/1730477028220.42/warc/CC-MAIN-20241111060327-20241111090327-00837.warc.gz"} |
Measurement of the Crab Nebula spectrum over three decades in energy with the MAGIC telescopes
The MAGIC stereoscopic system collected 69 hours of Crab Nebula data between October 2009 and April 2011. Analysis of this data sample using the latest improvements in the MAGIC stereoscopic software
provided an unprecedented precision of spectral and night-by-night light curve determination at gamma rays. We derived a differential spectrum with a single instrument from 50 GeV up to almost 30 TeV
with 5 bins per energy decade. At low energies, MAGIC results, combined with Fermi-LAT data, show a flat and broad Inverse Compton peak. The overall fit to the data between 1 GeV and 30 TeV is not
well described by a log-parabola function. We find that a modified log-parabola function with an exponent of 2.5 instead of 2 provides a good description of the data (χ²_red = 35/26). Using systematic uncertainties of the MAGIC and Fermi-LAT measurements we determine the position of the Inverse Compton peak to be at (53 ± 3 (stat) +31/−13 (syst)) GeV, which is the most precise estimate to date and is dominated by systematic effects. There is no hint of integral flux variability on daily scales at energies above 300 GeV when systematic uncertainties are
included in the flux measurement. We consider three state-of-the-art theoretical models to describe the overall spectral energy distribution of the Crab Nebula. The constant B-field model cannot
satisfactorily reproduce the VHE spectral measurements presented in this work, having particular difficulty reproducing the broadness of the observed IC peak. Most probably this implies that the
assumption of the homogeneity of the magnetic field inside the nebula is incorrect. On the other hand, the time-dependent 1D spectral model provides a good fit of the new VHE results when considering
an 80 μG magnetic field. However, it fails to match the data when including the morphology of the nebula at lower wavelengths.
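For readers who want to experiment with the two shapes being compared, here is one plausible parameterization (an assumption on our part; the abstract does not spell out the exact functional form): in log-log space the standard log-parabola is quadratic in x = log10(E/E0), and the "modified" variant raises that term to the power 2.5, which flattens the curve near the peak and steepens it far away.

```python
# Sketch of the two spectral shapes compared in the abstract. The
# parameterization and the numerical values (f0, alpha, beta) are
# illustrative assumptions, not the fitted MAGIC/Fermi-LAT values.
import numpy as np

def log_parabola(E, f0, alpha, beta, E0=1.0, exponent=2.0):
    """dN/dE with log10(f) = log10(f0) - alpha*x - beta*|x|**exponent,
    where x = log10(E/E0); exponent=2 is the standard log-parabola."""
    x = np.log10(E / E0)
    return f0 * 10.0 ** (-alpha * x - beta * np.abs(x) ** exponent)

E = np.logspace(-3, 1.5, 200)                     # ~1 GeV to ~30 TeV (in TeV)
standard = log_parabola(E, 3e-11, 2.4, 0.2)       # exponent 2
modified = log_parabola(E, 3e-11, 2.4, 0.2, exponent=2.5)
```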
Journal of High Energy Astrophysics
Pub Date: March 2015
Keywords: Crab Nebula; Pulsar wind nebulae; MAGIC telescopes; Imaging atmospheric Cherenkov telescopes; Very high energy gamma rays; Astrophysics - High Energy Astrophysical Phenomena
accepted by JHEAp, 9 pages, 6 figures | {"url":"https://ui.adsabs.harvard.edu/abs/2015JHEAp...5...30A","timestamp":"2024-11-10T19:20:09Z","content_type":"text/html","content_length":"69388","record_id":"<urn:uuid:7da2077b-173d-4a44-b92b-31261ff1c83b>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.61/warc/CC-MAIN-20241110170046-20241110200046-00897.warc.gz"} |
Playing with Systems
This 20th article in the series of “Do It Yourself: Electronics”, explains the basic working of IC 555 and generating a square wave using it.
Playing with raw electronics (without any microcontroller) further boosted the confidence of Pugs to dive into non-microcontroller electronics. This time he decided to explore the ever-popular IC 555, loosely also known as the timer IC.
555 Functionality
555 is basically an 8-pin IC, with pin 1 for GND, pin 8 for Vcc, and pin 3 for Vo – the output voltage, which goes either high (Vcc) or low (GND), based on the other pins.
Vo goes high if the trigger pin 2 senses voltage less than 1/3 of Vcc. Vo goes low if the threshold pin 6 senses voltage greater than 2/3 of Vcc.
Pin 5 can be used as a control voltage always fixed to 2/3 of Vcc. Putting reset pin 4 low any time makes Vo go immediately low. So, if not in use it is recommended to be tied to Vcc.
Discharge pin 7 becomes GND when pin 6 senses voltage greater than 2/3 of Vcc and becomes tristate (open) when pin 2 senses voltage less than 1/3 of Vcc. In other words, discharge pin 7 becomes GND
when Vo goes low and becomes open when Vo goes high.
Generating a Square Wave
Given this background, one of the common uses of the 555 IC is to generate a square wave of any particular frequency and duty cycle (on pin 3), by varying some analog voltage between GND and Vcc (on
pins 2 and 6), more precisely between 1/3 Vcc and 2/3 Vcc, both inclusive. And this analog voltage is typically achieved by charging / discharging a capacitor through one or more resistors. Thus, the
time constants given by τ = RC, R being the resistance, and C being the capacitance in the corresponding charging & discharging paths, control the corresponding on & off cycles of the square wave.
Let’s consider the following circuit with R1 as a variable resistance (pot) between 0-10KΩ, and R2 as fixed resistance of 4.7KΩ, and C as a 1μF capacitor.
In the on cycle (when Vo (pin 3) is high), pin 7 would be open. Pins 2 & 6 can be assumed tristate. Hence, then C is getting charged towards Vcc through R = R1 + R2.
In the off cycle (when Vo (pin 3) is low), pin 7 would be GND. Pins 2 & 6 can be assumed tristate. Hence, then C is getting discharged towards GND (pin 7) through R = R2.
Moreover, note that in the on cycle as soon as capacitor voltage reaches 2/3 Vcc, Vo (pin 3) becomes low, and pin 7 becomes GND, i.e. off cycle starts.
And, in the off cycle as soon as capacitor voltage drops to 1/3 Vcc, Vo (pin 3) becomes high, and pin 7 becomes tristate, i.e. on cycle starts.
And the above sequence keeps on repeating, thus giving a square wave on Vo (pin 3), with on time t_on controlled by charging through R1 + R2 and off time t_off controlled by discharging through R2.
From RC circuit analysis, we have that voltage Vc across a capacitor C, getting charged through resistance R, at time t is given by:
$V_c = V_s * (1 - e^{\frac{-t}{R*C}}) + V_i * e^{\frac{-t}{R*C}}$,
where Vs is supply voltage (Vcc in our case), Vi is the initial voltage on the capacitor.
So, t_on could be obtained from the fact that it starts with initial voltage Vi = 1/3 Vcc, and ends when Vc = 2/3 Vcc, being charged by Vs = Vcc through R = R1 + R2. That is,
$2/3 * V_{cc} = V_{cc} * (1 - e^{\frac{-t_{on}}{R*C}}) + 1/3 * V_{cc} * e^{\frac{-t_{on}}{R*C}}$,
which on simplifying gives:
$t_{on} = R * C * ln(2) = (R1 + R2) * C * 0.6931$ … (1)
Similarly, from RC circuit analysis, we have that voltage Vc across a capacitor C, getting discharged through resistance R, at time t is given by:
$V_c = V_i * e^{\frac{-t}{R*C}}$,
where Vi is the initial voltage on the capacitor.
So, t_off could be obtained from the fact that it starts with initial voltage Vi = 2/3 Vcc, and ends when Vc = 1/3 Vcc, being discharged through R = R2. That is,
$1/3 * V_{cc} = 2/3 * V_{cc} * e^{\frac{-t_{off}}{R*C}}$,
which on simplifying gives:
$t_{off} = R * C * ln(2) = R2 * C * 0.6931$ … (2)
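As a sanity check (not part of the original article), both simplifications can be reproduced symbolically, e.g. with sympy:

```python
# Symbolic verification of equations (1) and (2): both reduce to R*C*ln(2),
# with R = R1 + R2 in the charging case and R = R2 in the discharging case.
import sympy as sp

t, R, C, Vcc = sp.symbols('t R C V_cc', positive=True)
decay = sp.exp(-t / (R * C))

# charging: from Vi = 1/3 Vcc up to 2/3 Vcc with Vs = Vcc  -> equation (1)
t_on = sp.solve(sp.Eq(sp.Rational(2, 3) * Vcc,
                      Vcc * (1 - decay) + sp.Rational(1, 3) * Vcc * decay), t)
# discharging: from Vi = 2/3 Vcc down to 1/3 Vcc            -> equation (2)
t_off = sp.solve(sp.Eq(sp.Rational(1, 3) * Vcc,
                       sp.Rational(2, 3) * Vcc * decay), t)
print(t_on, t_off)   # [C*R*log(2)] [C*R*log(2)]
```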
Live Demo
Pugs doesn’t get a punch unless he sees the theory working in practice. That’s where, he sets up the above circuitry on a breadboard as shown in the figure below:
WARNING: Do NOT put the pot to a value of zero, as that will short Vcc & GND, and may blow off the circuit. A safety workaround could be to put a fixed 1K resistor in series with the pot.
The audio jack is being used for observing the waveforms on the home-made PC oscilloscope, as created in his previous PC Oscilloscope article.
Below are the three waveforms Pugs observed for the values of R1 being adjusted to 1.28KΩ, 4.15KΩ, 8.6KΩ:
From the waveforms, Pugs obtained approximately the following t_on & t_off:
R1 = 1.28KΩ => t_on = 3.8ms, t_off = 3.0ms
R1 = 4.15KΩ => t_on = 6.0ms, t_off = 3.0ms
R1 = 8.60KΩ => t_on = 9.0ms, t_off = 3.0ms
Now, as per equations (1) & (2), for C = 1μF, R2 = 4.7K, and the above three R1 values, we should have got the following:
R1 = 1.28KΩ => t_on = 4.1ms, t_off = 3.3ms
R1 = 4.15KΩ => t_on = 6.1ms, t_off = 3.3ms
R1 = 8.60KΩ => t_on = 9.2ms, t_off = 3.3ms
Pretty close, but the t_off was not really satisfactory. That triggered Pugs to take out his multimeter and check the resistance of the fixed resistor R2 he had used. Ow! It actually measured 4.3K.
Recomputing using R2 = 4.3K gave values amazingly close to the observed values.
Thus by appropriately choosing the R1, R2, and C values one should be able to get a square wave of a desired frequency given by 1 / (t_on + t_off) and duty cycle given by t_on / (t_on + t_off).
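Plugging equations (1) and (2) into a few lines of code (a helper we wrote for illustration; the component values are the ones above, with R2 taken as the measured 4.3K) reproduces the corrected numbers:

```python
# Astable 555 timing from equations (1) and (2).
import math

def astable_555(r1, r2, c):
    """Return (t_on, t_off, frequency, duty cycle) for the circuit above."""
    t_on = (r1 + r2) * c * math.log(2)   # charging through R1 + R2
    t_off = r2 * c * math.log(2)         # discharging through R2 only
    period = t_on + t_off
    return t_on, t_off, 1 / period, t_on / period

for r1 in (1.28e3, 4.15e3, 8.60e3):
    t_on, t_off, f, duty = astable_555(r1, 4.3e3, 1e-6)
    print(f"R1={r1 / 1e3:.2f}K: t_on={t_on * 1e3:.1f}ms  t_off={t_off * 1e3:.1f}ms  "
          f"f={f:.0f}Hz  duty={duty:.2f}")
```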
Obviously, the frequency would have a practical upper limit dictated by the 555 IC, though it is typically in MHz. What about duty cycle? Note that as per relations (1) & (2), t_on will always be greater than t_off. Thus, the duty cycle will always be greater than 0.5.
So, what if we need a duty cycle less than 0.5, or at least equal to 0.5, where t_on = t_off? This is what Pugs is working on. Watch out for the next article. | {"url":"https://sysplay.in/blog/2016/09/","timestamp":"2024-11-08T19:06:17Z","content_type":"text/html","content_length":"51521","record_id":"<urn:uuid:d6b13279-3f5b-4e97-8751-db1a64e00666>","cc-path":"CC-MAIN-2024-46/segments/1730477028070.17/warc/CC-MAIN-20241108164844-20241108194844-00086.warc.gz"}
MCLab Group List of Papers -- Query Results
Vadim Alimguzhin, Federico Mari, Igor Melatti, Ivano Salvo, and Enrico Tronci. Automatic Control Software Synthesis for Quantized Discrete Time Hybrid Systems. Vol. abs/1207.4098. CoRR, Technical
Report, 2012.
Federico Mari, Igor Melatti, Ivano Salvo, and Enrico Tronci. Model Based Synthesis of Control Software from System Level Formal Specifications. Vol. abs/1107.5638. CoRR, Technical Report, 2013.
Vadim Alimguzhin, Federico Mari, Igor Melatti, Ivano Salvo, and Enrico Tronci. On Model Based Synthesis of Embedded Control Software. Vol. abs/1207.4474. CoRR, Technical Report, 2012.
Federico Mari, Igor Melatti, Ivano Salvo, and Enrico Tronci. Quantized Feedback Control Software Synthesis from System Level Formal Specifications for Buck DC/DC Converters. Vol. abs/1105.5640. CoRR,
Technical Report, 2011.
Vadim Alimguzhin, Federico Mari, Igor Melatti, Ivano Salvo, and Enrico Tronci. A Map-Reduce Parallel Approach to Automatic Synthesis of Control Software. Vol. abs/1210.2276. CoRR, Technical Report, 2012.
Enrico Tronci. "Introductory Paper." Sttt 8, no. 4-5 (2006): 355–358. DOI: 10.1007/s10009-005-0212-y.
Giuseppe Della Penna, Benedetto Intrigila, Igor Melatti, Enrico Tronci, and Marisa Venturini Zilli. "Exploiting Transition Locality in Automatic Verification of Finite State Concurrent Systems." Sttt
6, no. 4 (2004): 320–341. DOI: 10.1007/s10009-004-0149-6.
Giuseppe Della Penna, Benedetto Intrigila, Igor Melatti, Enrico Tronci, and Marisa Venturini Zilli. "Bounded Probabilistic Model Checking with the Mur$\varphi$ Verifier." In Formal Methods in
Computer-Aided Design, 5th International Conference, FMCAD 2004, Austin, Texas, USA, November 15-17, 2004, Proceedings, edited by A. J. Hu and A. K. Martin, 214–229. Lecture Notes in Computer Science
3312. Springer, 2004. ISBN: 3-540-23738-0. DOI: 10.1007/978-3-540-30494-4_16.
Giuseppe Della Penna, Benedetto Intrigila, Igor Melatti, Enrico Tronci, and Marisa Venturini Zilli. "Finite horizon analysis of Markov Chains with the Mur$\varphi$ verifier." Int. J. Softw. Tools
Technol. Transf. 8, no. 4 (2006): 397–409. Springer-Verlag. ISSN: 1433-2779. DOI: 10.1007/s10009-005-0216-7.
Giuseppe Della Penna, Benedetto Intrigila, Igor Melatti, Enrico Tronci, and Marisa Venturini Zilli. "Finite Horizon Analysis of Markov Chains with the Mur$\varphi$ Verifier." In Correct Hardware
Design and Verification Methods, 12th IFIP WG 10.5 Advanced Research Working Conference, CHARME 2003, L'Aquila, Italy, October 21-24, 2003, Proceedings, edited by D. Geist and E. Tronci, 394–409.
Lecture Notes in Computer Science 2860. Springer, 2003. ISSN: 3-540-20363-X. DOI: 10.1007/978-3-540-39724-3_34. | {"url":"https://mclab.di.uniroma1.it/publications/search.php?sqlQuery=SELECT%20author%2C%20title%2C%20type%2C%20year%2C%20publication%2C%20abbrev_journal%2C%20volume%2C%20issue%2C%20pages%2C%20keywords%2C%20abstract%2C%20thesis%2C%20editor%2C%20publisher%2C%20place%2C%20abbrev_series_title%2C%20series_title%2C%20series_editor%2C%20series_volume%2C%20series_issue%2C%20edition%2C%20language%2C%20author_count%2C%20online_publication%2C%20online_citation%2C%20doi%2C%20serial%20FROM%20refs%20WHERE%20serial%20RLIKE%20%22.%2B%22%20ORDER%20BY%20abstract%20DESC&submit=Cite&citeStyle=Roma&citeOrder=&orderBy=abstract&headerMsg=&showQuery=0&showLinks=0&formType=sqlSearch&showRows=10&rowOffset=60&viewType=Print","timestamp":"2024-11-12T06:18:56Z","content_type":"text/html","content_length":"36985","record_id":"<urn:uuid:5c5eed2e-43ba-47bd-9366-565a27ee212e>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.58/warc/CC-MAIN-20241112045844-20241112075844-00628.warc.gz"} |
Explain it: What is the Fibonacci sequence?
Explain it
... like I'm 5 years old
Imagine a series of numbers where each number is the sum of the two preceding ones. That's the Fibonacci sequence. It starts with 0 and 1, and continues like this: 0, 1, 1, 2, 3, 5, 8, 13, 21, 34,
and so on. The pattern is pretty simple: just add the last two numbers to get the next one.
The sequence gets its name from the Italian mathematician Leonardo of Pisa, also known as Fibonacci. He introduced it to the Western world in his 1202 book "Liber Abaci," but the sequence had been
previously described in Indian mathematics.
Think of it like this: you have two bags of apples. The first bag has 0 apples. The second bag has 1 apple. Now, for every new bag, you count the apples in the previous two bags and put that many in the new bag. That's the Fibonacci sequence.
Explain it
... like I'm in College
The Fibonacci sequence is not just a random series of numbers. It has some fascinating properties and patterns. For instance, if you divide a number in the sequence by the number before it, you get a
ratio that hovers around 1.618. This number is known as the Golden Ratio, a mathematical concept that appears in various areas of art, architecture, and nature.
In addition, the Fibonacci sequence is related to the Fibonacci Spiral, a series of quarter circles inside squares whose side lengths are Fibonacci numbers. This spiral shows up in various aspects of
nature, from the arrangements of leaves on a stem to the shape of a Nautilus shell.
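A few lines of Python (purely illustrative) make both facts easy to check: the sequence itself, and the ratio of consecutive terms creeping toward 1.618.

```python
# Generate Fibonacci numbers and watch successive ratios approach the Golden Ratio.
def fibonacci(n):
    """Return the first n Fibonacci numbers, starting 0, 1."""
    seq = [0, 1]
    while len(seq) < n:
        seq.append(seq[-1] + seq[-2])
    return seq[:n]

fib = fibonacci(12)
print(fib)                            # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89]
for a, b in zip(fib[1:], fib[2:]):
    print(f"{b}/{a} = {b / a:.4f}")   # 1.0000, 2.0000, 1.5000, ... -> 1.6182
```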
Suppose we have a box of Lego bricks. We pick one 1-stud brick and place it on the table - this represents the first number in the Fibonacci sequence, 1. Then we take another 1-stud brick and place
it next to the first one - this represents the second number, also 1.
Now, to represent the next number in the sequence, we take a 2-stud brick (since 1+1=2) and place it next to the first two. For the next number, we combine a 2-stud and 1-stud brick to make a 3-stud
brick. Then, to get a 5-stud brick, we combine the 3-stud and 2-stud bricks. We can continue this process to build bricks representing larger Fibonacci numbers.
This Lego representation helps visualize the additive nature of the Fibonacci sequence and how each number is built from the sum of the previous two.
Explain it
... like I'm an expert
As a mathematician, you're aware that Fibonacci numbers have numerous applications and appear in many branches of mathematics. They're used in Euclidean geometry, number theory, and combinatorics, to
name a few. For example, the number of ways to tile a board of size n with squares and rectangles is a Fibonacci number.
Fibonacci numbers also have a close relationship with the Lucas numbers, another integer sequence. They're also linked to the golden ratio, and this connection can be used to derive Binet's formula
for finding the nth Fibonacci number. | {"url":"https://www.explainitdaily.com/science/explain-it-what-is-the-fibonacci-sequence","timestamp":"2024-11-06T17:12:10Z","content_type":"text/html","content_length":"20225","record_id":"<urn:uuid:9504e9d0-ea03-4d7b-a3ea-151ae088cb78>","cc-path":"CC-MAIN-2024-46/segments/1730477027933.5/warc/CC-MAIN-20241106163535-20241106193535-00133.warc.gz"} |
A quantum Kalman filter is a recursive Bayesian filter used to predict and estimate the state of a quantum system, and was first introduced by Viacheslav Belavkin. It is described in his work
(Belavkin, 1999). Another interesting work is the quantum extended Kalman filter. However, most of the robots and autonomous agents that we currently work with live in the classical regime and cannot be treated as quantum systems. Thus, the classical Kalman filter, EKF, etc. will suffice. However, robots and autonomous agents with underlying dynamics in the quantum regime should use the
quantum Kalman filter.
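For contrast, the classical filter that (per the above) suffices for classical robots is very compact. A minimal 1-D sketch with illustrative noise constants, not taken from any particular robot:

```python
# Minimal classical 1-D Kalman filter for a random-walk state model.
def kalman_step(x, p, z, q=1e-3, r=0.1):
    """One predict/update cycle.
    x, p: prior estimate and variance; z: new measurement;
    q, r: process and measurement noise variances (illustrative)."""
    p = p + q                # predict: uncertainty grows by process noise
    k = p / (p + r)          # Kalman gain
    x = x + k * (z - x)      # update: blend prediction with measurement
    p = (1 - k) * p          # posterior variance shrinks
    return x, p

x, p = 0.0, 1.0
for z in [0.9, 1.1, 0.95, 1.05]:    # noisy readings of a value near 1.0
    x, p = kalman_step(x, p, z)
print(x, p)   # estimate near 1.0, variance well below the prior
```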
Recently, I thought about trying to run a classical Kalman filter or a particle filter on a quantum computer, but I believe that this will not be more efficient than simply running the filter on a
classical computer due to the fact that both the system observation and underlying model will both be governed by classical dynamics. The only part of any such algorithm where things can be sped up
by a quantum computer is when matrix inversion needs to be done. The algorithm responsible for a quantum speed up in matrix inversion was first invented by Harrow, Hassidim, and Lloyd, and improved
upon by Childs, Kothari, and Somma. However, if there exists a way to "Groverize" the problem, or to map the classical system to a state of quantum superpositions, then maybe there exists a speed-up
for Kalman filters through quantum computing. One naive idea to do this could be to map a vector such as \((x, y, \theta)\) to the phases of qubits, and then representing the classical actions
through some type of isomorphic manipulations in the phase of the qubit, and extracting the result through quantum phase estimation. However, some immediate issues come up such as the numerical
stability of the algorithm, since \(x, y\) are real variables getting mapped onto \([0, 2\pi]\), and the question of whether representing the classical actions of the "update" and "observation" step
with corresponding manipulations of the phase is efficient on a quantum computer, or even possible.
Thus, in my opinion it isn't too productive to directly look for speedups in recursive Bayesian filtering algorithms through quantum computing. However, quantum computers have shown promise for
realizing improvements in reinforcement learning such as a quadratic speed up in training time, and ability to use the underlying Hilbert space of a quantum computer for gradient descent. | {"url":"https://uclalemur.com/blog/what-is-a-quantum-kalman-filter","timestamp":"2024-11-01T19:52:38Z","content_type":"text/html","content_length":"57552","record_id":"<urn:uuid:06be71f8-c2f2-4d8b-a0db-174959b30252>","cc-path":"CC-MAIN-2024-46/segments/1730477027552.27/warc/CC-MAIN-20241101184224-20241101214224-00134.warc.gz"} |
Why is the order of the matrices for multiplication not flipped?
In Week3, the Matrix Inverse Lecture at minute 1:13 - why is the order of not flipped?
In the earlier lecture, the professor introduced a concept where we flip the matrix order when multiplying two matrix transformations. But here the order stays the same i.e. Matrix 1 x Matrix 2.
Please see below
The slide shows:
Mat 1 x Mat 2
|3 1| x |a b|
|1 2|   |c d|
when it should be
Mat 2 x Mat 1
|a b| x |3 1|
|c d|   |1 2|
The lecture at 1:13 is discussing the product of two matrices that yield the identity matrix. That is a stand-alone fact. There is no reason to flip the order.
In the matrix multiplication lecture, 1:45, we combine the two linear transformations by flipping the order during multiplication.
Isn’t the same thing happening in the lecture I’m referring?
So if the matrix with values a b c d represents the second transform, shouldn’t the order of multiplication matter this time as well? The result is the identity matrix.
The order of matrix multiplication always matters, right? Because as has been discussed, matrix multiplication is not commutative in general.
The point is that when you apply a linear transformation, you multiply by that matrix on the left. So if I have a matrix A that expresses one linear transformation and I have another matrix B that is
a different transformation and I want to apply A first and then B to a vector v, the expression would be:
B \cdot (A \cdot v)
Seems pretty clear doesn’t it? Or am I just missing your point?
But there is one well known case in which matrix multiplication is commutative: when we deal with a matrix and its inverse. Of course not all matrices are invertible, but if A is invertible, then we have:
A \cdot A^{-1} = A^{-1} \cdot A = I
Of course another way to express that relationship would be:
(A^{-1})^{-1} = A
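A quick numerical check of both points (the matrices here are arbitrary examples):

```python
# Matrix products don't commute in general, but a matrix commutes with its inverse.
import numpy as np

A = np.array([[3.0, 1.0], [1.0, 2.0]])
B = np.array([[0.0, 1.0], [1.0, 1.0]])

print(np.allclose(A @ B, B @ A))          # False: order matters in general
A_inv = np.linalg.inv(A)
print(np.allclose(A @ A_inv, np.eye(2)))  # True
print(np.allclose(A_inv @ A, np.eye(2)))  # True: the well-known exception
```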
Thanks for the reference to the other lecture.
Thanks @paulinpaloalto - the exception makes it clear now.
While I try to understand this better - it's not wrong to stick to the order of matrix multiplication from the initial lecture: since the operation is commutative in the case of a matrix and its inverse, we can choose either order.
One other point that might be worth adding here w.r.t. my example about applying two linear transformations:
B \cdot (A \cdot v)
While matrix multiplication is not commutative, it is associative. So we have:
B \cdot (A \cdot v) = (B \cdot A) \cdot v
In other words, (B \cdot A) is the composition of those two transformations expressed as a single matrix.
| {"url":"https://community.deeplearning.ai/t/why-is-the-order-of-the-matrices-for-multiplication-not-flipped/710053","timestamp":"2024-11-12T23:15:30Z","content_type":"text/html","content_length":"47018","record_id":"<urn:uuid:d9807ddb-1927-41e6-8adf-5e2a9abe8cb1>","cc-path":"CC-MAIN-2024-46/segments/1730477028290.49/warc/CC-MAIN-20241112212600-20241113002600-00799.warc.gz"}
ball mill sciencedirect
Aug 1, 1995 · A size-energy model is proposed for simulation of stirred ball milling. The product size distributions below 10 μm from a stirred ball mill were simulated by the model
satisfactorily. The model is simple and has only two parameters. The model was tested under different milling conditions for a stirred ball mill. Recommended articles.
Mar 10, 2010 · DOB–MPC scheme for ball mill grinding circuits. The method of this work focuses on disturbance rejection, including model mismatches as well as external disturbances. A DOB–MPC
scheme is proposed to control the ball mill grinding circuits in this part. The detailed design procedures are described as follows.
Jan 22, 2002 · The mill used in this experiment is made of alumina with an inside diameter of 144 mm and an inner volume of 2100 cm 3, and the grinding balls are also made of alumina. Ball diameters ranging between 3 and 30 mm were used, and feed size was varied in the order of 10 −3 to 10 −1 as a ratio of ball diameter. Feed sizes and ball .
May 1, 2014 · The law of ball size distribution (D bsd) in the horizontal planetary ball mill is studied by the DEM; comparing the simulation results with the grinding test results reveals the practical significance of the simulation. The specific rates of breakage can be determined by the mean contact force. • Friction work reduces with .
Dec 1, 2009 · To achieve improvements in the production capacity and energy efficiency of an industrial tubular ball mill, an Improved adaptive Evidence-Theoretic kNN rule was proposed, and was applied to monitor an unmeasured parameter, i.e., level of coal powder filling in ball mill. The improved adaptive rule was realized by means of two strategies: .
Jun 1, 2020 · The ball size distribution comprised three size ranges; S1 (35– mm), S2 (–25 mm) and S3 (25–20 mm). The ball filling was 35% amounting to about 2220 balls for each simulation.
The mill diameter was m and the length of each section was m, making the total mill length m.
Feb 1, 2013 · In spite of the important developments in recent decades with improved characterization, modeling and simulation approaches applied to ball mills (Austin et al., 1984, Herbst and
Fuerstenau, 1980, Tavares and Carvalho, 2009), the Bond ball mill grindability test retains a significant part of its original importance as a convenient and ...
Feb 1, 1993 · Ball mill wear occurs as a result of the violent interactions within the ball charge. In the present article, a mathematical description of wear has been added to a ball charge
motion model. Wear is associated with the comminution mechanisms found in the ball charge profile. It is assumed that ball mill wear occurs in each of three comminution ...
Aug 1, 1991 · An attractive alternative to the conventional tumbling ball mill is a high-energy Agitation Ball Mill (ABM). The primary characteristics of the ABM are an enclosed grinding chamber filled with up to 95% of the mill volume with fine-sized media and its agitator rotating at rpm in the vertical or horizontal direction.
Nov 1, 1992 · The discrete element method (DEM) is a proven numerical technique for modelling the multibody collision behavior of particulate systems. This method is used here to study the
motion of ball charge in tumbling mills. To get meaningful results, it is essential that the parameters involved in the model be carefully determined.
Jan 15, 2014 · A planetary ball mill (QM3SP2, Nanjing University Instrument, China) was used in all experiments. A stainless 80 mL vial was filled with stainless balls with diameters between 5 mm and 10 mm, which serve as the milling media. Metal (Al, Zn, Fe) and quartz sand were individually added as reactive chemicals during ball milling of .
Sep 1, 2008 · The experiments were performed on a tubular ball mill of a 250 MW power plant unit (see Fig. 1). The mill has a diameter of m and a length of m, and is driven by a motor (YTM5006). The nominal revolutions per minute (rpm) of this mill was , and the mill power draft 710 . The mill was operated with a combination of three different .
Nov 1, 2019 · Grinding was performed using a XMQΦ240 × 90 laboratory-scale conical ball mill (Wuhan Exploring Machinery Factory, Wuhan, China). The diameter and length of the intermediate
cylinder part of mill were 240 and 90 mm, respectively (Fig. 2).The operational speed was kept at 96 rpm (68% critical speed) according to the literature .
Nov 30, 2021 · Regarding their industrial applications, planetary ball mills are only used for mineral grinding (up to 5 tons of powder per hour). In order to explore the potential value of the planetary ball mill in the field of food processing, this study used the high-impact, shear, and friction forces in the ball-milling process to treat and modify gluten ...
May 1, 2020 · A spherical copper particle was shown with a roundness value using imageJ software. The DEM was used to simulate the ball motion in a planetary ball mill, and the impact energy
and shear energy generated during the collision were analyzed to estimate the contact number between the ball and the ball wall.
Mar 1, 2020 · 1. Introduction. In mineral processing plants, comminution circuits are the most energy consuming units; thus, determination of mill power-draw can be one of the most important
factors for designing, operating and evaluating of an efficient plant [1], [2], [3].It was reported that for a ball mill with 5 m diameter and 7 m length, the power draw .
Jan 1, 2015 · A particle size reduction model has been developed as the first component of an upgraded ball mill model. The model is based on a specific energy-size reduction function, which
calculates the particle breakage index, t 10, according to the sizespecific energy, and then calculates the full product size distribution using the t 10 –t n .
Feb 1, 2011 · Here we explore the detailed behaviour of the fine powder within a short periodic axial section of a m diameter pilot scale mill. In particular, we examine the distribution of
powder within the ball charge for powder levels ranging from 0% to 150% of the pore space between the media. The effect of the changing powder fill level on the .
Apr 1, 2008 · The supervisory expert control for ball mill grinding circuits is a SCADA system, which consists of three levels: the first level, instrumentations and actuators, including particle size analyzer, flow meters, valves, etc.; level 2, a regulating system composed of programmable logic controllers (PLCs); and level 3, the supervisory system.
Nov 1, 2017 · In this study, a model of the friction and wear pair composed of a liner and a grinding ball was built to simulate the wear generated in the working process of the ball mill, which is shown in Fig. . As presented in Fig. 3, a friction and wear test rig was , and wear experiments were conducted on the test rig to explore the .
Jan 1, 1992 · The equations proposed previously by Austin, which allow for the presence of conical end sections on a cylindrical mill, have been applied to data on Hardinge conical ball mills
ranging from m in diameter to m in diameter. The equations predict the volume and mass of balls required to give a desired fractional filling of the ...
Jun 2, 2008 · The mixture was introduced into a stainless steel jar (10 mL). The reaction vessel was closed and fixed on the vibration arms of a ball-milling apparatus, along with two stainless balls of mm diameter (Retsch MM200 mixer mill, Retsch GmbH, Haan, Germany), 7 using a second parallel jar to equilibrate the system. Then, both vessels .
Apr 1, 2005 · The hydration of an anhydrite of gypsum () in a ball mill was studied as a function of time and temperature. The amount of gypsum formed at different intervals of time was
determined by weight loss method and powder Xray diffraction technique. Specific surface area at different time intervals was determined by LASER granulometric .
Nov 1, 2023 · Powder Technology. Volume 429, 1 November 2023, 118901. ... However, more recent work adopted the value of for a ball mill grinding fine iron ore [55],
whereas values ranging from about to, which varied as a function of solids concentration, were used to simulate batch [22, 23] and continuous [24] stirred .
Jul 1, 2016 · Applied Surface Science. Volume 375, 1 July 2016, Pages 74–84. Removal of fluoride from drinking water using modified ultrafine tea powder processed using a ball-mill. Huimei Cai, Lingyun Xu, Guijie Chen, Chuanyi Peng, Fei Ke, Zhengquan Liu, Daxiang Li
Aug 1, 2010 · The experiments were performed on a laboratory-scale ball mill (XMQL420 × 450), which is a continuous grinding grid mill. The drum is 460 mm in diameter and 460 mm in length, as shown in Fig. . The mill, driven by a three-phase kW motor, has a maximum ball load of 80 kg, a designed pulverizing capacity of 10 kg per hour, a rated .
Sep 1, 1999 · Shear rate defined as the velocity gradient between layers of the charge in the cascading motion was hence estimated to be 13 s −1 as a lower limit of the shear rate range for a ball mill of m in diameter. For the second type of motion, the velocity of a free-flight ball striking the mill shell was resolved into two components, and a ...
| {"url":"https://www.lacle-deschants.fr/04/13-2377.html","timestamp":"2024-11-11T04:17:32Z","content_type":"application/xhtml+xml","content_length":"26031","record_id":"<urn:uuid:c3605afb-a3d3-4213-be86-7bb9cb4f759b>","cc-path":"CC-MAIN-2024-46/segments/1730477028216.19/warc/CC-MAIN-20241111024756-20241111054756-00273.warc.gz"}
Coq devs & plugin devs
What is the difference between foo; solve_constraints; []; bar. and foo; solve_constraints; []. bar.?
what does ;[] even do
Unless I am reading the manual incorrectly, don't you need Unset Solve Unification Constraints to use solve_constraints?
Or I guess that is to disable the constraints from being solved in other ltac tactics. If foo on purpose doesn't do this then maybe it is valid.
Regarding the use of []; which I haven't seen before, is this ensuring no goals exist?
i.e. the empty case of the [ .. | ..] syntax.
Seems to be:
Goal nat * nat * nat.
Fail simple notypeclasses refine (_,_,_); [].
Then to answer your original question, the first is different than the second since bar is being bound to a new empty set of goals (I don't know if those are the right words).
It would because there are no new goals
Goal nat * nat * nat.
idtac "hello"; []; idtac "world".
But if idtac "world" was replaced with a tactic that actually did anything, then the latter bar would do something in the original goal, whilst the former bar would do it in the "empty set of goals".
At least that is the haphazard understanding of the situation I have
what is this "new goal" concept
Goal nat * nat * nat.
Fail refine (_,_,_); [].
Fail refine (_,_,_); [idtac].
Fail refine (_,_,_); [idtac|idtac].
Succeed refine (_,_,_); [idtac|idtac|idtac].
I don't know if the correct term is "new goal" but that's what it looks like to me.
do you mean the goals which are focused after the tactic on the left of the ;?
I suppose that is it.
it seems that ; works even if there are no focused goals on the right
Which is why things like this work:
Goal True.
trivial; exact 45.
anyway removing the ;[] from the issue example changes nothing so it doesn't matter
The answer to Jason's question is then, bar does nothing in the first example since there are no focused goals, but something in the second example since there is one goal focused.
; [] is "fail if the previous tactic does not leave over exactly one goal"
Is it documented in the refman?
I mean, really my question is "what is the difference between foo; bar. and foo. bar., but I added ; [] and solve_constraints to eliminate the answers "foo might not leave over exactly one goal" and
"some constraints are only solved at ."
probably the goal evar thing https://github.com/coq/coq/issues/15520
@Ali Caglayan It should be? It's the trivial case of things like foo; [ bar | baz | qux ] where you only have one goal you're delegating to. Note that you can elide idtac when you don't want to do
something in the branch.
Is it really the trivial case?
Goal nat.
Succeed refine (_); [].
Succeed refine (_); [idtac].
Fail refine (_); [idtac|idtac].
I would have expected the first to fail
Why would it fail? As Jason just explained, the empty tactic is the same as idtac. So, the first two lines are identical.
Jason Gross said:
I mean, really my question is "what is the difference between foo; bar. and foo. bar., but I added ; [] and solve_constraints to eliminate the answers "foo might not leave over exactly one goal"
and "some constraints are only solved at ."
I thought it was a minimal repro case so this was confusing
Minimal repro case is in https://github.com/coq/coq/issues/15927, sorry for the confusion
What is confusing me is what Jason said here:
Jason Gross said:
Ali Caglayan It should be? It's the trivial case of things like foo; [ bar | baz | qux ] where you only have one goal you're delegating to. Note that you can elide idtac when you don't want to do
something in the branch.
Guillaume Melquiond said:
Why would it fail? As Jason just explained, the empty tactic is the same as idtac. So, the first two lines are identical.
How is it the same as idtac?
Goal nat * nat * nat.
Fail refine (_,_,_); [].
Succeed refine (_,_,_); idtac.
@Ali Caglayan What I meant was that ; [] and ; [idtac] are the same
No, what we mean is that [ ] is the same as [ idtac ]; [ | ] is the same as [ idtac | idtac ]; and so on.
It took me a while to realize that the semi colon there was not part of the code. :)
As the documentation states, "Omitting an ltac_expr leaves the corresponding goal unchanged."
Sorry @Jason Gross I didn't mean to hijack your thread, I guess I still don't understand some things here.
Thanks for explaining @Guillaume Melquiond
No worries
I'm still baffled by the behavior I posted though (I don't see how it can be pattern_of_constr as @Gaëtan Gilbert suggests on the issue...)
Last updated: Oct 13 2024 at 01:02 UTC | {"url":"https://coq.gitlab.io/zulip-archive/stream/237656-Coq-devs-.26-plugin-devs/topic/difference.20between.20.60.3B.60.20and.20.60.2E.60.html","timestamp":"2024-11-13T22:01:13Z","content_type":"text/html","content_length":"34433","record_id":"<urn:uuid:d0350ade-954f-4fec-8aa2-f3205a712d90>","cc-path":"CC-MAIN-2024-46/segments/1730477028402.57/warc/CC-MAIN-20241113203454-20241113233454-00640.warc.gz"} |
Energy transfer from a photoexcited donor molecule to a nearby ground-state acceptor molecule is a process of fundamental interest in many fields of science, including polymer photophysics, surface
photochemistry, photochemical synthesis and molecular device engineering. It is usually known as electronic energy transfer (EET) or resonance energy transfer (RET). The fundamental theoretical
treatment was presented by Förster in 1948 [Forster48], and EET analysis computes the excitation energy transfer rate between molecules (or parts of molecules) from the overlap of the fluorescence
spectrum of the donor molecule/fragment with the absorption spectrum of the acceptor molecule/fragment. However, not all energy transfers are described well by this treatment. Accordingly, there have
been many extensions to Förster’s theory, beginning with Dexter [Dexter53]. In recent years, a variety of new models have built upon these foundations; see [Scholes03] for a review.
In Gaussian 16, the EET analysis is a quantum mechanical model for EET based on a DFT description of the wavefunction, incorporating a time-dependent variational approach [Curutchet05, Russo07]. EET
is available in the gas phase and in solution. Indeed, Förster’s original theory recognizes the importance of solvent effects. The implementation in solution in Gaussian 16 is the formulation of
Iozzi, Mennucci, Tomasi and Cammi [Iozzi04], a model that differs from its predecessors (e.g., [Hsu01]) in that it incorporates solvent effects by adding the appropriate operators to the Hamiltonian
and the linear response equations; in this way, solvation is present in all steps of the quantum mechanical calculation [Cammi99b, Cammi00, Caricato04, Caricato05]. The solvation cavity for this
model is the same for other employments of IEFPCM [Cances97, Mennucci97a, Cances98a] (rather than a simplistic sphere or multipolar expansion).
The EET keyword performs an excitation energy transfer calculation using the results of the CIS/TDA/TD-DFT calculation or of an EOM-CCSD calculation. This type of calculation uses the same setup as Guess(Fragment=…), but it can also process ONIOM-like link-atom input information to cap the fragments. An excited-state calculation is performed on each fragment, and all the couplings among the resulting states are computed. Solvent effects can be introduced using PCM and a single cavity, or a fragment-pair cavity can be used to evaluate the solvent-mediated coupling. | {"url":"https://gaussian.com/eet/","timestamp":"2024-11-11T17:38:40Z","content_type":"text/html","content_length":"162181","record_id":"<urn:uuid:9a81578e-fc90-41a4-a713-f7557242941c>","cc-path":"CC-MAIN-2024-46/segments/1730477028235.99/warc/CC-MAIN-20241111155008-20241111185008-00300.warc.gz"}
Survey of Modal Logics, both Normal and Non-Normal
Aristotle and the medievals discussed modal logic; but its modern form developed with Hugh MacColl in the late 19th century, and then especially in the work of C.I. Lewis beginning with his
dissertation in 1910. (In this document, he is the only Lewis who will be discussed.) Lewis was in the first place trying to develop a notion of implication closer to the English conditional than
Russell and Whitehead's ⊃. Lewis's notion(s) are called "strict conditionals" and symbolized as ⥽, which we can define as:
φ ⥽ ψ =def □(φ ⊃ ψ)
The behavior of this conditional will obviously depend on the strength of the underlying modal framework (the behavior of □). Lewis published a book A Survey of Symbolic Logic in 1918, but the system
there had a flaw that trivialized the modality; Lewis fixed the flaw resulting in a system that he later called S3. In 1932 Lewis published a book with Langford Symbolic Logic that described five
systems S1 -- S5, and stated his preference for S2 being the correct logic for ⥽. At the time it wasn't known whether S2 was weaker than S3, but this was verified later; all of these systems get
progressively stronger.
Modal logic flourished after Lewis's initial work, and one high point was Kripke's work in the late 50s and early 60s establishing completeness proofs for semantics that he and others had developed
for the many logical systems then being investigated, and his identification of what is now known as the weakest "normal" modal logic, nowadays called K. Lewis's S4 and S5 are strengthenings of K,
along with some other systems whose names you may have encountered (especially D, T, and B); and all of these systems are amenable to the kind of semantic analysis you're probably familiar with, in terms
of possible worlds and accessibility relations. Lewis's S2 and S3 have to receive a different kind of semantics that lets in some "non-normal" possible worlds, and Lewis's S1 has to receive an even
more different semantics. So although S4/S5 can be reached by adding axioms to Lewis's weaker systems, it's more natural to group S4 and S5 together with the other "normal" modal logics, which are
strengthenings of K, as a natural family of logics (with many members, some more to be described below). Lewis's S2 and S3 belong to a weaker family of logics, which we'll explain below (Segerberg
1971 called this family "quasi-regular" logics). There is also a parallel, even weaker family with names like E2, E3 and so on, developed in Lemmon 1957. (Segerberg called this family "regular" logics.)
There are also many other modal logics, like Lewis's S1, that don't fit into those three natural groups. (For Lewis, the key difference between S1 and S2 was that the latter validated ◇(φ ∧ ψ) ⥽ (◇φ ∧ ◇ψ).) One especially weak system is S0.5^0 from Lemmon 1957 (Priest 2008 Ch.4 calls it "System L"). We won't be discussing these systems here.
All of the modal logics we will discuss contain the non-modal classical propositional calculus. Also, in all the logics we'll discuss, ◇φ can be defined in the usual way as ¬□¬φ. (In some weaker
logics we're ignoring, that is not so straightforward; see Priest 2008 §4.4a.10–12.)
Here are some inference rules that modal logics may or may not include. I express the material biconditional as ⊂⊃ and the strict biconditional as ⥼⥽, with ⥽ defined as above.
RE. From ⊢ φ ⊂⊃ ψ, infer ⊢ □φ ⊂⊃ □ψ.
RM. From ⊢ φ ⊃ ψ, infer ⊢ □φ ⊃ □ψ. (This rule is sometimes called "R*". It is equivalent to RE together with axiom M, described below.)
RK. From ⊢ φ₁ ∧ ... ∧ φₖ ⊃ ψ, for k ≥ 0, infer ⊢ □φ₁ ∧ ... ∧ □φₖ ⊃ □ψ. (This has Rules RM and Nec as special cases.)
Becker's Rule. From ⊢ φ ⥽ ψ, infer ⊢ □φ ⥽ □ψ.
Nec[essitation]. From ⊢ φ, infer ⊢ □φ.
Restricted Nec. If φ is an axiom, or is a theorem in virtue of its non-modal form, infer ⊢ □φ.
Normal Modal Logics
The weakest normal modal logic K is standardly axiomatized as an extension of classical propositional logic that adds the rule of Necessitation and the following axiom:
K. □(φ ⊃ ψ) ⊃ (□φ ⊃ □ψ)
All of the other rules listed above can be derived in this system. (A different axiomatization of the same system would use RM (or RE and M) instead of Nec, plus the axiom N (which says □⊤), plus either the axiom K or the axiom C, described below. Another strategy uses simply the rule RK.)
The semantics for system K are what you're familiar with, with □φ being true at a world w (in a given model) iff φ is true at every world accessible from w; and φ ⊨ ψ (in a given model) iff every
world where φ is true is also a world where ψ is true.
Strengthenings of K can be specified by adding axioms to the logic, or by placing constraints on the accessibility relation in the semantic models.
Constraints commonly considered include:
extendability or seriality: for all w, ∃u where wRu (in other words, the relation has no "dead ends")
reflexivity: for all w, wRw
symmetry: for all u,v: if uRv then vRu
transitivity: for all w,u,v: if wRu and uRv then wRv
density: for all w,v: if wRv then ∃u where wRu and uRv (note that u doesn't need to be distinct from w and v)
right Euclidean: for all w,u,v: if wRu and wRv then uRv
Another constraint I'll mention is:
left Euclidean: for all w,u,v: if uRw and vRw then uRv
In modal logic discussions, "Euclidean" is often used without any prefix to mean "right Euclidean."
Here are some interesting entailments that you can verify: If a relation is reflexive, it's extendable (w itself is always a world that w stands in R to) and dense. If a relation is extendable and symmetric and transitive, then it's reflexive. If a relation is reflexive and (either left or right) Euclidean, then it's symmetric. And finally, if a relation is symmetric, then it has either all or none of the following properties: transitivity, right Euclideanness, left Euclideanness.
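For instance, here is the verification that a reflexive, right Euclidean relation is symmetric (the left Euclidean case is parallel). Given uRv, reflexivity supplies uRu; then since u sees both v and u, the right Euclidean condition yields vRu:

```latex
uRv \;\wedge\; uRu \;\Longrightarrow\; vRu \quad \text{(right Euclideanness, with } u \text{ as the common predecessor)}
```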
Here are some interesting formulas, which are theorems of some modal systems:
M. □(φ ∧ ψ) ⊃ (□φ ∧ □ψ)
As indicated, this formula is sometimes called M, but that name has also been used for a variety of other axioms and systems. The converse of this formula is sometimes called C. Both of these are theorems of K. C is much less popular than M.
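For instance, here is the standard derivation of M in K; C is derived similarly, applying Nec and K to the tautology φ ⊃ (ψ ⊃ (φ ∧ ψ)).

```latex
\begin{array}{lll}
1. & \vdash (\varphi \wedge \psi) \supset \varphi & \text{propositional tautology}\\
2. & \vdash \Box((\varphi \wedge \psi) \supset \varphi) & \text{Nec, 1}\\
3. & \vdash \Box(\varphi \wedge \psi) \supset \Box\varphi & \text{K, 2, modus ponens}\\
4. & \vdash \Box(\varphi \wedge \psi) \supset \Box\psi & \text{likewise, from } (\varphi \wedge \psi) \supset \psi\\
5. & \vdash \Box(\varphi \wedge \psi) \supset (\Box\varphi \wedge \Box\psi) & \text{propositional logic, 3, 4}
\end{array}
```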
The following formulas are not theorems of K:
D. □φ ⊃ ◇φ
This is called D because it's the basis of the Standard Deontic Logic, where it's understood to be saying that if there's an obligation for φ, there's not also an obligation for ¬φ. (Or in brief,
"ought implies may.") The modal system D can be gotten by adding axiom D to K, or equivalently, by adding the axiom P, which says ◇⊤ (in brief, "something is permitted"). This corresponds to
requiring that the accessibility relation in the semantics is extendable/serial.
T. □φ ⊃ φ
This is a theorem in all of Lewis's systems S1 -- S5. The modal system T can be gotten by adding T to K (or equivalently, by adding the axiom φ ⊃ ◇φ). This corresponds to requiring that the
accessibility relation in the semantics is reflexive. System T is a proper strengthening of D (so all theorems of D are theorems of T, but not vice versa). That's what you'd expect, given that
reflexive relations are always extendable, but not vice versa.
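Syntactically the inclusion is immediate: system T proves both □φ ⊃ φ and (by the equivalent axiomatization just mentioned) φ ⊃ ◇φ, and chaining them gives axiom D:

```latex
\Box\varphi \supset \varphi, \qquad \varphi \supset \Diamond\varphi \qquad \therefore \qquad \Box\varphi \supset \Diamond\varphi
```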
B. φ ⊃ □◇φ
This is called B because of a tenuous connection to Brouwer's intuitionistic logic. (See McKinsey and Tarski 1948 for a deeper connection between intuitionistic logic and modal system S4.) Adding
axiom B to a normal modal system (or equivalently, adding ◇□φ ⊃ φ, or □(◇φ ⊃ ψ) ⊃ (φ ⊃ □ψ)) corresponds to requiring that the accessibility relation in the semantics is symmetric. If B is added to K,
the result is called KB; if added to D, the result is called DB (or KDB); if added to T, the result is called simply B (or KTB).
4. □φ ⊃ □□φ
Adding axiom 4 to a normal modal system (or equivalently, adding ◇◇φ ⊃ ◇φ) corresponds to requiring that the accessibility relation in the semantics is transitive. If 4 is added to K, the result is
called K4; if added to D, the result is called D4 (or KD4); if added to T, the result is Lewis's system S4. (That's why this axiom is called 4.)
The converse of axiom 4, called C4, corresponds to requiring that the accessibility relation in the semantics is dense.
5. ◇φ ⊃ □◇φ
Adding axiom 5 to a normal modal system (or equivalently, adding ◇□φ ⊃ □φ) corresponds to requiring that the accessibility relation in the semantics is right Euclidean. If 5 is added to K, the result
is called K5; if added to D, the result is called D5 (or KD5); if added to T, the result is Lewis's system S5. (That's why this axiom is called 5; some authors instead call it E for "Euclidean".)
Here are some interesting entailments. As we already mentioned, in system K, T entails D; so all strengthenings of T will contain the corresponding strengthenings of D. Next, sometimes we add both axiom 4 and axiom 5 to a modal system. If they're added to K, the result is K45 (this is properly stronger than both K4 and K5); if they're added to D, the result is D45 (or KD45; this is properly stronger than both D4 and D5). In any strengthening of K that includes B, 4 is a theorem iff 5 is. (As you'd expect, given that a symmetric relation is transitive iff it's right Euclidean.) In any strengthening of T, the converses of 4 and 5 are already theorems. Finally, in any strengthening of T, 5 entails each of 4 and B.
This diagram from the SEP article on Modal Logic illustrates some of these relationships. When a system X occurs connected by a line below and/or to the left of a system Y, then Y is a proper
strengthening of X:
Note that the author of that article and diagram (James Garson) calls system T "System M".
Non-Normal Modal Logics
Next let's consider some modal logics weaker than K.
Chellas 1980 defines a "classical modal logic" as one validating at least the rule RE, and having □ and ◇ as duals. He named the weakest such logic E. Some interesting facts about E and its extensions:
• They contain axiom N (which says □⊤) iff they validate the Rule of Necessitation
• If they contain axiom M, then they contain C iff they contain K. Chellas calls classical modal logics which contain all of these axioms "regular"; I think this coincides in extension though not
in meaning with Segerberg's usage of "regular," mentioned elsewhere on this page.
• If they contain all of M, C, and N, then they are normal modal logics.
• Systems consisting of RE + some of M, C, and N validate the following rules:
◦ From ⊢ □φ, infer ⊢ φ. (Note that this is different from containing the axiom T, which is stronger.)
◦ From ⊢ □φ ⊃ □ψ, infer ⊢ φ ⊃ ψ.
◦ From ⊢ ◇φ, infer ⊢ φ.
◦ From ⊢ ◇φ ⊃ ◇ψ, infer ⊢ φ ⊃ ψ.
In stronger systems, these rules can fail. For example, they all still hold in D but fail in D5.
• Any classical modal logic that contains axioms T and 5 will also contain: D, P, B, 4, and N.
Giving a semantics for these logics in full generality requires different methods than the ones we've been discussing so far, which have used accessibility relations. (You need to use "neighborhood
models," also sometimes called "minimal" or "Scott-Montague" models.) But an interesting subset of them can be handled with only a small variation on our existing techniques. None of these systems
have the Rule of Necessitation, and moreover none have any theorems of the form ⊢ □□φ.
The systems we'll consider can all be given a semantics like this: In addition to "normal" possible worlds, a model may also contain some "non-normal" possible worlds. Any non-normal world must be
"seen by" (accessible to) some normal worlds. In normal worlds, the interpretation of formulas is as it was before, including that □φ is true there iff φ is true at all accessible worlds (both normal
and non-normal). If there are non-normal worlds, nonmodal formulas are interpreted there as usual. But modal formulas are interpreted specially, namely: □φ is always false, and ◇φ is always true. (So
non-normal worlds are ones where "everything is possible.")
If we then define entailment such that ⊨ means truth-preservation in all worlds (Segerberg 1971 called such systems "regular", see esp. his Ch. 4), we get the series of modal systems E2^0, E2, E3^0,
and so on. If we instead define entailment such that ⊨ means truth-preservation in all normal worlds (Segerberg called such systems "quasi-regular"), we get the series of modal systems S2^0, S2, S3^0
, and so on. The latter series includes Lewis's systems S2 and S3. There is systematicity to the way these systems are named; but because Lewis didn't "leave enough space" in his original
enumeration, the numbering system does have some quirks.
The natural way to organize these systems is to see the E-systems as coming in two groups of six, and the corresponding S-systems as being semantic strengthenings of them (the S-systems have more
theorems). The base group of six consists of E2^0, E2, E3^0, E3, E3.5^0 and E3.5. (I've only seen some of these names in the literature; I posit the others for systematicity.) This group has further
internal structure, and corresponds roughly to the normal modal systems K, T, K4, S4, KB4/KB5, and S5. That is:
E2^0 has no constraints on the accessibility relation (this is equivalent to Chellas's EMC)
E2 = E2^0 + the accessibility relation is reflexive (as in T)
E3^0 = E2^0 + the accessibility relation is transitive (as in K4)
E3 = E3^0 + the accessibility relation is also reflexive (as in S4)
E3.5^0 = E2^0 + the accessibility relation is transitive/Euclidean and symmetric (as in KB4/KB5)
E3.5 = E3.5^0 + the accessibility relation is also reflexive (as in S5)
The systems S2^0 -- S3.5 differ only in their definition of ⊨. (Most of these names have been used in the literature. Priest 2008 Ch.4 calls S2^0 "System N".)
Note that with all of the systems just mentioned, a model may have non-normal worlds, but need not do so. With the next group of six E-systems, models are required to have non-normal worlds. (This
corresponds to adding the axiom ◇◇φ.) These systems can be called E6^0, E6, E7^0, E7, E7.5^0 and E7.5; and mutatis mutandis for the systems S6^0 -- S7.5. Note that even though these systems are
numbered greater than 5, S6 for example isn't a strengthening of Lewis's S4 or S5. This is one of the numbering quirks. Some authors call S7.5 "S9". They skip the number 8 because S8 was used to name
a different strengthening of S3 than S7 or S9. (In S8, every normal world sees a non-normal world.)
How about axiomatizations?
As mentioned before, none of these non-normal systems include the Rule of Necessitation. The "regular" modal systems (the E2-series) all instead include the rule:
RM. From ⊢ φ ⊃ ψ, infer ⊢ □φ ⊃ □ψ.
The E2-series also include the K axiom. The "quasi-regular" modal systems (the S2-series) also include the K axiom, but have different rules:
Restricted Nec. If φ is an axiom or a theorem in virtue of its non-modal form, infer ⊢ □φ.
Becker's Rule. From ⊢ φ ⥽ ψ, infer ⊢ □φ ⥽ □ψ.
These rules and axioms give you the minimal systems in each series (E2^0 and S2^0). You can get the logics that are sound and complete for the models with reflexive accessibility relations (the ones
without the ^0) by adding axiom T. This applies to both the E2-systems and the S2-systems.
You can get the logics that are sound and complete for the models with transitive accessibility relations by adding an axiom version of Becker's Rule: (φ ⥽ ψ) ⊃ (□φ ⥽ □ψ). Again, this applies to both
the E2-systems and the S2-systems.
You can get S3.5 by adding either axiom 5 or axiom B to system S3. But to get E3.5, you'd have to add a weaker axiom to E3, which we haven't yet identified. (If you added 5 to E3, you'd get the
normal modal logic S5.)
Thanks to Harvey Lederman and Graham Priest for suggestions and help sorting out some confusions!
Brian F. Chellas, Modal Logic: An Introduction, Cambridge, 1980
Ian Hacking, "What is strict implication?" Journal of Symbolic Logic 28 (1963), 51-71
G.E. Hughes and Max J. Cresswell, A New Introduction to Modal Logic, Routledge, 1996
E.J. Lemmon, "New Foundations for Lewis Modal Systems," Journal of Symbolic Logic 22 (1957), 176–186
C.I. Lewis, A Survey of Symbolic Logic, Univ of California, 1918
C.I. Lewis and C.H. Langford, Symbolic Logic, New York: Dover, 1932
J.C.C. McKinsey and Alfred Tarski, "Some Theorems About the Sentential Calculi of Lewis and Heyting," Journal of Symbolic Logic 13 (1948), 1-15
Graham Priest, An Introduction to Non-Classical Logic, 2nd ed., Cambridge Univ. Press, 2008
Krister Segerberg, An Essay in Classical Modal Logic, Uppsala: Filosofiska Studier, 1971 | {"url":"http://lambda.jimpryor.net/jimpryor/teaching/courses/akrasia/2015/modal-survey.html","timestamp":"2024-11-14T07:34:29Z","content_type":"text/html","content_length":"29153","record_id":"<urn:uuid:a7ba239d-cb10-4b45-9f62-77a4a1efe89c>","cc-path":"CC-MAIN-2024-46/segments/1730477028545.2/warc/CC-MAIN-20241114062951-20241114092951-00751.warc.gz"}
predict_lmer_avg_trend: Use 'predict_lmer' on groups to generate average trend and... in caldwellst/augury: Provides Streamlined Methods for Data Imputation and Forecasting for WHO DDI Statistics
predict_lmer_avg_trend() is a simple wrapper around predict_lme4_avg_trend(). For details surrounding the linear mixed effects model fitting, please see lme4::lmer, and for more details on the augury function this wraps around and the various arguments this function accepts, please see predict_lme4_avg_trend().
predict_lmer_avg_trend(
  df,
  formula,
  average_cols = NULL,
  weight_col = NULL,
  group_models = FALSE,
  ...,
  ret = c("df", "all", "error", "model"),
  scale = NULL,
  probit = FALSE,
  test_col = NULL,
  test_period = NULL,
  test_period_flex = NULL,
  group_col = "iso3",
  obs_filter = NULL,
  sort_col = "year",
  sort_descending = FALSE,
  pred_col = "pred",
  pred_upper_col = "pred_upper",
  pred_lower_col = "pred_lower",
  upper_col = "upper",
  lower_col = "lower",
  filter_na = c("predictors", "response", "all", "none"),
  type_col = NULL,
  types = c("imputed", "imputed", "projected"),
  source_col = NULL,
  source = NULL,
  scenario_detail_col = NULL,
  scenario_detail = NULL,
  replace_obs = c("missing", "all", "none"),
  error_correct = FALSE,
  error_correct_cols = NULL,
  shift_trend = FALSE
)
df: Data frame of model data.
formula: A formula that will be supplied to the model, such as y~x.
average_cols: Column name(s) of column(s) for use in grouping data for averaging, such as regions. If missing, uses global average of the data for infilling.
weight_col: Column name of column of weights to be used in averaging, such as country population.
group_models: Logical, if TRUE, fits and predicts models individually onto each group_col. If FALSE, a general model is fit across the entire data frame.
...: Other arguments passed to the model function.
ret: Character vector specifying what values the function returns. Defaults to returning a data frame, but can return a vector of model error, the model itself or a list with all 3 as components.
scale: Either NULL or a numeric value. If a numeric value is provided, the response variable is scaled by the value passed to scale prior to model fitting and prior to any probit transformation, so can be used to put the response onto a 0 to 1 scale. Scaling is done by dividing the response by the scale and using the scale_transform() function. The response, as well as the fitted values and confidence bounds, are unscaled prior to error calculation and returning to the user.
probit: Logical value on whether or not to probit transform the response prior to model fitting. Probit transformation is performed after any scaling determined by scale but prior to model fitting. The response, as well as the fitted values and confidence bounds, are untransformed prior to error calculation and returning to the user.
test_col: Name of logical column specifying which response values to remove for testing the model's predictive accuracy. If NULL, ignored. See model_error() for details on the methods and metrics returned.
test_period: Length of period to test for RMChE. If NULL, beginning and end points of each group in group_col are compared. Otherwise, test_period must be set to an integer n and for each group, comparisons are made between the end point and n periods prior.
test_period_flex: Logical value indicating if test_period is less than the full length of the series, should change error still be calculated for that point. Defaults to FALSE.
group_col: Column name(s) of group(s) to use in dplyr::group_by() when supplying type, calculating mean absolute scaled error on data involving time series, and if group_models, then fitting and predicting models too. If NULL, not used. Defaults to "iso3".
obs_filter: String value of the form "logical operator integer" that specifies the number of observations required to fit the model and replace observations with predicted values. This is done in conjunction with group_col. So, if group_col = "iso3" and obs_filter = ">= 5", then for this model, predictions will only be used for iso3 values that have 5 or more observations. Possible logical operators to use are >, >=, <, <=, ==, and !=. If `group_models = FALSE`, then `obs_filter` is only used to determine when predicted values replace observed values but **is not** used to restrict values from being used in model fitting. If `group_models = TRUE`, then a model is only fit for a group if it meets the `obs_filter` requirements. This provides speed benefits, particularly when running INLA time series using `predict_inla()`.
sort_col: Column name(s) to use to dplyr::arrange() the data prior to supplying type and calculating mean absolute scaled error on data involving time series. If NULL, not used. Defaults to "year".
sort_descending: Logical value on whether the sorted values from sort_col should be sorted in descending order. Defaults to FALSE.
pred_col: Column name to store predicted value.
pred_upper_col: Column name to store upper bound of confidence interval generated by the predict_... function. This stores the full set of generated values for the upper bound.
pred_lower_col: Column name to store lower bound of confidence interval generated by the predict_... function. This stores the full set of generated values for the lower bound.
upper_col: Column name that contains upper bound information, including upper bound of the input data to the model. Values from pred_upper_col are put into this column in the exact same way the response is filled by pred based on replace_na (only when there is a missing value in the response).
lower_col: Column name that contains lower bound information, including lower bound of the input data to the model. Values from pred_lower_col are put into this column in the exact same way the response is filled by pred based on replace_na (only when there is a missing value in the response).
filter_na: Character value specifying how, if at all, to filter NA values from the dataset prior to applying the model. By default, all observations with missing values are removed, although it can also remove rows only if they have missing dependent or independent variables, or no filtering at all.
type_col: Column name specifying data type.
types: Vector of length 3 that provides the type to provide to data produced in the model. These values are only used to fill in type values where the dependent variable is missing. The first value is given to missing observations that precede the first observation, the second to those after the last observation, and the third for those following the final observation.
source_col: Column name containing source information for the data frame. If provided, the argument in source is used to fill in where predictions have filled in missing data.
source: Source to add to missing values.
scenario_detail_col: Column name containing scenario_detail information for the data frame. If provided, the argument in scenario_detail is used to fill in where predictions have filled in missing data.
scenario_detail: Scenario details to add to missing values (usually the name of the model being used to generate the projection, optionally with relevant parameters).
replace_obs: Character value specifying how, if at all, observations should be replaced by fitted values. Defaults to replacing only missing values, but can be used to replace all values or none.
error_correct: Logical value indicating whether or not mean error should be used to adjust predicted values. If TRUE, the mean error between observed and predicted data points will be used to adjust predictions. If error_correct_cols is not NULL, mean error will be used within those groups instead of overall mean error.
error_correct_cols: Column names of data frame to group by when applying error correction to the predicted values.
shift_trend: Logical value specifying whether or not to shift predictions so that the trend matches up to the last observation. If error_correct and shift_trend are both TRUE, shift_trend takes precedence.
Depending on the value passed to ret, either a data frame with predicted data, a vector of errors from model_error(), a fitted model, or a list with all 3.
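As a rough usage sketch (the data frame, formula, and values below are illustrative assumptions of my own, not taken from the package documentation):

```r
library(augury)

# Hypothetical panel data: one row per country-year, with two missing values
df <- data.frame(
  iso3  = rep(c("AFG", "ALB", "DZA"), each = 10),
  year  = rep(2010:2019, times = 3),
  value = c(rnorm(28, mean = 50, sd = 5), NA, NA)
)

# Fit a linear mixed-effects model with a random intercept per country,
# then use its fitted values to infill the missing observations
pred <- predict_lmer_avg_trend(
  df,
  formula     = value ~ year + (1 | iso3),
  group_col   = "iso3",
  sort_col    = "year",
  replace_obs = "missing"
)
```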
| {"url":"https://rdrr.io/github/caldwellst/augury/man/predict_lmer_avg_trend.html","timestamp":"2024-11-09T00:40:12Z","content_type":"text/html","content_length":"43921","record_id":"<urn:uuid:22ac7e42-c1d6-445d-942c-f5ccc202bebd>","cc-path":"CC-MAIN-2024-46/segments/1730477028106.80/warc/CC-MAIN-20241108231327-20241109021327-00092.warc.gz"}
The Designing for Dementia Knowledge Library
Cannot perform simple calculations
• Impairment of mental and physical functions (心身機能障害)
• Because I cannot count out coins, I take out a 10,000-yen note instead.
• Because I cannot calculate, I also cannot apply for medical expense deductions.
• I cannot do the math with money. I have difficulty adding numbers in order from the right to the left.
• Cannot do simple calculations | {"url":"https://designing-for-dementia.jp/database/en/dysfunction/st-18/","timestamp":"2024-11-09T20:33:25Z","content_type":"text/html","content_length":"22591","record_id":"<urn:uuid:dd0ba28b-8d09-4d74-b8c4-fb0f45958c06>","cc-path":"CC-MAIN-2024-46/segments/1730477028142.18/warc/CC-MAIN-20241109182954-20241109212954-00874.warc.gz"} |
Adding and Subtracting Fractions
When adding and subtracting fractions the denominators of both fractions must be the same. In other words, they must have a Common Denominator.
Find equivalent fractions for both fractions to be added so that the denominators are the same.
The easiest method is to cross-multiply. Take the denominator of one fraction and multiply both the numerator and denominator of the other fraction with it. Repeat the process using the other
denominator. Both denominators should now be the same.
Next, add the numerators. Keep the common denominator for the answer - in other words, do NOT add the denominators. Finally, if possible, simplify.
The process for subtraction is the same, but subtract the numerators.
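In general symbols, the method amounts to the rule:

```latex
\frac{a}{b} + \frac{c}{d} \;=\; \frac{ad}{bd} + \frac{bc}{bd} \;=\; \frac{ad + bc}{bd},
\qquad
\frac{a}{b} - \frac{c}{d} \;=\; \frac{ad - bc}{bd}
```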
Example 1
What is `frac(1)(5)` + `frac(3)(7)`?
The denominators, 5 and 7, will be used to multiply the other fraction:
`=frac(7)(35) + frac(15)(35)`
`=frac(22)(35)`
Example 2
Calculate `frac(1)(6)` - `frac(1)(12)`.
The denominators, 6 and 12, have a lowest common multiple of 12:
`=frac(2)(12) - frac(1)(12)`
`=frac(1)(12)`
| {"url":"https://wtmaths.com/add_subtract_fractions.html","timestamp":"2024-11-06T07:27:24Z","content_type":"text/html","content_length":"6671","record_id":"<urn:uuid:0c0e8d88-f62a-4204-b57b-e41631d976af>","cc-path":"CC-MAIN-2024-46/segments/1730477027910.12/warc/CC-MAIN-20241106065928-20241106095928-00443.warc.gz"}
Formula help
I have a formula set to change to RYGB balls depending on children. The formula is as follows:
=IF(COUNTIFS(CHILDREN(), "Red") > 1, "Red", IF(COUNTIFS(CHILDREN(), "Red") > 0, "Yellow", IF(COUNTIFS(CHILDREN(), "Yellow") > 1, "Yellow", "Green")))
This formula works great.
I now want to add another statement to this that will change the ball to BLUE, i.e;
When all children are complete (Blue), change to "Blue".
I can't seem to figure this out. What is the operator for "ALL"?
• =IF(COUNTIFS(CHILDREN(), "Red") > 1, "Red", IF(COUNTIFS(CHILDREN(), "Red") > 0, "Yellow", IF(COUNTIFS(CHILDREN(), "Yellow") > 1, "Yellow", IF(COUNTIFS(CHILDREN(), "Blue") = COUNT(CHILDREN()), "Blue", "Green"))))
give that a try and see what it gives you.
• Works perfectly! Thank You
| {"url":"https://community.smartsheet.com/discussion/61076/formula-help","timestamp":"2024-11-12T16:18:15Z","content_type":"text/html","content_length":"402444","record_id":"<urn:uuid:5052b737-3fa7-43ec-a4d9-96e0b8e94bbd>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.63/warc/CC-MAIN-20241112145015-20241112175015-00737.warc.gz"}
[System] Timed Lightnings v1.0.1.1
This bundle is marked as recommended. It works and satisfies the submission rules.
A lightning system that allows easy creation of timed lightnings.
Create lightnings from
* point to point
* point to unit
* unit to point
* unit to unit
Supports moving points/units.
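For example, a call might look like the sketch below (the "CLPB" lightning code and the udg_ unit variables are placeholders of my own, not part of the system):

```
function TimedLightningDemo takes nothing returns nothing
    // Create a lightning bolt and hand it to the system:
    // follow both units for 2 seconds at 60 height offset,
    // fading from fully visible (alpha 1.0) to invisible (alpha 0.0)
    local lightning l = AddLightning("CLPB", true, 0, 0, 0, 0)
    call TimedL.U2U(l, udg_Caster, udg_Target, 2.0, 60.0, 60.0, 1.0, 0.0)
    set l = null
endfunction
```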
* TIMED LIGHTNINGS by Maker v1.0.1.1
* Allows the creation of lightnings with expiration timer.
* Supports:
* o Fading lightnings in and out
* o Attaching to units
* o Attaching to points
* o Linear movement in x-, y- and z-axes
* Methods
* P2U
* From a static point attached to a unit
* static method P2U takes lightning l, unit t, real time, real x1, real y1, real z1, real z2, real startAlpha, real endAlpha returns nothing
* The lightning, target unit, duration, origin x, origin y, origin z, end z
* P2UEx
* From a moving point attached to a unit
* static method P2UEx takes lightning l, unit a, real t, real zu, real x1, real y1, real z1, real x2, real y2, real z2, real startAlpha, real endAlpha returns nothing
* The lightning, target unit, duration, target z, origin start x, origin start y, origin start z, origin end x, origin end y, origin end z
* U2P
* From attached to a unit to a static point
* static method U2P takes lightning l, unit s, real t, real x1, real y1, real x2, real y2, real z1, real z2, real startAlpha, real endAlpha returns nothing
* The lightning, source unit, duration, origin x, origin y, point x , point y, source z, point z
* U2PEx
* From attached to a unit to a moving point
* static method U2PEx takes lightning l, unit a, real t, real zu, real x1, real y1, real z1, real x2, real y2, real z2, real startAlpha, real endAlpha returns nothing
* The lightning, source unit, duration, source z, point start x, point start y, point start z, point end x, point end y, point end z
* U2U
* From attached to a unit to attached to a unit
* static method U2U takes lightning l, unit s, unit t, real time, real z1, real z2, real startAlpha, real endAlpha returns nothing
* The lightning, source unit, target unit, duration, source z, target z
* P2P
* From a static point to a static point
* static method P2P takes lightning l, real t, real startAlpha, real endAlpha returns nothing
* The lightning, duration
* P2PEx
* From a moving point to a moving point
* static method P2PEx takes lightning l, real t, real x1, real y1, real z1, real x2, real y2, real z2, real x3, real y3, real z3, real x4, real y4, real z4, real startAlpha, real endAlpha returns nothing
* The lightning, duration, origin start x, origin start y, origin start z, origin end x, origin end y, origin end z, target start x, target start y, target start z, target end x, target end y, target end z
* Alpha values are between 1 and 0. 1 is fully visible, 0 is transparent.
library TimedLightnings

    globals
        private constant real TO = 0.03125000 // Update interval
        private integer CT = 0 // Lightning count
        private timer TMR = CreateTimer()
        private location loc = Location(0,0)
    endglobals

    struct TimedL extends array
        lightning l
        real av // alpha value
        real da // transparency change rate
        real x1
        real x2
        real y1
        real y2
        real z1
        real z2
        real dx1
        real dy1
        real dz1
        real dx2
        real dy2
        real dz2
        unit s // source
        unit t // target
        integer time // how many ticks, time
        integer next // next node
        integer prev // previous node
        boolean moves

        private static integer rlast = 0 // previous created
        private static thistype first // first node
        private static integer ic = 0
        private static integer ir = 0
        private thistype rn
        private static thistype dat
        private static thistype dat2
        private static thistype dat3

        private static method destroyL takes nothing returns nothing
            /*-Link previous node with next one-*/
            set dat3 = dat2.prev
            set dat3.next = dat2.next
            /*-----Set new last created node----*/
            if dat2 == rlast then
                set rlast = dat3
            endif
            /*-Link next node with previous one-*/
            set dat3 = dat2.next
            set dat3.prev = dat2.prev
            /*--------Set new first node--------*/
            if dat2 == first then
                set first = dat3
            endif
            call DestroyLightning(dat2.l)
            set CT = CT - 1
            if CT == 0 then
                call PauseTimer(TMR)
            endif
            set dat2.rn = ir
            set ir = dat2
        endmethod

        private static method looping takes nothing returns nothing
            local real z1
            local real z2
            set dat = first
            loop
                set z1 = 0
                set z2 = 0
                set dat.time = dat.time - 1
                if dat.da != 0 then
                    set dat.av = dat.av - dat.da
                    call SetLightningColor(dat.l, 1, 1, 1, dat.av)
                endif
                if dat.s == null then
                    if dat.dx1 != 0 then
                        set dat.x1 = dat.x1 + dat.dx1
                    endif
                    if dat.dy1 != 0 then
                        set dat.y1 = dat.y1 + dat.dy1
                    endif
                    if dat.dz1 != 0 then
                        set dat.z1 = dat.z1 + dat.dz1
                    endif
                else
                    set dat.x1 = GetUnitX(dat.s)
                    set dat.y1 = GetUnitY(dat.s)
                    set z1 = GetUnitFlyHeight(dat.s)
                endif
                if dat.t == null then
                    if dat.dx2 != 0 then
                        set dat.x2 = dat.x2 + dat.dx2
                    endif
                    if dat.dy2 != 0 then
                        set dat.y2 = dat.y2 + dat.dy2
                    endif
                    if dat.dz2 != 0 then
                        set dat.z2 = dat.z2 + dat.dz2
                    endif
                else
                    set dat.x2 = GetUnitX(dat.t)
                    set dat.y2 = GetUnitY(dat.t)
                    set z2 = GetUnitFlyHeight(dat.t)
                endif
                if dat.moves then
                    call MoveLocation(loc, dat.x1, dat.y1)
                    set z1 = GetLocationZ(loc) + dat.z1 + z1
                    call MoveLocation(loc, dat.x2, dat.y2)
                    set z2 = GetLocationZ(loc) + dat.z2 + z2
                    call MoveLightningEx(dat.l, true, dat.x1, dat.y1, z1, dat.x2, dat.y2, z2)
                endif
                if dat.time == 0 then
                    set dat2 = dat
                    set dat = dat.next
                    call destroyL()
                else
                    set dat = dat.next
                endif
                exitwhen dat == 0
            endloop
        endmethod

        private static method InitAdd takes nothing returns nothing
            /* Add node to list, make this the last on list */
            if rlast != 0 then
                set dat2 = rlast
                set dat2.next = dat
            endif
            /* Link this with previous node */
            set dat.prev = rlast
            /* Make this the last created node */
            set rlast = dat
            set CT = CT + 1
            if CT == 1 then
                /* Make this the first node */
                set first = dat
                call TimerStart(TMR, TO, true, function thistype.looping)
            endif
        endmethod

        private static method Recycle takes nothing returns nothing
            if 0 == ir then
                set ic = ic + 1
                set dat = ic
            else
                set dat = ir
                set ir = dat.rn
            endif
        endmethod

        static method P2U takes lightning l, unit t, real time, real x1, real y1, real z1, real z2, real startAlpha, real endAlpha returns nothing
            local thistype this
            call Recycle()
            set this = dat
            set .x1 = x1
            set .y1 = y1
            set .z1 = z1
            set .z2 = z2
            set .s = null
            set .t = t
            set .next = 0 // Nodes are added to the end of the list, there is no next node
            set .l = l
            set .time = R2I(time/TO) // Calculates how many loops the lightning lasts
            set .av = startAlpha
            set .da = (startAlpha-endAlpha)*TO/time // Transparency change speed
            set .moves = true
            call InitAdd()
        endmethod

        static method U2P takes lightning l, unit s, real t, real x1, real y1, real x2, real y2, real z1, real z2, real startAlpha, real endAlpha returns nothing
            local thistype this
            call Recycle()
            set this = dat
            set .x1 = x1
            set .y1 = y1
            set .x2 = x2
            set .y2 = y2
            set .z1 = z1
            set .z2 = z2
            set .s = s
            set .t = null
            set .next = 0
            set .l = l
            set .time = R2I(t/TO)
            set .av = startAlpha
            set .da = (startAlpha-endAlpha)*TO/t
            set .moves = true
            call InitAdd()
        endmethod

        static method U2U takes lightning l, unit s, unit t, real time, real z1, real z2, real startAlpha, real endAlpha returns nothing
            local thistype this
            call Recycle()
            set this = dat
            set .z1 = z1
            set .z2 = z2
            set .s = s
            set .t = t
            set .next = 0
            set .l = l
            set .time = R2I(time/TO)
            set .av = startAlpha
            set .da = (startAlpha-endAlpha)*TO/time
            set .moves = true
            call InitAdd()
        endmethod

        static method P2P takes lightning l, real t, real startAlpha, real endAlpha returns nothing
            local thistype this
            call Recycle()
            set this = dat
            set .s = null
            set .t = null
            set .next = 0
            set .l = l
            set .time = R2I(t/TO)
            set .av = startAlpha
            set .da = (startAlpha-endAlpha)*TO/t
            set .moves = false
            call InitAdd()
        endmethod

        static method P2UEx takes lightning l, unit a, real t, real zu, real x1, real y1, real z1, real x2, real y2, real z2, real startAlpha, real endAlpha returns nothing
            local thistype this
            local real n = TO/t
            call Recycle()
            set this = dat
            set .x1 = x1
            set dx1 = (x2-x1)*n
            set .y1 = y1
            set dy1 = (y2-y1)*n
            set .z1 = z1
            set dz1 = (z2-z1)*n
            set .z2 = zu
            set .s = null
            set .t = a
            set .next = 0
            set .l = l
            set .time = R2I(t/TO)
            set .av = startAlpha
            set .da = (startAlpha-endAlpha)*n
            set .moves = true
            call InitAdd()
        endmethod

        static method U2PEx takes lightning l, unit a, real t, real zu, real x1, real y1, real z1, real x2, real y2, real z2, real startAlpha, real endAlpha returns nothing
            local thistype this
            local real n = TO/t
            call Recycle()
            set this = dat
            set .x2 = x1
            set .dx2 = (x2-x1)*n
            set .y2 = y1
            set .dy2 = (y2-y1)*n
            set .z2 = z1
            set .dz2 = (z2-z1)*n
            set .z1 = zu
            set .s = a
            set .t = null
            set .next = 0
            set .l = l
            set .time = R2I(t/TO)
            set .av = startAlpha
            set .da = (startAlpha-endAlpha)*n
            set .moves = true
            call thistype.InitAdd()
        endmethod

        static method P2PEx takes lightning l, real t, real x1, real y1, real z1, real x2, real y2, real z2, real x3, real y3, real z3, real x4, real y4, real z4, real startAlpha, real endAlpha returns nothing
            local thistype this
            local real n = TO/t
            call Recycle()
            set this = dat
            set .x1 = x1
            set .x2 = x3
            set .y1 = y1
            set .y2 = y3
            set .z1 = z1
            set .z2 = z3
            set .dx1 = (x2-x1)*n
            set .dy1 = (y2-y1)*n
            set .dz1 = (z2-z1)*n
            set .dx2 = (x4-x3)*n
            set .dy2 = (y4-y3)*n
            set .dz2 = (z4-z3)*n
            set .s = null
            set .t = null
            set .next = 0
            set .l = l
            set .time = R2I(t/TO)
            set .av = startAlpha
            set .da = (startAlpha-endAlpha)*n
            set .moves = true
            call InitAdd()
        endmethod
    endstruct
endlibrary
v1.0.0.0 uploaded 10.10.11
v1.0.1.0 uploaded 10.10.11
-Combined structs
-Now runs on one timer
-Factors in flying height
-Now uses Nestharus' instance recycling
lightning, timed, maker
19th Oct 2011 Bribe: Highly recommended as a system to handle your lightning needs. It can handle as many lightnings as you want without lagging.
Jan 1, 2011
Very useful... =p
Will be good for those people who are afraid of using lightning effects in their spells
5/5 for sure.
You could use CTL32 by Nestharus or Timer32 by Jesus4Lyf here, to
avoid needing to create so many different timers.
It would make more sense to compress this into one struct and use
booleans to determine the type of source/origin.
I am not sure what you are using to check a "moving point". Shouldn't
it take a location in such a case? Also maybe instead of a unit it should
take a widget, for example if you wanted to attach to an item. And
the lightning should take into effect the flying height of the unit.
Why do I get this error when I attempt to save my map with the copied system?
Perhaps there's a new version of JNGP? Or?
You need to have JassHelper 0.A.2.B to compile this.
Some older JassHelpers don't let you reference static members without the dot as a prefix.
You need to have JassHelper 0.A.2.B to compile this.
Some older JassHelpers don't let you reference static members without the dot as a prefix.
I am almost 99.9% sure I have it ... though how can I check which version it is ? I am sure I downloaded
that and copy/pasted the folders. But still it shows that error. Any idea on fixing ?
You could use CTL32 by Nestharus or Timer32 by Jesus4Lyf here, to
avoid needing to create so many different timers.
The point of this system is not to use other people's systems, and another thing with this system was to keep it as minimal JNGP use as possible. It could've been created in
Jass instead of vJass, but if I remember right; The only reason he made this in vJass is
because he did not want to use udg_* variables, as they would just take space for other
systems. I have yet to discover the reason for the Library, Structs and Methods that he
seemed to require judging by the script. ~ Now I shall take a quick review on the
performance of this system, and give my opinion on that.
The system seems to be creating smooth transition on the lightnings, enough to give it a 4/5 status.
OffGraphic, minimal JNGP as possible? Maker, tell me it isn't so!
He doesn't need a timer system if he crunches it all into one struct handler,
but if he wants to synchronize a bunch of 0.03125 timers it is the best way.
Bribe, that is only what I had heard, or what I discussed with him.
You could use CTL32 by Nestharus or Timer32 by Jesus4Lyf here...
It would make more sense to compress this into one struct...
I am not sure what you are using to check a "moving point". Shouldn't
it take a location in such a case? Also maybe instead of a unit it should
take a widget, for example if you wanted to attach to an item. And
the lightning should take into effect the flying height of the unit.
Now uses only one timer and one struct, but not an external timer system. Flying height added.
If I used a widget, how would I determine whether it is a unit or not, for flying height?
Moving point as in one can make the lightning travel from (x1,y1,z1) to (x2, y2,z2) during the life time. Handy for sweeping beams.
Yeah, I've read that before. I might update this to follow that guide.
Bribe, that is only what I had heard, or what I discussed with him.
I just originally tried to make it not require external libraries.
Would have been cool making either CTL32 or T32 optional yet it would be a pain in the arse to optionalize both. Either way this is great, better than the Lightning library at TH.
And I don't find it bad to require a library (either CTL or T32) as they're both way too common and heavily used (if you're not stupid when mapmaking) by mapmakers.
Just a simple question, why is .time member not integer in the first place if you want to use integers?
Creating the structs in vanilla JASS is way harder than downloading JNGP and typing struct name / endstruct (yeah, no need for the latest JH to do that).
There is no such thing as "minimal JNGP" usage. Either you use JNGP and its features or you don't. And you know that, don't you?
The only reason he made this in vJass is
because he did not want to use udg_* variables
It's not the variables that are difficult, it's the struct creation as said above.
Methods are functions with an extra integer parameter (hidden to the eye of course) and static methods are basic functions.
Libraries just put the code into the map header, although it gives you the opportunity to sort functions automatically by typing "needs libraryname" rather than doing it yourself.
Ah right, so you define the offsetted points based on those coordinates. Not bad.
CTL would be a better choice.
I still like the fact that you managed to get this to run on one timer without any requirements
And so you don't have to read that tutorial by Nes:
struct YourStruct extends array
    private static integer ic = 0
    private static integer ir = 0
    private thistype rn

    // allocate like this:
    local thistype this
    if 0==ir then
        set ic=ic+1
        return ic
    endif
    set this=ir
    set ir=.rn
    return this

    // deallocate like this
    set .rn=ir
    set ir=this
endstruct
Pretty straight-forward
Updated to use the Nestharus' method.
native GetLightningColorA takes lightning whichBolt returns real
native GetLightningColorR takes lightning whichBolt returns real
native GetLightningColorG takes lightning whichBolt returns real
native GetLightningColorB takes lightning whichBolt returns real
Because your lightning does not support lightning color, use these
Any chance you update this to include manual destruction?
Quick update for this system to include manual destruction and access to struct instance.
* TIMED LIGHTNINGS by Maker v1.0.1.2
* Allows the creation of lightnings with expiration timer.
* Supports:
* o Fading lightnings in and out
* o Attaching to units
* o Attaching to points
* o Linear movement in x-, y- and z-axes
* Methods
* YourStructInstance.remove(boolean fade)
* Destroys the lightning with or without fading immediately
* P2U
* From a static point attached to a unit
* static method P2U takes lightning l, unit t, real time, real x1, real y1, real z1, real z2, real startAlpha, real endAlpha returns nothing
* The lightning, target unit, duration, origin x, origin y, origin z, end z
* P2UEx
* From a moving point attached to a unit
* static method P2UEx takes lightning l, unit a, real t, real zu, real x1, real y1, real z1, real x2, real y2, real z2, real startAlpha, real endAlpha returns nothing
* The lightning, target unit, duration, target z, origin start x, origin start y, origin start z, origin end x, origin end y, origin end z
* U2P
* From attached to a unit to a static point
* static method U2P takes lightning l, unit s, real t, real x1, real y1, real x2, real y2, real z1, real z2, real startAlpha, real endAlpha returns nothing
* The lightning, source unit, duration, origin x, origin y, point x , point y, source z, point z
* U2PEx
* From attached to a unit to a moving point
* static method U2PEx takes lightning l, unit a, real t, real zu, real x1, real y1, real z1, real x2, real y2, real z2, real startAlpha, real endAlpha returns nothing
* The lightning, source unit, duration, source z, point start x, point start y, point start z, point end x, point end y, point end z
* U2U
* From attached to a unit to attached to a unit
* static method U2U takes lightning l, unit s, unit t, real time, real z1, real z2, real startAlpha, real endAlpha returns nothing
* The lightning, source unit, target unit, duration, source z, target z
* P2P
* From a static point to a static point
* static method P2P takes lightning l, real t, real startAlpha, real endAlpha returns nothing
* The lightning, duration
* P2PEx
* From a moving point to a moving point
* static method P2PEx takes lightning l, real t, real x1, real y1, real z1, real x2, real y2, real z2, real x3, real y3, real z3, real x4, real y4, real z4, real startAlpha, real endAlpha returns nothing
* The lightning, duration, origin start x, origin start y, origin start z, origin end x, origin end y, origin end z, target start x, target start y, target start z, target end x, target end y, target end z
* Alpha values are between 1 and 0. 1 is fully visible, 0 is transparent.
* The above methods, excluding remove, now return the struct instance.
* With access to struct instance, you can now read and write to source and target.
* In addition, you can now stop the lightning handle from moving or start it up again
* at any moment of the lightning's life time.
* You can also read the number of ticks left, current alpha and the ending alpha
library TimedLightnings

    globals
        private constant real TO = 0.03125000 // Update interval
        private integer CT = 0 // Lightning count
        private timer TMR = CreateTimer()
        private location loc = Location(0,0)
    endglobals

    struct TimedL extends array
        private lightning l
        boolean moves
        unit source // source
        unit target // target
        readonly integer time // how many ticks, time
        readonly real alpha // alpha value
        readonly real endAlpha // end alpha value
        private real da // transparency change rate
        private real x1
        private real x2
        private real y1
        private real y2
        private real z1
        private real z2
        private real dx1
        private real dy1
        private real dz1
        private real dx2
        private real dy2
        private real dz2
        private integer next // next node
        private integer prev // previous node

        private static integer rlast = 0 // previous created
        private static thistype first // first node
        private static integer ic = 0
        private static integer ir = 0
        private thistype rn
        private static thistype dat
        private static thistype dat2
        private static thistype dat3

        private static method destroyL takes nothing returns nothing
            /*-Link previous node with next one-*/
            set dat3 = dat2.prev
            set dat3.next = dat2.next
            /*-----Set new last created node----*/
            if dat2 == rlast then
                set rlast = dat3
            endif
            /*-Link next node with previous one-*/
            set dat3 = dat2.next
            set dat3.prev = dat2.prev
            /*--------Set new first node--------*/
            if dat2 == first then
                set first = dat3
            endif
            call DestroyLightning(dat2.l)
            set CT = CT - 1
            if CT == 0 then
                call PauseTimer(TMR)
            endif
            set dat2.rn = ir
            set ir = dat2
        endmethod

        private static method looping takes nothing returns nothing
            local real z1
            local real z2
            set dat = first
            loop
                set z1 = 0
                set z2 = 0
                set dat.time = dat.time - 1
                if dat.da != 0 then
                    set dat.alpha = dat.alpha - dat.da
                    call SetLightningColor(dat.l, 1, 1, 1, dat.alpha)
                endif
                if dat.source == null then
                    if dat.dx1 != 0 then
                        set dat.x1 = dat.x1 + dat.dx1
                    endif
                    if dat.dy1 != 0 then
                        set dat.y1 = dat.y1 + dat.dy1
                    endif
                    if dat.dz1 != 0 then
                        set dat.z1 = dat.z1 + dat.dz1
                    endif
                else
                    set dat.x1 = GetUnitX(dat.source)
                    set dat.y1 = GetUnitY(dat.source)
                    set z1 = GetUnitFlyHeight(dat.source)
                endif
                if dat.target == null then
                    if dat.dx2 != 0 then
                        set dat.x2 = dat.x2 + dat.dx2
                    endif
                    if dat.dy2 != 0 then
                        set dat.y2 = dat.y2 + dat.dy2
                    endif
                    if dat.dz2 != 0 then
                        set dat.z2 = dat.z2 + dat.dz2
                    endif
                else
                    set dat.x2 = GetUnitX(dat.target)
                    set dat.y2 = GetUnitY(dat.target)
                    set z2 = GetUnitFlyHeight(dat.target)
                endif
                if dat.moves then
                    call MoveLocation(loc, dat.x1, dat.y1)
                    set z1 = GetLocationZ(loc) + dat.z1 + z1
                    call MoveLocation(loc, dat.x2, dat.y2)
                    set z2 = GetLocationZ(loc) + dat.z2 + z2
                    call MoveLightningEx(dat.l, true, dat.x1, dat.y1, z1, dat.x2, dat.y2, z2)
                endif
                if dat.time == 0 then
                    set dat2 = dat
                    set dat = dat.next
                    call destroyL()
                else
                    set dat = dat.next
                endif
                exitwhen dat == 0
            endloop
        endmethod

        private static method InitAdd takes nothing returns nothing
            /* Add node to list, make this the last on list */
            if rlast != 0 then
                set dat2 = rlast
                set dat2.next = dat
            endif
            /* Link this with previous node */
            set dat.prev = rlast
            /* Make this the last created node */
            set rlast = dat
            set CT = CT + 1
            if CT == 1 then
                /* Make this the first node */
                set first = dat
                call TimerStart(TMR, TO, true, function thistype.looping)
            endif
        endmethod

        private static method Recycle takes nothing returns nothing
            if 0 == ir then
                set ic = ic + 1
                set dat = ic
            else
                set dat = ir
                set ir = dat.rn
            endif
        endmethod

        method remove takes boolean flag returns nothing
            if not flag then
                set .time = 1
            else
                set .time = R2I(1/TO)
                set .da = (alpha-endAlpha)*TO/1
            endif
        endmethod

        static method P2U takes lightning l, unit t, real time, real x1, real y1, real z1, real z2, real startAlpha, real endAlpha returns thistype
            local thistype this
            call Recycle()
            set this = dat
            set .x1 = x1
            set .y1 = y1
            set .z1 = z1
            set .z2 = z2
            set .source = null
            set .target = t
            set .next = 0 // Nodes are added to the end of the list, there is no next node
            set .l = l
            set .time = R2I(time/TO) // Calculates how many loops the lightning lasts
            set .alpha = startAlpha
            set .endAlpha = endAlpha
            set .da = (startAlpha-endAlpha)*TO/time // Transparency change speed
            set .moves = true
            call InitAdd()
            return this
        endmethod

        static method U2P takes lightning l, unit s, real t, real x1, real y1, real x2, real y2, real z1, real z2, real startAlpha, real endAlpha returns thistype
            local thistype this
            call Recycle()
            set this = dat
            set .x1 = x1
            set .y1 = y1
            set .x2 = x2
            set .y2 = y2
            set .z1 = z1
            set .z2 = z2
            set .source = s
            set .target = null
            set .next = 0
            set .l = l
            set .time = R2I(t/TO)
            set .alpha = startAlpha
            set .endAlpha = endAlpha
            set .da = (startAlpha-endAlpha)*TO/t
            set .moves = true
            call InitAdd()
            return this
        endmethod

        static method U2U takes lightning l, unit s, unit t, real time, real z1, real z2, real startAlpha, real endAlpha returns thistype
            local thistype this
            call Recycle()
            set this = dat
            set .z1 = z1
            set .z2 = z2
            set .source = s
            set .target = t
            set .next = 0
            set .l = l
            set .time = R2I(time/TO)
            set .alpha = startAlpha
            set .endAlpha = endAlpha
            set .da = (startAlpha-endAlpha)*TO/time
            set .moves = true
            call InitAdd()
            return this
        endmethod

        static method P2P takes lightning l, real t, real startAlpha, real endAlpha returns thistype
            local thistype this
            call Recycle()
            set this = dat
            set .source = null
            set .target = null
            set .next = 0
            set .l = l
            set .time = R2I(t/TO)
            set .alpha = startAlpha
            set .endAlpha = endAlpha
            set .da = (startAlpha-endAlpha)*TO/t
            set .moves = false
            call InitAdd()
            return this
        endmethod

        static method P2UEx takes lightning l, unit a, real t, real zu, real x1, real y1, real z1, real x2, real y2, real z2, real startAlpha, real endAlpha returns thistype
            local thistype this
            local real n = TO/t
            call Recycle()
            set this = dat
            set .x1 = x1
            set dx1 = (x2-x1)*n
            set .y1 = y1
            set dy1 = (y2-y1)*n
            set .z1 = z1
            set dz1 = (z2-z1)*n
            set .z2 = zu
            set .source = null
            set .target = a
            set .next = 0
            set .l = l
            set .time = R2I(t/TO)
            set .alpha = startAlpha
            set .endAlpha = endAlpha
            set .da = (startAlpha-endAlpha)*n
            set .moves = true
            call InitAdd()
            return this
        endmethod

        static method U2PEx takes lightning l, unit a, real t, real zu, real x1, real y1, real z1, real x2, real y2, real z2, real startAlpha, real endAlpha returns thistype
            local thistype this
            local real n = TO/t
            call Recycle()
            set this = dat
            set .x2 = x1
            set .dx2 = (x2-x1)*n
            set .y2 = y1
            set .dy2 = (y2-y1)*n
            set .z2 = z1
            set .dz2 = (z2-z1)*n
            set .z1 = zu
            set .source = a
            set .target = null
            set .next = 0
            set .l = l
            set .time = R2I(t/TO)
            set .alpha = startAlpha
            set .endAlpha = endAlpha
            set .da = (startAlpha-endAlpha)*n
            set .moves = true
            call thistype.InitAdd()
            return this
        endmethod

        static method P2PEx takes lightning l, real t, real x1, real y1, real z1, real x2, real y2, real z2, real x3, real y3, real z3, real x4, real y4, real z4, real startAlpha, real endAlpha returns thistype
            local thistype this
            local real n = TO/t
            call Recycle()
            set this = dat
            set .x1 = x1
            set .x2 = x3
            set .y1 = y1
            set .y2 = y3
            set .z1 = z1
            set .z2 = z3
            set .dx1 = (x2-x1)*n
            set .dy1 = (y2-y1)*n
            set .dz1 = (z2-z1)*n
            set .dx2 = (x4-x3)*n
            set .dy2 = (y4-y3)*n
            set .dz2 = (z4-z3)*n
            set .source = null
            set .target = null
            set .next = 0
            set .l = l
            set .time = R2I(t/TO)
            set .alpha = startAlpha
            set .endAlpha = endAlpha
            set .da = (startAlpha-endAlpha)*n
            set .moves = true
            call InitAdd()
            return this
        endmethod
    endstruct
endlibrary
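With this version the constructors return the instance, so a caller can end a lightning early; a sketch using hypothetical variables:

```
local TimedL tl = TimedL.U2U(l, udg_Caster, udg_Target, 10.0, 60.0, 60.0, 1.0, 1.0)
// ... later, before the 10 seconds are up:
call tl.remove(true) // true = fade out over a second instead of vanishing instantly
```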
8 years later you followed through, lmao. Props. | {"url":"https://www.hiveworkshop.com/threads/system-timed-lightnings-v1-0-1-1.205105/","timestamp":"2024-11-09T09:22:14Z","content_type":"text/html","content_length":"257175","record_id":"<urn:uuid:884adf61-6afa-4fb0-aff5-608d1600bfbe>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.75/warc/CC-MAIN-20241109085148-20241109115148-00804.warc.gz"} |
Touch Math Worksheets 2nd Grade
TouchMath's four standards-based second grade units meet every math Common Core State Standard and include a total of 600 student activity sheets, teacher guides, instructional strategies, and step-by-step lesson planning. Numerals 1 through 5 use single touchpoints or dots.
Touch Math Printable Worksheets Yahoo Image Search Results Touch Math Math Fractions Worksheets Touch Math Printables
Some of the worksheets for this concept are: Touchmath second grade, Touchmath kindergarten, Introduction to touchmath, Effectiveness of the touch math technique in teaching, Math lesson plan 9, Adding using pictures, Addition work.
Touch math worksheets 2nd grade. Second grade: each digit from 1 through 9 has touchpoints corresponding to the digit's quantity. Numerals 6 through 9 use double touchpoints symbolized by a dot inside of a circle. 20 touch math subtraction worksheets to help students practice subtracting across zeros.
Touch math addition: displaying top 8 worksheets found for this concept. Use TouchMath's 2nd grade multisensory teaching and learning products as students begin their exposure to more sophisticated math concepts. 2 sets of worksheets: one with four problems per page and one with nine problems per page.
These worksheets are supplements to the touch math program and provide practice for struggling students. Welcome to TouchMath, a leading multisensory teaching and learning math program for preschoolers, elementary, middle, and high school students, learners in special ed, and students on the autistic spectrum.
[Image gallery: printable TouchMath touchpoint worksheets for single- and double-digit addition and subtraction practice.] | {"url":"https://kidsworksheetfun.com/touch-math-worksheets-2nd-grade/","timestamp":"2024-11-14T16:57:48Z","content_type":"text/html","content_length":"133531","record_id":"<urn:uuid:dd72c579-274c-4a21-8edf-cefa02fa515b>","cc-path":"CC-MAIN-2024-46/segments/1730477393980.94/warc/CC-MAIN-20241114162350-20241114192350-00404.warc.gz"}
Topology: from the Basics to Connectedness
The study of connectedness requires topology. That’s awesome, since my favorite lessons of pure mathematics concerned this very topic, because it gives a formalized well-defined setting to talk about
intuitive concepts. Like connectedness! So let me introduce topology to present connectedness!
Topology? Sounds like another scary word…
It surely does. But don’t panic! I’ll make it simple. Topology is the study of locations. More precisely, it answers the question of the location of a point, with regards to other points of a set.
Developed relatively recently, topology provides a powerful framework and is now a cornerstone of modern mathematics. And as data are piling up, topology is providing powerful technics to analyze
them efficiently! This is what’s explained in this awesome video by the Simons foundation!
Sounds awesome… But also quite complicated!
Don’t worry, I’ll get to it slowly, starting with a great simple example!
Graph Topology
When you consider a collection of objects, it can be very messy. If it is messy, it might be a million dollar idea to structure it. Well, in the case of Facebook, it was a billion dollar idea to
structure social networks, as displayed in this extract from The Social Network, the movie about the birth of Facebook by David Fincher:
I’m not sure what a social network actually is… Is it like Facebook, Twitter and other social websites?
No. At least, that’s not what I mean by social network. If you consider a set of persons, they are not organized a priori. But they actually are structured by their relations, like friendship. Now,
by drawing a line between related elements, we obtain a figure known as a graph. The social network is actually the graph of human interconnections. Here is the social network of the characters of
The Social Network:
This friendship relation… that’s exactly what Facebook is about!
Yes! Facebook has actually created a social network where people are linked by the friend on Facebook relation. It’s a simple idea. It’s also a billion dollar idea!
On Facebook, Mark is friend with Eduardo, then Eduardo is also friend with Mark. This property is known as symmetry. It’s actually the major difference between Facebook and Twitter. On Twitter, you
can follow someone without having him following you.
OK. I now get what a social network is… But what about topology?
Topology is the study of how each person is located with regard to others. For instance, you can notice that Mark is a highly connected person in this figure, as he is related to 3 other persons. We say that his degree is equal to 3. This is also the case for Dustin. Both are at the core of Facebook. This reveals the importance of these connections.
Is that it? You didn’t need a word that complicated to talk about the number of connections of each person!
Hehe. Topology isn’t only about that! It’s actually about a bunch of measures to be made to analyze properties of sets, based on this idea of connectedness. Regarding graphs, I’ll mention a few
important concepts, but keep in mind that there are many many more.
What are these concepts?
First, let me introduce the concept of distance. The distance between two persons is the minimum number of links between them. For instance, Eduardo is at distance 1 from Mark, but he is at distance 2 from Sean. The more distance between two persons, the harder it is for them to communicate nicely. Indeed, to give a message to Sean, Eduardo would most of the time ask Mark to transfer the message. Now, in this simple example it may seem like a simple measure to make. But you have to imagine applying that to much larger social networks. As it turns out, the distance between two persons in the entire Facebook social network has been experimentally shown to hardly ever exceed 7; this means that probably fewer than 6 persons separate you from Mark!
Waw, that’s cool!
This leads me to another idea. If you're using LinkedIn, you may have noticed that the direct connections may not matter as much as the second or third connections. In graph theory, the direct connections are called the neighbourhood. The second connections can then be defined as the neighbourhood of your neighbourhood… and so on for third and n-th connections. What's amazing is how fast the size of neighbourhoods of neighbourhoods increases. Eventually, the important persons in society may not be those with a large neighbourhood, but, rather, those whose neighbourhood of neighbourhoods is large. In The Social Network, it's typically the case for the relation between Mark and the venture capitalist Peter Thiel, who is in relation with Sean.
But how about the distance between Mark and Tyler in your example?
Well, you might say that it is infinite as there is no way to link these guys. What we rather say is that they are not connected at all. More mathematically, they belong to two different connected
components. As you can see, in our example, there actually are three connected components, namely the component made of Mark, Dustin, Sean and Eduardo, the component made of Tyler, Cameron and Divya,
and the component made of Erica alone. Quite often, we can study each connected component totally separately. This gives us several graphs to compare, where each graph cannot be divided. Such graphs
are called connected.
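To make these notions concrete, here is a minimal Python sketch (I'll use Python for all code examples) computing degrees, distances and connected components on the graph above. The edges inside Tyler's component are my guess from the story, so treat that part as hypothetical.
from collections import deque
# Adjacency list of the (undirected) social network; the edges among
# Tyler, Cameron and Divya are guessed from the description.
graph = {
    "Mark": {"Eduardo", "Dustin", "Sean"},
    "Eduardo": {"Mark", "Dustin"},
    "Dustin": {"Mark", "Eduardo", "Sean"},
    "Sean": {"Mark", "Dustin"},
    "Tyler": {"Cameron", "Divya"},
    "Cameron": {"Tyler", "Divya"},
    "Divya": {"Tyler", "Cameron"},
    "Erica": set(),
}
def degree(g, node):
    return len(g[node])
def distance(g, a, b):
    """Breadth-first search: length of the shortest path, or None if disconnected."""
    seen, frontier = {a}, deque([(a, 0)])
    while frontier:
        node, d = frontier.popleft()
        if node == b:
            return d
        for nb in g[node]:
            if nb not in seen:
                seen.add(nb)
                frontier.append((nb, d + 1))
    return None  # a and b lie in different connected components
def components(g):
    """Connected components, each found by flooding outwards from a seed node."""
    remaining, comps = set(g), []
    while remaining:
        seed = next(iter(remaining))
        comp = {n for n in g if distance(g, seed, n) is not None}
        comps.append(comp)
        remaining -= comp
    return comps
print(degree(graph, "Mark"))               # 3
print(distance(graph, "Eduardo", "Sean"))  # 2
print(distance(graph, "Mark", "Tyler"))    # None: not connected
print(len(components(graph)))              # 3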
Once again, I could go further, and there’s so much to say about graphs, but I’ll leave it here. If you can, please write about graph theory!
The study of social networks is an extremely dynamic field of research. In particular, scientists are interested in all sorts of relations. This leads to different topologies, with different
properties, as explained in the following video:
You should check the entire video, it’s one of my favorite Big Think tutorials.
Also, I want to underline that all the reasonings here have been done with social networks. But there are plenty of great studies of other networks being made. To name a few: networks of cities, web pages, computers, genealogy, species, supply chains, warehouses…
Metric Topology
As you can imagine, the study of social networks gets awfully complicated when the number of people considered gets large. Weirdly enough, when the number of elements gets very high, it may become
simpler to assume that there is an infinity of them! For instance, this is what’s done in statistical physics, or, to remain with studies of human interactions, in mean field games. More
surprisingly, we naturally consider space and time as continuous, while some theoretical physicists are postulating through causal sets that spacetime is actually a directed graph. This would mean that, while spacetime is a graph made of dots, emergent descriptions of spacetime are much more understandable by considering a continuum.
Are you suggesting to consider social networks with an infinite number of persons?
Yes, I am! Obviously, we can’t work with all the structures we have defined so far. Instead, we need to keep one element of structure we used for graphs before we can talk about connectedness again.
And what’s usually kept is the concept of distance. More generally, a set is a metric space if there is a distance defined for it. And this distance defines the topology. The set then becomes a
topological space.
Distances are actually not the fundamental objects of topology, which are rather the open sets. However, it would be too abstract to do topology on spaces with no distance, so I’ll keep it simple
here and restrict ourselves to metric topologies.
So that’s it? Given of distances between any two points, we’ve got a topology?
Basically, yes. Although, this distance must satisfy a few properties:
• Non-negativity: the distance between two points should always be non-negative.
• Indifference: the distance between two different points should be strictly positive.
• Symmetry: the distance from a former point to the latter should be equal to the distance from the latter to the former.
• Triangle inequality: the distance of a detour is greater than or equal to the direct distance.
These properties are illustrated below (the distance may not correspond to “straight” lines in general metric spaces):
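As a sanity check, here is a small Python sketch verifying all four axioms for the ordinary Euclidean distance on a handful of sample points; an illustration, of course, not a proof.
import itertools, math
def euclid(p, q):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))
points = [(0.0, 0.0), (3.0, 4.0), (-1.0, 2.0), (5.0, -2.0)]
for p, q in itertools.product(points, repeat=2):
    assert euclid(p, q) >= 0                  # non-negativity
    assert (euclid(p, q) > 0) == (p != q)     # indifference
    assert euclid(p, q) == euclid(q, p)       # symmetry
for p, q, r in itertools.product(points, repeat=3):
    # triangle inequality: the detour through q is never shorter
    assert euclid(p, r) <= euclid(p, q) + euclid(q, r) + 1e-12
print("all four axioms hold on these samples")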
I still have trouble to imagine that simply giving a distance structures the set…
It really does. One of the important properties our set can now have is called boundedness. A set is unbounded if the distances between two of its points can be greater than any number. In fact, one of the
biggest questions of cosmology is about the boundedness of the Universe.
So if it is bounded, then this means that space is limited, right?
Sort of. Another way of talking about boundedness is through the concept of diameter. The diameter is the largest possible distance between two points. If the Universe is bounded, then this diameter is finite, which means that everything in the Universe is less than a diameter of the Universe away from everything else. This is the case for the visible universe. It's also typically the case for the surface of the Earth. Indeed, the distance between two points along the surface of the Earth cannot go to infinity. In fact, the diameter, which is the largest distance between two people living on Earth, is half the circumference of the planet, more or less 20,000 km. As explained by Scott, this has dramatic consequences for map making.
What’s extremely interesting to study then is the shortest path between two points, which is called the geodesics. This would lead us to differential topology, which is too complicated to be
discussed here, but is fascinating. After all, it’s the gateway to Einstein’s theory of general relativity. If you can, please write about these things!
Balls, Open and Closed Sets
Now, we’ll need to understand topology a bit more before talking about connectedness again… The key objects of metric spaces are balls, especially the small ones.
Balls? Really?
Yes! Just like the balls you're thinking about, topological balls are sets defined by a center and a radius. They correspond to all the points whose distances to the center are less than the radius. In 3-dimensional space, they're the balls you're thinking about. In the 2-dimensional plane, a ball is a disc. On a 1-dimensional line, a ball is a line segment. This is displayed in the following figure.
Do the borders of the balls belong to the balls?
Hehe… That’s a great question! It leads us to define two sorts of balls. On one hand, we have the open balls, that is, all points which are at distance strictly less than the radius. On the other
hand, we have the closed balls, which contain all points at distance less or equal to the radius.
Do these open and closed balls have something to do with open and closed sets of the title of the section?
Yes! An open set is basically what can be obtained by combining open balls.
What do you mean by “combining”?
I mean that the intersections and unions of open balls form open sets. But there's one thing that's a bit tricky: an infinite union of open balls makes an open set, but an infinite intersection of open balls doesn't necessarily make one. In other words, open sets are sets which can be obtained by finite intersections and possibly infinite unions of open balls.
Can you give an example?
Sure. Let’s consider a rectangle without its border. With open balls, we can cover it exactly. By exactly, I mean all of it and nothing more:
Well, obviously, on your picture, your balls don’t cover the rectangle…
Indeed. But that’s because I put a finite number of them. To cover the rectangle, we’d need an infinity of open balls. This would be a messy thing to draw…
There’s actually a brutal but simple way to cover the rectangle: Let’s just use all the open balls which are included in the rectangle! If you do that, then any point inside the rectangle is the
center of one of these balls. It’s therefore inside the union of all balls! This shows that the union of all open balls inside the rectangle is equal to the rectangle! Thus, the rectangle is an open
Your construction seems very general…
It is! We have been constructing the interior of the rectangle. In fact, since interiors are always open sets by construction, any set is open if and only if it is equal to its interior. Open sets
are sort of sets without their frontier!
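Here is a tiny numeric companion to this construction: for a point strictly inside a rectangle, the distance to the nearest side gives the radius of an open ball that fits inside, while on the boundary that radius drops to zero (a sketch, with made-up dimensions).
# A point strictly inside the rectangle (0,W) x (0,H) is the center of an
# open ball that stays inside it: take the distance to the nearest side.
W, H = 4.0, 2.0
def inner_radius(x, y):
    return min(x, W - x, y, H - y)  # > 0 exactly for interior points
print(inner_radius(1.0, 1.0))   # 1.0: the open ball of radius 1 fits inside
print(inner_radius(3.9, 0.05))  # 0.05: near the boundary the balls get tiny
print(inner_radius(4.0, 1.0))   # 0.0: a boundary point has no such ball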
OK… What about closed sets?
It’s almost the opposite for closed sets… You obtain them by possibly infinite intersections and finite unions of closed balls.
And is there something similar to interiors but for closed sets?
Yes! We can construct the closure of any set. The construction is a little bit trickier: it is the complement of the interior of the complement. Look back up at the rectangle. You can fill the exterior of it with balls, by the construction I have mentioned. Then take all these balls out of the picture, and what you're left with is the initial rectangle, plus its borders! That's the closure!
So the closure is basically the set plus its borders…
Yes! That’s why any set always includes its interior, and is always included in its closure. The set is then open if and only if it equals its interior. It is closed if and only if it equals its
closure. It could also equal none of those, in which case it is simply neither open nor closed.
And by the way, we usually don’t call it the borders, but rather the boundary. And we can define it properly! It corresponds to the closure minus the interior, that is, the set of points which are in
the closure but not the interior. In the case of the rectangle, the boundary corresponds to the border of the rectangle we usually describe as such. How amazing is this? We have managed to describe
intuitive concepts with abstract, powerful, well-defined mathematics!
In non-metric topology, the open sets are the key objects. They actually are what defines the topology. Closed sets are then deduced from open sets by being their complements.
But I don’t quite see how all of this is connected to connectedness…
Good one! I could actually talk about connectedness right now, but you’d be missing part of the story. I need one more tremendously important concept!
I said that topology was particularly interested in small balls. In the general setting, topology is actually rather interested in all the sets that contain an open set containing a given element. In particular, this excludes sets for which the element is on the edge of the set. The set of all sets containing an open set which contains the element is called the neighbourhood of the element. This concept is essential to continuity.
Really? I thought continuity only concerned real functions…
Not at all! Continuity is actually the most important idea of topology. Consider a function and a point of the input set. The intuitive idea of the continuity of the function at the point is that
small deviations of the input imply not too big deviations of the output. More precisely, no matter how much we zoom in around the output, we can zoom in so much around the input that all images of
the zoomed-in input area are included in the zoomed-in output area.
This concept is so important to mathematics and so hard to understand that I'm going to spend more time on it and use formal notations to help you understand. Denote by f the function and by x the input.
Now, take any set V of the neighbourhood of the output f(x) (that’s the zoomed-in output area!). Then, there exists a set U of the neighbourhood of the input x (the zoomed-in input area), such that
images f(u) for inputs u∈U of the zoomed-in input area all belong to the zoomed-in output area V. This is displayed in the figure below:
If you understand this, then you’ve made a giant leap in mathematics!
In metric spaces, this definition of continuity is equivalent to working with open balls instead of sets of neighbourhoods, which is usually simpler for proofs.
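In metric-space symbols (with $d$ denoting the distance on either space), this reads: $f$ is continuous at $x$ if for every $\varepsilon > 0$ there exists $\delta > 0$ such that $d(u, x) < \delta$ implies $d(f(u), f(x)) < \varepsilon$.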
And I guess a function would be continuous if it is continuous for all inputs…
Exactly! But amusingly (at least for me!), there is another characterization of continuous functions: A function is continuous if and only if the inverse images of open sets are open sets.
What??? What’s the inverse image?
The inverse image of an output subset is the set of all inputs whose images are in the output subset. Let me rephrase: an input is in the inverse image if and only if its image is in the output subset. This is displayed in the following figure, where images of the inverse image belong to the output subset, while images of other inputs do not:
And you’re saying that a function is continuous if and only if the inverse images of open sets are open sets?
Yes! That’s surprising, right? Now, the proof is very technical, but since the English Wikipedia doesn’t provide it, I’m giving it…
By definition, a function is continuous if and only if it is continuous for every input. This means that for every input, and for any open set around its image, there is an open set around the input
such that the image of the input open set is included in the output open set. This is equivalent to saying that the input open set is included in the inverse image of the output open set. Now, the
trick is that the output open set is valid in the definition of continuity for all inputs of the inverse image. So all inputs of the inverse image must have an input open set that matches the
definition of continuity. Now, these input open sets are all included in the inverse image, while their unions contain all elements of the inverse image. Thus, the inverse image is equal to the union
of the input open sets, and is therefore an open set too. Overall, a function is continuous if and only if the inverse images of open sets are open sets.
If you haven’t understood, don’t worry, it’s a very abstract and difficult proof. But you should try to understand it. If you can, then there’s not much stopping you from becoming a mathematician!
What’s important to remark is that, through the inverse images, continuous functions sort of preserve topology! This means that if there is a continuous function between two sets, then the topology
of the outputs has several similarities with the topology of the inputs. But to go further in these comparisons, we’d need functions with even more properties, called homeomorphisms. Shortly put,
they are bijective continuous functions with continuous inverse functions. And the reason why they are essential is that there exists a homeomorphism between two sets if and only if the two sets have
the same topology. Homeomorphisms therefore enable us to classify sets with regard to their topologies.
Unfortunately, I won’t have time to dwell on this amazing concept. If you can, please write about them!
We’re good to talk about connectedness in infinite topological space, finally! But we’re not totally out of all troubles… since there are actually several sorts of connectedness!
Are you kidding me?
No. But don’t see it as a trouble. It actually multiplies the fun! Let’s start with the simplest one.
What is it?
Path connectedness. A topological space is path connected if there is a path between any two of its points, as in the following figure:
But what’s a path in a topological space?
Hehe… That’s a great question. A path is a continuous function that to each real numbers between 0 and 1 associates an element of the topological space. It is indeed a path between two points if the
image of 0 is the first point, and the image of 1 is the second point. Simple, right?
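In symbols: a path from $a$ to $b$ is a continuous map $\gamma : [0,1] \to X$ with $\gamma(0) = a$ and $\gamma(1) = b$.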
OK… What about other definitions?
I’ll just present one more definition. It’s the most important one, and is simply called connectedness. The idea is based on the comparison of closures and interiors as we have made earlier. In fact,
we should expect any subset to have a frontier. But it’s not the case, as the entire space never has a frontier in the topological sense. That’s because it is both open and closed (this is required
by the definition of topology!).
Is it the only one to be both open and closed?
Sometimes yes, sometimes no. But an interesting remark is that if a subset is both open and closed, then its complement is both closed and open, as complements of open sets are closed. This means
that we can divide our entire space into the subset and its complement and study each part separately. In other words, the two parts are not connected. That's the actual fundamental definition of connectedness.
But there’s an interesting equivalent characterization of connected sets. Indeed, a set is connected if and only if all continuous functions defined on this set with values equal to 0 or 1 are
constant. Now, if a set is not connected, similarly to what we’ve done with graphs, we can define connected components as the largest connected subsets.
Is connectedness equivalent to path connectedness?
As it turns out, no. More precisely, any path connected space is connected. But there are connected spaces which are not path connected. An example of such a space is the closure of the graph of sin(1/x). But I'll leave it here and let you study this problem by yourself.
So which one matters the most? Connectedness or path connectedness?
What matters the most between a knife and a fork? It depends on what you want to do. Path connectedness is basically a stronger concept than connectedness, and it plays a key role in the Poincaré conjecture.
Let’s Conclude
I hope you’ve enjoyed this introduction to topology. It’s a topic I love but I know it’s also very technical and hard to follow. But for you to enjoy it, you have to go even further! There are so
many more major concepts which I haven't mentioned in this article, such as completeness, compactness, convexity and simple connectedness. If you're interested in knowing more, please encourage me to write about them, via email, Facebook, LinkedIn, Twitter… I'm not sure how much such topics can interest you!
Applied to science, I’d say that the topology of research needs to be improved. By this, I mean that more connectivity is required to improve performance. And an important path to more
interdisciplinary is science popularization. This is why I strongly invite you all to join me in the quest of making top science simple and cool. If you haven’t, you should check the guest post I’ve
written on White Group Maths on the importance of popularization.
I love topology and it was a very nice surprise when I had to use it in a proof in my research. I'm still writing the paper corresponding to this proof, and, as soon as it gets published, I'll tell
you more about it! In the meantime, you should check this awesome video about topology by MyWhyU. | {"url":"https://www.science4all.org/article/topology/","timestamp":"2024-11-06T15:49:40Z","content_type":"text/html","content_length":"83901","record_id":"<urn:uuid:94d35f78-fa2e-408a-a143-ff9efc95d19a>","cc-path":"CC-MAIN-2024-46/segments/1730477027932.70/warc/CC-MAIN-20241106132104-20241106162104-00248.warc.gz"}
What type of phase diagram should I draw if more than one pure component is present? | Socratic
1 Answer
If there are only 2 components, draw the phase diagram of binary solutions. A 2D plot requires some variables to be held constant.
Recall the Phase Rule:
$F = C - P + 2$
$F$: the number of degrees of freedom (the number of independent intensive variables needed to specify a state)
$C$: the number of components ($C = 2$ for binary systems)
$P$: the number of phases
If the binary system ($C = 2$) is in a single phase ($P = 1$), then $F = 3$: three variables (pressure, temperature, and the mole fraction of either component) are needed to describe the system.
In order to plot a 2D phase diagram, either $p$ or $T$ needs to be fixed, so only the other two variables are shown by the vertical and the horizontal axes.
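As a quick worked check of what the phase rule says inside a two-phase region: with $C = 2$ and $P = 2$, $F = 2 - 2 + 2 = 2$. So once $T$ is fixed to draw a pressure–composition diagram, only one degree of freedom remains in the two-phase region: specifying the composition of one phase fixes the pressure, which is why the coexisting liquid and vapor states trace out curves (the bubble-point and dew-point lines) rather than filling areas of the diagram.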
Two types of phase diagrams are commonly used: the pressure–composition phase diagrams and the temperature–composition phase diagrams.
[Figure: pressure-composition phase diagram of an ideal solution with 2 components, A and B.] | {"url":"https://socratic.org/questions/what-type-of-phase-diagram-should-i-draw-if-more-than-one-pure-component-present","timestamp":"2024-11-07T00:33:14Z","content_type":"text/html","content_length":"35408","record_id":"<urn:uuid:73cb84e9-fb2e-4201-96df-34f8a698ceb7>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.54/warc/CC-MAIN-20241106230027-20241107020027-00253.warc.gz"}
Tech Tips
The following Tech Tips were previously published in the FEMtools News newsletter:
For technical papers, click here.
Extracting Mass Properties from FRFs
Accurate values of the inertia properties are often unknown because an accurate CAD or finite element (FE) model of the full operational structure is seldom available due to the complex nature of the
assembly components. This may be due to geometrical details or any parts like cabling that are missing in the model.
FEMtools RBPE uses the Inertia Restrain Method (IRM) to extract the rigid body response of the structure from FRF measurements, using the horizontal mass lines located between the low-frequency
suspension modes and the elastic modes of the structure.
Because all ten inertia properties of a complex structure cannot be easily determined accurately using a model, several test-based methods have been developed. A weighing test may be possible to determine the mass of a structure, but the center of gravity and the moments of inertia are generally much more challenging to determine using a test. Direct measurements to find the center of gravity require specialized equipment like centrifuges or pendulum methods (using multiple suspensions). Besides requiring expensive equipment, these methods may be time-consuming or impractical for complex structures.
Indirect methods use structural responses from which the inertia properties can be extracted. Among the developed methods, the Inertia Restrain Method (IRM) is known to be practical and relatively
insensitive to measurement noise. The IRM uses the Frequency Response Function (FRF) mass line values to compute the inertia properties, as presented by many researchers. From these rigid-body
responses, the structure's mass properties are directly obtained by solving a set of rigid-body equations of motion. The advantage of this method is that the procedure is non-iterative and inertia
properties are obtained through a direct solution of two linear problems.
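The idea behind the mass lines can be illustrated with a deliberately oversimplified, single-direction Python sketch (the actual IRM solves the full ten-parameter rigid-body problem from many FRFs, and all numbers below are invented): for a freely suspended rigid mass driven through its center of gravity, the accelerance between the suspension modes and the first elastic mode is flat at the level 1/m, so the mass can be read off the mass line.
import numpy as np
# Toy driving-point accelerance of a freely suspended rigid body (m = 100 kg,
# suspension mode at 2 Hz) with one elastic mode at 80 Hz; numbers made up.
m, zeta = 100.0, 0.02
f = np.linspace(0.5, 120.0, 4000)
w = 2.0 * np.pi * f
def receptance(f_r, m_r):                 # one viscously damped mode
    w_r = 2.0 * np.pi * f_r
    return 1.0 / (m_r * (w_r**2 - w**2 + 2j * zeta * w_r * w))
accelerance = -w**2 * (receptance(2.0, m) + receptance(80.0, 200.0))
# Mass line: a band above the suspension mode and below the first elastic
# mode, where the accelerance magnitude is nearly flat at the level 1/m.
band = (f > 10.0) & (f < 20.0)
m_est = 1.0 / np.mean(np.abs(accelerance[band]))
print(f"estimated mass: {m_est:.1f} kg")  # roughly 100 kg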
Other than the FRF, the only additional input data required by the RBPE are the position coordinates and directions of excitation and response DOF. An optional wireframe mesh may also be provided for
visualization purposes.
Accurate estimation of the inertia properties of a structure, and from there the rigid body modes, is necessary as input for simulation models that are used for multibody dynamics and modal-based
substructuring. For example, the design of heavy engine supports that avoid large displacements at low frequencies requires knowledge of the inertia properties and rigid body modes. The prediction
of the response of an assembly using modal-based substructuring requires a modal base that includes elastic modes as well as the rigid body modes.
Another application is found in the validation and updating of finite element models. The experimentally obtained mass, COG and MMI values can be used as reference responses for comparison with the
values obtained from simulation models, and used as targets for updating the mass parameters of the model. This updating process can be done using only the inertia properties but they can also be
combined with other targets like static displacement or resonant frequencies. See FEMtools Model Updating for more information on this application.
M. Afzal, K. Saine, C. Paro, E. Dascotte, Experimental Evaluation of the Inertia Properties of Large Diesel Engines. Presented at the 38th International Modal Analysis Conference (IMAC), February
2020, Houston, Texas, USA.
Download (PDF, 1.0 MB)
Orders-based Correlation and Model Updating
Validation and updating of models for dynamic simulation can be done using different types of validation testing to obtain reference fingerprint data. Classic modal testing and operational modal
analysis are popular techniques but may not be suitable for rotating machinery.
A new innovative 2-step methodology was presented by Dynamic Design Solution, in collaboration with an industry partner, to update finite element models of rotating machinery that are dynamically
excited by their moving parts.
In a first step, the modal parameters of the structure are estimated from a simulation model, and updated using test order responses as references. These are obtained from order tracking after run-up
/down vibration testing or at a number of constant rotation speeds. The simulated order responses are computed from the initial modal parameters, obtained from the finite element model, and the loads
acting on the machinery, obtained for each order, from postprocessing specific engine performance prediction software. The identification of the true modal parameters involves correlation analysis
between simulated and test order responses, and an optimization process that uses the modal parameters as updating parameter to minimize the difference between the simulated and test order responses.
This methodology provides an alternative to direct modal extraction from order response functions.
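As a rough sketch of how simulated order responses can be obtained from modal parameters and per-order loads (a generic modal superposition illustration with invented, mass-normalized modal data, not the partner's actual engine model):
import numpy as np
# Minimal sketch: synthesize the response of one engine order by modal
# superposition. Modal data and order load are invented for illustration.
wn   = 2*np.pi*np.array([30.0, 75.0, 140.0])   # natural frequencies [rad/s]
zeta = np.array([0.02, 0.03, 0.02])            # modal damping ratios
phi  = np.array([[ 1.0,  0.8, 0.3],            # mode shapes at 2 DOFs
                 [ 0.5, -0.9, 1.0]])           # (rows: DOF, cols: mode)
F    = np.array([10.0, 0.0])                   # order load at the 2 DOFs [N]
order, rpm = 2.0, np.linspace(300.0, 6000.0, 200)
w = order * rpm * 2*np.pi / 60.0               # excitation freq of this order
# x(w) = sum_r phi_r (phi_r . F) / (wn_r^2 - w^2 + 2 i zeta_r wn_r w)
resp = sum((np.outer(phi[:, r], phi[:, r]) @ F)[:, None]
           / (wn[r]**2 - w**2 + 2j*zeta[r]*wn[r]*w)
           for r in range(len(wn)))
print(np.abs(resp).shape)                      # (2 DOFs, 200 speeds)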
The second step consists of updating the physical parameters of the simulation model using the updated modal base, obtained in the first step, as the target. This is a standard modal-based updating
task that can be performed by FEMtools Model Updating.
More information can be found in following presentation:
E. Dascotte, Orders-based Validating and Updating of Rotating Machinery FE Models, Presentation at 41st International Modal Analysis Conference (IMAC), February 2023, Austin, Texas, USA.
Model Upgrading versus Model Updating
Before starting a finite element model updating task, there are two questions that must be answered:
• Is the model verified? This check is needed to ensure that no errors have been introduced by mistakes (for example units, material selection) and that the model complies with the specifications imposed by the solver. It also checks if the intentions of the model are met by the selected elements and mesh density.
• Is the model fit for purpose in terms of geometrical fidelity and refinement? Because every model is based on assumptions and simplification to some extent, it is necessary to understand the
purpose of the simulation and the requirements that model has to satisfy in order to have a reasonable chance to produce relevant results with target accuracy.
Only when these questions are answered positively should model updating be attempted. This is the process of determining the degree to which a model is an accurate representation of the real world, again from the perspective of the intended use of the model. Probably the most important and also most challenging task in this process is to select the updating parameters. Obviously, only parameters that are present in the model can be selected. Therefore, if the model is missing elements that are important for an accurate simulation, then there are no parameters associated with those elements. It may be possible to compensate for missing refinement by updating other parameters, but this will reduce the usefulness of the simulation results because of the loss of physical reality.
The term model upgrading was introduced in literature to identify the phase during which the model must be assessed for its capability to predict the behavior for the intended use. Maybe features are missing due to oversimplification of the geometry, leading to incorrect estimation of the physics (like how joints are simulated).
For example, consider the following scenario and cases: a simulation model must correctly simulate mode shapes between 100 Hz and 1000 Hz. Tests reveal the existence of 10 modes.
Case 1: If there are zero modes predicted in the target range, then there is probably something seriously wrong with the model. Turn to verification and check, for example, the units used and their consistency.
Case 2: Only the 5 lowest modes are predicted in the target range. The model is probably not suitable for simulating the higher modes and global mesh refinement is needed.
Case 3: Modes 1, 2, 4, 6, 8 and 9 are predicted. Probably some types of modes are missing, like torsional modes which are constrained by the model. Model upgrading may be needed and local refinement introduced in the model. Model upgrading involves changing the meshing, whereas model updating will leave the mesh fixed and focus solely on the physical properties assigned to elements.
An in-between situation is geometry updating, which uses techniques from shape optimization to adjust the modelled geometry to the true measured geometry. See this Tech Tip for more information on this topic.
Using Digital Twins for Structural Health Monitoring
The concept of Digital Twins has been presented by many companies recently. However, it can mean different things to different people. The definition of interest for Structural Health Monitoring (SHM) is that a Digital Twin is a simulation model that represents the true physical product to a sufficient extent, considering its purpose, during the entire service
life of the product. In this context, test data is used to validate and update a simulation model so that structural responses can be predicted with confidence, even at locations that will not be
instrumented. The simulation makes use of real-world loading and supports, also obtained from testing.
Depending on the monitored structure, a simulation model can be created with mature traditional codes (like Ansys, Abaqus, Nastran,...) that have the advantage of being well known, robust, and used
by many companies worldwide. Data interfaces and driver add-ons are available in software like FEMtools.
Alternatively, specialist simulation codes may be available that are better suited for specific purposes, for example for modelling and simulation of oil&gas offshore platforms. The advantage of such
specialist codes is that they are well adapted to the simulation and decision-making tasks at hand (e.g. material models, joints, nonlinearity, wave and wind load models, fatigue analysis according
to industry standards, ...). For this purpose, the scripting capability in FEMtools facilitates the development of custom interfaces and drivers.
The Digital Twin is created by reducing uncertainty on the input parameters (geometry, materials, joints, loading,...) based on validation test data (see FEMtools Model Updating). Such testing may be
based on sensor data like accelerometers for operational modal analysis, strain gauges at selected locations, displacement transducers (e.g. GPS receivers) complemented by measurements of
environmental conditions (e.g. wind, temperature, wave height).
In such a Digital Twin, the stress at any location of the platform can be obtained, even if there is no strain gauge in that location, and this at any moment in time (if the deformation measurement data is stored 24/7). This technology was demonstrated by DDS on projects like the Hong Kong Stonecutters Bridge and oil platforms in the North Sea [1-2]. There are other researchers who have demonstrated
and confirmed the validity of similar techniques to estimate the peak stresses at locations where there is no possibility for direct strain measurements, like for example at a wind turbine tower base
and foundation. Once peak stresses are known, and loading (past and future) can be estimated, then fatigue analysis is used to estimate residual life expectancy.
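As a sketch of that last step, residual life can be estimated from peak stresses with Miner's linear damage rule and an S-N curve. The curve constants and stress-range histogram below are invented; real assessments follow industry-standard curves and cycle counting.
# Palmgren-Miner damage sketch with a generic S-N curve N = C / S**m;
# the curve constants and the yearly stress-range histogram are invented.
C, m = 2.0e12, 3.0                     # S-N curve constants (S in MPa)
stress_ranges = [40.0, 60.0, 80.0]     # MPa, e.g. from the digital twin's stresses
cycles_per_year = [2.0e6, 4.0e5, 5.0e4]
damage_per_year = sum(n / (C / S**m)
                      for S, n in zip(stress_ranges, cycles_per_year))
print(f"residual life: {1.0 / damage_per_year:.1f} years")  # failure at D = 1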
Less uncertainty on stress evaluations means that lower safety factors can be used to demonstrate that the residual lifetime is sufficient. For example if an uncertain (non-validated) simulation
model is used and a safety factor of 5 is needed, then 100 years lifetime must be demonstrated with the model if the requested remaining lifetime is 20 years. With a more reliable Digital Twin
(validated and updated model), the safety factor may be reduced to 3 and only 60 years of residual lifetime must be demonstrated. This may result in fewer critical spots to inspect or reinforce, thus saving time and money. This way, aging structures that may otherwise have to be decommissioned can be kept in operation for a longer time while respecting the safety standards.
Bottom line: the combination of a validated simulation model and load data from testing, results in a complete Digital Twin that can be used to compute peak stresses at joints, welds or other
critical locations even if these are not directly instrumented. From there, it is possible to estimate residual life, decide on the need for reinforcements and better plan on-site inspections.
FEMtools is the perfect tool for developing Digital Twins, thanks to its interfaces with FE and test data, its unique blend of database and analysis tools, framework architecture and scripting capabilities.
[1] E. Dascotte, J. Strobbe, U. Tygesen, Continuous Stress Monitoring of Large Structures. Presented at the 5th International Operational Modal Analysis Conference (IOMAC), May 2013, Guimaraes, Portugal.
Download (PDF, 1.2 MB)
[2] E. Dascotte, Vibration Monitoring of the Hong Kong Stonecutters Bridge. Presented at the 4th International Conference on Experimental Vibration Analysis for Civil Engineering Structures (EVACES
2011), October 3-5, Varenna, Italy.
Download (PDF, 1.0 MB)
See also the webinar recording of "The Combined Use of Simulation and Test Data: From Test Planning to Response Estimation" at https://www.femtools.com/news/index.htm#20180501
Digital Twin model of Hong Kong Stonecutters Bridge and an aerial photograph shown side-by-side. The Digital Twin shows realtime deformation of the bridge and stress build-up using input from GPS
receivers at tower tops and on the bridge deck.
Exploratory Model Updating
A major challenge in FE model updating is deciding which updating parameters to use. In those cases where the updating parameters are well identified, for example because they have not been validated before or simply because there is much uncertainty about their value, model updating is mainly a matter of running the automated procedure to iteratively update the selected parameter
values. Depending on the sensitivity values, the choice of starting values, and number of selected parameters, there can still be difficulties to obtain satisfactory correlation with test results.
However, many diagnostics and settings are available in FEMtools Model Updating to overcome these difficulties.
Many times updating parameters cannot be well identified, and then exploratory model updating can help. This involves selecting parameters at the element level ("local parameters") and updating them
as independent parameters. With many local parameters selected, updating results are presented as a color-coded plot that highlights the areas of the model that need large modifications. This may
provide insight about the type of modification that is needed in the model.
Parameters like elastic modulus, mass density or plate thickness are very useful parameters for this purpose. In the case of exploratory model updating they must be interpreted as indicators for mass
and stiffness changes that may relate to physical element properties (like material properties or joint stiffness) or can be used to pinpoint issues with mesh density or lack of detail in the model.
In this latter case, upgrading the model or refining the mesh may be a necessary first step before returning to updating of physical element properties.
Even though exploratory model updating does not entirely solve the issue of parameter selection, it is a very powerful tool to assist in decision making. It helps to filter out parameters that do not
matter, and focus on areas and parameters that are most likely to be in need of updating. Sensitivity analysis can also be used for this purpose but requires that the sensitivity pattern for each
response is examined and compared. This can be impractical in case there are many parameters and responses to examine. Model updating will condense the results into a single plot per parameter type.
For example, the figure below shows the local updating of elastic modulus in an engine block, needed to match the first 10 mode shapes with test results. This may be due to a local stiffness issue
or could be related to the choice of mesh density in this area. Further investigation is required to decide about how to interpret this result and what to do next.
Exploratory model updating is a unique capability of FEMtools Model Updating. It includes the tools to automatically generate and handle local updating parameters, and the possibility to efficiently
compute a potentially very large number of sensitivity coefficients. The parameter estimator that is used for iterative model updating is a special-purpose gradient-based optimizer that is able to
handle the big sensitivity matrix and large number of parameters. There is no longer any practical reason not to consider exploratory model updating for preliminary investigation and learning about the model at hand.
Exploratory model updating of an engine block model to match 10 mode shapes. The areas colored in red require further investigation.
Full Field Modal Analysis using FEMtools MPE
Researchers at Technical University Vienna (Austria) and University of Bologna (Italy) have used FEMtools MPE to process massive amounts of vibration data collected with modern full field measurement techniques.
In the framework of the EU-sponsored TEFFMA project (Towards Experimental Full Field Modal Analysis, FP7-PEOPLE-2011-IEF-298543), a comparison is made by means of challenging optical technologies
(SLDV, Hi-Speed DIC, Dynamic ESPI) on the same broad band vibration measurement problem of a lightweight plate, with different spatial resolution and quality of the measured patterns. For each
measurement, an experimental modal model is extracted and finite element model updating results are compared between different full field technologies. The results expected from this research will
strengthen the path of full field technologies in mechanical engineering and other fields, ranging from aerospace to vehicle technologies, electronic components, and advanced material behaviour
The following figure shows an example of such high-resolution mode shape. Full field measurements were taken at 49042 points resulting in FRFs at 1285 frequency lines and occupying several GB of disk
space per experiment. All FRFs of an experiment could be imported in FEMtools and efficiently processed by the Modal Parameter Extractor (MPE) add-on for extraction of mode shapes.
Plate mode shape at around 246 Hz obtained using FEMtools MPE, from FRFs measured by ESPI using a 226x217 measurement grid.
For more information, see the following resources:
• A. Zanarini, Full Field Optical Measurements in Experimental Modal Analysis and Model Updating, Journal of Sound and Vibration 442 (2019) 817-842.
• A. Zanarini, Broad Frequency Band Full Field Measurements for Advanced Applications: Point-Wise Comparisons between Optical Technologies, Mechanical Systems and Signal Processing 98 (2018)
• TEFFMA project webpage (http://cordis.europa.eu/project/rcn/103580_it.html)
• The researcher's web page (http://www.diem.ing.unibo.it/personale/zanarini/Zanarini_ricerca_EN.htm)
See also:
FEMtools Modal Parameter Extractor (MPE)
Scenario-Based Damage Identification
Damage has a direct impact on the modal parameters of structures. However, finding the location and severity of the damage from the modal parameters is a challenging task. This is because damage
identification problems are in general highly underdetermined, i.e. the number of potential damage locations is much higher than the size of the experimental data set.
In a paper written by DDS engineers [1], the concept of a damage scenario-based framework was presented that tries to overcome this problem by both increasing the size of the experimental data set
and reducing the number of investigated damage locations. Using a carefully validated FE model of the undamaged structure, the effects of a number of damage scenarios are simulated. Eventually, the
identification routine detects the fingerprints of the damage scenarios in the frequency pattern of the damaged structure.
The scenario-based damage identification framework has been evaluated and showed promising results. It appears to be possible to decompose the measured frequency pattern into the signatures of a
series of pre-defined damage scenarios. The scenario-based approach seems to be capable of not only identifying the location of the damage but also the degree of damage. It is a realistic assumption
that only a limited number of damage scenarios with high probability can be expected for most mechanical and civil structures. This situation can be enforced by introducing weak spots in a built structure and monitoring damage on these spots only, thus introducing manufactured damage scenarios.
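The decomposition idea can be illustrated with a toy Python example: if each pre-computed scenario contributes a signature vector of frequency shifts, a measured pattern can be decomposed by non-negative least squares. This is a generic sketch of the concept, not the paper's actual algorithm, and all numbers are invented.
import numpy as np
from scipy.optimize import nnls
# Each column of S holds the frequency shifts (in %) that one simulated
# damage scenario causes on the monitored modes; values are invented.
S = np.array([[-1.2, -0.1, -0.4],
              [-0.3, -0.9, -0.2],
              [-0.1, -0.2, -1.1],
              [-0.6, -0.5, -0.3]])          # 4 modes x 3 scenarios
true_extent = np.array([0.8, 0.0, 0.5])    # scenarios 1 and 3 are active
measured = S @ true_extent + 0.01 * np.random.randn(4)
# Non-negative least squares keeps the scenario extents physical (>= 0).
extent, residual = nnls(S, measured)
print(np.round(extent, 2))                 # close to [0.8, 0.0, 0.5]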
In a follow-on paper by Hansen et al. [2], the same concept was further investigated with a focus on the practical considerations which are crucial to the applicability of a given vibration-based damage assessment configuration. The technique is demonstrated on a laboratory test case using automated OMA to simulate the practical situation of ice accretion on wind turbine blades. Ice accretion on the rotor blades of a wind turbine leads, among other things, to added loads, safety issues and diminished aerodynamic performance of the airfoil. This type of perturbation constitutes an added mass and occurs frequently in northern regions. The presented technique could be implemented directly to localize and quantify ice accretion.
[1] T. Lauwagie, E. Dascotte, A Scenario-based Damage Identification Framework. Presented at the 30th International Modal Analysis Conference (IMAC), February 2012, Jacksonville, Florida, USA.
Download (PDF, 0.85 MB)
[2] J.B Hansen, R. Brincker, M. Lopez- Aenlle, C.F. Overgaard, K. Kloborg, A New Scenario-based Approach to Damage Detection using Operational Modal Parameter Estimates. Mechanical Systems and Signal
Processing 94 (2017) 359-373.
For more information, contact support@femtools.com
FRF-Based vs. Modal-Based Model Updating
For the purpose of finite element model validation, any test data that is reliable and relevant can be used. Some people give preference to using raw, unprocessed test data. In the field of
structural dynamics, it can be debated whether it is better to use FRFs (Frequency Response Functions, defined as response spectra divided by input force spectra) or the mode shapes that can be extracted
from these FRFs.
Mode shapes are a powerful mathematical tool to represent the dynamics of a structure. Although limited to linear behavior and best used with lightly, proportionally damped structures, they allow a
condensation of the often massive amount of FRF data. Working with a condensed set of data is more practical for FE-Test correlation and helpful to gain understanding of the gap between FE simulation
and the true structural behaviour through the examination of mode shapes and resonant frequencies. For solving the FE model updating problem, which requires minimizing an objective function that
describes the distance between FE and test, it is common to use a gradient-based optimization approach. The definition of the updating problem in terms of response residues (differences between
comparable FE and test structural responses), computation of the gradients, and optimized parameter estimation all benefit from using the relatively compact set of modal data compared to the FRFs.
The benefit comes from the ease of use and data handling, and computation speed.
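In both cases the updating step typically minimizes a weighted least-squares objective. One generic form (a sketch of a common gradient-based estimator, not necessarily FEMtools' exact formulation) is $\min_{\Delta p} \| W^{1/2} (r - S \Delta p) \|^2$, where $r$ collects the FE/test response residues (resonant frequencies and mode shape components, or FRF values), $S = \partial r / \partial p$ is the sensitivity matrix, $W$ is a weighting matrix, and $\Delta p$ are the parameter changes. The compactness of modal data keeps $r$ and $S$ small, which is where the speed advantage comes from.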
The use of FRFs would be mandatory in case target modal parameters are not available or cannot be extracted from the FRFs with confidence. For example, when the structure exhibits high non-proportional
damping, high modal density, or nonlinear behaviour. Another benefit of FRFs is that amplitude levels are sensitive to damping. In the modal approach, the extracted modal damping, which is known to
be highly unreliable, can only be used as input for the FE analysis. From the FE computational point of view, modal superposition is still the industry standard tool for simulating FRFs. One can therefore state
that if analytical mode shapes are used for simulating the FRFs, then these are the raw data and should be used for validating the FE model. On the other hand, test FRFs represent the true response
of a structure under given test conditions, and therefore they reflect the true non-linear nature and real physical damping of a structure. If it is the objective of validating and updating an FE
model to incorporate those properties in the model, then one cannot do with mode shapes. Vice versa, if the FE model is intended to be a linear and simplified representation of the real world
behavior, then using FRF-based updating may result in residual discrepancies between the simulation and test FRFs that cannot be overcome by updating, due to missing refinement and necessary physical
parameters in the FE model.
In summary, when extraction of modal parameters from FRFs is not recommended, when analytical FRFs are computed using a direct method, or when the updating parameters include damping, then FRF-based
updating should be seriously considered. In all other cases, the modal approach is preferred. If only experimental FRFs are supplied, then the FEMtools MPE tool can be used for modal parameter extraction.
An example of FRF-based (left) and modal-based correlation and model updating using FEMtools Model Updating.
For more information:
E. Dascotte, J. Strobbe, Updating Finite Element Models using FRF Correlation Functions, Proceedings of the 17th International Modal Analysis Conference (IMAC), February 1999, Kissimmee, Florida.
Download (PDF, 98 KB)
FEMtools Modal Parameter Extractor (MPE)
FEMtools Model Updating
Modal Extraction for the FE Analyst
Testing and simulation are traditionally done by separate teams. Test engineers will hand test results over to FE analysts who use them for the FE model validation and updating process. Test data has
an important role in the validation process and serves as the reference, representing the true physical behaviour of the structure during the specifically designed validation test. Considering the
important role of test data in the validation process, it is mandatory to adopt the highest standards with respect to equipment, operator training, data processing and reporting. The quality of the
test result must be guaranteed in order to make subsequent use of it for decision making during model validation and updating. Double checking of the test data by an independent expert may be
required as part of a quality assurance standard.
In case validation testing is based on experimental modal analysis, the FE analysts working on model validation will usually be provided with only the modal parameters (resonant frequencies, mode
shapes, modal damping). The data must be accompanied by a detailed report on the test conditions and processing that was done. However, it is recommended to also provide the analysts with the raw
test data and the software tools to double-check modal extraction done by the test team. This provides them an opportunity to gain additional insight in the response of the structure under test and
for making informed decisions during the updating process.
Modal analysis of a satellite structure using the global MPE applet.
The FEMtools Modal Parameter Extractor (MPE) add-on tool is a high performance tool that can be used by FE analysts with only minimal training to obtain modal parameters from Frequency Response
Functions or output-only time histories. The polyreference method that is used produces very clean stabilization charts and reduces the often subjective separation between physical and mathematical
poles. Complemented by validation tools and a local curvefit method for data that is affected by mass loading, this add-on tool provides a fast and easy way to increase confidence in the modal
parameters by double-checking. In case different results are obtained, the FE analyst has good reasons to inquire with the test team and demand their confirmation of results.
See also:
Automated Operational Modal Analysis
FEMtools Modal Parameter Extractor (MPE)
Structural Health Monitoring of Piping Systems using Modal Analysis and Finite Element Model Updating
Safe operation, availability and lifetime assessment of piping are of utmost concern for plant operators. The knowledge on how failures in piping and its support construction are reflected in changes
of the dynamic behavior is a useful basis for system identification and Structural Health Monitoring (SHM).
Modal analysis of complex piping, the identification of system changes and the use of vibration dampers in piping still constitute challenges. Researchers at the MPA University of Stuttgart in
Germany used Operational Modal Analysis (OMA), finite element modelling and model updating to study changes in the natural frequencies and corresponding mode shapes due to through-wall cracks or
changing boundary conditions.
One part of their study involved the design of a new type of tuned mass damper (TMD) that was first tested in the laboratory of MPA. Using detailed FE modelling, updated by OMA, provided the
information necessary to adapt these TMDs for efficiently cancelling the resonances in a piping system of a chemical plant.
In another part of their research, the influence of local wall thinning on the eigenfrequencies and mode shapes of a laboratory piping system was evaluated. Using sensitivity analysis and FE model
updating, it was found that the rotational spring stiffnesses of the supports were important parameters for successful model updating. FEMtools was also used to sort a large number of local mode shapes
which led to the detection of a high-order mode that showed a collapse-like motion at the exact position of the local wall thinning.
Comparison of the higher order mode shape of two FE models, without (left) and with (right) local wall thinning in the elbow. Even though the change in eigenfrequency was very small, an 8% change in Modal Assurance Criterion (MAC) was observed, which was significantly larger than for all other mode shapes.
The correlation between FE analysis and modal testing for this particular mode shape was demonstrated to be most sensitive for local wall thinning at the elbow, compared to other (lower order) mode
shapes. This finding suggests that it might be possible to design a structural health monitoring device that is capable of detecting mode shape changes such as this, for example by using a laser Doppler vibrometer and automated scanning robots, which make it possible to obtain experimental modal data at a resolution close to that of the FE model. Used on a regular basis in an operational plant, such a device can be a cost-efficient tool to prevent structural failure.
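As a quick illustration of the kind of comparison involved, the following Python sketch computes the Modal Assurance Criterion between two mode shape vectors; the vectors here are synthetic stand-ins for a baseline shape and a locally perturbed one, not data from the study.

import numpy as np

def mac(phi_a, phi_e):
    # Modal Assurance Criterion between two mode shape vectors.
    # Returns a value in [0, 1]; 1 means perfectly correlated shapes.
    num = abs(np.vdot(phi_a, phi_e)) ** 2
    den = np.vdot(phi_a, phi_a).real * np.vdot(phi_e, phi_e).real
    return float(num / den)

# Hypothetical example: a baseline mode shape and a perturbed one,
# mimicking the local change caused by wall thinning.
rng = np.random.default_rng(0)
phi_baseline = rng.standard_normal(50)
phi_thinned = phi_baseline + 0.3 * rng.standard_normal(50)
print(mac(phi_baseline, phi_thinned))

An 8% drop in this value for a single mode, as reported above, is exactly the kind of localized signature a monitoring system would watch for.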
More information can be found in the following paper:
G. Hinz, K. Kerkhof, System Identification and Reduction of Vibrations of Piping in Different Conditions, Proceedings of the ASME 2013 Pressure Vessels & Piping Division (PVP2013), July 2013, Paris, France.
Multi-Model Updating
Multi-Model Updating (MMU) is the simultaneous updating of multiple finite element models that each correspond to a different structural configuration but share common updating parameters. If modal test data are available for each configuration, MMU combines the sensitivity information from every configuration and in this way increases the number of updating targets. This leads to a better-conditioned model updating problem compared to using only a single test.
For example, solar panels for satellites can be tested during different stages of deployment. A finite element model and modal test data can be obtained that correspond with each stage of deployment.
This provides a richer set of test data to serve as reference for updating element properties that are common in all configurations. Such properties can be, for example, joint stiffness or material
properties. Using only a single stage of deployment would not provide sufficient information to identify all properties. Another example is composite material identification using tests on plates and
coupons with different geometries. This will introduce more mode shape types and possibly also redundant test data for improved identification of material properties.
FEMtools Model Updating includes an MMU automation tool that collects the FE and test data for each configuration and automates the process of sensitivity analysis, sensitivity matrix assembly,
parameter updating, and FE re-analysis. This makes MMU a straightforward and easy-to-use process.
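A minimal sketch of the underlying idea (not the FEMtools implementation): stack the sensitivity matrices and residual vectors of all configurations and solve one least-squares problem for the shared parameter changes. All matrices and numbers below are made up for illustration.

import numpy as np

def mmu_parameter_update(sensitivities, residuals):
    # Stack per-configuration sensitivity matrices and updating
    # targets, then solve one least-squares problem for the
    # parameter changes shared by all configurations.
    S = np.vstack(sensitivities)
    r = np.concatenate(residuals)
    dp, *_ = np.linalg.lstsq(S, r, rcond=None)
    return dp

# Hypothetical: two deployment stages, three shared parameters.
S1 = np.array([[1.0, 0.2, 0.0],
               [0.1, 0.9, 0.3]])
S2 = np.array([[0.0, 0.4, 1.1],
               [0.5, 0.0, 0.7]])
r1 = np.array([0.05, -0.02])
r2 = np.array([0.01, 0.03])
print(mmu_parameter_update([S1, S2], [r1, r2]))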
For more information, contact support@femtools.com
Bottom-Up Versus Top-Down Approach for Model Validation and Updating
Validating and updating a finite element model in a bottom-up procedure is in general more rewarding than a top-down approach. The bottom-up approach naturally follows the validation pyramid, with coupon and component testing at the base, building up to sub-assemblies and finally to full assemblies at the top. At each level the complexity is increased and joints are added. For updating
purposes, this means that the updating parameters for each model to be validated are limited to the uncertain parameters introduced at the level under study. Components that have been validated
previously can be frozen as superelements and added to an assembly at a higher level. In a top-down approach, the selection of relevant updating parameters would be a serious challenge given the
large number of potential parameters in an assembly. It is also more difficult to conduct validation experiments if the assembly is a large structure, and in a top-down approach these would be the
only validation experiments.
The bottom-up approach to model validation and updating is supported in a natural way by the dynamic substructuring methods that are available in FEMtools. Using substructuring, the different
components that constitute an assembly are modeled, tested and updated separately. Updated components are frozen as Craig-Bampton superelements. Repeated tests at different phases of the assembly
allow focusing on the modeling of joints. Component mode synthesis is used to obtain the responses of the assembly.
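For readers unfamiliar with the reduction itself, the following numpy/scipy sketch shows the textbook Craig-Bampton transformation that such superelements are based on; it is a generic illustration with assumed dense matrices, not FEMtools code.

import numpy as np
from scipy.linalg import eigh

def craig_bampton(K, M, boundary, n_modes):
    # Partition DOFs into boundary (retained) and interior (condensed).
    n = K.shape[0]
    boundary = np.asarray(boundary)
    interior = np.setdiff1d(np.arange(n), boundary)
    Kii = K[np.ix_(interior, interior)]
    Kib = K[np.ix_(interior, boundary)]
    Mii = M[np.ix_(interior, interior)]

    psi = -np.linalg.solve(Kii, Kib)       # static constraint modes
    _, phi = eigh(Kii, Mii)                # fixed-interface normal modes
    phi = phi[:, :n_modes]                 # keep the lowest n_modes

    nb = boundary.size
    T = np.zeros((n, nb + n_modes))
    T[boundary, np.arange(nb)] = 1.0       # identity on boundary DOFs
    T[np.ix_(interior, np.arange(nb))] = psi
    T[np.ix_(interior, nb + np.arange(n_modes))] = phi

    return T.T @ K @ T, T.T @ M @ T, T     # reduced superelement matrices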
For more information, contact support@femtools.com
Automated Operational Modal Analysis
A vibration monitoring system, combined with a structural health evaluation system, enhances safety by allowing better planning of inspections and maintenance work. Such a system may include an
Operational Modal Analysis (OMA) tool to extract mode shapes from acceleration, velocity, displacement or strain time histories in situations where the dynamic loads are unknown. This is typically the case for large structures such as bridges, offshore platforms, or aircraft.
If the modal extraction process is automated then the modal parameters (resonant frequency, mode shapes and modal damping) can be monitored 24/7 over long periods of time, ideally covering the entire
operational lifetime of the structure. They can be used for applications like FE model updating and damage identification. If the deformation of the structure is also monitored, then the modal parameters, in combination with an updated finite element model, can also be used for dynamic stress recovery at all locations of the structure as an alternative to strain gauges. These stresses are in turn used for accumulated fatigue monitoring.
The FEMtools Modal Parameter Extractor (MPE) add-on tool can be used for OMA. A high performance poly-reference Least Squares Complex Frequency (pLSCF) method is used that produces very clean
stabilization charts and therefore lends itself to automated mode extraction. The MPE add-on also provides Digital Signal Processing (DSP) tools such as decimation, filtering, detrending, selecting reference channels and computing cross-power spectra. The integration within the FEMtools Framework allows for automation and combination with all the other FEMtools modules like FE-test correlation, model updating, and optimization. The availability of an extended function library for data manipulation, graphics display, interfacing with SQL databases and many other tasks, positions
FEMtools as an ideal platform for custom development of structural health monitoring and evaluation systems. A case study is described in the following conference paper:
E. Dascotte, Vibration Monitoring of the Hong Kong Stonecutters Bridge. Presented at the 4th International Conference on Experimental Vibration Analysis for Civil Engineering Structures (EVACES
2011), October 3-5, Varenna, Italy.
Download (PDF, 1.0 MB)
For more information, contact support@femtools.com
For other papers, click here
Working with Abaqus Condensed Matrices
ABAQUS can condense stiffness and mass matrices at external nodes, enriched with Craig-Bampton component modes. This is equivalent to the use of superelements in FEMtools and other finite
element programs.
FEMtools comes with an interface to import superelement matrices condensed with ABAQUS. Once imported in FEMtools, the ABAQUS condensed matrices can be used as a standard superelement in every
FEMtools analysis that supports superelements. This feature allows, for instance, the management in FEMtools of large assembled FE models that use ABAQUS-specific features like tie contact without loss of accuracy, mixing condensed parts with a residual FE mesh. Similar functionality exists for superelements imported from NASTRAN.
The superelement reduction can be used to speed up pretest analysis, dynamic analysis, correlation analysis on large assemblies and even model updating of the residual part of the FE model. Using a
wireframe connection between the external nodes, FEMtools allows the visualization of mode shapes using a test model look and feel.
For more information, contact support@femtools.com
Geometry Updating
The concept of geometry updating was explored in a recent study of the cast iron lantern housing of a gear box. The resonant frequencies and mode shapes of the test structure were measured using
impact testing. Next, a set of digital pictures was taken from a number of different angles. By means of photogrammetry, these pictures were converted into a surface model that represented the actual geometry of the lantern housing. This surface model was then compared with an FE-model derived from a CAD-model of the lantern housing. In this way, the regions where there was a substantial difference between the actual geometry and the CAD-model could be identified. Finally, the geometry of the FE-model was corrected based on the measured geometry using a mesh morphing technique. For the
considered test case, the correction of the geometry provided a significant improvement of the quality of FEM-test correlation of the modal parameters.
The project demonstrated that only a limited number of geometry measurements are needed to update a CAD-based geometry using mesh morphing techniques. With geometry updating it is possible to
eliminate most of the uncertainty in the geometry. As such, geometry updating eliminates, or at least reduces, the need for equivalent parameter changes to compensate for the effects of geometrical
inaccuracies. As the updating process provides parameter changes that are physically more relevant, the application range in which the updated FE-model can be used as a reliable predictive tool for
design optimization can be increased.
Improving the accuracy of the FE model to predict a larger number of mode shapes covering a wider frequency range increases the chances of detecting damage or manufacturing issues by monitoring the modal
parameters. Combined with automated testing and metrology, this opens up the perspective of a modal-based quality inspection tool.
Two technical papers on this subject were presented at international conferences and are now available for download from the FEMtools website:
T. Lauwagie, E. Dascotte, Geometry-based Updating of 3D Solid Finite Element Models. Presented at the 29th International Modal Analysis Conference (IMAC), February 2011, Jacksonville, Florida, USA.
Download (PDF, 1.0 MB)
T. Lauwagie, F. Van Hollebeke, B. Pluymers, R. Zegels, P. Verschueren, E. Dascotte, The Impact of High-Fidelity Model Geometry on Test-Analysis Correlation and FE Model Updating Results. Presented at
the International Seminar on Modal Analysis 2010 (ISMA), September 20-22, 2010, Leuven, Belgium.
Download (PDF, 0.75 MB)
ODS-Based Model Updating
Finite element model updating is a well established method for validating and improving simulation models in structural dynamics. The traditional approach consists of correlating simulation data with
the results of an experimental modal analysis (EMA). Natural frequencies and mode shapes extracted from frequency response functions are preferred as references since they are independent of the
applied loads.
However, the operational loads or boundary conditions can change the dynamic behavior of a structure, or make it impossible to perform an experimental modal analysis with measured or controllable
dynamic loading. In such cases, only operational data can be used as reference data for model updating. Additionally, updating a model using operational data automatically guarantees the validity of
the model under the considered operational conditions.
DDS recently introduced a new method in FEMtools for model updating based on Operational Deflection Shapes (ODS) that is able to update the mass, stiffness and damping properties of a structure
simultaneously. A technical paper describing the method is available for download:
T. Lauwagie, J. Guggenberger, J. Strobbe, E. Dascotte, Model Updating using Operational Data. Presented at the International Seminar on Modal Analysis 2010 (ISMA), September 20-22, 2010, Leuven, Belgium.
Download (PDF, 1.0 MB)
More technical papers on different subjects can be found here
Mapping Laser Scanning Measurements on a FE Mesh
Laser vibrometry or electronic holography can be used to obtain vibration modes which in turn can be correlated with finite element results. Each experimental mode shape will typically be presented
as a dense cloud of scanning points with each point moving in the direction of the laser or camera. Analyzing the correlation of these vibration modes with a finite element model poses some specific
problems with respect to mapping the scanned surface onto the FE model, identifying and extracting the corresponding translation degrees of freedom, averaging out measurement noise and computing
numerical correlation criteria.
DDS has recently developed a custom solution for postprocessing a set of data files containing measured vibration modes with an ANSYS finite element model of a turbine blade. Written in FEMtools
Script, this solution automates the entire workflow and reporting of results. It can be integrated into the FEMtools menus or operated in batch mode for processing large quantities of data or as
part of an integrated quality inspection system.
For more information, contact support@femtools.com
Ogden Material Identification using FEMtools Optimization
The Ogden material model is frequently used in finite element programs to simulate the behavior of non-linear elastomers. The values of the material parameters of the Ogden model are highly material
dependent. The main challenge in using the Ogden model in finite element simulations is to find reliable estimates of the Ogden material parameters. The relation between an imposed
displacement and the resulting reaction force can be used to identify these material parameters using a mixed numerical-experimental approach. In this approach, the objective is to fit the simulated
reaction force curve onto the measured reaction force curve. The computationally most efficient way of doing that is by using a gradient-based optimization strategy.
Such an identification routine was implemented using FEMtools Script for the process identification part, FEMtools Optimization for the optimizer routines, and MSC.Marc to compute the reaction force curves.
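The sketch below illustrates the mixed numerical-experimental idea in Python with SciPy. To stay self-contained it uses the closed-form uniaxial nominal stress of a one-term incompressible Ogden model (with the W = (mu/alpha)(sum of lambda^alpha - 3) convention) in place of the FE solver; in the actual workflow the residual function would call MSC.Marc instead. All data here are synthetic.

import numpy as np
from scipy.optimize import least_squares

def ogden_uniaxial_nominal_stress(params, stretch):
    # Nominal (engineering) stress of an incompressible one-term
    # Ogden model in uniaxial tension.
    mu, alpha = params
    return mu * (stretch ** (alpha - 1.0) - stretch ** (-alpha / 2.0 - 1.0))

# Hypothetical measured reaction-force curve converted to nominal stress.
stretch = np.linspace(1.0, 3.0, 30)
measured = ogden_uniaxial_nominal_stress([0.6, 2.5], stretch)
measured += 0.01 * np.random.default_rng(1).standard_normal(stretch.size)

def residual(p):
    # In practice this would run an FE simulation per evaluation.
    return ogden_uniaxial_nominal_stress(p, stretch) - measured

fit = least_squares(residual, x0=[1.0, 2.0])
print("identified mu, alpha:", fit.x)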
More information can be found in the following application note:
Identification of Ogden Material Parameters using FEMtools, Download (PDF, 433 KB)
Finding Optimal Master DOF for Guyan Reduction with FEMtools Pretest Analysis Tools
The pretest analysis tools in FEMtools Correlation are primarily used to find the optimal number and location of transducers for modal testing. One of the methods that are available is the Iterative
Guyan Reduction (IGR) method, an elimination method that optimizes sensor locations using the modal cross-orthogonality as selection criterion. The method can also be used for selecting master DOFs for Guyan reduction. In an FEA-only context, there are no constraints on the number of master DOFs or their accessibility because they will not serve as test locations. Furthermore, the master DOFs can include rotational DOFs.
Using the IGR tool in FEMtools is a fast and efficient way to select master DOFs for structural components that will be reduced using Guyan reduction.
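For reference, a plain Guyan condensation is only a few lines of linear algebra; the numpy sketch below is a generic illustration of what the reduction does to K and M, not the IGR selection algorithm itself.

import numpy as np

def guyan_reduction(K, M, master):
    # Condense (K, M) statically onto the master DOFs.
    master = np.asarray(master)
    n = K.shape[0]
    slave = np.setdiff1d(np.arange(n), master)
    Kss = K[np.ix_(slave, slave)]
    Ksm = K[np.ix_(slave, master)]

    T = np.zeros((n, master.size))
    T[master, np.arange(master.size)] = 1.0        # identity on masters
    T[np.ix_(slave, np.arange(master.size))] = -np.linalg.solve(Kss, Ksm)

    return T.T @ K @ T, T.T @ M @ T                # reduced matrices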
For more information on this application, contact info@femtools.com | {"url":"https://femtools.com/news/techtips.htm","timestamp":"2024-11-13T00:59:16Z","content_type":"text/html","content_length":"64180","record_id":"<urn:uuid:3c72008b-c427-4e3c-b016-1a3b19a3eb26>","cc-path":"CC-MAIN-2024-46/segments/1730477028303.91/warc/CC-MAIN-20241113004258-20241113034258-00542.warc.gz"} |
Chapter 10 – Foundations of Mathematics - Teaching Secondary Mathematics
Chapter 10 – Foundations of Mathematics
Knowing where to begin this chapter and what to include is difficult. There are so many topics to be covered. What is the best order? What background information will students possess? Should topics
be integrated into this discussion? You are expected to blend topics as you teach by relating one to the other. Perhaps this discussion should follow an integrated format. Still, if that is done,
continuity will be next to impossible. This comment brings to mind the story of an individual wanting to borrow a chainsaw. The request was made, and the response was “No.” Rather stunned, the
individual followed with “Why not?” “My dog is sick” was the reply. “What does your dog being sick have to do with my borrowing your chainsaw?” was the next question. The response was “Nothing, but I
don’t want to lend it to you, and one excuse is as good as another.” The moral of that story is that no matter what order is selected, others could serve equally well.
Exercise 10.1
1. Reflect on your experiences with courses taken prior to Algebra I. Describe how the NCTM suggested changes would alter the mathematics education you received in Grades 5 – 8.
2. After reading the introduction to A Vision for School Mathematics (NCTM, 2000, p. 3), select one (and only one) sentence that stands out to you. Why did you select that sentence?
Exercise 10.2
1. Have the Common Core Standards for Mathematics been implemented in your state or country? If so, in which grades?
2. Rank the Standards for Mathematical Practice in order of importance to you. Should these standards be in a priority order for teachers?
3. Interview a current math teacher. Ask about the teacher’s perception of the Common Core Standards for the content he/she is teaching.
Exercise 10.3
1. Select a topic, like “What percent of 8 is 5?” and research how to teach it and all the prerequisite experiences the student should have prior to broaching this subject. Describe the skills you
would expect the student to have prior to the study of the topic and the method you would use to introduce it.
2. Find a real-world application of the topic you select for part 1 of this exercise. Describe the application and how you would relate it to students.
Exercise 10.4
1. Do The Big 20 listed here. Calculators are not permitted. Do not study the problems prior to attempting to do this assignment. Your objective is to do it in two minutes or less and miss none.
2. Get a friend to try The Big 20. Describe to them what they will be working on as they do The Big 20 and then time them as they do it. Discuss their emotions as they did The Big 20. What did you
learn that could be used as you teach mathematics?
Exercise 10.5
1. Describe how egg cartons would be used to show $\frac{4}{5} + \frac{2}{3}$.
2. Use egg cartons to describe $\frac{4}{5} - \frac{2}{3}$.
3. Use egg cartons to demonstrate that $\frac{1}{2} = \frac{2}{4} = \frac{4}{8} = \frac{8}{16}$. Explain the result.
Exercise 10.6
1. Assume fractions with denominators of 3 and 5 are to be added. Describe a process to determine the unit rod that would permit expression of thirds and fifths at the same time, which could be used
with students.
2. Do each of the following problems using Cuisenaire rods.
□ Show $1\frac{2}{3} = \frac{5}{3}$
□ Show $\frac{2}{3} = \frac{8}{12}$
□ $\frac{2}{3} + \frac{3}{4} =$
□ $\frac{13}{12} - \frac{2}{3} =$
□ $\frac{2}{3} \div \frac{3}{4} =$
□ $\frac{3}{4} \div \frac{2}{3} =$
Note: Parts e and f are very difficult to rationalize and yet, if you work through them, your understanding of division of fractions will increase significantly.
Exercise 10.7
1. Look at a minimum of three secondary texts that deal with the teaching of addition of fractions. Compare the sequence of presentation of problem types with Figure 10.11 in the text. Is the text
sequence laid out well? Is consideration given to avoiding issues like dividing out common factors, conversion of improper fractions to mixed numbers, and so on? Would you alter the sequence? Why
or why not? Is the sequence adequate as presented? Why or why not?
2. List the readiness skills necessary for a student to understand the explanation for doing a problem like $\frac{3}{4} - \frac{2}{7}$.
3. Find a secondary text that uses manipulatives to introduce division of fractions. Describe the presentation. Do you think students would learn from that explanation (assuming the teacher followed
the pattern and supplemented it as needed)? Why or why not?
Exercise 10.8
1. The last few paragraphs contained points for rationalizing the use or nonuse of a calculator in the curriculum. Present a defense for both sides of the issue. Where do you stand, and why?
Exercise 10.9
1. Create an addition problem involving three addends with units, tenths, and hundredths in each of them so that each column requires regrouping. Show through a series of steps using F, L, and S how
the regrouping from each place would be accomplished, being careful to adequately show the trades as they are made, doing one exchange at a time. Relate each step to an abstract representation of
the same problem.
2. Create a decimal subtraction problem that requires regrouping in both the tenths and hundredths places. Show through a series of steps using F, L, and S how the regrouping from each place would
be accomplished, being careful to adequately show the trades as they are made, doing one exchange at a time. Relate each step to an abstract representation of the same problem.
3. Develop a position paper on whether or not there is value to spending so much time in the curriculum teaching students to deal manually with decimals in light of the existence of inexpensive
technology that deals with them so easily.
Exercise 10.10
1. Investigate perfect numbers. In the process, answer questions like the ones that follow. You should not limit your research to answering the listed questions. What are the next two perfect
numbers after 28? How many perfect numbers have been found to date? Has it been established that all perfect numbers have been found?
2. The following are listed as abundant numbers: 12, 18, 24, and 36. Are all abundant numbers multiples of 6? Is there a multiple of 6 that is not an abundant number? Are there abundant numbers that
are not multiples of 6? List examples and explain your position.
3. Prime numbers are deficient numbers. Is there a pattern for nonprime deficient numbers? List examples and explain your position.
Exercise 10.11
1. It is possible to extend the sieve of Eratosthenes so there will be 100 consecutive composites. Where does that occur?
2. Excluding the first row, will there be another row in a 10-column sieve of Eratosthenes that will have four primes in it?
3. Create a 6-column sieve as opposed to a 10-column sieve like that attributed to Eratosthenes. Define the generalizations students would be expected to make using this new creation. Describe the
benefits and disadvantages to presenting this 6-column sieve to a class of secondary students.
Exercise 10.12
1. Write a summary of at least three instances where art could be inserted into the mathematics curriculum. For each of the examples, describe the mathematical application.
2. Create a design that will tessellate the plane. Your design should not be a regular polygon.
3. Many mathematicians choose between art or music and mathematics as their career. Describe the background of one such individual.
Exercise 10.13
1. Consider Figure 10.20 in the text to be a body of water containing two islands. How many colors would be needed to color Figure 10.20 in the text with the restriction that no two regions sharing an edge have the same color?
2. Research four-color maps. Has the idea ever been clearly proven?
Exercise 10.14
1. List at least three areas of geometry where you see possibilities for students to become confused. In each instance explain the source of confusion and describe how you would clarify it for students.
2. Use textbooks or objectives to determine the major geometric concepts covered in a mathematics course prior to Algebra I. Discuss what topics, if any, should be eliminated from, or added to, the
list, including a rationalization for each entry.
3. Many geometric topics are introduced in prior grades. Still, they are revisited in successive work, often with no elaborations or extensions. How can we, as professionals who should teach a
concept right the first time, justify this repeated visiting of an idea with no alterations to our approach?
Exercise 10.15
1. Should the curriculum of courses prior to Algebra I be changed to reflect items discussed in this text, NCTM publications, and the Common Core Standards? Why or why not?
2. Defend or take issue with the statement, “General mathematics is foundational work for future mathematical study. There is no need to connect general mathematics curriculum with the real world of
Problem Solving Challenges
1. You have a digital clock that shows only hours and minutes. How many different readings between 11:00 a.m. and 5:00 p.m. (of the same day) contain at least two 2s in the time?
Hint: Try a smaller problem (fewer hours)
Answer/solution: 34.
There is one such reading in each of the hours beginning at 11:00, 1:00, 3:00, and 4:00, namely 22 minutes past the hour. There are 15 such readings in each of the hours beginning at 12:00 and 2:00: 12:2X accounts for 10 of them, while 12:02, 12:12, 12:32, 12:42, and 12:52 account for the other five; likewise 2:2X accounts for 10 of them, while 2:02, 2:12, 2:32, 2:42, and 2:52 account for five as well. The total count is 1 + 1 + 1 + 1 + 10 + 5 + 10 + 5 = 34.
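A short brute-force check in Python (not part of the text) confirms the count:

count = 0
for m in range(0, 6 * 60 + 1):            # 11:00 a.m. through 5:00 p.m.
    total = 11 * 60 + m
    hour = (total // 60) % 12 or 12       # 12-hour display
    reading = f"{hour}:{total % 60:02d}"
    if reading.count("2") >= 2:
        count += 1
print(count)                              # prints 34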
2. Start with a square piece of paper. Draw the largest circle possible inside the square, cut it out and discard the trimmings. Draw the largest square possible inside the circle, cut the square
out and discard the trimmings. What fraction of the original square piece of paper has been cut off and thrown away?
Hint: Try the problem with scissors and paper. What do you notice?
Answer/solution: Half the area.
Try it yourself. The following diagram shows the results. B is the midpoint of AC, and D is the midpoint of CE. Since BG is congruent to CD and DG is congruent to BC, triangle BCD is congruent to BGD
(they also share side BD) by SSS. Therefore the cut-away portion (triangle BCD) of square BCDG is half of the square. This is the same for each portion of the original square. See Figure 9.2 below.
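For an algebraic confirmation of the same fact: if the original square has side $s$, the inscribed circle has diameter $s$, so the inner square has diagonal $s$ and therefore side $\frac{s}{\sqrt{2}}$. Its area is $\left(\frac{s}{\sqrt{2}}\right)^2 = \frac{s^2}{2}$, exactly half of the original area $s^2$.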
Additional Learning Activities
There is no Additional Learning Activities for this chapter. | {"url":"https://routledgelearning.com/teachingsecondarymathematics/content/resources/chapter-10-foundations-of-mathematics/","timestamp":"2024-11-10T03:05:26Z","content_type":"text/html","content_length":"81754","record_id":"<urn:uuid:f0b11513-ca67-4c72-947f-e09142313df3>","cc-path":"CC-MAIN-2024-46/segments/1730477028164.3/warc/CC-MAIN-20241110005602-20241110035602-00395.warc.gz"} |
Compute auto- [and cross-] spectra from one [or two] time series
gmt spectrum1d [ table ] [ -Ssegment_size ] [ -C[xycnpago] ] [ -Ddt ] [ -L[h|m] ] [ -N[name_stem] ] [ -T ] [ -W ] [ -bbinary ] [ -dnodata ] [ -eregexp ] [ -fflags ] [ -ggaps ] [ -hheaders ] [ -iflags
] [ -qiflags ] [ --PAR=value ]
Note: No space is allowed between the option flag and the associated arguments.
spectrum1d reads X [and Y] values from the first [and second] columns on standard input [or x[y]file]. These values are treated as timeseries X(t) [Y(t)] sampled at equal intervals spaced dt units
apart. There may be any number of lines of input. spectrum1d will create file[s] containing auto- [and cross- ] spectral density estimates by Welch’s method of ensemble averaging of multiple
overlapped windows, using standard error estimates from Bendat and Piersol.
The output files have 3 columns: f or w, p, and e. f or w is the frequency or wavelength, p is the spectral density estimate, and e is the one standard deviation error bar size. These files are named
based on name_stem. If the -C option is used, up to eight files are created; otherwise only one (xpower) is written. The files (which are ASCII unless -bo is set) are as follows:
xpower
Power spectral density of X(t). Units of X * X * dt.
ypower
Power spectral density of Y(t). Units of Y * Y * dt.
cpower
Power spectral density of the coherent output. Units same as ypower.
npower
Power spectral density of the noise output. Units same as ypower.
gain
Gain spectrum, or modulus of the transfer function. Units of (Y / X).
phase
Phase spectrum, or phase of the transfer function. Units are radians.
admit
Admittance spectrum, or real part of the transfer function. Units of (Y / X).
coh
(Squared) coherency spectrum, or linear correlation coefficient as a function of frequency. Dimensionless number in [0, 1]. The Signal-to-Noise-Ratio (SNR) is coh / (1 - coh). SNR = 1 when coh = 0.5.
In addition, a single file with all of the above as individual columns will be written to stdout (unless disabled via -T).
Required Arguments¶
One or more ASCII (or binary, see -bi) files holding X(t) [Y(t)] samples in the first 1 [or 2] columns. If no files are specified, spectrum1d will read from standard input.
-Ssegment_size
segment_size is a radix-2 number of samples per window for ensemble averaging. The smallest frequency estimated is 1.0/(segment_size * dt), while the largest is 1.0/(2 * dt). One standard error
in power spectral density is approximately 1.0 / sqrt(n_data / segment_size), so if segment_size = 256, you need 25,600 data to get a one standard error bar of 10%. Cross-spectral error bars are
larger and more complicated, being a function also of the coherency.
Optional Arguments¶
-C[xycnpago]
Read the first two columns of input as samples of two time-series, X(t) and Y(t). Consider Y(t) to be the output and X(t) the input in a linear system with noise. Estimate the optimum frequency
response function by least squares, such that the noise output is minimized and the coherent output and the noise output are uncorrelated. Optionally specify up to 8 letters from the set { x y c
n p a g o } in any order to create only those output files instead of the default [all]. x = xpower, y = ypower, c = cpower, n = npower, p = phase, a = admit, g = gain, o = coh.
-Ddt
Set the spacing between samples in the time-series [Default = 1].
-L[h|m]
Leave trend alone. By default, a linear trend will be removed prior to the transform. Alternatively, append m to just remove the mean value or h to remove the mid-value.
-N[name_stem]
Supply an alternate name stem to be used for each individual output file [Default = “spectrum”]. If -N is given with no argument then we disable the writing of individual output files and instead
write a single composite results table to standard output.
-V[level]
Select verbosity level [w]. (See full description) (See cookbook information).
-T
Disable the writing of a single composite results table to stdout. Only individual output files for each selected component (see -C) will be written.
-W
Write Wavelength rather than frequency in column 1 of the output file[s] [Default = frequency, (cycles / dt)].
-bi[ncols][t] (more …)
Select native binary format for primary input. [Default is 2 input columns].
-bo[ncols][type] (more …)
Select native binary output. [Default is 2 output columns].
-d[i|o]nodata (more …)
Replace input columns that equal nodata with NaN and do the reverse on output.
-e[~]“pattern” | -e[~]/regexp/[i] (more …)
Only accept data records that match the given pattern.
-f[i|o]colinfo (more …)
Specify data types of input and/or output columns.
-g[a]x|y|d|X|Y|D|[col]zgap[+n|p] (more …)
Determine data gaps and line breaks.
-h[i|o][n][+c][+d][+msegheader][+rremark][+ttitle] (more …)
Skip or produce header record(s).
-icols[+l][+ddivide][+sscale][+ooffset][,…][,t[word]] (more …)
Select input columns and transformations (0 is first column, t is trailing text, append word to read one word only).
-qi[~]rows[+ccol][+a|f|s] (more …)
Select input rows or data range(s) [default is all rows].
-^ or just -
Print a short message about the syntax of the command, then exit (NOTE: on Windows just use -).
-+ or just +
Print an extensive usage (help) message, including the explanation of any module-specific option (but not the GMT common options), then exit.
-? or no arguments
Print a complete usage (help) message, including the explanation of all options, then exit.
Temporarily override a GMT default setting; repeatable. See gmt.conf for parameters.
ASCII Format Precision¶
The ASCII output formats of numerical data are controlled by parameters in your gmt.conf file. Longitude and latitude are formatted according to FORMAT_GEO_OUT, absolute time is under the control of
FORMAT_DATE_OUT and FORMAT_CLOCK_OUT, whereas general floating point values are formatted according to FORMAT_FLOAT_OUT. Be aware that the format in effect can lead to loss of precision in ASCII
output, which can lead to various problems downstream. If you find the output is not written with enough precision, consider switching to binary output (-bo if available) or specify more decimals
using the FORMAT_FLOAT_OUT setting.
Note: Below are some examples of valid syntax for this module. The examples that use remote files (file names starting with @) can be cut and pasted into your terminal for testing. Other commands
requiring input files are just dummy examples of the types of uses that are common but cannot be run verbatim as written.
Suppose data.g is gravity data in mGal, sampled every 1.5 km. To write its power spectrum, in mGal**2-km, to the file data.xpower, use
gmt spectrum1d data.g -S256 -D1.5 -Ndata
Suppose in addition to data.g you have data.t, which is topography in meters sampled at the same points as data.g. To estimate various features of the transfer function, considering data.t as input
and data.g as output, use
paste data.t data.g | gmt spectrum1d -S256 -D1.5 -Ndata -C > results.txt
The output of spectrum1d is in units of power spectral density, and so to get units of data-squared you must divide by delta_t, where delta_t is the sample spacing. (There may be a factor of 2 pi
somewhere, also. If you want to be sure of the normalization, you can determine a scale factor from Parseval’s theorem: the sum of the squares of your input data should equal the sum of the squares
of the outputs from spectrum1d, if you are simply trying to get a periodogram. [See below.])
Suppose we simply take a data set, x(t), and compute the discrete Fourier transform (DFT) of the entire data set in one go. Call this X(f). Then suppose we form X(f) times the complex conjugate of X(f):
P_raw(f) = X(f) * X'(f), where the ' indicates complex conjugation.
P_raw is called the periodogram. The sum of the samples of the periodogram equals the sum of the samples of the squares of x(t), by Parseval’s theorem. (If you use a DFT subroutine on a computer,
usually the sum of P_raw equals the sum of x-squared, times M, where M is the number of samples in x(t).)
Each estimate of X(f) is now formed by a weighted linear combination of all of the x(t) values. (The weights are sometimes called “twiddle factors” in the DFT literature.) So, no matter what the
probability distribution for the x(t) values is, the probability distribution for the X(f) values approaches [complex] Gaussian, by the Central Limit Theorem. This means that the probability
distribution for P_raw(f) approaches chi-squared with two degrees of freedom. That reduces to an exponential distribution, and the variance of the estimate of P_raw is proportional to the square of
the mean, that is, the expected value of P_raw.
In practice if we form P_raw, the estimates are hopelessly noisy. Thus P_raw is not useful, and we need to do some kind of smoothing or averaging to get a useful estimate, P_useful(f).
There are several different ways to do this in the literature. One is to form P_raw and then smooth it. Another is to form the auto-covariance function of x(t), smooth, taper and shape it, and then
take the Fourier transform of the smoothed, tapered and shaped auto-covariance. Another is to form a parametric model for the auto-correlation structure in x(t), then compute the spectrum of that
model. This last approach is what is done in what is called the “maximum entropy” or “Berg” or “Box-Jenkins” or “ARMA” or “ARIMA” methods.
Welch’s method is a tried-and-true method. In his method, you choose a segment length, -SN, so that estimates will be made from segments of length N. The frequency samples (in cycles per delta_t
unit) of your P_useful will then be at k /(N * delta_t), where k is an integer, and you will get N samples (since the spectrum is an even function of f, only N/2 of them are really useful). If the
length of your entire data set, x(t), is M samples long, then the variance in your P_useful will decrease in proportion to N/M. Thus you need to choose N << M to get very low noise and high
confidence in P_useful. There is a trade-off here; see below.
There is an additional reduction in variance in that Welch’s method uses a Von Hann spectral window on each sample of length N. This reduces side lobe leakage and has the effect of smoothing the (N
segment) periodogram as if the X(f) had been convolved with [1/4, 1/2, 1/4] prior to forming P_useful. But this slightly widens the spectral bandwidth of each estimate, because the estimate at
frequency sample k is now a little correlated with the estimate at frequency sample k+1. (Of course this would also happen if you simply formed P_raw and then smoothed it.)
Finally, Welch’s method also uses overlapped processing. Since the Von Hann window is large in the middle and tapers to near zero at the ends, only the middle of the segment of length N contributes
much to its estimate. Therefore in taking the next segment of data, we move ahead in the x(t) sequence only N/2 points. In this way, the next segment gets large weight where the segments on either
side of it will get little weight, and vice versa. This doubles the smoothing effect and ensures that (if N << M) nearly every point in x(t) contributes with nearly equal weight in the final answer.
Welch’s method of spectral estimation has been widely used and widely studied. It is very reliable and its statistical properties are well understood. It is highly recommended in such textbooks as
“Random Data: Analysis and Measurement Procedures” by Bendat and Piersol.
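As an illustration only (this is Python/SciPy, not GMT code, and the series is synthetic), the following reproduces the ingredients described above — a Von Hann window, radix-2 segments, 50% overlap and linear detrending — and does a rough Parseval-style sanity check on the normalization:

import numpy as np
from scipy.signal import welch

dt = 1.5
rng = np.random.default_rng(0)
t = np.arange(25600) * dt
x = np.sin(2 * np.pi * 0.05 * t) + 0.5 * rng.standard_normal(t.size)

# Welch's method: Von Hann window, nperseg = 256 (radix-2),
# 50% overlap, and removal of a linear trend from each segment.
f, pxx = welch(x, fs=1.0 / dt, window="hann", nperseg=256,
               noverlap=128, detrend="linear")

# The integral of the one-sided PSD should be close to the
# variance of the series (a Parseval-style check of the units).
print(pxx.sum() * (f[1] - f[0]), x.var())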
In all problems of estimating parameters from data, there is a classic trade-off between resolution and variance. If you want to try to squeeze more resolution out of your data set, then you have to
be willing to accept more noise in the estimates. The same trade-off is evident here in Welch’s method. If you want to have very low noise in the spectral estimates, then you have to choose N << M,
and this means that you get only N samples of the spectrum, and the longest period that you can resolve is only N * delta_t. So you see that reducing the noise lowers the number of spectral samples
and lowers the longest period. Conversely, if you choose N approaching M, then you approach the periodogram with its very bad statistical properties, but you get lots of samples and a large
fundamental period.
The other spectral estimation methods also can do a good job. Welch’s method was selected because the way it works, how one can code it, and its effects on statistical distributions, resolution,
side-lobe leakage, bias, variance, etc. are all easily understood. Some of the other methods (e.g. Maximum Entropy) tend to hide where some of these trade-offs are happening inside a “black box”.
Bendat, J. S., and A. G. Piersol, 1986, Random Data, 2nd revised ed., John Wiley & Sons.
Welch, P. D., 1967, The use of Fast Fourier Transform for the estimation of power spectra: a method based on time averaging over short, modified periodograms, IEEE Transactions on Audio and
Electroacoustics, Vol AU-15, No 2. | {"url":"https://docs.generic-mapping-tools.org/6.2/spectrum1d.html","timestamp":"2024-11-12T16:21:07Z","content_type":"text/html","content_length":"32170","record_id":"<urn:uuid:2ea27b84-e849-49a1-9e53-f4d50fc5a5c6>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.63/warc/CC-MAIN-20241112145015-20241112175015-00542.warc.gz"} |
Markov chain
Markov chains are sequences of random variables (or vectors) that possess the so-called Markov property: given one term in the chain (the present), the subsequent terms (the future) are conditionally
independent of the previous terms (the past).
This lecture is a roadmap to Markov chains. Unlike most of the lectures in this textbook, it is not an exhaustive treatment of the subject with detailed proofs, derivations, examples and exercises
(for which a separate textbook would be needed). Its aim is to provide a quick introduction to some concepts that are often used in applied statistics (e.g., in Markov Chain Monte Carlo methods).
Readers who are interested in a more detailed treatment can have a look at the references reported at the end of this page.
Let us start with a formal definition.
Definition Let $\{X_n\}$ be a sequence of random vectors. The sequence is said to be a Markov chain if and only if any given term $X_n$ of the sequence is independent of all terms preceding $X_{n-1}$, conditional on $X_{n-1}$: $F(x_n \mid x_{n-1}, x_{n-2}, \ldots, x_1) = F(x_n \mid x_{n-1})$, where the letter $F$ denotes a conditional distribution function.
State space
The state space of a Markov chain is the set $S$ of all possible realizations of the terms of the chain. In other words, for any given term $X_n$, the support of $X_n$ is included in $S$.
In what follows we present the main facts about Markov chains, by tackling, in order of increasing difficulty, the cases of:
• a finite state space
• an infinite but countable state space
• an uncountable state space
Markov chains with a finite state space
Let us start with the case of a finite state space. In particular, we assume that $S = \{x_1, \ldots, x_K\}$, that is, the terms of the chain can take one of $K$ values $x_1, \ldots, x_K$.
We specify an initial distribution for the first value of the chain, that is, a vector of initial probabilities $\pi = [\pi_1 \ \ldots \ \pi_K]$, and impose $P(X_1 = x_i) = \pi_i$.
Then, we can choose a transition probability matrix $P$ (a $K \times K$ matrix such that each of its rows is a vector of probabilities) and impose $P(X_{n+1} = x_j \mid X_n = x_i) = P_{ij}$ for all $n$, $i$ and $j$. The assumption that $P_{ij}$ is equal for all $n$ is called time-homogeneity (think of the index $n$ as a measure of time).
These two choices (of the initial distribution $\pi$ and the transition distribution $P$) completely determine the distribution of all the terms of the chain. As a matter of fact, for any $n$, the distribution of $X_n$ is the vector of probabilities $\pi^{(n)} = \pi P^{n-1}$.
The latter equality holds because $\pi^{(n+1)} = \pi^{(n)} P$ for every $n$.
Stationary distribution
If, for a given transition probability matrix $P$, there is an initial distribution $\pi$ such that the distribution of all the terms of the chain is equal to the initial distribution, then $\pi$ is called a stationary distribution (or invariant distribution) of the chain.
When $\pi$ is a stationary distribution, we have that $\pi^{(n)} = \pi$ for every $n$.
Together with the fact that $\pi^{(n+1)} = \pi^{(n)} P$, it implies $\pi = \pi P$.
Detailed balance
A Markov chain with finite state space is said to satisfy the detailed balance condition if and only if there exists a distribution $\pi$ such that $\pi_i P_{ij} = \pi_j P_{ji}$ for any $i$ and $j$.
By summing both sides of the equation over $i$, we get $\sum_{i} \pi_i P_{ij} = \sum_{i} \pi_j P_{ji}$, or $\sum_{i} \pi_i P_{ij} = \pi_j$ because $\sum_{i} P_{ji} = 1$.
Therefore, $[\pi P]_j = \pi_j$ for any $j$. But this can be written in matrix form as $\pi P = \pi$.
As a consequence, $\pi$ is a stationary distribution of the chain.
Note that detailed balance is a more stringent requirement than the existence of a stationary distribution: the former implies the latter, but the converse is not true.
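A small numerical illustration (Python, with a made-up 3-state matrix): the stationary distribution can be found as the left eigenvector of P associated with the eigenvalue 1.

import numpy as np

P = np.array([[0.5, 0.3, 0.2],      # a hypothetical transition matrix,
              [0.2, 0.6, 0.2],      # rows summing to one
              [0.1, 0.3, 0.6]])

# pi solves pi = pi P with sum(pi) = 1, i.e. pi is a left
# eigenvector of P for the eigenvalue 1.
eigvals, eigvecs = np.linalg.eig(P.T)
pi = np.real(eigvecs[:, np.argmin(np.abs(eigvals - 1.0))])
pi = pi / pi.sum()

print(pi)        # the stationary distribution
print(pi @ P)    # equals pi, confirming stationarity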
Irreducible chain
We now introduce some properties of Markov chains that are frequently used to study their asymptotic behavior. The first such property is so-called irreducibility.
Let $x_i, x_j \in S$. Define the hitting time $T_{ij} = \min\{n \geq 1 : X_{n+1} = x_j\}$, where the subscript $i$ indicates that the chain is assumed to start from $x_i$.
We say that $x_i$ leads to $x_j$ if and only if $P_i(T_{ij} < \infty) > 0$.
Again, the notation $P_i$ means that the probability is computed under the assumption that $X_1 = x_i$.
A Markov chain is said to be irreducible if and only if every state leads to itself and to every other state, that is, if and only if there is a positive probability that for any starting state the
chain will reach any other state (including itself) in finite time.
Recurrent chain
A state $x_i$ is called recurrent if and only if $P_i(T_{ii} < \infty) = 1$.
Otherwise, it is called transient.
In other words, a state $x_i$ is recurrent if and only if the probability that the chain will return to $x_i$ in finite time (after having started from $x_i$ itself) is equal to $1$.
A Markov chain is called recurrent if and only if all the elements of its state space are recurrent.
Aperiodic chain
Let $x_i \in S$. The period of $x_i$ is defined as $d(x_i) = \gcd\{n \geq 1 : P(X_{n+1} = x_i \mid X_1 = x_i) > 0\}$, where $\gcd$ is the greatest common divisor. In other words, $d(x_i)$ is the greatest common divisor of the lengths of the possible return paths of the chain from $x_i$ back to itself.
It is possible to prove that if the chain is irreducible, then all its states have the same period $d$, which is called the period of the chain.
A chain is called aperiodic if and only if the period of the chain is $d = 1$.
Existence and uniqueness of the stationary distribution
We have the following important result.
Proposition If a Markov chain with a finite state space is irreducible, then it has a unique stationary distribution.
Convergence to the stationary distribution
If we assume not only irreducibility, but also aperiodicity, we get the following result.
Proposition If a Markov chain with a finite state space is irreducible and aperiodic, then, irrespective of the initial distribution $\pi^{(1)}$, $\lim_{n \to \infty} \pi^{(n)} = \pi$, where $\pi$ is the unique stationary distribution of the chain.
So, even if the initial distribution of the chain is not the stationary distribution, the terms of the sequence become less and less dependent on the initial value as $n$ increases and their distributions converge to the stationary distribution.
Strong law of large numbers
Irreducibility can also be used to prove a Strong Law of Large Numbers.
Proposition If a Markov chain with a finite state space is irreducible, then, for any bounded function $g$, $\frac{1}{n}\sum_{t=1}^{n} g(X_t) \overset{a.s.}{\longrightarrow} \sum_{i=1}^{K} g(x_i)\,\pi_i$, where $\pi$ is the unique stationary distribution of the chain and $\overset{a.s.}{\longrightarrow}$ denotes almost sure convergence as $n$ tends to infinity.
Thus, when the state space is finite, irreducibility guarantees that sample averages taken across the chain converge to population averages taken across the states.
This kind of proposition is often called an ergodic theorem.
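To see the theorem at work numerically, the Python sketch below simulates a path of the hypothetical 3-state chain used earlier and compares the sample average of a bounded function with its average under the stationary distribution:

import numpy as np

rng = np.random.default_rng(0)
P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.1, 0.3, 0.6]])
g = np.array([1.0, 4.0, 9.0])        # an arbitrary bounded function

# Simulate the chain and average g along the sample path.
state, total, n_steps = 0, 0.0, 200_000
for _ in range(n_steps):
    state = rng.choice(3, p=P[state])
    total += g[state]
print(total / n_steps)               # sample average along the path

# The limit predicted by the ergodic theorem: sum_i g(x_i) pi_i.
eigvals, eigvecs = np.linalg.eig(P.T)
pi = np.real(eigvecs[:, np.argmin(np.abs(eigvals - 1.0))])
pi = pi / pi.sum()
print(g @ pi)                        # the two numbers nearly agree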
Markov chains with a countable state space
We now tackle the case in which the state space is infinite but countable.
In particular, we assume that $S = \{x_1, x_2, x_3, \ldots\}$, that is, the elements of the state space can be arranged into a sequence $\{x_i\}$.
The initial distribution of the chain is a sequence of initial probabilities $\{\pi_i\}$ such that $P(X_1 = x_i) = \pi_i$ and $\sum_{i} \pi_i = 1$.
Then, we can choose a double sequence of transition probabilities $\{P_{ij}\}$ such that, for any index $i$, $\sum_{j} P_{ij} = 1$, and we impose $P(X_{n+1} = x_j \mid X_n = x_i) = P_{ij}$ for all $n$, $i$ and $j$.
Note that the transition probabilities $P_{ij}$ are independent of $n$. This property is called time-homogeneity.
These two choices (of the initial distribution $\{\pi_i\}$ and the transition probabilities $\{P_{ij}\}$) completely determine the distribution of all the terms of the chain.
The distribution of $X_2$ is a sequence whose elements can be derived as follows: $P(X_2 = x_j) = \sum_{i} \pi_i P_{ij}$.
The distributions of the other terms of the chain are sequences that can be derived recursively as follows: $P(X_{n+1} = x_j) = \sum_{i} P(X_n = x_i)\, P_{ij}$.
Stationary distribution
The concept of stationary distribution is almost identical to that introduced in the case of a finite state space.
If, for a given double sequence of transition probabilities $\{P_{ij}\}$, there is an initial distribution $\{\pi_i\}$ such that the distribution of all the terms of the chain is equal to the initial distribution, then $\{\pi_i\}$ is called a stationary distribution of the chain.
Furthermore, we have that $\pi_j = \sum_{i} \pi_i P_{ij}$ for every $j$.
Detailed balance
A Markov chain with countable state space is said to satisfy the detailed balance condition if and only if there exists a distribution $\{\pi_i\}$ such that $\pi_i P_{ij} = \pi_j P_{ji}$ for any $i$ and $j$.
This implies that $\sum_{i} \pi_i P_{ij} = \sum_{i} \pi_j P_{ji}$, or $\sum_{i} \pi_i P_{ij} = \pi_j$ because $\sum_{i} P_{ji} = 1$.
Therefore, $\pi_j = \sum_{i} \pi_i P_{ij}$ for any $j$. So, $\{\pi_i\}$ is a stationary distribution of the chain.
Irreducible chain
The concept of irreducibility is the same found in the case of a finite state space.
Positive recurrent chain
When dealing with countable state spaces, we usually strengthen the concept of recurrence introduced for the case of a finite state space.
Remember that a state $x_i$ is called recurrent if and only if the probability that the chain will return to $x_i$ in finite time (after having started from $x_i$ itself) is equal to $1$.
Even if a state is recurrent, it could happen that $\operatorname{E}[T_{ii}] = \infty$, that is, the expected value of the return time is infinite.
A state $x_i$ is said to be positive recurrent if and only if the latter possibility is ruled out, that is, if and only if $\operatorname{E}[T_{ii}] < \infty$.
A Markov chain is called positive recurrent if and only if all the elements of its state space are positive recurrent.
Aperiodic chain
The concept of aperiodicity is identical to that found in the case of a finite state space.
Existence and uniqueness of the stationary distribution
The following important result holds.
Proposition If a Markov chain with a countable state space is irreducible and positive recurrent, then it has a unique stationary distribution.
Note that in the case of a finite state space irreducibility was sufficient to obtain the uniqueness of the stationary distribution. In the countable case, we need to add the requirement of positive recurrence.
Convergence to the stationary distribution
The following convergence result holds for chains having a countable state space.
Proposition If a Markov chain with a countable state space is irreducible, positive recurrent and aperiodic, then, irrespective of the initial distribution, for any $j$, $\lim_{n \to \infty} P(X_n = x_j) = \pi_j$, where $\{\pi_i\}$ is the unique stationary
distribution of the chain.
Compare this proposition to the one for finite state spaces:
• finite state space + irreducibility + aperiodicity $\Rightarrow$ convergence to the stationary distribution;
• countable state space + irreducibility + aperiodicity + positive recurrence $\Rightarrow$ convergence to the stationary distribution.
Strong law of large numbers
Positive recurrence is needed also to prove a Strong Law of Large Numbers.
Proposition If a Markov chain with a countable state space is irreducible and positive recurrent, then $\frac{1}{n}\sum_{t=1}^{n} g(X_t) \overset{a.s.}{\longrightarrow} \operatorname{E}_{\pi}[g(X)]$, where $g$ is any function such that the expected value $\operatorname{E}_{\pi}[g(X)]$ exists and is finite, $\pi$ is the unique stationary distribution of the chain and $\overset{a.s.}{\longrightarrow}$ denotes almost sure convergence as $n$ tends to infinity.
Often, there are cases in which positive recurrence is difficult to prove, but we know that the chain has a stationary distribution. In these cases, we can use the following proposition.
Proposition If a Markov chain with a countable state space is irreducible and has a unique stationary distribution $\pi$, then $\frac{1}{n}\sum_{t=1}^{n} g(X_t) \overset{a.s.}{\longrightarrow} \operatorname{E}_{\pi}[g(X)]$ for any function $g$ such that the expected value $\operatorname{E}_{\pi}[g(X)]$ exists and is finite.
Markov chains with an uncountable state space
We now analyze the more difficult case in which the state space is infinite and uncountable.
The initial distribution of the chain is a probability measure $\pi$ such that $P(X_1 \in A) = \pi(A)$ for any event $A$.
Then, we can choose a function $K(x, A)$, called transition kernel, and we impose $P(X_{n+1} \in A \mid X_n = x) = K(x, A)$ for all $n$, all states $x$ and all events $A$.
The transition kernel does not depend on $n$. It is time-homogeneous.
These two choices (of the initial distribution $\pi$ and the transition kernel $K$) completely determine the distribution of all the terms of the chain.
The distribution of $X_2$ can be derived as follows: $P(X_2 \in A) = \int K(x, A)\, d\pi(x)$, where $A$ is any event and the integral is a Lebesgue integral with respect to the probability measure $\pi$.
The distributions of the other terms of the chain can be derived recursively as follows: $P(X_{n+1} \in A) = \int K(x, A)\, dP_{X_n}(x)$.
When the terms of the sequence are continuous variables (or vectors), then $X_1$ has a probability density $f_1$ such that $\pi(A) = \int_A f_1(x)\, dx$, and the transition kernel can be expressed in terms of a conditional probability density $k(x_{n+1} \mid x_n)$.
As a consequence, the marginal densities of the terms of the chain can be expressed as $f_{n+1}(y) = \int k(y \mid x)\, f_n(x)\, dx$.
Stationary distribution
The concept of stationary distribution is similar to that found above for the cases of a finite and a countable state space.
If, for a given transition kernel $K$, there is an initial distribution $\pi$ such that the distribution of all the terms of the chain is equal to the initial distribution, then $\pi$ is called a stationary distribution of the chain.
Furthermore, we have that $\pi$ satisfies $\pi(A) = \int K(x, A)\, d\pi(x)$ for any event $A$.
In the case in which the transition kernel and the stationary distribution have densities, we can write $f(y) = \int k(y \mid x)\, f(x)\, dx$.
Detailed balance
A Markov chain with uncountable state space and transition kernel $K$ is said to satisfy the detailed balance condition if and only if there exists a probability measure $\pi$ such that $\int_B K(x, A)\, d\pi(x) = \int_A K(x, B)\, d\pi(x)$ for any two events $A$ and $B$.
If the measure $\pi$ and the transition kernel can be written in terms of probability densities, then the detailed balance condition can be written as $f(x)\, k(y \mid x) = f(y)\, k(x \mid y)$.
By integrating both sides of the equation with respect to $x$, we get $\int f(x)\, k(y \mid x)\, dx = f(y) \int k(x \mid y)\, dx$.
Since $\int k(x \mid y)\, dx = 1$, we get $f(y) = \int f(x)\, k(y \mid x)\, dx$.
Thus, $f$ is a stationary distribution.
Phi-irreducible chain
The concept of irreducibility can be generalized to the case of an uncountable state space.
Let $x \in S$ and $A$ be an event. Define the hitting time $T_{xA} = \min\{n \geq 1 : X_{n+1} \in A\}$, where the subscript $x$ indicates that the chain is assumed to start from $x$.
We say that $x$ leads to $A$ if and only if $P_x(T_{xA} < \infty) > 0$.
Again, the notation $P_x$ means that the probability is computed under the assumption that $X_1 = x$.
A Markov chain is said to be $\phi$-irreducible if and only if there is a measure $\phi$ such that every state $x$ leads to $A$ whenever $\phi(A) > 0$.
In other words, a chain is said to be $\phi$-irreducible if and only if there is a positive probability that for any starting state the chain will reach any set having positive $\phi$-measure in finite time.
Harris recurrent chain
When dealing with uncountable state spaces, we often use a concept of recurrence, called Harris recurrence, that is even stronger than the concept of positive recurrence introduced in the case of a countable state space.
An event $A$ is called Harris recurrent if and only if the probability that the chain will return to $A$ infinitely often (after having started from a point belonging to $A$) is equal to $1$.
A Markov chain is called Harris recurrent if and only if it is $\phi$-irreducible and all the events $A$ such that $\phi(A) > 0$ are Harris recurrent.
Aperiodic chain
In the uncountable case, the definition of aperiodicity is slightly more complicated.
A Markov chain is said to have period $d$ if its state space can be partitioned into at most $d$ mutually exclusive events such that the chain always takes exactly $d$ periods to cycle through these events.
In symbols, if the events are $A_1, \ldots, A_d$, then $P(X_{n+1} \in A_{k+1} \mid X_n \in A_k) = 1$ for $k = 1, \ldots, d$, with the convention $A_{d+1} = A_1$.
A chain is said to be aperiodic if its period is $d = 1$.
Existence and uniqueness of the stationary distribution
Unlike in the finite and countable cases, $\phi$-irreducibility and Harris recurrence are not sufficient to guarantee the existence and uniqueness of the stationary distribution. Further technical conditions need to be met (see, e.g., Glynn 2013).
Convergence to the stationary distribution
Because $\phi$-irreducibility and Harris recurrence are not sufficient to guarantee the existence and uniqueness of the stationary distribution, the latter is often directly added among the sufficient conditions for the convergence of the chain to a stationary distribution.
Proposition If a Markov chain with an uncountable state space is $\phi$-irreducible, Harris recurrent, aperiodic and has a stationary distribution $\pi$, then the distribution of $X_n$ converges to $\pi$.
The convergence of the sequence of probability distributions to the stationary distribution is in total variation norm (a technical detail that we can safely skip here).
Strong law of large numbers
In the uncountable case, the following ergodic theorem holds.
Proposition If a Markov chain with an uncountable state space is $\phi$-irreducible, Harris recurrent and has a stationary distribution $\pi$, then $\frac{1}{n}\sum_{t=1}^{n} g(X_t) \overset{a.s.}{\longrightarrow} \operatorname{E}_{\pi}[g(X)]$ for any function $g$ such that the expected value $\operatorname{E}_{\pi}[g(X)]$ exists and is finite.
References
Gilks, W. R., Richardson, S., Spiegelhalter D. (1995) Markov Chain Monte Carlo in Practice, CRC Press.
Glynn, P. W. (2013) Harris recurrence, Stochastic Systems lecture notes, Stanford University.
Hoel, P. G., Port, S. C., Stone, C. J. (1986) Introduction to Stochastic Processes, Waveland Press.
Norris, J. R. (1998) Markov Chains, Cambridge University Press.
Pishro-Nik, H. (2014) Introduction to Probability, Statistics, and Random Processes, Kappa Research, LLC.
Roberts, G. O., Rosenthal, J. S. (2006) Harris recurrence of Metropolis-within-Gibbs and trans-dimensional Markov Chains, The Annals of Applied Probability, Vol. 16, No. 4, 2123-2139.
How to cite
Please cite as:
Taboga, Marco (2021). "Markov chain", Lectures on probability theory and mathematical statistics. Kindle Direct Publishing. Online appendix. https://www.statlect.com/fundamentals-of-statistics/ | {"url":"https://statlect.com/fundamentals-of-statistics/Markov-chains","timestamp":"2024-11-11T23:23:40Z","content_type":"text/html","content_length":"127813","record_id":"<urn:uuid:500970e4-ccc8-450b-ac23-95d694df2d24>","cc-path":"CC-MAIN-2024-46/segments/1730477028240.82/warc/CC-MAIN-20241111222353-20241112012353-00076.warc.gz"} |
Multidimensional Hyperbolic Problems And Computations [PDF] [41hp82dueld0]
E-Book Overview
This IMA Volume in Mathematics and its Applications MULTIDIMENSIONAL HYPERBOLIC PROBLEMS AND COMPUTATIONS is based on the proceedings of a workshop which was an integral part of the 1988-89 IMA program on NONLINEAR WAVES. We are grateful to the Scientific Committee: James Glimm, Daniel Joseph, Barbara Keyfitz, Andrew Majda, Alan Newell, Peter Olver, David Sattinger and David Schaeffer for planning and implementing an exciting and stimulating year-long program. We especially thank the Workshop Organizers, Andrew Majda and James Glimm, for bringing together many of the major figures in a variety of research fields connected with multidimensional hyperbolic problems. Avner Friedman, Willard Miller. PREFACE A primary goal of the IMA workshop on Multidimensional Hyperbolic Problems
and Computations from April 3-14, 1989 was to emphasize the interdisciplinary nature of contemporary research in this field involving the combination of ideas from the theory of nonlinear partial
differential equations, asymptotic methods, numerical computation, and experiments. The twenty-six papers in this volume span a wide cross-section of this research including some papers on the
kinetic theory of gases and vortex sheets for incompressible flow in addition to many papers on systems of hyperbolic conservation laws. This volume includes several papers on asymptotic methods such
as nonlinear geometric optics, a number of articles applying numerical algorithms such as higher order Godunov methods and front tracking to physical problems along with comparison to experimental
data, and also several interesting papers on the rigorous mathematical theory of shock waves.
E-Book Content
The IMA Volumes in Mathematics and Its Applications, Volume 29
Series Editors: Avner Friedman, Willard Miller, Jr.

Institute for Mathematics and its Applications (IMA)
The Institute for Mathematics and its Applications was established by a grant from the National Science Foundation to the University of Minnesota in 1982. The IMA seeks to encourage the development and study of fresh mathematical concepts and questions of concern to the other sciences by bringing together mathematicians and scientists from diverse fields in an atmosphere that will stimulate discussion and collaboration. The IMA Volumes are intended to involve the broader scientific community in this process.
Avner Friedman, Director
Willard Miller, Jr., Associate Director
IMA PROGRAMS
1982-1983  Statistical and Continuum Approaches to Phase Transition
1983-1984  Mathematical Models for the Economics of Decentralized Resource Allocation
1984-1985  Continuum Physics and Partial Differential Equations
1985-1986  Stochastic Differential Equations and Their Applications
1986-1987  Scientific Computation
1987-1988  Applied Combinatorics
1988-1989  Nonlinear Waves
1989-1990  Dynamical Systems and Their Applications
1990-1991  Phase Transitions and Free Boundaries
SPRINGER LECTURE NOTES FROM THE IMA:
The Mathematics and Physics of Disordered Media. Editors: Barry Hughes and Barry Ninham (Lecture Notes in Math., Volume 1035, 1983)
Orienting Polymers. Editor: J. L. Ericksen (Lecture Notes in Math., Volume 1063, 1984)
New Perspectives in Thermodynamics. Editor: James Serrin (Springer-Verlag, 1986)
Models of Economic Dynamics. Editor: Hugo Sonnenschein (Lecture Notes in Econ., Volume 264, 1986)
James Glimm and Andrew J. Majda (Editors)

Multidimensional Hyperbolic Problems and Computations
With 86 Illustrations

Springer-Verlag: New York, Berlin, Heidelberg, London, Paris, Tokyo, Hong Kong, Barcelona

James Glimm, Department of Applied Mathematics and Statistics, SUNY at Stony Brook, Stony Brook, NY 11794-3600
Andrew J. Majda, Department of Mathematics and Program in Applied and Computational Mathematics, Princeton University, Princeton, NJ 08544
Series Editors: Avner Friedman and Willard Miller, Jr., Institute for Mathematics and its Applications, University of Minnesota, Minneapolis, Minnesota 55455, USA

Mathematics Subject Classification: 35, 65, 76, 80.
Printed on acid-free paper.

© 1991 Springer-Verlag New York Inc. Softcover reprint of the hardcover 1st edition 1991. All rights reserved. This work may not be translated or copied in whole or in part without the written permission of the publisher (Springer-Verlag New York, Inc., 175 Fifth Avenue, New York, NY 10010, USA), except for brief excerpts in connection with reviews or scholarly analysis. Use in connection with any form of information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed is forbidden. The use of general descriptive names, trade names, trademarks, etc., in this publication, even if the former are not especially identified, is not to be taken as a sign that such names, as understood by the Trade Marks and Merchandise Marks Act, may accordingly be used freely by anyone. Permission to photocopy for internal or personal use, or the internal or personal use of specific clients, is granted by Springer-Verlag New York, Inc. for libraries registered with the Copyright Clearance Center (CCC), provided that the base fee of $0.00 per copy, plus $0.20 per page, is paid directly to CCC, 21 Congress St., Salem, MA 01970, USA. Special requests should be addressed directly to Springer-Verlag New York, 175 Fifth Avenue, New York, NY 10010, USA.

ISBN-13: 978-1-4613-9123-4    e-ISBN-13: 978-1-4613-9121-0    DOI: 10.1007/978-1-4613-9121-0

Camera-ready copy prepared by the IMA.
Ron DiPerna (1947-1989)

Ron DiPerna was a uniquely talented mathematician. He was a leading researcher of his generation in the mathematical theory of systems of hyperbolic conservation laws, incompressible flow, and the kinetic theory of gases. Ron DiPerna died tragically in January 1989 after a courageous struggle with cancer. His work and its impact were known to virtually all of the several hundred participants in this meeting. For many of us, he was a warm and loyal friend with a sharp wit and keen sense of humor. He left us too soon and at the height of his creative power.
The IMA Volumes in Mathematics and its Applications

Current Volumes:
Volume 1: Homogenization and Effective Moduli of Materials and Media. Editors: Jerry Ericksen, David Kinderlehrer, Robert Kohn, J.-L. Lions
Volume 2: Oscillation Theory, Computation, and Methods of Compensated Compactness. Editors: Constantine Dafermos, Jerry Ericksen, David Kinderlehrer, Marshall Slemrod
Volume 3: Metastability and Incompletely Posed Problems. Editors: Stuart Antman, Jerry Ericksen, David Kinderlehrer, Ingo Müller
Volume 4: Dynamical Problems in Continuum Physics. Editors: Jerry Bona, Constantine Dafermos, Jerry Ericksen, David Kinderlehrer
Volume 5: Theory and Applications of Liquid Crystals. Editors: Jerry Ericksen and David Kinderlehrer
Volume 6: Amorphous Polymers and Non-Newtonian Fluids. Editors: Constantine Dafermos, Jerry Ericksen, David Kinderlehrer
Volume 7: Random Media. Editor: George Papanicolaou
Volume 8: Percolation Theory and Ergodic Theory of Infinite Particle Systems. Editor: Harry Kesten
Volume 9: Hydrodynamic Behavior and Interacting Particle Systems. Editor: George Papanicolaou
Volume 10: Stochastic Differential Systems, Stochastic Control Theory and Applications. Editors: Wendell Fleming and Pierre-Louis Lions
Volume 11: Numerical Simulation in Oil Recovery. Editor: Mary Fanett Wheeler
Volume 12: Computational Fluid Dynamics and Reacting Gas Flows. Editors: Bjorn Engquist, M. Luskin, Andrew Majda
Volume 13: Numerical Algorithms for Parallel Computer Architectures. Editor: Martin H. Schultz
Volume 14: Mathematical Aspects of Scientific Software. Editor: J. R. Rice
Volume 15: Mathematical Frontiers in Computational Chemical Physics. Editor: D. Truhlar
Volume 16: Mathematics in Industrial Problems, by Avner Friedman
Volume 17: Applications of Combinatorics and Graph Theory to the Biological and Social Sciences. Editor: Fred Roberts
Volume 18: q-Series and Partitions. Editor: Dennis Stanton
Volume 19: Invariant Theory and Tableaux. Editor: Dennis Stanton
Volume 20: Coding Theory and Design Theory Part I: Coding Theory. Editor: Dijen Ray-Chaudhuri
Volume 21: Coding Theory and Design Theory Part II: Design Theory. Editor: Dijen Ray-Chaudhuri
Volume 22: Signal Processing: Part I, Signal Processing Theory. Editors: L. Auslander, F. A. Grünbaum, J. W. Helton, T. Kailath, P. Khargonekar and S. Mitter
Volume 23: Signal Processing: Part II, Control Theory and Applications of Signal Processing. Editors: L. Auslander, F. A. Grünbaum, J. W. Helton, T. Kailath, P. Khargonekar and S. Mitter
Volume 24: Mathematics in Industrial Problems, Part 2, by Avner Friedman
Volume 25: Solitons in Physics, Mathematics, and Nonlinear Optics. Editors: Peter J. Olver and David H. Sattinger
Volume 26: Two Phase Flows and Waves. Editors: Daniel D. Joseph and David G. Schaeffer
Volume 27: Nonlinear Evolution Equations that Change Type. Editors: Barbara Lee Keyfitz and Michael Shearer
Volume 28: Computer Aided Proofs in Analysis. Editors: Kenneth R. Meyer and Dieter S. Schmidt
Volume 29: Multidimensional Hyperbolic Problems and Computations. Editors: James Glimm and Andrew Majda
Volume 31: Mathematics in Industrial Problems, Part 3, by Avner Friedman

Forthcoming Volumes:
1988-1989 (Nonlinear Waves): Microlocal Analysis and Nonlinear Waves
Summer Program 1989 (Robustness, Diagnostics, Computing and Graphics in Statistics): Robustness, Diagnostics in Statistics (2 volumes); Computing and Graphics in Statistics
1989-1990 (Dynamical Systems and Their Applications): An Introduction to Dynamical Systems; Patterns and Dynamics in Reactive Media; Dynamical Issues in Combustion Theory; Twist Mappings and Their Applications; Dynamical Theories of Turbulence in Fluid Flows; Nonlinear Phenomena in Atmospheric and Oceanic Sciences; Chaotic Processes in the Geological Sciences
Summer Program 1990 (Radar/Sonar): Radar/Sonar (1 or 2 volumes)
Summer Program 1990 (Time Series): Time Series Analysis; Time Series (2 volumes)
This IMA Volume in Mathematics and its Applications, MULTIDIMENSIONAL HYPERBOLIC PROBLEMS AND COMPUTATIONS, is based on the proceedings of a workshop which was an integral part of the 1988-89 IMA program on NONLINEAR WAVES. We are grateful to the Scientific Committee: James Glimm, Daniel Joseph, Barbara Keyfitz, Andrew Majda, Alan Newell, Peter Olver, David Sattinger and David Schaeffer for planning and implementing an exciting and stimulating year-long program. We especially thank the Workshop Organizers, Andrew Majda and James Glimm, for bringing together many of the major figures in a variety of research fields connected with multidimensional hyperbolic problems.
Avner Friedman
Willard Miller, Jr.
PREFACE

A primary goal of the IMA workshop on Multidimensional Hyperbolic Problems and Computations, held April 3-14, 1989, was to emphasize the interdisciplinary nature of contemporary research in this field, involving the combination of ideas from the theory of nonlinear partial differential equations, asymptotic methods, numerical computation, and experiments. The twenty-six papers in this volume span a wide cross-section of this research, including some papers on the kinetic theory of gases and vortex sheets for incompressible flow in addition to many papers on systems of hyperbolic conservation laws. This volume includes several papers on asymptotic methods such as nonlinear geometric optics, a number of articles applying numerical algorithms such as higher order Godunov methods and front tracking to physical problems along with comparison to experimental data, and also several interesting papers on the rigorous mathematical theory of shock waves. In addition, there are at least two papers in this volume devoted to open problems with this interdisciplinary emphasis. The organizers would like to thank the staff of the IMA for their help with the details of the meeting and also for the preparation of this volume. We are especially grateful to Avner Friedman and Willard Miller for their help with the organization, on short notice, of the special day of the meeting in memory of Ron DiPerna on Tuesday, April 4.
James Glimm
Andrew J. Majda
CONTENTS

Macroscopic limits of kinetic equations .......... Claude Bardos, François Golse and David Levermore
The essence of particle simulation of the Boltzmann equation .......... H. Babovsky and R. Illner
The approximation of weak solutions to the 2-D Euler equations by vortex elements .......... J. Thomas Beale
Limit behavior of approximate solutions to conservation laws .......... Chen Gui-Qiang
Modeling two-phase flow of reactive granular materials .......... Pedro F. Embid and Melvin R. Baer
Shocks associated with rotational modes .......... Heinrich Freistühler
Self-similar shock reflection in two space dimensions .......... Harland M. Glaz
Nonlinear waves: overview and problems .......... James Glimm
The growth and interaction of bubbles in Rayleigh-Taylor unstable interfaces .......... James Glimm, Xiao Lin Li, Ralph Menikoff, David H. Sharp and Qiang Zhang
Front tracking, oil reservoirs, engineering scale problems and mass conservation .......... James Glimm, Brent Lindquist and Qiang Zhang
Collisionless solutions to the four velocity Broadwell equations .......... J. M. Greenberg and Cleve Moler
Anomalous reflection of a shock wave at a fluid interface .......... John W. Grove and Ralph Menikoff
An application of connection matrix to magnetohydrodynamic shock profiles .......... Harumi Hattori and Konstantin Mischaikow
Convection of discontinuities in solutions of the Navier-Stokes equations for compressible flow .......... David Hoff
Nonlinear geometrical optics .......... John K. Hunter
Geometric theory of shock waves .......... Tai-Ping Liu
An introduction to front tracking .......... Christian Klingenberg and Bradley Plohr
One perspective on open problems in multi-dimensional conservation laws .......... Andrew J. Majda
Stability of multi-dimensional weak shocks .......... Guy Métivier
Nonlinear stability in non-Newtonian flows .......... J. A. Nohel, R. L. Pego and A. E. Tzavaras
A numerical study of shock wave refraction at a CO2/CH4 interface .......... Elbridge Gerry Puckett
An introduction to weakly nonlinear geometrical optics .......... Rodolfo R. Rosales
Numerical study of initiation and propagation of one-dimensional detonations .......... Victor Roytburd
Richness and the classification of quasilinear hyperbolic systems .......... Denis Serre
A case of singularity formation in vortex sheet motion studied by a spectrally accurate method .......... M. J. Shelley
The Goursat-Riemann problem for plane waves in isotropic elastic solids with velocity boundary conditions .......... T. C. T. Ting and Tankin Wang
MACROSCOPIC LIMITS OF KINETIC EQUATIONS

CLAUDE BARDOS, FRANÇOIS GOLSE* AND DAVID LEVERMORE†

Abstract. The connection between kinetic theory and the macroscopic equations of fluid dynamics is described. In particular, our results concerning the incompressible Navier-Stokes equation are compared with the classical derivation of Hilbert and Chapman-Enskog. Some indications of the validity of these limits are given. More specifically, the connection between the DiPerna-Lions renormalized solution for the Boltzmann equation and the Leray-Hopf solution for the Navier-Stokes equation is discussed.

I. Introduction. This paper is devoted to the connection between kinetic theory and macroscopic fluid dynamics. Formal limits are systematically derived and, in some cases, rigorous results are given concerning the validity of these limits. To do that, several scalings are introduced for standard kinetic equations of the form

(1)  $\partial_t F_\varepsilon + v\cdot\nabla_x F_\varepsilon = \frac{1}{\varepsilon}\,C(F_\varepsilon)$.

Here $F_\varepsilon(t,x,v)$ is a nonnegative function representing the density of particles with position $x$ and velocity $v$ in the single particle phase space $\mathbb{R}^3_x \times \mathbb{R}^3_v$ at time $t$. The interaction of particles through collisions is modelled by the operator $C(F)$; this operator acts only on the variable $v$ and is generally nonlinear. In section V the classical Boltzmann form of the operator will be considered. The connection between kinetic and macroscopic fluid dynamics results
from two types of properties of the collision operator: (i) conservation properties and an entropy relation that implies that the equilibria are Maxwellian distributions for the zeroth order limit; (ii) the derivative of $C(F)$ satisfies a formal Fredholm alternative with a kernel related to the conservation properties of (i).

The macroscopic limit is obtained when the fluid becomes dense enough that particles undergo many collisions over the scales of interest. This situation is described by the introduction of a small parameter $\varepsilon$, called the Knudsen number, that represents the ratio of the mean free path of particles between collisions to some characteristic length of the flow (e.g. the size of an obstacle). Properties (i) are sufficient to derive the compressible Euler equations from equation (1); they arise as the leading order dynamics from a formal expansion of $F_\varepsilon$ in $\varepsilon$ (the Chapman-Enskog or Hilbert expansion described briefly in section III). Properties (ii) are used to obtain the Navier-Stokes equations; they depend on a more detailed knowledge of the collision operator. The compressible Navier-Stokes equations arise as corrections to those of Euler at the next order in the Chapman-Enskog expansion. In a compressible fluid one also introduces the Mach number Ma, which is the ratio of the bulk velocity to the sound speed, and the Reynolds number Re, which is

*Département de Mathématiques, Université Paris VII, 75251 Paris Cedex 05, France
†Department of Mathematics, University of Arizona, Tucson, Arizona 85721, USA
a dimensionless reciprocal viscosity of the fluid. These numbers (cf. [LL] and [BGL]) are related by the formula

(2)  $\varepsilon = \mathrm{Ma}/\mathrm{Re}$.

Our main contribution concerns the incompressible limit; due to relation (2) it is the only case where one obtains, when $\varepsilon$ goes to zero, an equation with a finite Reynolds number. This is the only regime where global weak solutions of fluid dynamic equations are known to exist. Related results have been obtained simultaneously by A. De Masi, R. Esposito and J. L. Lebowitz [DMEL]. Our considerations on the relation between the renormalized solution of the Boltzmann equation and the Leray [L] solution of the Navier-Stokes equations rely on the pioneering work of DiPerna and Lions [DiPL], giving one more example of the importance of Ron DiPerna's influence in our community.

II. The Compressible Euler Limit. In this section the integral of any scalar or vector valued function $f(v)$ with respect to the variable $v$ will be denoted by $\langle f\rangle$:

$\langle f\rangle = \int f(v)\,dv$.
The operator $C$ is assumed to satisfy the conservation properties

(3)  $\langle C(F)\rangle = 0, \quad \langle v\,C(F)\rangle = 0, \quad \langle |v|^2\,C(F)\rangle = 0$.
These relations represent the physical laws of mass, momentum and energy conservation during collisions and imply the local conservation laws

(4)  $\partial_t\langle F\rangle + \nabla_x\cdot\langle vF\rangle = 0, \quad \partial_t\langle vF\rangle + \nabla_x\cdot\langle v\otimes v\,F\rangle = 0$,

(5)  $\partial_t\langle \tfrac12|v|^2 F\rangle + \nabla_x\cdot\langle v\,\tfrac12|v|^2 F\rangle = 0$.
Additionally, $C(F)$ is assumed to have the property that the quantity $\langle C(F)\log F\rangle$ is nonpositive. This is the entropy dissipation rate and implies the local entropy inequality

(6)  $\partial_t\langle F\log F\rangle + \nabla_x\cdot\langle vF\log F\rangle = \langle C(F)\log F\rangle \le 0$.
Finally, the equilibria of $C(F)$ are assumed to be characterized by the vanishing of the entropy dissipation rate and given by the class of Maxwellian distributions, i.e. those of the form

(7)  $M_{\rho,u,\theta}(v) = \frac{\rho}{(2\pi\theta)^{3/2}}\exp\Bigl(-\frac{|v-u|^2}{2\theta}\Bigr)$.

More precisely, for every nonnegative measurable function $F$ the following properties are equivalent:

(8)  $\langle C(F)\log F\rangle = 0$;  $C(F) = 0$;  $F$ is a Maxwellian with the form (7).
These assumptions about $C(F)$ merely abstract some of the consequences of Boltzmann's celebrated H-theorem. The parameters $\rho$, $u$ and $\theta$ introduced in the right side of (7) are related to the fluid dynamic moments giving the mass, momentum and energy densities:

$\langle F\rangle = \rho, \quad \langle vF\rangle = \rho u, \quad \langle \tfrac12|v|^2 F\rangle = \tfrac12\rho|u|^2 + \tfrac32\rho\theta$.
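These moment identities are easy to verify numerically, since a Maxwellian is just $\rho$ times a Gaussian density with mean $u$ and covariance $\theta I$. The following sketch (ours; the values of $\rho$, $u$, $\theta$ are arbitrary illustrations, not from the paper) estimates the moments by sampling:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative fluid parameters: density, bulk velocity, temperature.
rho, u, theta = 2.0, np.array([0.3, -0.1, 0.5]), 1.7

# For F = M_{rho,u,theta} we have <g F> = rho * E[g(V)] with V ~ N(u, theta*I).
V = u + np.sqrt(theta) * rng.standard_normal((1_000_000, 3))

print(rho)                                      # <F> = rho
print(rho * V.mean(axis=0))                     # <v F> = rho * u
print(rho * 0.5 * (V**2).sum(axis=1).mean())    # <|v|^2 F>/2 by Monte Carlo
print(0.5 * rho * (u @ u) + 1.5 * rho * theta)  # = rho|u|^2/2 + (3/2) rho theta
```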
They are called respectively the (mass) density, velocity and temperature of the fluid. In the compressible Euler limit, these variables are shown to satisfy the system of compressible Euler equations ((11) below). The main obstruction to proving the validity of this fluid dynamical limit is the fact that solutions of the compressible Euler equations generally become singular after a finite time (cf. Sideris [S]). Therefore any global (in time) convergence proof cannot rely on uniform regularity estimates. The only reasonable assumptions would be that the limiting distribution exists and that the relevant moments converge pointwise. With this hypothesis, it is shown that the above assumptions regarding $C(F)$ imply that the fluid dynamic moments of solutions converge to a solution of the Euler equations that satisfies the macroscopic entropy inequality.

THEOREM I. Given a collision operator $C$ with properties (i), let $F_\varepsilon(t,x,v)$ be a sequence of nonnegative solutions of the equation

(9)  $\partial_t F_\varepsilon + v\cdot\nabla_x F_\varepsilon = \frac{1}{\varepsilon}\,C(F_\varepsilon)$

such that, as $\varepsilon$ goes to zero, $F_\varepsilon$ converges almost everywhere to a nonnegative function $F$. Moreover, assume that the moments

$\langle F_\varepsilon\rangle, \quad \langle vF_\varepsilon\rangle, \quad \langle v\otimes v\,F_\varepsilon\rangle$

converge in the sense of distributions to the corresponding moments

$\langle F\rangle, \quad \langle vF\rangle, \quad \langle v\otimes v\,F\rangle$;

the entropy densities and fluxes converge in the sense of distributions according to

$\lim_{\varepsilon\to 0}\langle F_\varepsilon\log F_\varepsilon\rangle = \langle F\log F\rangle, \quad \lim_{\varepsilon\to 0}\langle vF_\varepsilon\log F_\varepsilon\rangle = \langle vF\log F\rangle$;

while the entropy dissipation rates satisfy

$\limsup_{\varepsilon\to 0}\langle C(F_\varepsilon)\log F_\varepsilon\rangle \le \langle C(F)\log F\rangle$.
Then the limit $F(t,x,v)$ is a Maxwellian distribution,

(10)  $F(t,x,v) = \frac{\rho(t,x)}{(2\pi\theta(t,x))^{3/2}}\exp\Bigl(-\frac{|v-u(t,x)|^2}{2\theta(t,x)}\Bigr)$,

where the functions $\rho$, $u$ and $\theta$ solve the compressible Euler equations

(11)  $\partial_t\rho + \nabla_x\cdot(\rho u) = 0$,
      $\partial_t(\rho u) + \nabla_x\cdot(\rho u\otimes u) + \nabla_x(\rho\theta) = 0$,
      $\partial_t\bigl(\tfrac12\rho|u|^2 + \tfrac32\rho\theta\bigr) + \nabla_x\cdot\bigl(u(\tfrac12\rho|u|^2 + \tfrac52\rho\theta)\bigr) = 0$,

and satisfy the entropy inequality

(12)  $\partial_t\Bigl(\rho\log\frac{\rho}{\theta^{3/2}}\Bigr) + \nabla_x\cdot\Bigl(u\,\rho\log\frac{\rho}{\theta^{3/2}}\Bigr) \le 0$.
Remark 1. The above theorem shows that any type of equation of the form (9) leads to the compressible Euler equations with a pressure $p$ given by the ideal gas law $p = \rho\theta$ and an internal energy of $\tfrac32\rho\theta$ (corresponding to a $\gamma$-law perfect gas with $\gamma = 5/3$). This is a consequence of the fact that the kinetic equation considered here describes a monatomic fluid in a three dimensional domain. Other equations of state may be obtained by introducing additional degrees of freedom that take into account the rotational and vibrational modes of the particles.

The proof of this theorem, as well as those of subsequent ones, can be found in our paper [BGL2].
III. The Compressible Navier-Stokes Limit. As has been noticed above, the form of the limiting Euler equations is independent of the choice of the collision operator $C$ within the class of operators satisfying the conservation and entropy properties. The choice of the collision operator appears at the macroscopic level only in the construction of the Navier-Stokes limit. The compressible Navier-Stokes equations are obtained by the classical Chapman-Enskog expansion. To compare this approach with the situation leading to the incompressible Navier-Stokes equations, a short description of this approach is given below. Given $(\rho, u, \theta)$, denote the corresponding Maxwellian distribution by

(13)  $M_{\rho,u,\theta}(v) = \frac{\rho}{(2\pi\theta)^{3/2}}\exp\Bigl(-\frac{|v-u|^2}{2\theta}\Bigr)$.

The subscript $(\rho,u,\theta)$ will often be omitted when it is convenient. Introduce the Hilbert space $L^2_M$ defined by the scalar product

(14)  $(f|g)_M = \langle fg\rangle_M = \langle fgM\rangle$.

Denote by $L$ and $Q$ the first two Fréchet derivatives of the operator $C$ at the Maxwellian $M$:

(15)  $Lg = M^{-1}DC(M)\cdot(Mg), \qquad Q(g,g) = \tfrac12\,M^{-1}D^2C(M):(Mg\otimes Mg)$.

Taylor's formula then gives
(16)  $M^{-1}C\bigl(M(1+g)\bigr) = Lg + Q(g,g) + O(g^3)$.

The linear operator $L$ is assumed to be self-adjoint and to satisfy a Fredholm alternative in the space $L^2_M$ with a five dimensional kernel spanned by the functions $\{1, v_1, v_2, v_3, |v|^2\}$. The fact that $L$ must be nonpositive definite follows directly from examining the second variation of the entropy dissipation rate at $M$. Denote by $V$, $A = \{A_i\}$ and $B = \{B_{ij}\}$ the following vectors and tensors:

$V = \frac{v-u}{\sqrt{\theta}}, \qquad A(V) = \frac{|V|^2-5}{2}\,V, \qquad B(V) = V\otimes V - \frac{|V|^2}{3}\,I$.

By symmetry, the functions $A_i$ and $B_{ij}$ are orthogonal to the kernel of $L$; therefore the equations

(17)  $L(\hat A) = A, \qquad L(\hat B) = B$

have unique solutions in $\mathrm{Ker}(L)^\perp$. Assume (this would be a consequence of rotational invariance for the collision operator) that these solutions are given by the formulas

(18)  $\hat A = -\alpha(\rho,\theta,|V|)\,A(V), \qquad \hat B = -\beta(\rho,\theta,|V|)\,B(V)$,

where $\alpha$ and $\beta$ are positive functions depending on $\rho$, $\theta$ and $|V|$. If $C(F)$ is homogeneous of degree two (for example, quadratic) then a simple scaling shows that the $\rho$ dependence of $\alpha$ and $\beta$ is just
proportionality to $\rho^{-1}$. A function $H_\varepsilon(t,x,v)$ is said to be an approximate solution of order $p$ to the kinetic equation (1) if

(19)  $\partial_t H_\varepsilon + v\cdot\nabla_x H_\varepsilon - \frac{1}{\varepsilon}\,C(H_\varepsilon) = O(\varepsilon^p)$,

where $O(\varepsilon^p)$ denotes a term bounded by $\varepsilon^p$ in some convenient norm. An approximate solution of order two can be constructed in the form

(20)  $H_\varepsilon = M_{\rho_\varepsilon,u_\varepsilon,\theta_\varepsilon}\,(1 + \varepsilon g_\varepsilon + \varepsilon^2 w_\varepsilon)$,

where $(\rho_\varepsilon, u_\varepsilon, \theta_\varepsilon)$ solve the compressible Navier-Stokes equations with dissipation of the order $\varepsilon$ (denoted CNSE$_\varepsilon$):

(21)  $\partial_t\rho_\varepsilon + \nabla_x\cdot(\rho_\varepsilon u_\varepsilon) = 0$,
      $\rho_\varepsilon(\partial_t + u_\varepsilon\cdot\nabla_x)u_\varepsilon + \nabla_x(\rho_\varepsilon\theta_\varepsilon) = \varepsilon\,\nabla_x\cdot\bigl(\mu_\varepsilon\,\sigma(u_\varepsilon)\bigr)$,
      $\tfrac32\rho_\varepsilon(\partial_t + u_\varepsilon\cdot\nabla_x)\theta_\varepsilon + \rho_\varepsilon\theta_\varepsilon\,\nabla_x\cdot u_\varepsilon = \tfrac{\varepsilon}{2}\,\mu_\varepsilon\,\sigma(u_\varepsilon):\sigma(u_\varepsilon) + \varepsilon\,\nabla_x\cdot(\kappa_\varepsilon\nabla_x\theta_\varepsilon)$.

In these equations $\sigma(u)$ denotes the strain-rate tensor given by

$\sigma(u) = \nabla_x u + (\nabla_x u)^T - \tfrac23(\nabla_x\cdot u)\,I$,

while the viscosity $\mu_\varepsilon = \mu(\rho_\varepsilon,\theta_\varepsilon)$ and the thermal diffusivity $\kappa_\varepsilon = \kappa(\rho_\varepsilon,\theta_\varepsilon)$ are given by

(22)  $\mu(\rho,\theta) = \tfrac{1}{10}\,\theta\,\bigl\langle \beta(\rho,\theta,|V|)\,|B(V)|^2\bigr\rangle_M, \qquad \kappa(\rho,\theta) = \tfrac32\,\theta\,\bigl\langle \alpha(\rho,\theta,|V|)\,|A(V)|^2\bigr\rangle_M$;

carrying out the angular integrations reduces these to one-dimensional Gaussian integrals in $r = |V|$, weighted by $\beta(\rho,\theta,r)\,r^6 e^{-r^2/2}$ for $\mu$ and by $\alpha(\rho,\theta,r)\,(r^2-5)^2 r^4 e^{-r^2/2}$ for $\kappa$.

Notice that in the case where $C(F)$ is homogeneous of degree two, the left sides become independent of $\rho$; this is why classical expressions for the viscosity and thermal diffusivity depend only on $\theta$.
The Chapman-Enskog derivation can be formulated according to the following.

THEOREM II. Assume that $(\rho_\varepsilon, u_\varepsilon, \theta_\varepsilon)$ solve the CNSE$_\varepsilon$ with the viscosity $\mu(\rho,\theta)$ and thermal diffusivity $\kappa(\rho,\theta)$ given by (22). Then there exist $g_\varepsilon$ and $w_\varepsilon$ in $\mathrm{Ker}(L)^\perp$ such that $H_\varepsilon$, given by (20), is an approximate solution of order two to equation (1). Moreover, $g_\varepsilon$ is given by the formula (23), which expresses it explicitly in terms of $\hat A$, $\hat B$ and the gradients of $\theta_\varepsilon$ and $u_\varepsilon$.

Remark 2. Let $F_\varepsilon$ be a solution of the kinetic equation that coincides with a local Maxwellian at $t = 0$. Let $(\rho_\varepsilon, u_\varepsilon, \theta_\varepsilon)$ be the solution of the CNSE$_\varepsilon$ with initial data equal to the corresponding moments of $F_\varepsilon(0,x,v)$. Then the expression given by

$H_\varepsilon = \frac{1}{(2\pi\theta_\varepsilon(t,x))^{3/2}}\exp\Bigl(-\frac12\,\frac{|v-u_\varepsilon(t,x)|^2}{\theta_\varepsilon(t,x)}\Bigr)\bigl(\rho_\varepsilon(t,x) + \varepsilon g_\varepsilon + \varepsilon^2 w_\varepsilon\bigr)$

is an approximation of order two of $F_\varepsilon$. Since $Mg_\varepsilon$ is orthogonal to the functions $1$, $v$, $|v|^2$, the quantities $\rho_\varepsilon$, $\rho_\varepsilon u_\varepsilon$ and $\tfrac12\rho_\varepsilon(|u_\varepsilon|^2 + 3\theta_\varepsilon)$ provide approximations of order two to the corresponding moments of $F_\varepsilon$. In fact, this observation was used to do the Chapman-Enskog derivation by the so-called projection method (cf. Caflisch
[C]).

IV. The Incompressible Navier-Stokes Limit. The purpose of this section is to construct a connection between the kinetic equation and the incompressible Navier-Stokes equations. As in the previous section, this will describe the range of parameters for which the incompressible Navier-Stokes equations provide a good approximation to the solution of the Boltzmann equation. However in this case the connection is drawn between the Boltzmann equation and macroscopic fluid dynamic equations with a finite Reynolds number. It is clear from formula (2), $\varepsilon = \mathrm{Ma}/\mathrm{Re}$, that in order to obtain a fluid dynamic regime (corresponding to a vanishing Knudsen number) with a finite Reynolds number, the Mach number must vanish (cf. [LL] or [BGL1]). In order to realize distributions with a small Mach number it is natural to consider them as perturbations about a given absolute Maxwellian (constant in space and time). By the proper choice of Galilean frame and dimensional units this absolute Maxwellian can be taken to have velocity equal to 0, and density and temperature equal to 1; it will be denoted by $M$. The initial data $F_\varepsilon(0,x,v)$ is assumed to be close to $M$, where the order of the distance will be measured with the Knudsen number. Furthermore, if the flow is to be incompressible, the kinetic energy of the flow in the acoustic modes must be smaller than that in the rotational modes. Since the acoustic modes vary on a faster timescale than rotational modes, they may be suppressed by assuming that the initial data is consistent with motion on a slow timescale; this scale separation will also be measured with the Knudsen number. This scaling is quantified by the introduction of a small parameter $\varepsilon$ such that the timescale considered is of order $\varepsilon^{-1}$, the Knudsen number is of order $\varepsilon^q$, and the distance to the absolute Maxwellian $M$ is of order $\varepsilon^r$, with $q$ and $r$ being greater than or equal to one. Thus, solutions $F_\varepsilon$ to the equation

(24)  $\varepsilon\,\partial_t F_\varepsilon + v\cdot\nabla_x F_\varepsilon = \frac{1}{\varepsilon^q}\,C(F_\varepsilon)$

are sought in the form
(25)  $F_\varepsilon = M\,(1 + \varepsilon^r g_\varepsilon)$.

The basic case $r = q = 1$ is the unique scaling compatible with the usual incompressible Navier-Stokes equations. The notation introduced in the previous section regarding the collision operator and its Fréchet derivatives is conserved, but here the Maxwellian $M$ is absolute, so that $L$ and $Q$ no longer depend on the fluid variables.

THEOREM III. Let $F_\varepsilon(t,x,v)$ be a sequence of nonnegative solutions to the scaled kinetic equation (24) such that, when it is written according to formula (25), the sequence $g_\varepsilon$ converges in the sense of distributions and almost everywhere to a function $g$ as $\varepsilon$ goes to zero. Furthermore, assume the moments

$\langle L^{-1}(A(v))\,g_\varepsilon\rangle_M$,  $\langle L^{-1}(A(v))\otimes v\,g_\varepsilon\rangle_M$,  $\langle L^{-1}(A(v))\,Q(g_\varepsilon,g_\varepsilon)\rangle_M$,
$\langle L^{-1}(B(v))\,g_\varepsilon\rangle_M$,  $\langle L^{-1}(B(v))\otimes v\,g_\varepsilon\rangle_M$,  $\langle L^{-1}(B(v))\,Q(g_\varepsilon,g_\varepsilon)\rangle_M$

converge in $\mathcal{D}'(\mathbb{R}^+_t\times\mathbb{R}^3_x)$ to the corresponding moments

$\langle L^{-1}(A(v))\,g\rangle_M$,  $\langle L^{-1}(A(v))\otimes v\,g\rangle_M$,  $\langle L^{-1}(A(v))\,Q(g,g)\rangle_M$,
$\langle L^{-1}(B(v))\,g\rangle_M$,  $\langle L^{-1}(B(v))\otimes v\,g\rangle_M$,  $\langle L^{-1}(B(v))\,Q(g,g)\rangle_M$.

Then the limiting $g$ has the form

(26)  $g = \rho + u\cdot v + \theta\,\frac{|v|^2-3}{2}$,

where the velocity $u$ is divergence free and the density and temperature fluctuations $\rho$ and $\theta$ satisfy the Boussinesq relation

(27)  $\nabla_x\cdot u = 0, \qquad \nabla_x(\rho + \theta) = 0$.

Moreover, the functions $\rho$, $u$ and $\theta$ are weak solutions of

(28)  $\partial_t u + u\cdot\nabla_x u + \nabla_x p = \mu^*\,\Delta_x u, \quad \partial_t\theta + u\cdot\nabla_x\theta = \kappa^*\,\Delta_x\theta$,  for $r = 1$, $q = 1$;

(29)-(30)  $\partial_t u + u\cdot\nabla_x u + \nabla_x p = 0, \quad \partial_t\theta + u\cdot\nabla_x\theta = 0$,  for $r = 1$, $q > 1$;

(31)  $\partial_t u = 0, \quad \partial_t\theta = 0$,  for $r > 1$.

In these equations the expressions $\mu^*$ and $\kappa^*$ denote the function values $\mu(1,1)$ and $\kappa(1,1)$ obtained from (22) in the previous section.
Remark 3. The equation (31) is completely trivial; it corresponds to a situation where the initial fluctuations and the Knudsen number are too small to produce any evolution over the timescale selected. However, this limit would be nontrivial if it corresponded to a timescale on which an external potential force acts on the system (Bardos, Golse, Levermore [BGL1]).

Remark 4. The temperature equations in (28)-(30) do not contain a viscous heating term $\mu^*\sigma(u):\sigma(u)$ such as appears in the CNSE$_\varepsilon$:

(32)  $\tfrac32\rho_\varepsilon(\partial_t + u_\varepsilon\cdot\nabla_x)\theta_\varepsilon + \rho_\varepsilon\theta_\varepsilon\,\nabla_x\cdot u_\varepsilon = \tfrac{\varepsilon}{2}\,\mu_\varepsilon\,\sigma(u_\varepsilon):\sigma(u_\varepsilon) + \varepsilon\,\nabla_x\cdot(\kappa_\varepsilon\nabla_x\theta_\varepsilon)$.

This is consistent with the scaling used here when it is applied directly to the CNSE$_\varepsilon$ to derive the incompressible Navier-Stokes equations. More precisely, with the change of variables

(33)  $\rho_\varepsilon(x,t) = \rho_0 + \varepsilon\,\rho(\varepsilon t, x), \quad u_\varepsilon(x,t) = \varepsilon\,u(\varepsilon t, x), \quad \theta_\varepsilon(x,t) = \theta_0 + \varepsilon\,\theta(\varepsilon t, x)$,

the system (27), (28) is obtained for $\rho(t,x)$, $u(t,x)$ and $\theta(t,x)$ as $\varepsilon$ vanishes. In this derivation every term of the last equation of (32) is of the order $\varepsilon^2$ except $\tfrac{\varepsilon}{2}\mu_\varepsilon\sigma(u_\varepsilon):\sigma(u_\varepsilon)$, which is of order three. The viscous heating term would have appeared in the limiting temperature equation had the scaling in (33) been chosen with the density and temperature fluctuations of order $\varepsilon^2$ [BLP].

Remark 5. In the case where $q = r = 1$, a system is obtained that has some structure in common with a diffusion approximation. A formal expansion for $g_\varepsilon$, the solution of the equation

(34)  $\varepsilon\,\partial_t g_\varepsilon + v\cdot\nabla_x g_\varepsilon = \frac{1}{\varepsilon}\,L g_\varepsilon + Q(g_\varepsilon, g_\varepsilon)$,

can be constructed in the form

(35)  $g_\varepsilon = g + \varepsilon g_1 + \varepsilon^2 g_2 + \cdots$.

This approach is related to the method of the previous section and to the work of De Masi, Esposito and Lebowitz.
V. Remarks Concerning the Proof of the Fluid Dynamical Limit. In this section the collision operator is given by the classical Boltzmann formula,

(36)  $C_B(F) = \int_{\mathbb{R}^3\times S^2}\bigl(F(v')F(v_1') - F(v_1)F(v)\bigr)\,b(v_1 - v, \omega)\,d\omega\,dv_1$,

where $\omega$ ranges over the unit sphere, $v_1$ over the three dimensional velocity space, and $b(v_1-v,\omega)$ is a smooth function; $v'$ and $v_1'$ are given in terms of $v$, $v_1$ and $\omega$ by the classical relations of conservation of mass, momentum and energy (cf. [CC]).

Any proof concerning the fluid dynamical limit for a kinetic model will, as a by-product, give an existence proof for the corresponding macroscopic equation. However, up to now no new result has been obtained by this type of method. Uniform regularity estimates would likely be needed in order to obtain the limit of the nonlinear term. These estimates, if they exist, must be sharp, because it is known (and is proved by Sideris [S] for a very general situation) that the solutions of the compressible nonlinear Euler equations become singular after a finite time. In agreement with these observations, and in the absence of boundary layers (full space or periodic domain), the following theorems are proved:

i) Existence and uniqueness of the solution to the CNSE$_\varepsilon$ for a finite time that depends on the size of the initial data, provided the initial data is smooth enough (say in $H^s$ with $s > 3/2$). This time of existence is independent of $\varepsilon$, and when $\varepsilon$ goes to zero the solution converges to a solution of the compressible Euler equations.

ii) Global (in time) existence of a smooth solution (cf. [KMN]) to the CNSE$_\varepsilon$ provided the initial data is small enough with respect to $\varepsilon$.

These two points have their counterparts at the level of the Boltzmann equation:

i) Existence and uniqueness (under stringent smallness assumptions) during a finite time independent of the Knudsen number, as proved by Nishida [N] (cf. also Caflisch [C]). When the Knudsen number goes to zero this solution converges to a local thermodynamic equilibrium solution governed by the compressible Euler equations.

ii) Global existence for the solution to the Boltzmann equation provided the initial data is small enough with respect to the Knudsen number.

Concerning a proof of existence, the situation for the incompressible Euler equations in three space variables is similar; their solution (defined during a finite time) is the limit of a sequence of corresponding incompressible Navier-Stokes solutions with viscosities of the order of $\varepsilon$ that remains uniformly smooth over a time interval independent of $\varepsilon$. However, there are two other types of results concerning weak solutions. First, the global existence of weak solutions to the incompressible Navier-Stokes equations has been proved by Leray [L]. Second, using a method with many similarities to Leray's, R. DiPerna and P.-L. Lions [DiPL] have proved the global existence of a weak solution to a class of normalized Boltzmann equations, their so-called renormalized solution. This solution exists without assumptions concerning the size of the initial data with respect to the Knudsen number. Such a result also holds for the equation

(37)  $\varepsilon\,\partial_t F_\varepsilon + v\cdot\nabla_x F_\varepsilon = \frac{1}{\varepsilon^q}\,C_B(F_\varepsilon)$

over a periodic spatial domain $T^3$. The situation concerning the convergence to fluid dynamical limits (with $\varepsilon$ going to zero) of solutions of the Boltzmann equation (37) with initial data of the form

(38)  $F_\varepsilon(0,x,v) = M\,(1 + \varepsilon^r g_0(x,v))$

continues to reflect this similarity. Following Nishida [N], it can be shown that for smooth initial data (indeed very smooth) the solution of (37) is smooth for a time on the order of $\varepsilon^{1-r}$. For $r = 1$ this time turns out to be independent of $\varepsilon$, and during this time the solution converges (in the sense of Theorem III) to the solution of the incompressible Euler equations when $q > 1$, or to the solution of the incompressible Navier-Stokes equations when $q = 1$. For $r > 1$ the solution is regular during a time that goes to infinity as $\varepsilon$ vanishes; in this situation it converges to the solution of the linearized Navier-Stokes equations when $q = 1$, or to the solution of the linearized Euler equations when $q > 1$. The borderline consists of the case $r = q = 1$. In this case it is natural to conjecture that the DiPerna-Lions renormalized solutions of the Boltzmann equation converge (for all time and with no restriction on the size of the initial data) to a Leray solution of the incompressible Navier-Stokes equations. However, our proof of this result is incomplete without some additional compactness assumptions (cf. [BGL3]). Leray's proof relies on the energy estimate
$\frac12\int |u(t)|^2\,dx + \nu\int_0^t\!\!\int |\nabla_x u|^2\,dx\,ds \le \frac12\int |u(0)|^2\,dx$.

For the Boltzmann equation the classical entropy estimate plays an analogous role in the proof of DiPerna and Lions. The entropy integrand can be modified by the addition of an arbitrary conserved density; the form chosen here is well suited for comparing $F_\varepsilon$ with the absolute Maxwellian $M$:

$\iint\Bigl(F_\varepsilon(t)\log\frac{F_\varepsilon(t)}{M} - F_\varepsilon(t) + M\Bigr)dv\,dx + \int_0^t\!\!\int D_\varepsilon\,dx\,ds \le \iint\Bigl(F_\varepsilon(0)\log\frac{F_\varepsilon(0)}{M} - F_\varepsilon(0) + M\Bigr)dv\,dx$,

where $D_\varepsilon$ is the entropy dissipation term given by
$D_\varepsilon = \frac14\int\bigl(F'_\varepsilon F'_{\varepsilon 1} - F_\varepsilon F_{\varepsilon 1}\bigr)\log\frac{F'_\varepsilon F'_{\varepsilon 1}}{F_\varepsilon F_{\varepsilon 1}}\;b\,d\omega\,dv_1\,dv$.

Here $F'_{\varepsilon 1}$, $F'_\varepsilon$, $F_{\varepsilon 1}$ and $F_\varepsilon$ denote the values of $F_\varepsilon$ at the velocities $v_1'$, $v'$, $v_1$ and $v$ respectively.

THE ESSENCE OF PARTICLE SIMULATION OF THE BOLTZMANN EQUATION

H. BABOVSKY AND R. ILLNER

In the collision term (2), $S^2_+ = \{n \in S^2 : n\cdot(v-w) > 0\}$, $\theta$ is the acute angle between $n$ and $v-w$,

$v' = v - n\,(n\cdot(v-w)), \qquad w' = w + n\,(n\cdot(v-w))$

are the post-collisional velocities associated with the (ingoing) collision configuration $(v,w,n)$, and $k(|v-w|,\theta)$ is the collision kernel (for hard spheres, $k(|v-w|,\theta) = |v-w|\cdot\cos\theta$). $\lambda$ is proportional to the mean free path between collisions; for the rest of this article, we set $\lambda = 1$. The integration in (2) is 5-dimensional, and this is the second reason why particle simulation is a sensible way to solve the Boltzmann equation numerically: except in isotropic situations, where $Q(f,f)$ could be evaluated with low-dimensional integrals, it would be just too inefficient to evaluate (2) by quadrature formulas. Monte Carlo simulation is a well-known alternative, and we shall see that it arises quite naturally (but not necessarily) in particle simulation.
2. Reduction of the Boltzmann Equation. We now go through a series of fairly elementary steps which will reduce the Boltzmann equation to a form which is readily accessible to an approximation by point measures (particle simulation). These are: a) time discretization, b) separation of free flow and interaction ("splitting"), c) local homogenization, d) weak formulation, e) measure formulation.

To start, suppose that a particle at $(x,v)$ in state space will go to $\phi_t(x,v)$ after time $t$, provided there is no collision with another particle (if the particle does not interact with the boundary of the confining container, $\phi_t(x,v) = (x+tv, v)$; otherwise, we assume that the trajectory is defined by some reasonable deterministic boundary condition, like specular reflection). Choose a time step $\Delta t > 0$; then a first order discrete counterpart to the derivative along trajectories is

(3)  $\frac{1}{\Delta t}\Bigl(f\bigl((j+1)\Delta t,\ \phi_{\Delta t}(x,v)\bigr) - f\bigl(j\Delta t,\ (x,v)\bigr)\Bigr)$.

Substitute (3) for the left hand side in (1), let $(y,w) = \phi_{\Delta t}(x,v)$ and evaluate. The result is

(4)  $f\bigl((j+1)\Delta t, y, w\bigr) = f\bigl(j\Delta t,\ \phi_{-\Delta t}(y,w)\bigr) + \Delta t\,Q(f,f)\bigl(j\Delta t,\ \phi_{-\Delta t}(y,w)\bigr)$.

The discretized equation (4) suggests to split the approximation into the collision simulation

(5)  $\tilde f\bigl((j+1)\Delta t, y, w\bigr) = f(j\Delta t, y, w) + \Delta t\,Q(f,f)(j\Delta t, y, w)$

and the free flow step

(6)  $f\bigl((j+1)\Delta t, y, w\bigr) = \tilde f\bigl((j+1)\Delta t,\ \phi_{-\Delta t}(y,w)\bigr)$.

The numerical simulation of (6) will be obvious once we have understood the collision simulation; we therefore focus on that. Notice that the position variable $y$ is not operated on in (5): effectively, (5) is a discretized version of the spatially homogeneous Boltzmann equation.

To continue, we have to introduce a concept of "spatial cell" which already Boltzmann used in the classical derivation of his equation (his cell was just defined by $(x, x+dx)$, $(y, y+dy)$ etc.). The key idea is that such a cell is small from a macroscopic point of view, but large enough to contain many particles, and certainly large enough to keep a collision count for any particle in the cell during $\Delta t$ ($dt$) by just counting collisions with other particles in the same cell. In this collision count, the spatial variation of the gas density over the cell is neglected, i.e. spatial homogeneity over the cell is assumed (in fact, the numerical procedures we are about to describe allow one to keep the exact positions of the approximating particles, but the collision partners needed for the collision simulation are assumed to be homogeneously distributed in each cell; see [2]). Specifically, suppose that the gas in question is confined to a container $\Lambda \subset \mathbb{R}^3$, and that this container is partitioned into cells by

$\Lambda = \bigcup_i C_i, \qquad C_i \cap C_j = \emptyset \ (i \ne j)$,

and we assume that the cells are such that $f(j\Delta t, \cdot)$ is on $C_i$ well approximated by its homogenization (we replace $f(j\Delta t,\cdot)$ by its homogenization, but keep writing $f(j\Delta t,\cdot)$). If $f(j\Delta t,\cdot)$ is locally homogeneous in this sense, so is $\tilde f((j+1)\Delta t,\cdot)$; however, the free flow step will destroy the homogeneity, and one has to homogenize again before the next collision simulation. We note that the cells need not be the same size and that the partition of $\Lambda$ can actually be changed with time (refined, for example) to gain better adjustment to the hypothesis of local homogeneity.
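Steps a)-c) already suggest a workable simulation loop: advect particles freely for $\Delta t$, then perform a spatially homogeneous collision step cell by cell. The sketch below is ours, with a deliberately toy collision rule standing in for the Monte Carlo procedure derived below; `free_flow`, `collision_step` and `collide` are our names, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(3)

def free_flow(x, v, dt, box):
    """Ballistic transport x -> x + dt*v with specular reflection at the walls
    of the box [0, box[0]] x [0, box[1]] x [0, box[2]]."""
    x = x + dt * v
    for d in range(3):
        over, under = x[:, d] > box[d], x[:, d] < 0.0
        x[over, d] = 2 * box[d] - x[over, d]
        x[under, d] = -x[under, d]
        v[over | under, d] *= -1.0
    return x, v

def collide(vi, vj, dt):
    """Toy collision rule: with probability ~ dt*|vi - vj| (truncated at 1),
    rotate the relative velocity to a random direction. This conserves
    momentum and energy, mimicking a hard-sphere interaction."""
    g = vi - vj
    gn = np.linalg.norm(g)
    if rng.random() < min(1.0, dt * gn):
        n = rng.standard_normal(3)
        n /= np.linalg.norm(n)
        vc = 0.5 * (vi + vj)
        return vc + 0.5 * gn * n, vc - 0.5 * gn * n
    return vi, vj

def collision_step(x, v, dt, box, n_cells):
    """Spatially homogeneous collisions inside each cell: particles are paired
    at random with partners from the same cell, per the local homogenization."""
    cells = np.clip((x[:, 0] / box[0] * n_cells).astype(int), 0, n_cells - 1)
    for c in range(n_cells):
        idx = rng.permutation(np.flatnonzero(cells == c))
        for i, j in zip(idx[0::2], idx[1::2]):
            v[i], v[j] = collide(v[i], v[j], dt)
    return v

# Splitting loop: collision simulation (5) followed by the free flow step (6).
box, dt, n_cells = np.array([1.0, 1.0, 1.0]), 0.01, 8
x, v = rng.random((2000, 3)) * box, rng.standard_normal((2000, 3))
for _ in range(100):
    v = collision_step(x, v, dt, box, n_cells)
    x, v = free_flow(x, v, dt, box)
print(v.mean(axis=0), (v**2).sum(axis=1).mean())  # momentum and energy stats
```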
We have now reduced the Boltzmann equation to the remaining key question of doing the collision simulation (5) on an arbitrary but fixed cell $C_i$, where $f(j\Delta t,\cdot)$ is supposed to be independent of $y$. To simplify notation, we write $f_j(v)$ for $f(j\Delta t, y, v)$ and $f_{j+1}(v)$ for $\tilde f((j+1)\Delta t, y, v)$. Then (5) reads, explicitly,

(7)  $f_{j+1}(v) = \Bigl(1 - \Delta t\iint k(|v-w|,\theta)\,dn\,f_j(w)\,dw\Bigr)f_j(v) + \Delta t\iint k(|v-w|,\theta)\,f_j(v')f_j(w')\,dn\,dw$.

We next make the crucial assumption that there is an $A > 0$ such that

(8)  $\int k(|v-w|,\theta)\,dn \le A < \infty$

for all $v, w$. Unfortunately, this means that $k$ has to be truncated even for the hard sphere case; a little thought shows that we have to modify $k$ for large $|v-w|$, i.e. some of the collisions between particles with large relative velocity are neglected. Fortunately, for any reasonable gas cloud only few particles are affected.

Also, we renormalize $f_j(v)$ such that $\int f_j\,dw = 1$ (assuming that $\iint f_j(y,v)\,dv\,dy = \lambda^3(C_i)\int f_j(v)\,dv = \gamma_{j,i}$, this means that we have to replace $f_j$ by $\frac{\lambda^3(C_i)}{\gamma_{j,i}}f_j$; for this paper, we simply set $\gamma_{j,i}/\lambda^3(C_i) = 1$). Then, if $\Delta t < 1/A$, $f_j \ge 0$ implies that $f_{j+1} \ge 0$.
Thus the truncation (8) is necessary to keep the density nonnegative, an essential feature. This is an artifact of the explicit nature of our approximation scheme; (8) can be avoided by starting from an alternative formulation of the Boltzmann equation, but this would lead to serious problems later on.

The next step is a transition to a weak formulation of (7). To this end, multiply (7) with a test function $\varphi \in C_b(\mathbb{R}^3_v)$, integrate, and use the involutive property of the collision transformation together with $|v'-w'| = |v-w|$. The result is

(9)  $\int \varphi\,f_{j+1}(v)\,dv = \iint K_{v,w}\varphi\;f_j(v)f_j(w)\,dv\,dw$,

where

$K_{v,w}\varphi = \Bigl(1 - \Delta t\int k\,dn\Bigr)\varphi(v) + \Delta t\int k\,\varphi(v')\,dn$

(we have also used the normalization $\int f_j\,dw = 1$).

Finally, before we rewrite (9) in measure formulation, we introduce a convenient representation for $K_{v,w}\varphi$. Let $v$ and $w$ be given. Then we define a continuous function $T_{v,w}: S^2_+ \to \mathbb{R}^3$ by $T_{v,w}(n) = v'$. Moreover, let

$B_1 = \{y\in\mathbb{R}^2 : \|y\| \le 1/\sqrt{\pi}\}$

be the circle of area 1, and assume that $\Delta t \le 1/A$.

LEMMA 2.1. (see [1]) For all $v, w \in \mathbb{R}^3$, there is a continuous function $\phi_{v,w}: B_1 \to S^2_+$ such that

$K_{v,w}\varphi = \int_{B_1} \varphi\bigl(T_{v,w}(\phi_{v,w}(y))\bigr)\,dy$.

REMARKS AND SKETCH OF THE PROOF. This lemma, which is extremely useful for the sequel, is proved in detail in [1]. The function $\phi_{v,w}$ can actually be computed in terms of the collision kernel $k$. The purpose of the function $\phi_{v,w}$ is a) to decide whether the particles with velocities $v$ and $w$ collide at all, and b) if they collide, with what collision parameter. We refer to $y$ as the "generalized collision parameter".

The idea of the proof is as follows. We represent $B_1$ by polar coordinates as $\{(r,\mu): 0 \le r \le 1/\sqrt{\pi},\ 0 \le \mu < 2\pi\}$. There is an $r_0 < 1/\sqrt{\pi}$ such that

$\pi r_0^2 = \Delta t\int k\,dn$.

Let $n \in S^2_+$ be represented by $(\theta,\psi)$ ($\theta \in [0,\pi/2]$, $\psi \in [0,2\pi)$), where $\theta$ is the polar angle with respect to the axis in direction of $v-w$, and $\psi$ is an azimuthal angle. For $r \ge r_0$, let $\phi_{v,w}(r,\mu)$ correspond to a grazing collision, and therefore $T_{v,w}(\phi_{v,w}(r,\mu)) = v$; this happens on a set of measure $1 - \Delta t\int k\,dn$. For $r < r_0$, the collision result is nontrivial. We set again $\psi(r,\mu) = \mu$, and $\theta(r,\mu) = \theta(r)$ is defined as the inverse of the function $r(\theta)$ which satisfies

$r(\theta)^2 = 2\Delta t\int_0^\theta k(|v-w|,\theta')\,\sin\theta'\,d\theta'$.

Clearly

$2\Delta t\int_0^{\pi/2} k(|v-w|,\theta)\,\sin\theta\,d\theta = r_0^2$,

and

$\Delta t\int \varphi(v')\,k\,dn = \int_0^{2\pi}\!\!\int_0^{r_0} \varphi\bigl(T_{v,w}(\theta(r),\mu)\bigr)\,r\,dr\,d\mu$.

This completes the proof.
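For a concrete instance of the construction, the hard-sphere kernel $k(g,\theta) = g\cos\theta$ admits a closed-form inverse: $r(\theta)^2 = \Delta t\,g\,\sin^2\theta$, so $\theta(r) = \arcsin(r/r_0)$ with $r_0^2 = \Delta t\,g$. The sketch below is ours (`sample_collision` is our name, not the paper's) and maps a uniform point of the unit-area disk to a collision outcome:

```python
import numpy as np

rng = np.random.default_rng(4)

def sample_collision(g, dt):
    """Map a uniform point of the unit-area disk B1 to a collision outcome for
    the truncated hard-sphere kernel k(g, theta) = g*cos(theta), g = |v - w|.
    Returns None for a grazing (void) collision, else the angles (theta, psi)
    of the collision normal n. Requires dt*pi*g < 1 (the truncation (8))."""
    R = 1.0 / np.sqrt(np.pi)          # B1 has area 1
    r = R * np.sqrt(rng.random())     # uniform point of the disk, in polar form
    mu = 2.0 * np.pi * rng.random()
    r0 = np.sqrt(dt * g)              # pi*r0^2 = dt * integral of k over S^2_+
    if r >= r0:
        return None                   # measure 1 - dt*Int k dn: no collision
    theta = np.arcsin(r / r0)         # inverse of r(theta)^2 = dt*g*sin^2(theta)
    return theta, mu                  # psi = mu

# The fraction of void collisions should be 1 - dt*pi*g:
g, dt = 1.0, 0.2
outcomes = [sample_collision(g, dt) for _ in range(100_000)]
print(sum(o is None for o in outcomes) / len(outcomes), 1 - dt * np.pi * g)
```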
Now define probability measures $d\mu_j = f_j\,dv$, and let $\Psi(v,w,y) = T_{v,w}(\phi_{v,w}(y))$. By the lemma, (9) reduces to

$\int \varphi\,d\mu_{j+1} = \iiint \varphi\bigl(\Psi(v,w,y)\bigr)\,d\mu_j(v)\,d\mu_j(w)\,dy$.

THE APPROXIMATION OF WEAK SOLUTIONS TO THE 2-D EULER EQUATIONS BY VORTEX ELEMENTS

J. THOMAS BEALE†

Abstract. It is shown that the 2-D Euler equations with initial vorticity in $L^p$, $p > 1$, possess weak solutions which may be obtained as a limit of vortex "blobs"; i.e., the vorticity is approximated by a finite sum of cores of prescribed shape which are advected according to the corresponding velocity field. If the vorticity is instead a finite measure of bounded support, such approximations lead to a measure-valued solution of the Euler equations in the sense of DiPerna and Majda [7]. The analysis is closely related to that of [7].

Key words. incompressible flow, Euler equations, weak solutions, vortex methods

AMS(MOS) subject classifications. 76C05
Introduction. There has been renewed interest lately in the development of singularities in weak solutions of the Euler equations of two-dimensional, incompressible flow. Two examples are boundaries of patches of constant vorticity and vortex sheets. (See [16] for a general discussion of both.) In the latter case the vorticity is a measure concentrated on a curve or sheet which we take to be initially smooth. At later time, a singularity in the sheet may develop, and the nature of the solution past the singularity formation is unclear. Such questions for vortex sheets were dealt with at this conference in talks by Caflisch, Krasny, Majda, and Shelley. We focus here on discrete approximations of weak solutions of the 2-D Euler equations of the sort used in computational vortex methods. The vorticity is approximated by a finite sum of "blobs", or cores of prescribed shape, which are advected according to the corresponding velocity field. For the 2-D Euler equations with initial vorticity in $L^p$, $p > 1$, with bounded support, we show here that weak solutions in the usual sense are obtained in the limit as the size and spacing of the blob elements go to zero. (Weak solutions are known to be unique only if the vorticity is in $L^\infty$.) If the initial vorticity is instead a finite measure of bounded support, as would be the case for a vortex sheet of finite length, a limit is obtained which is a measure-valued solution of the Euler equations in the sense of DiPerna and Majda [7]. The analysis is closely related to that of [7]. For both results, the number of vortex elements needed is very large compared to the radius of the vortex core, in contrast to the case of smooth flows. This seems at least qualitatively consistent with observed behavior in calculations of Krasny [13,14] and others, in which a regularization analogous to the blob elements is used to modify the vortex sheet evolution so that calculations can be continued past the time when the sheet develops singularities. Shelley and Baker [19] have used a different regularization, in which the vortex sheet is replaced by a layer of finite thickness. Both calculations are suggestive of weak solutions of the Euler equations past the time of first singularity (cf. [17]). Mathematical treatments of the possible nature of measure-valued solutions in 2-D, such as might occur after the singularity in the vortex sheet, have been given in [8,10]. A convergence result for vortex element approximations to vortex sheets, somewhat complementary to the results presented here, has been given by Caflisch and Lowengrub [3,15]. They show that discrete approximations to a sheet converge in a much more specific sense for a time interval before the singularity formation, provided the sheet is analytic and close to horizontal. Recently Brenier and Cottet have obtained another convergence result with vorticity in $L^p$, $p > 1$. In their case the spacing of the vortex elements can be comparable to the radius of the core, unlike the first result presented here.

†Department of Mathematics, Duke University, Durham, NC 27706. Research supported by D.A.R.P.A. Grant N00014-86-K-0759 and N.S.F. Grant DMS-8800347.
It is a pleasure to thank A. Majda for suggesting this investigation and for arranging a visit to the Applied and Computational Mathematics Program at Princeton University, during which this work was carried out.

1. Discussion of Results. In [6] DiPerna and Majda introduced a notion of measure-valued solution for the Euler equations of three-dimensional incompressible fluid flow, based on the conservation of energy, which was intended to incorporate possible oscillation and concentration in nonsmooth solutions. They showed that measure-valued solutions exist and can be obtained as a limit under regularization, but they may not be unique. For the two-dimensional case they used a more special definition of measure-valued solution of the Euler equations [7] which takes into account the conservation of vorticity as well as energy. They showed in [7] that various regularizations of the 2-D Euler equations converge to measure-valued solutions provided the initial vorticity is a measure on $\mathbb{R}^2$ of bounded support and finite total mass. This includes the important case of vortex sheets. They also showed that certain regularizations produce classical weak solutions (in the distributional sense) in 2-D provided the initial vorticity is in $L^p$ for some $p > 1$. One regularization studied was an approximation by a finite number of vortex "blobs". They showed that a class of vortex blob approximations converges to a measure-valued solution of the 2-D Euler equations, provided that the total circulation is zero, but they did not determine whether classical weak solutions could be obtained in this way when the vorticity is more regular than $L^1$.

In this work we give another treatment of the vortex blob approximations to the 2-D Euler equations. It is similar to that of [7] but more straightforward and direct. With a slightly different choice of parameters, we show that vortex blob approximations converge to measure-valued solutions, again for initial vorticity which is a measure of bounded support and finite mass. The total circulation is arbitrary. (In two dimensions the total energy is infinite if the total circulation is nonzero.) In the case of initial vorticity in $L^p$ for some $p > 1$, we show that a classical weak solution is obtained. A unified treatment is given for the two results; in fact, as is evident in the analysis of [7], the essential points to verify in either case are bounds for the approximate vorticity and energy and a kind of weak consistency with the Euler equations.

For smooth solutions of the Euler equations in two or three dimensions, the blob approximations of vortex methods converge with rates determined by the two length parameters, the radius of the blob elements (which can be thought of as a smoothing radius) and the spacing of the elements; the radius is usually taken larger than the spacing. Such a result was proved in two dimensions in [11]. For a summary of the theory, see, e.g., [1,2]. It has recently been shown in [9] that for smooth flows convergence is possible even with point vortices in place of the blob elements. In the nonsmooth case considered here, however, our results require that the spacing of the blobs is quite small relative to the core radius, and correspondingly the number of blob elements is large. A result of the sort presented here was given for vorticity in $L^\infty$ in [18], as well as a treatment of stochastic differential equations of particle paths as a discretization of viscous flow. Again for vorticity in $L^\infty$, it has been shown [4,5] that blob approximations converge for short time with the radius comparable to the spacing. In [5] Cottet
describes an elegant and appealing approach to the consistency of these methods for weak solutions in 2-D; his approach could be applied to the situation studied here.

We now describe the two-dimensional vortex blob approximation to be used. The formulation and notation correspond to Section 2 of [7]. For 2-D incompressible, inviscid flow, the vorticity $\omega$ is conserved along particle paths; this is expressed in the equation

$\omega_t + v\cdot\nabla\omega = 0$,

where $\omega = v_{2,1} - v_{1,2}$ is the (scalar) curl of the velocity $v$. We will approximate the vorticity field at a given time by a sum of vortex "blobs", i.e., translates of a core function with specified shape. The flow is then simulated by advecting these elements according to the velocity field determined by the approximate vorticity. For this purpose, we choose a core function $\varphi$ which is smooth, has integral 1 and has bounded support.

THEOREM 1. Suppose the initial vorticity $\omega_0$ is in $L^p$, $p > 1$, with bounded support. For each $\varepsilon > 0$, let $\Gamma_j$, $X_j^\varepsilon(t)$, $v^\varepsilon(x,t)$ be determined by the vortex blob approximation described above for $0 \le t \le T$. Assume the parameters are chosen so that $\delta(\varepsilon) = \varepsilon^a$ for some $a$ with $0 < a < 1/4$, and $h(\varepsilon) \le C\varepsilon^4\exp(-C_0\varepsilon^{-2})$ for a certain constant $C_0$ and any $C$. Then a subsequence of $\{v^\varepsilon\}$ converges, as $\varepsilon \to 0$, to a classical weak solution of the Euler equations with initial velocity $v_0$ and vorticity $\omega_0$. The convergence takes place strongly in $L^2$ of any bounded region in space-time with $0 \le t \le T$.

It will be evident below that if $p > 2$ the initial smoothing is not necessary, i.e., we may use $\omega_0$ rather than $\omega_0^\delta$ in (1.9). In fact, in the argument below we assume $p < 2$.
If $\omega_0$ is a measure, the vortex blob approximation already described leads to a measure-valued solution. We do not give the definition of the measure-valued solution here, but refer instead to [7]. We now state our result in this case (cf. the result of [7], Section 2).

THEOREM 2. Suppose $\omega_0$ is a Radon measure on $\mathbb{R}^2$ of finite mass and bounded support, and suppose that the corresponding velocity field is locally $L^2$ on $\mathbb{R}^2$. Define vortex blob approximations as before for $0 \le t \le T$, with $\delta(\varepsilon) = \varepsilon^a$ for some $a$ with $0 < a \le 1/7$, and choose $h(\varepsilon) \le C\varepsilon^6\exp(-C_0\varepsilon^{-2})$. Then a subsequence of $\{v^\varepsilon\}$ converges as $\varepsilon \to 0$ to a measure-valued solution of the Euler equations with the specified initial condition. The convergence is strong in $L^p(\Omega)$ for $1 < p < 2$, and weak in $L^2(\Omega)$, for any bounded region $\Omega$ of space-time with $0 \le t \le T$.

For a more detailed description of the limit measure-valued solution and the nature of the convergence, see Theorem 1.1 of [7] and the discussion preceding it. It will be seen below that we obtain Theorem 2 by a simple modification of the proof of Theorem 1.

2. Proof of Theorem 1. To begin the proof of Theorem 1, we discuss bounds on various quantities related to
the vorticity. We will need an estimate for $\omega_0^\delta$ in $L^3$. Since $\omega_0^\delta = \varphi_\delta * \omega_0$, Young's inequality gives $|\omega_0^\delta|_{L^3} \le |\varphi_\delta|_{L^q}|\omega_0|_{L^p}$ with $1/q = 4/3 - 1/p$. It is evident that

(2.1)  $|\varphi_\delta|_{L^q} \le C\,\delta^{-2(1-1/q)}$,

so that, assuming $p < 3$,

(2.2)  $|\omega_0^\delta|_{L^3} \le C\,\delta^{-\beta}$, with $\beta = 2\Bigl(\frac1p - \frac13\Bigr)$.

We will use the passive transport $\tilde\omega^\varepsilon(x,t)$ of $\omega_0^\delta$ by the flow determined by the blob approximation, i.e., the solution of

(2.3)  $\tilde\omega^\varepsilon_t + v^\varepsilon\cdot\nabla\tilde\omega^\varepsilon = 0, \qquad \tilde\omega^\varepsilon(x,0) = \omega_0^\delta(x)$.

Since $\nabla\cdot v^\varepsilon = 0$ the flow is area-preserving, and thus

(2.4)  $|\tilde\omega^\varepsilon(\cdot,t)|_{L^p} = |\omega_0^\delta|_{L^p}$,  (2.5)  $|\tilde\omega^\varepsilon(\cdot,t)|_{L^3} = |\omega_0^\delta|_{L^3} \le C\,\delta^{-\beta}$.

Next we consider the sum (1.1) as a discretization of a convolution. Let $X^\varepsilon_t$ be the flow determined by the blob vorticity; for an initial point $\alpha \in \mathbb{R}^2$, $x(t) = X^\varepsilon_t(\alpha)$ is the solution of

$\frac{dx}{dt} = v^\varepsilon(x,t), \qquad x(0) = \alpha$,

with $v^\varepsilon$ given by (1.3). We will need a crude bound for the Jacobian $\partial X^\varepsilon_t/\partial\alpha$. Using the above, we have

$\nabla v^\varepsilon = \sum_j \frac{\partial K_\varepsilon}{\partial x}(x - X_j^\varepsilon)\,\Gamma_j$.

It is easily seen from (1.4) that $\sum_j |\Gamma_j| \le C|\omega_0|_{L^1}$, and thus $|\nabla v^\varepsilon| \le C\varepsilon^{-2}$. Moreover $\int|\omega_0^\delta|\,dx \le C|\omega_0|_{L^1}$, so that

(2.7)  $\Bigl|\frac{\partial X^\varepsilon_t}{\partial\alpha}\Bigr| \le \exp\bigl(C_1\varepsilon^{-2}|\omega_0|_{L^1}T\bigr) \equiv |J|_{L^\infty}$.

Now suppose $g(\alpha)$ is a $C^1$ function; we compare $\int g(\alpha)\,\omega_0^\delta(\alpha)\,d\alpha$ with $\sum_j g(\alpha_j)\Gamma_j$. On $R_j$ we have $|g(\alpha) - g(\alpha_j)| \le h\,|\nabla g|_{L^\infty}$, so that

$\Bigl|\int_{R_j} g(\alpha)\,\omega_0^\delta(\alpha)\,d\alpha - g(\alpha_j)\Gamma_j\Bigr| = \Bigl|\int_{R_j}\bigl(g(\alpha) - g(\alpha_j)\bigr)\,\omega_0^\delta(\alpha)\,d\alpha\Bigr| \le h\,|\nabla g|_{L^\infty}\,|\omega_0^\delta|_{L^1(R_j)}$.

Summing over $j$ gives

(2.8)  $\Bigl|\int g(\alpha)\,\omega_0^\delta(\alpha)\,d\alpha - \sum_j g(\alpha_j)\Gamma_j\Bigr| \le h\,|\nabla g|_{L^\infty}\,|\omega_0|_{L^1}$.

We use this to compare (1.1) with the corresponding integral. Applying (2.8) with $g(\alpha) = \varphi_\varepsilon(x - X^\varepsilon_t(\alpha))$, we obtain for the blob vorticity $\omega^\varepsilon$ of (1.1)

$\omega^\varepsilon(x,t) = \int \varphi_\varepsilon\bigl(x - X^\varepsilon_t(\alpha)\bigr)\,\omega_0^\delta(\alpha)\,d\alpha + E_1 = \int \varphi_\varepsilon(x-y)\,\tilde\omega^\varepsilon(y,t)\,dy + E_1 = (\varphi_\varepsilon * \tilde\omega^\varepsilon)(x,t) + E_1$,

with

$|E_1(x,t)| \le h\,|\omega_0|_{L^1}\,|\nabla\varphi_\varepsilon|_{L^\infty}\,|J|_{L^\infty} \le C\,h\,\varepsilon^{-3}\,|\omega_0|_{L^1}\exp\bigl(C_1\varepsilon^{-2}|\omega_0|_{L^1}T\bigr)$.

The error $E_1$ will be small if we choose $h$ small enough relative to $\varepsilon$, as in Theorem 1. Next we show that $\omega^\varepsilon(\cdot,t)$ is uniformly bounded in $L^p$. We saw above that $\omega^\varepsilon$ is uniformly close to $\varphi_\varepsilon * \tilde\omega^\varepsilon$. We know from (2.4) that $\tilde\omega^\varepsilon$, and therefore $\varphi_\varepsilon * \tilde\omega^\varepsilon$, is uniformly bounded in $L^p$. First we have the simple estimate (2.10) $|\omega^\varepsilon(\cdot,t)|_{L^1} \le C|\omega_0|_{L^1}$, using (1.9). Now since $\omega^\varepsilon$ is uniformly bounded in $L^1$, the measure of the set $\{x : |\omega^\varepsilon(x,t)| \ge 1\}$ is also uniformly bounded. On this set, $\omega^\varepsilon(\cdot,t)$ is close in $L^\infty$, and therefore in $L^p$, to $\varphi_\varepsilon * \tilde\omega^\varepsilon(\cdot,t)$, which is bounded in $L^p$. Thus the $L^p$ norm of $\omega^\varepsilon(\cdot,t)$ on this set is bounded. On the remaining set, where $|\omega^\varepsilon| \le 1$, we have $|\omega^\varepsilon|^p \le |\omega^\varepsilon|$, so that the $L^p$ norm is bounded in terms of the $L^1$ norm. In summary,

(2.11)  $|\omega^\varepsilon(\cdot,t)|_{L^p} \le C, \qquad \varepsilon > 0,\ 0 \le t \le T$,

the constant depending on $|\omega_0|_{L^p}$ and $|\omega_0|_{L^1}$. In just the same way we can argue using (2.5) that

(2.12)  $|\omega^\varepsilon(\cdot,t)|_{L^3} \le C\,\delta^{-\beta}$.

It follows from (2.11), (2.12) and the Calderón-Zygmund inequality that

(2.13)  $|\nabla v^\varepsilon(\cdot,t)|_{L^p} \le C$,  (2.14)  $|\nabla v^\varepsilon(\cdot,t)|_{L^3} \le C\,\delta^{-\beta}$.

In the first inequality we have used the fact that $p > 1$. From Sobolev's inequality we then have

(2.15)  $|v^\varepsilon(\cdot,t)|_{L^{p^*}} \le C$, where $p^* = 2p/(2-p)$,
provided $p < 2$. In order to check the consistency of the vortex blob approximation with the Euler equations as $\varepsilon \to 0$, we examine the error $E$ in satisfying the vorticity evolution equation,

$E = \omega^\varepsilon_t + v^\varepsilon\cdot\nabla\omega^\varepsilon$.

Differentiating (1.1) and substituting from (1.6), we have, as in [7], equation (2.37),

(2.16)  $E(x,t) = \sum_j\bigl[v^\varepsilon(x,t) - v^\varepsilon(X_j^\varepsilon,t)\bigr]\cdot\nabla\varphi_\varepsilon(x - X_j^\varepsilon)\,\Gamma_j = \nabla\cdot\Bigl\{\sum_j\bigl[v^\varepsilon(x,t) - v^\varepsilon(X_j^\varepsilon,t)\bigr]\,\varphi_\varepsilon(x - X_j^\varepsilon)\,\Gamma_j\Bigr\} \equiv \nabla\cdot F(x,t)$.

We will estimate $F$ in $L^1$ of space by writing

(2.17)  $F_j(x,t) = \bigl[v^\varepsilon(x,t) - v^\varepsilon(X_j^\varepsilon,t)\bigr]\varphi_\varepsilon(x - X_j^\varepsilon) = \int_0^1 \nabla v^\varepsilon\bigl(sx + (1-s)X_j^\varepsilon\bigr)\cdot(x - X_j^\varepsilon)\,\varphi_\varepsilon(x - X_j^\varepsilon)\,ds$.

We set $\Pi_\varepsilon(z) = z\,\varphi_\varepsilon(z)$, so that $\Pi_\varepsilon(z) = \varepsilon^{-1}\Pi(z/\varepsilon)$. We will estimate the $x$-integral of $|F_j|$ using Hölder's inequality. Since $|\Pi_\varepsilon|_{L^r} \le C\,\varepsilon^{-1+2/r}$, this factor is small if $r < 2$. We choose $r = 3/2$ and bound the other factor in $L^3$. We saw in (2.14) that $|\nabla v^\varepsilon|_{L^3} \le C\,\delta^{-\beta}$ with some $\beta \le 4/3$. Thus after rescaling, the $L^3$ norm of $\nabla v^\varepsilon(sx + (1-s)X_j^\varepsilon)$, as a function of $x$, is bounded by $C\,s^{-2/3}\,\delta^{-\beta}$. Therefore, for each $s$,

$|F_j(\cdot,t)|_{L^1} \le C\,\varepsilon^{1/3}\,s^{-2/3}\,\delta^{-\beta}\,|\Gamma_j|$,

and integrating in $s$,

(2.18)  $|F_j(\cdot,t)|_{L^1} \le C\,\varepsilon^{1/3}\,\delta^{-\beta}\,|\Gamma_j|$.

Combining the last inequality with (2.17) and summing over $j$, we obtain a bound for $|F(\cdot,t)|_{L^1}$. If $\delta = \varepsilon^a$, we have a power of $\varepsilon$ of $1/3 - a\beta \ge (1-4a)/3 \equiv \alpha$; this is positive provided $a < 1/4$, and we have

(2.19)  $|F(\cdot,t)|_{L^1} \le C\,\varepsilon^{\alpha}$, some $\alpha > 0$.

We now use (2.19) to check the weak consistency of the vortex blob approximation as $\varepsilon \to 0$; that is, we show that for suitable test functions $\Phi(x,t)$ with $\mathrm{div}\,\Phi = 0$,

(2.20)  $\int_0^T\!\!\int \bigl(v^\varepsilon\cdot\Phi_t + v^\varepsilon\cdot(v^\varepsilon\cdot\nabla)\Phi\bigr)\,dx\,dt + \int v^\varepsilon(x,0)\cdot\Phi(x,0)\,dx \to 0$

as $\varepsilon \to 0$.
For $|x| > 2R$, each $K_\varepsilon$ is just $K$, and we can write the quantity in question as

$\sum_j\bigl[K(x - \alpha_j) - K(x)\bigr]\Gamma_j$.

It is easy to see that this is bounded by

$\sum_j C\,|x|^{-2}\,|\Gamma_j| \le C\,\|\omega_0\|\,|x|^{-2}$

for $|x| > 2R$, and thus bounded in $L^2$. This completes the verification of (3.2).

It remains to verify the Lipschitz continuity of $v^\varepsilon$ in time. We return to (2.25), which was a consequence of Lemma 2.1. We will choose $q = 3$ and $s = 2$, so that $\Phi \in W^{s,q}$ implies $\nabla\Phi \in L^\infty$, and therefore $\nabla\Phi \in L^r$ with $3 \le r \le \infty$. We use this fact to estimate the integral on the right in (2.25). We had $v^\varepsilon$ bounded in $L^2$, and in $L^3$, for example. Thus the product $v^\varepsilon\cdot v^\varepsilon$ is a sum of terms in $L^p$ for $1 \le p \le 3/2$. Since $\nabla\Phi$ is bounded in the dual spaces for such $L^p$, we may estimate the integral using Hölder's inequality. We conclude as before that

$|v^\varepsilon_t(\cdot,t)|_{W^{-2,3/2}} \le C$,

so that $v^\varepsilon$ is Lipschitz continuous in $W^{-2,3/2}$. We have now verified all the conditions (1)-(5) for the convergence to a measure-valued solution.
REFERENCES

[1] J. T. Beale and A. Majda, High order accurate vortex methods with explicit velocity kernels, J. Comput. Phys., 58 (1985), pp. 188-208.
[2] J. T. Beale and A. Majda, Vortex methods for fluid flow in two or three dimensions, Contemp. Math., 23 (1984), pp. 221-229.
[3] R. Caflisch and J. Lowengrub, Convergence of the vortex method for vortex sheets, preprint.
[4] J. P. Choquin, G. H. Cottet, and S. Mas-Gallic, On the validity of vortex methods for nonsmooth flows, in Vortex Methods (C. Anderson and C. Greengard, editors), Lecture Notes in Mathematics, Springer-Verlag, pp. 56-67.
[5] G. H. Cottet, Thèse d'État, Université Pierre et Marie Curie.
[6] R. DiPerna and A. Majda, Oscillations and concentrations in weak solutions of the incompressible fluid equations, Commun. Math. Phys., 108 (1987), pp. 667-689.
[7] R. DiPerna and A. Majda, Concentrations and regularizations for 2-D incompressible flow, Comm. Pure Appl. Math., 40 (1987), pp. 301-345.
[8] R. DiPerna and A. Majda, Reduced Hausdorff dimension and concentration-cancellation for 2-D incompressible flow, J. Amer. Math. Soc., 1 (1988), pp. 59-95.
[9] J. Goodman, T. Hou, and J. Lowengrub, Convergence of the point vortex method for the 2-D Euler equations, preprint.
[10] C. Greengard and E. Thomann, On DiPerna-Majda concentration sets for two-dimensional incompressible flow, Comm. Pure Appl. Math., 41 (1988), pp. 295-303.
[11] O. Hald, The convergence of vortex methods, II, SIAM J. Numer. Anal., 16 (1979), pp. 726-755.
[12] R. Krasny, Desingularization of periodic vortex sheet roll-up, J. Comput. Phys., 65 (1986), pp. 292-313.
[13] R. Krasny, Computation of vortex sheet roll-up in the Trefftz plane, J. Fluid Mech., 184 (1987), p. 123.
[14] R. Krasny, Computation of vortex sheet roll-up, in Vortex Methods (C. Anderson and C. Greengard, editors), Lecture Notes in Mathematics, Springer-Verlag, pp. 9-22.
[15] J. Lowengrub, Convergence of the vortex method for vortex sheets, Thesis, New York University, 1988.
[16] A. Majda, Vorticity and the mathematical theory of incompressible fluid flow, Comm. Pure Appl. Math., 39 (1986), pp. S187-S220.
[17] A. Majda, Mathematical fluid dynamics: the interaction of nonlinear analysis and modern applied mathematics, to appear in the Proc. of the Centennial Celebration of the Amer. Math. Society.
[18] C. Marchioro and M. Pulvirenti, Hydrodynamics in two dimensions and vortex theory, Commun. Math. Phys., 84 (1982), pp. 483-503.
[19] M. Shelley and G. Baker, On the relation between thin vortex layers and vortex sheets: Part 2, numerical study, preprint.
[20] R. Temam, The Navier-Stokes Equations, North-Holland, Amsterdam, 1977.
CHEN GUI-QIANG* Abstract. We are concerned with the limit behavior of approximate solutions to hyperbolic systems of conservation laws. Several mathematical compactness theories and their roles are described. Some recent and ongoing developments are reviewed and analyzed. AMS(MOS) subject classifications. 35-02, 41-02, 35B25, 35D05, 35L65, 46A50, 46G10, 65M10.
1. Introduction. We are concerned with the limit behavior of approximate solutions to hyperbolic systems of conservation laws. The Cauchy problem for a system of conservation laws in one space dimension is of the following form:

(1.1) u_t + f(u)_x = g(x, t, u),
(1.2) u|_{t=0} = u_0(x),

where u = u(x, t) ∈ R^n and both f and g are smooth nonlinear functions from R^n to R^n. The system is called strictly hyperbolic in a domain D if the Jacobian ∇f(u) has n real and distinct eigenvalues

(1.3) λ_1(u) < λ_2(u) < ... < λ_n(u)

at each state u ∈ D. If the Jacobian ∇f(u) has n real but not necessarily distinct eigenvalues λ_i(u) (i = 1, 2, ..., n) in D, one calls the system nonstrictly hyperbolic in D. An eigenfield corresponding to λ_i is genuinely nonlinear in the sense of Lax [LA2] if the derivative of λ_i in the corresponding eigendirection never vanishes, i.e.,

(1.4) ∇λ_i(u) · r_i(u) ≠ 0,

where r_i(u) is the right eigenvector corresponding to λ_i(u). The system is called genuinely nonlinear if all of its eigenfields are genuinely nonlinear. Otherwise, one calls the system linearly degenerate. The quasilinear systems of conservation
laws result from the balance laws of continuum physics and other fields (e.g., conservation of mass, momentum, and energy) and, therefore, describe many physical phenomena. In particular, important examples occur in fluid dynamics (see Section 5), solid mechanics (see Section 4), petroleum reservoir engineering (see Section 4), combustion theory, and game theory [CR].

*Partially supported by U.S. NSF Grant # DMS-850403, by CYNSF, and by the Applied Mathematical Sciences subprogram of the Office of Energy Research, U.S. Department of Energy, under Contract W-31-109-Eng-38. Courant Institute of Mathematical Sciences, 251 Mercer Street, New York, NY 10012, U.S.A. Current address: Department of Mathematics, The University of Chicago, Chicago, IL 60637.

Since f is a nonlinear function, solutions of the Cauchy problem (1.1)-(1.2) (even starting from smooth initial data) generally develop singularities in a finite time, and then the solutions
become discontinuous functions. This situation reflects the physical phenomenon of breaking of waves and development of shock waves. For this reason, attention focuses on solutions in the space of
discontinuous functions, where one cannot directly use the classical analytic techniques that predominate in the theory of partial differential equations of other types. To overcome this difficulty,
one constructs approximate solutions u^ε(x, t) of the following perturbed problems:

a. Perturbation of equations: One of the perturbation prototypes is the viscosity method; that is, u^ε(x, t) are generated by the corresponding parabolic system of the form

(1.5) u_t + f(u)_x = g(x, t, u) + ε(D(u)u_x)_x, u|_{t=0} = u_0(x),

where D is a properly selected, nonnegative matrix. Usually one chooses D to be the unit matrix.

b. Perturbation of Cauchy data: u^ε(x, t) are generated by the following Cauchy problem:

u_t + f(u)_x = g(x, t, u), u|_{t=0} = u_0^ε(x).
c. Perturbation of both equations and Cauchy data: Besides the viscosity method with perturbed Cauchy data u|_{t=0} = u_0^ε(x) (see (1.5)), another perturbation prototype is the difference method; that is, u^ε(x, t) (ε = Δx, the space step length) are generated by the difference equations

D_t u + D_x f(u) = g(x, t, u), u|_{t=0} = u_0(x; ε),

and one then studies the limit behavior of the approximate solutions u^ε(x, t) as ε → 0: convergence and oscillation. Examples of this approach are the Lax-Friedrichs scheme [LA1], the Glimm scheme [GL], the Godunov scheme [GO], higher-order schemes (e.g., [LW], [SZ], [TA]), and the fractional step schemes (e.g., [DCL2]). The motivation for using approximate solutions comes from continuum
physics, numerical computations, and mathematical considerations. The system of gas dynamics generally involves viscosity terms, although the viscosity coefficient is very small and is often neglected; the initial value function is determined only by using statistical data and some averaging methods described by a weak topology. Numerical computations of systems of conservation laws are limited to calculations of difference equations and discrete Cauchy mesh data. In game theory with non-zero sum, derivative functions of the stochastic game values and the deterministic game values satisfy systems of conservation laws with and without viscosity terms, respectively. Therefore, studying the relationship between the stochastic game and the deterministic game when "noise" disappears is equivalent to studying the limit behavior of the approximate solutions as ε → 0 (see [CR]). Moreover, one expects to use "good" Cauchy data (e.g., functions of bounded total variation) to approximate "bad" Cauchy data (e.g., L^∞ functions) to obtain a solution to the Cauchy problem with "bad" Cauchy data. Thus, such a study enables us to understand how the behavior of the system at the microscopic level affects the behavior of the system at the macroscopic level and, therefore, to understand the well-posedness of the Cauchy problem (1.1)-(1.2) in a weak topology.

The remainder of this paper has the following organization. Section 2 focuses on compactness theories. Sections 3 and 4 discuss the limit behavior of approximate solutions to the Cauchy problem for the scalar conservation law and for hyperbolic systems of conservation laws, respectively. For concreteness, we focus our attention on homogeneous systems (i.e., g ≡ 0) in Sections 3 and 4. Section 5 focuses on approximate solutions generated by the Lax-Friedrichs scheme, the Godunov scheme, and the viscosity method for the homogeneous system of isentropic gas dynamics, and on the fractional-step Lax-Friedrichs scheme and Godunov scheme for the inhomogeneous system of isentropic gas dynamics. Section 6 concludes our review with some remarks about distinguishing features of multidimensional conservation laws. The techniques and strategies developed in this direction should be applicable to other interesting problems of nonlinear analysis and their regularizations. This paper is dedicated to the memory of Ronald J. DiPerna.
His life and his work are an inspiration to the author.

2. Compactness Theories. One of the main difficulties in studying nonlinear problems is that, after introducing a suitable sequence of approximations, one needs enough a priori estimates to ensure the convergence of a subsequence to a solution. This argument is based on compactness theories. Here we describe several important compactness theories that have played a significant role in the field of conservation laws.
2.1. Classical Theories. Two important compactness theorems provide natural norms for the field of conservation laws in the classical theories of compactness: the BV and L^1 compactness theorems.

2.1.1. BV Compactness Theorem. THEOREM 2.1. Any function sequence with uniform control on the L^∞ and total variation norms contains a subsequence converging pointwise a.e.
In the context of a strictly hyperbolic system of conservation laws, the L^∞ norm and the total variation norm provide a natural pair of metrics to study the stability of approximate solutions in the sense of L^∞. The L^∞ norm serves as an appropriate measure of the solution amplitude, while the total variation norm serves as an appropriate measure of the solution gradient. The role of these norms is indicated by Glimm's theorem [GL] concerning the stability and convergence of the Glimm approximate solutions, provided that the total variation norm of the initial data u_0(x) is sufficiently small, for systems of conservation laws, and by results of Oleinik [OL], Conway and Smoller [CS], and others concerning the stability and convergence of the Lax-Friedrichs and Godunov approximate solutions with large initial data u_0(x) for the scalar conservation law. The families of approximate solutions {u^ε} are stable in the sense that

|u^ε(·, t)|_{L^∞} ≤ const · |u_0|_{L^∞}, TV u^ε(·, t) ≤ const · TV u_0,

where the constants are independent of ε and depend only on the flux function f. Furthermore, there exists a subsequence that converges pointwise a.e. to a globally defined distribution solution u. Until the end of the 1970s, almost all results concerning the stability and convergence of approximate solutions for conservation laws were obtained with the aid of the BV compactness theorem (e.g., [BA, DZ, LLO, NI, NS, SR1, TE1, ZG]).
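For orientation, the following minimal sketch implements the Lax-Friedrichs scheme for a scalar conservation law and checks the two stability estimates numerically. The scheme is the classical one; the Burgers flux f(u) = u^2/2, the periodic grid, and the CFL factor below are our own illustrative assumptions, not taken from [GL] or [OL].

    import numpy as np

    def lax_friedrichs(u0, f, dx, dt, n_steps):
        """Lax-Friedrichs with periodic boundaries:
        u_j^{n+1} = (u_{j-1}^n + u_{j+1}^n)/2
                    - (dt/(2 dx)) * (f(u_{j+1}^n) - f(u_{j-1}^n))."""
        u = u0.copy()
        lam = dt / dx
        for _ in range(n_steps):
            up = np.roll(u, -1)   # u_{j+1}
            um = np.roll(u, 1)    # u_{j-1}
            u = 0.5 * (up + um) - 0.5 * lam * (f(up) - f(um))
        return u

    def total_variation(u):
        # Discrete total variation of a periodic grid function.
        return np.abs(np.diff(u)).sum() + abs(u[0] - u[-1])

    f = lambda u: 0.5 * u**2                  # Burgers flux (our choice)
    x = np.linspace(0.0, 1.0, 400, endpoint=False)
    u0 = np.sin(2 * np.pi * x)
    dx = x[1] - x[0]
    dt = 0.4 * dx / np.abs(u0).max()          # CFL: |f'(u)| dt/dx <= 1

    u = lax_friedrichs(u0, f, dx, dt, n_steps=200)
    # The stability estimates of the text (with constant 1 in this monotone case):
    assert np.abs(u).max() <= np.abs(u0).max() + 1e-12
    assert total_variation(u) <= total_variation(u0) + 1e-10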
2.1.2. L^1 Compactness Theorem. A more general compactness framework for conservation laws is the L^1 compactness theorem.

THEOREM 2.2. A function sequence {u^ε(x)} ⊂ L^1(Ω), Ω ⊂⊂ R^n, is strongly compact in L^1 if and only if
(i) ||u^ε||_{L^1} ≤ M, with M independent of ε;
(ii) {u^ε(x)} is equicontinuous in the large, i.e., for every h > 0 there exists δ(h) > 0 such that, for all u^ε ∈ {u^ε(x)},

∫_Ω |u^ε(x + y) - u^ε(x)| dx ≤ h whenever |y| < δ(h).
In the context of conservation laws, the role of the L^1 norm is indicated by Kruzkov's theorem [KR] concerning the stability and convergence of the viscosity approximate solutions and the uniqueness of generalized solutions for the scalar conservation law, and by Temple's theorem [TE3] concerning the weak stability of generalized solutions with respect to the initial values for systems of conservation laws.
2.2. The Theory of Compensated Compactness. Weak topology has played an important role in studying linear problems, where weak continuity can be used; however, the lack of weak continuity in nonlinear problems has long restricted the use of weak topology. The theory of compensated compactness established by Tartar [T1-T3] and Murat [M1-M4] is intended to render weak topology more useful in solving nonlinear problems. In other words, the theory deals with the behavior of nonlinear functions with respect to weak topology, for instance, the weak continuity and the weak lower semicontinuity of nonlinear functions. Here we restrict our attention to that portion of the theory relating to conservation laws. As is well known, it is difficult to clarify the conditions that ensure weak continuity and weak lower semicontinuity for general nonlinear functions (e.g., [DA, M1-M4, T1-T4]). However, for a 2 x 2 determinant, a satisfactory result can be obtained [M1-M2, T2].

THEOREM 2.3. Let Ω ⊂ R x R_+ = R^2_+ be a bounded open set and u^ε = (u_1^ε, u_2^ε, u_3^ε, u_4^ε) : Ω → R^4 be measurable functions satisfying u^ε ⇀ u weakly in L^2(Ω), with {∂_x u_1^ε + ∂_t u_2^ε} and {∂_x u_3^ε + ∂_t u_4^ε} compact in H^{-1}_loc(Ω). Then there exists a subsequence (still labeled) u^ε such that

u_1^ε u_4^ε - u_2^ε u_3^ε → u_1 u_4 - u_2 u_3 in the sense of distributions.
One can construct admissible solutions u^ε(x, t) (e.g., [CH3]) satisfying the uniform bounds (4.4). A detailed discussion of general φ can also be found in [CH4].
Example 2. This example involves a system arising in the polymer flooding of an oil reservoir [PO],

(4.5) u_1t + f(u_1, c)_x = 0, u_2t + (c f(u_1, c))_x = 0, c = u_2/u_1,

where 0 ≤ u_1 ≤ 1 and 0 ≤ u_2 ≤ u_1 are the concentration of water and the overall concentration of a polymer at any x and t, respectively, and f(u_1, c) is a smooth function such that, for fixed c, f(u_1, c) increases from zero to one with one inflection point, and such that f(u_1, c) decreases with increasing c for fixed u_1 (see [TE1]). The essential feature of the system is that strict hyperbolicity fails along a certain curve in state space. Using Temple's theorem [TE1], we have a global solution sequence u^ε(x, t) satisfying (4.2) and (4.5) with Cauchy data u_0^ε(x) of bounded variation. THEOREM 4.7 [CH3]. The initial oscillations will still propagate along the linearly degenerate field for the nonstrictly hyperbolic system (4.2) and (4.5). Thus if the initial data sequence u_0^ε(x) is a highly oscillatory sequence, the exact solution sequence u^ε(x, t) is a highly oscillatory sequence, too. One cannot expect convergence of the Glimm approximate solutions with highly oscillatory initial data.
Remark. The arguments in Frameworks (A)-(C) could be extended to L^p approximate solutions. We refer the reader to [DAF1, SH, LP].
5. The System of Isentropic Gas Dynamics. Here we describe the limit behavior of the approximate solutions u^ε(x, t), generated from the fractional-step Lax-Friedrichs scheme and Godunov scheme, to the inhomogeneous system of isentropic gas dynamics in Euler coordinates,

(5.1) ρ_t + (ρu)_x = U(ρ, u; x, t),
      (ρu)_t + (ρu^2 + p(ρ))_x = V(ρ, u; x, t),
(5.2) (ρ, u)|_{t=0} = (ρ_0(x), u_0(x)),
where u, ρ, and p are the velocity, the density, and the pressure, respectively. For a polytropic gas, p(ρ) = k ρ^γ, where k is a positive constant and γ > 1 is the adiabatic exponent (for usual gases, 1 < γ ≤ 5/3). The system (5.1) with (U, V) ≠ (0, 0) is a gas dynamics model of nonconservative form with a source. For instance, (U, V) = (0, a(x, t)ρ), where a(x, t) represents a body force (usually gravity) acting on all the fluid in any volume. An essential feature of the system is nonstrict hyperbolicity; that is, the pair of wave speeds coalesces on the vacuum ρ = 0. We also describe the limit behavior of the approximate solutions u^ε(x, t) (especially those generated from the Lax-Friedrichs scheme, the Godunov scheme, and the viscosity method) to the homogeneous system of isentropic gas dynamics. The homogeneous system of (5.1) is

(5.3) ρ_t + (ρu)_x = 0,
      (ρu)_t + (ρu^2 + p(ρ))_x = 0.
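The nonstrict hyperbolicity can be read off from the characteristic speeds of (5.3). With the polytropic pressure p(ρ) = k ρ^γ as above, the sound speed is c(ρ) = (p'(ρ))^{1/2} = (kγ)^{1/2} ρ^{(γ-1)/2}, and the eigenvalues of the flux Jacobian are

λ_1 = u - c(ρ), λ_2 = u + c(ρ),

so that λ_2 - λ_1 = 2c(ρ) → 0 as ρ → 0: strict hyperbolicity fails exactly on the vacuum.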
For the Cauchy problem of the homogeneous system (5.3), many existence theorems for global solutions have been obtained [RI, ZG, BA, NI, NS, DZ, LLO, DI2]. The first large-data existence theorem for global solutions was established by Nishida [NI] for γ = 1 by using the Glimm method [GL]. DiPerna [DI2] established a large-data existence theorem for γ = 1 + 2/(2m + 1), m ≥ 2 an integer, by using the viscosity method and the theory of compensated compactness. Both results assume that the initial density ρ_0(x) is bounded away from the vacuum. In this section we describe recent achievements in this direction.
5.1. Compactness Framework. THEOREM 5.1. Suppose that the approximate solutions

v^ε(x, t) = (ρ^ε(x, t), m^ε(x, t)) = (ρ^ε(x, t), ρ^ε(x, t) u^ε(x, t))

to the Cauchy problem (5.1)-(5.2) (1 < γ ≤ 5/3) satisfy the following framework:
(i) There is a constant C > 0 such that 0 ≤ ρ^ε(x, t) ≤ C, |u^ε(x, t)| ≤ C;
(ii) On any bounded domain Ω ⊂ R^2_+ and for any weak entropy pair (η, q) (i.e., η(0, u) = 0), the measures η(v^ε)_t + q(v^ε)_x are compact in H^{-1}_loc(Ω).
Then there exists a subsequence (still labeled) v^ε such that

(ρ^ε(x, t), m^ε(x, t)) → (ρ(x, t), m(x, t)) a.e.
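A concrete weak entropy pair is the mechanical energy and its flux. With p(ρ) = k ρ^γ, a direct computation (recorded here for the reader's convenience; it is the standard pair) shows that

η*(ρ, m) = m^2/(2ρ) + (k/(γ-1)) ρ^γ, q*(ρ, m) = m^3/(2ρ^2) + (kγ/(γ-1)) ρ^{γ-1} m

satisfy η*(v)_t + q*(v)_x = 0 along smooth solutions of (5.3), and η*(0, u) = 0, so the vacuum requirement for weak entropies holds.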
This compactness framework is established by an analysis of weak entropies and a study of the regularity of the family of probability measures {ν_{x,t}}_{(x,t) ∈ R^2_+} which corresponds to the approximate solutions. The basic motivation is that the commutativity relation (2.1) represents an imbalance of regularity: the operator on the left is more regular than the one on the right as a result of cancellation, which forces the measure ν_{x,t} to reduce to a point mass. We recall that the derivative of a Radon measure in the Lebesgue sense vanishes except at one point, implying that the measure is a point mass. The challenge is to choose the entropy pairs whose leading term is coercive with respect to the 2 x 2 determinant and to show that the coercive behavior guarantees that the derivative of ν_{x,t} vanishes except at one point. The essential difficulty is that only a subspace of entropy pairs, the weak entropy pairs, can be used in the relation (2.1). The strategy is fulfilled in [CH1-CH2, DCL1-DCL2].

5.2. Convergence of the Lax-Friedrichs Scheme and the Godunov Scheme
Using Theorem 5.1 and making several estimates, we obtain the following theorem.

THEOREM 5.2 [DCL1, CH1-2]. Suppose that the initial data v_0(x) = (ρ_0(x), ρ_0(x)u_0(x)) satisfy

(5.4) 0 ≤ ρ_0(x) ≤ M, ∫_{-∞}^{∞} (η*(v_0(x)) - η*(v̄) - ∇η*(v̄)(v_0(x) - v̄)) dx ≤ M,

for some constant state v̄, where η* is the mechanical energy η* = ρu^2/2 + (k/(γ-1)) ρ^γ. Then there exist convergent subsequences of the Lax-Friedrichs approximate solutions and of the Godunov approximate solutions v^ε(x, t) (ε = Δx, the space step length), respectively, that have the same local structure as the random choice approximations of Glimm [GL], such that

(ρ^ε(x, t), m^ε(x, t)) → (ρ(x, t), m(x, t)) a.e.

Define u(x, t) = m(x, t)/ρ(x, t) a.e. Then the pair of functions (ρ(x, t), u(x, t)) is a generalized solution of the Cauchy problem (5.2)-(5.3) satisfying

0 ≤ ρ(x, t) ≤ C, |u(x, t)| = |m(x, t)|/ρ(x, t) ≤ C.
5.3. Convergence of the Viscosity Method. Consider the viscosity approximations v^ε(x, t) determined by

ρ_t + m_x = ε ρ_xx, m_t + (m^2/ρ + p(ρ))_x = ε m_xx,
(ρ, m)|_{t=0} = (ρ_0^ε(x), m_0^ε(x)),

where v_0^ε(x) = (ρ_0^ε(x), m_0^ε(x)) is an approximate sequence of the initial data v_0(x) = (ρ_0(x), ρ_0(x)u_0(x)).
LEMMA 5.1. Suppose that the initial data (ρ_0(x), u_0(x)) satisfy

(ρ_0(x) - ρ̄, u_0(x) - ū) ∈ L^2 ∩ L^∞, ρ_0(x) ≥ 0.

Then there is an approximate sequence v_0^ε(x) satisfying

v_0^ε(x) - v̄ → v_0(x) - v̄ in L^2, v_0^ε - v̄ ∈ C_0^1(-∞, ∞), 0 ≤ ρ_0^ε(x) ≤ M_0,

such that there exist global solutions (ρ^ε, u^ε) to the Cauchy problem (5.2)-(5.3) satisfying

(ρ^ε(·, t) - ρ̄, u^ε(·, t) - ū) ∈ C^1 ∩ H^1, 0 ≤ ρ^ε ≤ M, |u^ε| ≤ M,
where both M_0 and M are constants independent of ε.

THEOREM 5.3 [CH1]. Suppose that the initial data v_0(x) = (ρ_0(x), ρ_0(x)u_0(x)) satisfy (5.4). Then there exists a convergent subsequence of the viscosity approximations v^ε(x, t) such that

(ρ^ε(x, t), m^ε(x, t)) → (ρ(x, t), m(x, t)) a.e.

Define u(x, t) = m(x, t)/ρ(x, t) a.e. Then the pair of functions (ρ(x, t), m(x, t)) is a generalized solution to the Cauchy problem (5.2)-(5.3).

For the inhomogeneous system (5.1) with (U, V) ≠ (0, 0), however, there are usually no bounded invariant regions. Nevertheless, we use two difference schemes, the fractional-step Lax-Friedrichs scheme and the fractional-step Godunov scheme (see [DCL2]), which are generalizations of those of Lax-Friedrichs [LA1] and Godunov [GO], to construct approximate solutions v^ε(x, t) (ε = Δx, the space step length). If the inhomogeneous terms satisfy conditions C1°-C3° (see [DCL2]), which contain the cases (0, a(x, t)ρ), (0, a(x, t)ρu), (a(x, t)ρ, a(x, t)ρu), and (0, a(x, t)ρu ln(|u| + 1)), with a(x, t) ∈ C(R x R_+), we can overcome the difficulty by analyzing the solution of a nonlinear ordinary differential equation for the fractional-step Lax-Friedrichs scheme and Godunov scheme.
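The following minimal sketch shows the fractional-step idea itself, not the specific schemes of [DCL2]: each time step composes a conservative step for the homogeneous equation with a step of the ordinary differential equation u' = g(u) for the source. The scalar equation u_t + f(u)_x = g(u), the Burgers flux, and the damping source below are our own illustrative assumptions, and forward Euler replaces the exact ODE solution analyzed in the text.

    import numpy as np

    def conservative_step(u, f, lam):
        # One Lax-Friedrichs step for u_t + f(u)_x = 0, periodic boundaries.
        up, um = np.roll(u, -1), np.roll(u, 1)
        return 0.5 * (up + um) - 0.5 * lam * (f(up) - f(um))

    def source_step(u, g, dt):
        # One forward-Euler step of the pointwise ODE u' = g(u).
        return u + dt * g(u)

    def fractional_step(u0, f, g, dx, dt, n_steps):
        u = u0.copy()
        for _ in range(n_steps):
            u = conservative_step(u, f, dt / dx)   # homogeneous part
            u = source_step(u, g, dt)              # inhomogeneous part
        return u

    f = lambda u: 0.5 * u**2      # illustrative flux
    g = lambda u: -u              # illustrative damping source
    x = np.linspace(0.0, 1.0, 400, endpoint=False)
    u0 = np.sin(2 * np.pi * x)
    dx = x[1] - x[0]
    u = fractional_step(u0, f, g, dx, dt=0.4 * dx, n_steps=100)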
THEOREM 5.4 [DCL2]. Suppose that the inhomogeneous term satisfies conditions C1°-C3° (see [DCL2]) and the initial data (ρ_0(x), u_0(x)) satisfy (5.4). Then there exists a convergent subsequence of the approximations v^ε(x, t) such that

(ρ^ε(x, t), m^ε(x, t)) → (ρ(x, t), m(x, t)) a.e.

For unattached wave configurations (i.e., R is not attached to the wedge corner), a λ-shock, which is a characteristic feature of shock wave/boundary layer interactions, often (but not always) forms in the corner region in experiments. These solutions differ dramatically from the corresponding Euler solutions, but only locally near the corner; in particular, numerical/experimental comparisons seem to indicate that the important Mach stem region is not affected.
(4) Small scales. No matter how small the viscosity, it will destroy any complicated inviscid structure which exists at small enough length scales. The relevance of this factor is debatable. At this point, no numerical simulation has been carried out with sufficient resolution to capture viscous length scales for the corresponding experimental test gas. For relatively simple flowfields such as RR and some SMR cases it is very unlikely that any complex structure exists at scales smaller than those already computed for either equations (1) or (3). However, numerical experimentation with adaptive mesh refinement has revealed considerable small-scale structure for DMR flowfields at low γ and high M_s. One is led to wonder whether or not relevant inviscid length scales might become arbitrarily small as γ → 1.0 at high M_s.

In summary, the evidence indicates that the zero-viscosity limit is singular for equations (1), but the significance of this fact is not clear. Stability of self-similar flowfields with respect to unsteady perturbations has already been considered for the Kelvin-Helmholtz instability of the main slip surface in DMR flowfield calculations at high enough resolution. This result alone indicates that a wide variety of self-similar solutions are not stable to unsteady perturbations. A more interesting scenario concerns the RR-MR transition. It is possible that equations (3) have multiple solutions near the transition curve comprising a hysteresis loop with stable and unstable branches; a possible mechanism for jumping between branches in an experiment or CFD simulation might be a judiciously chosen unsteady perturbation. Finally, very small-scale structures in complex situations such as low-γ DMR flow might well be created or destroyed by small perturbations in an experiment or calculation. To distinguish such events from true self-similar structure one would have to continue the experiment/calculation long enough so that the events were no longer small scale.

4. Conclusions. From the discussion of the preceding section it is seen that the questions of existence and uniqueness for self-similar oblique shock wave reflection have no obvious answers. However, certain regions of parameter space can be selected where the situation is clearer; we refer to RR cases away from the transition line and to weak SMR cases. Here, there is no reason to believe other than that solutions exist and are unique. The RR situation is somewhat easier to consider from the PDE point of view, primarily because the slip surface of MR is absent. SMR cases for which the slip surface terminates at the wall boundary stagnation point should not be significantly more difficult. Future CFD studies are indicated in several areas. For example, the possibility of an 'organizing center' should be carefully looked into. Another area would be to try unsteady perturbations near the RR-MR transition line. Also, the study of RR-MR transition would be greatly facilitated by a high-resolution Navier-Stokes capability for modelling the boundary layer; it is quite likely that a few such calculations would substantially clear up some of the ambiguities in the experimental record. Finally, more work is needed in the low-γ regime.
JAMES GLIMM*†‡ Abstract. The subject of nonlinear hyperbolic waves is surveyed, with an emphasis on the discussion of a number of open problems. Key words. Conservation Laws, Riemann Problems. AMS(MOS) subject classifications. 76N15, 65M99, 35L65
1. Introduction. The questions which modern applied science asks of the area of nonlinear hyperbolic equations concern an analysis of the equations, a search for effective numerical methods and an
understanding of the solutions. The analysis of equations involves mathematical questions of existence, uniqueness and regularity. It is the special features of nonlinear hyperbolic equations which
give these standard mathematical concerns a broader scientific relevance. The basic conservation laws of physics are hyperbolic, and fit into the discussion here. They do not in general have regular
solutions. An exact classification of the allowed singularities (i.e. the nonlinear waves) is an open problem in the presence of realistic or complex physics, chemistry, etc. and/or higher spatial
dimensions. Existence and uniqueness of solutions is not a consequence of the fact that the equations "come from physics" and thus "must be O.K." The equations come from degenerate simplifications of
physics in which all length scales have been eliminated. Serious mathematical work remains to determine the formulations of these equations which have satisfactory mathematical properties, and thus
provide a suitable starting place for effective numerical computation and for scientific understanding. We next discuss the modification of equations. Simplified versions of complex equations capture
the essential difficulties in a form which can be analyzed conveniently and understood. Equations are also rendered more complex through the inclusion of additional physical phenomena. Such steps are
important for experimental validation and for applications. Often the complication involves additional terms or equations containing a small parameter, and the limit as the parameter tends to zero is
of great interest. This interest in small parameters can be traced back to the physics, where events on very different length and time scales arise in a single problem. Another approach to this
hierarchy of equations, physics, and length and time scales is to analyze asymptotic properties (including intermediate time scales) of the solution as t → ∞. Effective computational measures
address basically the same difficulties and issues, but with different tools and approaches.

*Department of Applied Mathematics and Statistics, SUNY at Stony Brook, Stony Brook, NY 11794-3600. †Supported in part by the Applied Mathematical Sciences Program of the DOE, grant DE-FG02-88ER25053. ‡Supported in part by the NSF Grant DMS-8619856.

The presence of widely varying length and time scales and in some cases of underspecified physics can degrade the accuracy and the resolution of numerical methods. Three dimensional and especially complex or chaotic solutions
are typically underresolved computationally. The search for effective numerical methods can be broken into two main themes: concerns driven by computer hardware and concerns driven by features of the
solution. Effective numerical methods which are driven by solution features, i.e. by physics, depend on the study of nonlinear waves. Most modern numerical methods for the solution of nonlinear
conservation laws employ Riemann solvers, i.e. the exact or approximate solution of nonlinear wave interactions, as part of the numerical algorithm. Equally important is the use of limiters to avoid Gibbs-phenomenon overshoots and oscillations associated with the discrete approximation to discontinuous solutions. Of the many possible issues associated with the analysis of solutions, we focus on chaos. By chaos, we mean a situation in which the microscopically correct equations are ill posed or ill conditioned (through sensitive dependence on initial conditions) for the time periods of interest and must be interpreted stochastically. The stochastic interpretation leads to new equations, useful on larger length and time scales. Further background on the topics discussed here can be found in recent general articles of the author and in references cited there [21-23,26].

2. Nonlinear Wave Structures. The nonlinear waves which arise in many of the examples of complex physics (elastic-plastic deformation, magnetohydrodynamics, chemically reactive fluids, oil reservoir models, granular material, etc.) are currently being explored. Striking, novel and complex mathematical
phenomena have recently been discovered in these examples, including crossing shocks, bifurcation loci, shock admissibility dependence on viscosity, and inadmissible Lax shocks. The class of solved
Riemann problems continues to increase, as a result of examining these physically motivated examples in detail. The general theories in which these new phenomena are imbedded are to a large extent a
subject for future research. Perhaps the most pressing question in this circumstance is, having found the trees, to discover the forest. Wave interactions are technically related to the subject of
ordinary differential equations in the large. This fact suggests an approach to the construction of general theories for Riemann solutions. We also ask whether these novel wave structures have
counterparts in experimental science. We believe the answer will be positive, and to the extent that this is the case, mathematical theory is ahead of experiment, in making predictions about nature.
The motivating example of three phase flow in oil reservoirs is not a promising place to resolve this question. Three phase flow experiments are difficult, inconclusive and seldom performed. The
equations themselves are not known definitively, and for this reason, the topological argument of Shearer [46] that an umbilic point must occur on topological grounds for any plausible three phase
flow equation is significant. Ting has argued that umbilic points also occur in elastic-plastic flow [47]. Garaizar has examined common constitutive laws for common metals [15] and found that the umbilic point will occur in uniaxial compression, at compressions within the plastic region [16]. It remains to be determined whether these umbilic points are an artifact of a constitutive law or whether they reflect a true property of nature. In view of the considerable progress which has been made with nonstrictly hyperbolic conservation laws at the level of wave interactions and Riemann solutions, it is a very interesting question to examine these same equations from the point of view of the general theory. This means considering general Cauchy data, not just Riemann (scale invariant) data. We mention two recent results of this type. A general existence theorem for one of the conservation law systems with quadratic flux and an isolated umbilic point was proved [34] using the method of compensated compactness. At the umbilic point, the entropy functions required by this method have singularities. It was shown that for a restricted subclass of entropies, the singularity was missing, and that the proof could be completed using this restricted class of entropies. For another system in this class, the stability to perturbations of finite amplitude of a viscous shock wave was demonstrated [38]. On the basis of these examples and the related work of others, it appears that the general theory of conservation laws will admit extensions to allow a loss of, e.g., strict hyperbolicity or genuine nonlinearity. Since many conservation laws arising in science appear to have such features, such extensions would be of considerable interest.
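For orientation, a commonly studied model with an isolated umbilic point (this is the quadratic-flux normal form of the Schaeffer-Shearer classification, quoted here only as an illustration) is

U_t + (∇C(U))_x = 0, U = (u, v), C(u, v) = (a/3)u^3 + b u^2 v + u v^2.

The flux Jacobian is the symmetric matrix of second derivatives of C, and its two eigenvalues coincide only at the origin (provided a ≠ 1 + b^2), which is the umbilic point; the parameters (a, b) partition these models into the cases studied in the Riemann problem literature.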
On the basis of the above discussion, answers to the following questions might help to define the overall structure of nonlinear hyperbolic wave interactions: 1. A complete study of bifurcations. A
classification of generic unfoldings of
the underresolved physics of Riemann problems would be both interesting and useful. A set of bifurcation loci for Riemann problem left states was proved to be full (for left and mid states in the
complementary set, there is no bifurcation resulting from variation of the left state) in [13,14]. Remaining problems concern bifurcation as the conservation law or its viscous terms (used even in
the case of zero viscosity to define admissibility) are varied. Moreover, bifurcation for wave curves passing through the resonant (umbilic) set has yet to be addressed. A classification of the
bifurcation unfoldings which result from left states (or conservation laws, etc.) located on the bifurcation loci has not been given. 2. The local theory of multiple resonant eigenvalues (higher
order umbilic points). For a higher dimensional state space, the resonant (umbilic) set is a manifold with singularities. The local behavior of Riemann solutions near the resonant set will depend on
the dimension and the codimension of the resonant set, and on its local singularity structure as a subset of the state space. Beyond this, there will be some number of "bifurcation parameters" which
partition the Riemann solutions into invariance classes. 3. A topological characterization of removable vs. nonremovable resonance. For what class of problems is a resonance required? Is there an
example where it can be observed experimentally? 4. Asymptotics, large systems with small parameters, rate limiting subsystems
and "stiff" Riemann problems. 5. A resolution of entropy conditions. The physical entropy principle is not only that entropy will increase across a shock, but that the admissible solution will be the
one which maximize3 the rate of entropy production. Entropy is defined in many physical circumstances. For example the equations of two fluid Buckley-Leverett flow in porous media are described by a
single nonconvex conservation law. Entropy, in the sense of thermodynamics can be understood in this context [1], and yields the well known entropy condition of Oleinik in this example. However, to
obtain the Oleinik entropy condition, the above strong form of the physical entropy condition is needed. The equations for three phase flow in porous media inspired much of the recent work on Riemann
solutions, in which it was realized that a number of mathematically motivated entropy conditions were inadequate. Thus it would be worthwhile to return to the first principles of physics and to
formulate a physically based entropy condition for three phase flow and for the quadratic flux Riemann problems. 6. Nonuniqueness and nonexistence of Riemann solutions. Symmetry breaking (non-scale-invariant or higher dimensional) solutions for Riemann data. There is enough evidence that this phenomenon will occur, but we have neither enough examples nor enough theory to predict under what circumstances it should be expected. 7. Discretized flux functions. Both the qualitative (wave structure) and the quantitative (convergence rates) aspects of convergence are of interest. How should
flux discretization be performed in order to preserve some specific aspect of the Riemann solution wave structure? 8. Special classes of subsystems containing important examples. E.g. mechanical
systems, with a state space given as a tensor product of configuration space and a momentum space, or more generally as a cotangent bundle over a configuration space manifold. 9. The use of known
Riemann solutions as a test for numerical methods. 3. Relaxation Phenomena. The internal structure of a discontinuity refers to any modification of the equation and the underlying physics which
replaces a discontinuous solution by a continuous one (having a large gradient). The internal structure is of interest partly as a test of admissibility of the discontinuity and partly because of the
more refined level of resolution and physics which is described from this approach. The conservation law is scale invariant, and thus has no length scales in it, while the internal structure
necessarily has at least one length scale (its width) and may have more. For example, consider chemically reactive fluid dynamics. With even moderately complex chemistry, there will be multiple
reactions, and reaction zones, each with individual length scales (the width of an individual reaction). In the conservation law, the relative speeds of the interactions are lost, as all times and
lengths have been set to zero. It is in this way that the physics described by the conservation law becomes underspecified. There are two approaches to the internal structure of a discontinuity.
Either new equations are added to enlarge the system, or new terms are added to the original system, without a change in the number of dependent variables. There are other and more complex possibilities,
such as the fluid equations giving way to the Boltzmann equation, in which an infinite number of new variables are used and the old variables are not a subsystem of the new, but are only recovered
through an asymptotic limit. Such situations, while they do occur, are outside the scope of the present discussion. Common examples of approximate discontinuities with internal structure are shock
waves, chemical reaction fronts, phase transitions, and plastic shear bands. The internal structure involves concepts from nonequilibrium thermodynamics. The use of higher order terms in the
equations is the simplest and most familiar way to introduce internal structure into a discontinuity. The coefficients (viscosity, heat conduction, reaction rates etc.) in these terms necessarily
have a dimension. The coefficients are known as transport coefficients; they are defined in principle from nonequilibrium thermodynamics. Once the coefficients are known, equilibrium thermodynamics
is used exclusively. The other approach, which in many examples is more fundamental, is to enlarge the system. We regard the nonequilibrium variables and reactions as divided into fast and slow. This
division is relative to the region internal to the discontinuity; even the slow variables could be fast relative to typical fluid processes. Then the fast variables are set to their instantaneous
equilibrium, relative to the specified values of the slow variables. This describes an approximation in which the ratio of the fast to slow time scales becomes infinite. Another description would be
to say that the fast variables are at thermodynamic equilibrium, relative to constraints set by the values taken on by the slow variables. The slow variables are governed by differential equations
derived from nonequilibrium thermodynamics applied to this limiting situation. A typical equation for the slow variable is an ordinary differential equation, i.e. a Lagrangian time derivative set
equal to a reaction or relaxation rate source term. The lower order source terms have a dimension and introduce the length scale which characterizes the internal structure of the discontinuity. The
equations for chemically reacting fluids have exactly this form, and can be regarded as a completely worked out example of the point of view proposed here. A comparison of these two approaches has
been worked out by T.-P. Liu [37], and is summarized in his lectures in this volume. Liu considers the lowest order nonequilibrium contribution to the internal energy of a (diatomic) gas, namely the
vibrational energy in the lowest energy state of the molecule. Thus there are now two contributions to the internal energy, this one vibrational mode and all remaining internal energy contributions.
The vibrational energy has a preferred value as a function of the other thermodynamic variables, namely its equilibrium value. There is also a relaxation rate, defined in principle from statistical
physics, but in practice determined by measurement, for the return of the vibrational energy to this preferred value. The result is an enlarged system, with vibrational energy as the new dependent
variable. Liu's asymptotic analysis, as the relaxation rate goes to infinity, leads to the smaller system, augmented with a higher order viscosity term, and a computation of the viscosity coefficient in terms of the nonequilibrium relaxation process. For a quantitatively correct description of rarefied gas dynamics, this model is too simple, and the full chemistry of N2, O2, CO2, H2O, etc., including free radicals, dissociation and partially ionized atoms, is needed. Realistic models of chemistry for rarefied gas dynamics and internal shock wave structure can involve up to 100 variables. Such systems are typically very stiff, are still approximate, and depend on rate laws which are not known precisely. Liu's analysis assumes that the original system of conservation laws is strictly hyperbolic and genuinely nonlinear. In a neighborhood of a phase transition and especially along a phase boundary, genuine nonlinearity typically fails for the conservation laws describing gas dynamics [40]. Presumably dissociation and ionization have similar effects on the convexity of the rarefaction and shock Hugoniot wave curves, and hence on the structure of the Riemann solutions. The metastable treatment of dissociation assumes that species concentrations are dependent variables, and that their evolution is governed by rate laws. In the equilibrium description, all reactions have been driven to completion and all concentrations are at equilibrium values. Thus these two descriptions differ in the number of dependent variables employed. It would be of interest to extend Liu's analysis to a wider range of cases, and to remove the restrictive hypotheses in it.
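A caricature of this computation may be helpful; the semilinear model below is the standard 2 x 2 relaxation system of Jin-Xin type, not Liu's vibrational-energy system, and is included only to show how a viscosity coefficient emerges from a relaxation process. Consider

u_t + v_x = 0, v_t + a^2 u_x = (f(u) - v)/τ.

Expanding v = f(u) + O(τ) and eliminating v gives the Chapman-Enskog approximation

u_t + f(u)_x = τ ((a^2 - f'(u)^2) u_x)_x + O(τ^2),

so the effective viscosity coefficient is a^2 - f'(u)^2, which is positive exactly under the subcharacteristic condition |f'(u)| < a.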
Caginalp [5, 7] considers phase transitions on the level of the heat equation alone. The simple system describes the Stefan problem, and the augmented system includes a Landau-Ginzburg equation. Caginalp discusses a number of asymptotic limits of the augmented system [8, 9], and gives physical interpretations of the assumptions on which these limits are based. Anisotropy is important in this context, as it provides the symmetry breaking which initiates the dendritic growth of fingers [6]. Rabie, Fowles and Fickett [43] replace compressible fluid dynamics by Burgers' equation and their augmented system then has two
equations. They examine the wave structure, and compare it to detonation waves, a point of view carried further by Menikoff [41]. Efforts to describe metastable phase transitions by the addition of
higher order terms in the compressible (equilibrium thermodynamics) fluid equations have led to solutions in qualitative disagreement with experiment, as well as with physical principles. It should
be recalled that for common materials and for most of the phase transition parameter space (excepting a region near critical points), the influence of a phase boundary is felt for a distance of only
a few atoms from the phase boundary location. On the length scale of these few atoms, the continuum description of matter does not make a lot of sense. Thus the view, sometimes expressed, that on
philosophical grounds, there should be a gradual transition between the phases, is valid, if at all, only within the context of quantum mechanics and statistical physics. In this case the continuous
variable is the fraction of intermolecular bonds in the lattice, or the quantum mechanical probability for the location of the bonding electrons, etc. Correlation functions for particle density are
studied in this approach. The mathematical structure associated with metastability is further clouded by the occurrence of elliptic regions in some formulations of the equations. According
to a linearized analysis, the equations are then unstable, and presumably unphysical. They are at least ill posed in the sense of Hadamard. Detailed mathematical analyses of a Riemann problem with
an elliptic region [31,32,35] did not reveal pathology which would disqualify these equations for use in physical models. The theory in these examples is not complete, and especially the questions of
admissibility of shock waves and the stability of wave curves should be addressed. Examples of computational solutions for equations with elliptic regions are known to have solutions without obvious
pathology as well [4, 18]. There are examples, such as Stone's model for three phase flow in porous media, where the elliptic region appears to have a very small influence on the overall solution. In
most cases, the elliptic regions result from the elimination of some variable in a larger system. In this sense they are not fundamentally correct. Whether they are acceptable as an approximation in
a specific case seems to depend on the details of the situation. A correct theory should predict a number which can be verified by experiment. For conservation laws, the wave speed is a basic
quantity to be predicted. In the case of metastable phase transitions, this task is complicated, for some parameter values, by the occurrence of interface instabilities, which lead to fingers
(dendrites) and which produce a mixed phase mushy zone. The propagation speed of this dynamic mushy zone is not contained in a one dimensional analysis using microscopically correct thermodynamics
and rate laws. In other words, there are no physically admissible Riemann solutions to the one dimensional conservation laws in such cases. The equations of chemically reactive fluids may also fail to have physically admissible one dimensional Riemann solutions. For some parameter ranges, the wave front may lose its planar symmetry and become crinkled or become fully three dimensional, through
the interaction with chaotically distributed hot spot reaction centers. Recent progress on this issue has been obtained [39]; older literature can be traced from this reference as well. The question
of complex, or chaotic internal interface structure suggests the following point of view. In such cases, the question of physical admissibility is a modeling question, i.e. a judgement to be made on
the basis of the level of detail desired in the model equations. The admissible solutions for microscopic physics and for macroscopic physics need not be the same. A change in admissibility rules is
really a change in the meaning of the equations, i.e. a change in the equations themselves. This point of view can be taken further, and of course we realize that there is no need for the equations
of microscopic and macroscopic physics to coincide, even when they are both continuum theories. The relation between these two solutions (or equations) is the topic of the next section. Specific
questions posed by the above discussion include: 1. In various physical examples of relaxation phenomena, it would be desirable to determine correct equations, mathematical properties of the
solutions, including the structure of the nonlinear waves, and asymptotic limits giving relations between various distinct descriptions of the phenomena. 2. Which properties of a larger system lead
to elliptic regions in an asymptotically limiting subsystem? 3. Is there a principle, similar to the Maxwell construction, which will replace
a system of conservation laws having an elliptic region with a system having an umbilic line, or surface of codimension one in state space? Does this construction depend on additional physical
information, such as the specification of the pairs of states on the opposite sides of the elliptic region joined by tie lines, as in the case of a phase transition? Is there a physical basis for
introducing tie lines in the case of the elliptic region which arises in Stone's model? 4. What is the proper test for dimensional symmetry breaking of a one dimensional Riemann solution? Symmetry
breaking should be added to the admissibility criteria, and when the criteria fails, there would be no admissible (one dimensional) Riemann solution. The same comments apply to the breaking of scale
invariance symmetry. 5. The mathematical theory of elliptic regions needs to be examined more fully, especially to determine the importance of viscous profiles and conditions for uniqueness of
solutions. 4. Surface Instabilities. The nonlinear waves considered in the two previous sections are one dimensional, and in three dimensions, they define surfaces. Sometimes the surfaces are
unstable, and when this occurs, a spatially organized chaos results. Examples are the vortices which result from the roll up (Kelvin-Helmholtz instability) of a slip surface, and the fingers which
result from a number of contexts: the Taylor-Saffman instability in the displacement of fluids of different viscosity in porous media, the Rayleigh-Taylor and Richtmyer-Meshkov instabilities
resulting from the acceleration of an interface between fluids of different densities, the evolution of a metastable phase boundary giving rise to the formation of dendrites and a multiphase
transitional mushy zone between the two pure phases. Instabilities in chemically reactive fronts were referred to in the previous section. Surface instabilities give rise to a chaotic mixing region,
which can be thought of as an internal layer between two distinct phases, fluids, or states of the conservation law. In the case of vortices, the mixing occurs first of all in the momentum equation,
and for this reason is modeled at the simplest level by a diffusion term in this equation. The coefficient of the diffusion term is viscosity, and the required viscosity to model the turbulent mixing
layer is larger than the microscopically defined viscosity; it is called eddy viscosity to distinguish it from the latter. Similarly the simplest model of fingering induced mixing is a diffusion term
in the conservation of mass equation. Again it has a much larger coefficient than the mass diffusion terms of microscopic physics. We call these simple mixing theories the effective diffusion
approximation. In the language of physics, they provide a renormalization, in which bare, or microscopically meaningful parameters are replaced by effective or macroscopically meaningful ones. For
many purposes the effective diffusion approximation does not give a sufficiently accurate description of the mixing layer. The effective diffusion approximation gives a smeared out boundary in
contrast to the often observed sharp boundary to the mixing region. The theories which set the effective diffusion parameter (the eddy viscosity, etc.) are phenomenological and tend to be very
context dependent.
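The content of the effective diffusion approximation is easy to exhibit numerically. The following minimal sketch (Python; the value of the effective coefficient $D_{eff}$ and all grid parameters are illustrative assumptions, not values drawn from the literature cited here) diffuses an initially sharp interface and measures the resulting layer width.

    import numpy as np

    # Minimal sketch of the effective diffusion approximation: an initially
    # sharp interface between two phases is smeared by a single constant
    # coefficient D_eff.  The value of D_eff is an assumed, illustrative
    # number; in practice it is phenomenological and context dependent.
    D_eff = 1.0e-3                         # effective (eddy) diffusion coefficient
    L, N = 1.0, 400                        # domain length, number of cells
    dx = L / N
    x = np.linspace(0.5 * dx, L - 0.5 * dx, N)
    c = np.where(x < 0.5, 1.0, 0.0)        # sharp interface at x = 0.5

    dt = 0.4 * dx**2 / D_eff               # explicit stability limit
    for step in range(2000):
        # second order explicit update of c_t = D_eff c_xx
        c[1:-1] += D_eff * dt / dx**2 * (c[2:] - 2.0 * c[1:-1] + c[:-2])

    t = 2000 * dt
    width = x[c > 0.05][-1] - x[c < 0.95][0]   # crude 5%-95% mixing width
    print(f"width = {width:.3f}, sqrt(4 D_eff t) = {np.sqrt(4 * D_eff * t):.3f}")

The layer width grows in proportion to $\sqrt{4 D_{eff} t}$ and the boundary is smeared rather than sharp; reproducing the $t^2$ growth of the Rayleigh-Taylor mixing region discussed below would require a time dependent $D_{eff}$, a further symptom of the phenomenological character of these theories.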
For this reason, the key parameter in this theory is known with assurance only if it has been experimentally determined. Of even greater importance, the effective diffusion approximation contains no
length scales beyond the total width of the mixing region. It represents an approximation in which all mixing occurs at a microscopic scale. The internal structure of the mixing layer is more
complicated. It is less well mixed and somewhat lumpy, as we now explain. The initial distribution of unstable modes (vortices or fingers) on the unstable interface is governed by the theory of the
most unstable mode. The pure conservation laws are unstable on all length scales, with the shortest length scales having the most rapid growth. For this reason, these equations must be modified by
the inclusion of length dependent terms (interface width, surface tension, (microscopic) viscosity, curvature dependent melting points, etc.) which stabilize all but a finite number of wave lengths.
Of the remaining unstable modes, the one with the fastest growth rate is called the most unstable. That mode (or that range of wave lengths) is presumed to provide the initial disturbance to the
interface, in the absence of some explicit initialization containing other length scales. However initialized, the modes grow and interact. There is a significant tendency for merger and growth of
wave lengths. Presumably this is due to our picture of the mixing region as a thin layer or thickened interface, and to the well known tendency in two dimensional turbulence for length scales to
increase. In any case the merger of modes and the growth of length scales produces a dynamic renormalization in the dimensionality of the equation, and a change in the algebraic growth rate of the
interface thickness. The distribution of length scales in the mixing layer can be thought of as a random variable. It is time dependent, and ranges from the minimum size of the most unstable wave
length (or what is typically nearly the same, the smallest unstable wave length) up to a possible maximum value of the current interface thickness. The distribution of length scales is also typically
spatially dependent, and is a function of the distance through the mixing layer. Thus the mixing layer need not be homogeneous, but may contain distinct sublayers, with different statistical
distributions of length scales within each layer. This statistical distribution of spatially and temporally dependent length scales is completely missing in the effective diffusion approximation. We
now specialize the discussion to the Rayleigh-Taylor instability and we consider only one aspect of the spatially dependent length scale distribution, namely the interface width as a function of
time. According to experiment [44], the interface thickness, or height, $h(t)$, has the form $h = \alpha g A t^2$, where $g$ is the accelerational force on the interface, $t$ is time, and $A = (\rho_1 - \rho_2)/(\rho_1 + \rho_2)$ is the Atwood number characterizing the density contrast at the interface, with $\rho_i$, $i = 1, 2$, denoting the densities in the two fluids. The first computations of the unstable interface which show quantitative agreement with the experimental value of $\alpha$ for a time regime up to and beyond bubble merger are reported in [25]. Control of numerical diffusion through a front tracking algorithm appears to be the
essential requirement in obtaining this quantitative agreement with experiment. See also the paper of Zhang in this volume, where the computations are discussed in more detail and also see the
related computations of Zufiria [49], who also obtains agreement with the experimental value of $\alpha$, for a more limited time and parameter range. Because of the sensitivity of the unstable interfaces to
modifications of the physics, the computations are no doubt also sensitive to details in the numerical methods. For this reason it is very desirable to present not only carefully controlled analyses of each method, but also carefully controlled comparisons between the methods. The outer edge of the Rayleigh-Taylor mixing region adjacent to the undisturbed heavy fluid is dominated by bubbles;
for this reason we refer to it as the bubble envelope. Now we adapt the language of multiphase flow and consider the transition from bubbly to slug flow. These two bubbly and slug flow regimes have
distinct equations, or constitutive laws, but are both derived in principle from the same underlying physics, namely the Euler or Navier-Stokes equations. In this sense the regimes can be thought of
as phases in the statistical description of the flow in terms of bubbles and droplets. Taking this point of view, the transition between the regimes is a phase transition. The order parameter of this
transition is the void fraction; for small void fraction, bubbly flow is stable and for large void fraction, slug flow is stable. The metastable process for the bubble to slug flow transition to take
place is bubble merger, which is exactly the dominant process at the Rayleigh-Taylor bubble envelope. From this point of view, the role and importance of statistical models for the bubble merger
process [17,24,25,45,50] becomes clear. These models have as their goal to yield rate laws and constitutive relations for the metastable transition regions. In particular they should yield the
internal structure of the Rayleigh-Taylor bubble envelope. Major questions concerning the theory of unstable interfaces and chaotic mixing layers are open. 1. The importance of microscopic length
scales, viscosity, surface tension, interfacial thickness, (mass) diffusion or compressibility, has been an area of active research. However, the exact role of these features in regularizing the equations to the extent that the solutions
are well defined for all time has not been established. Without regularization, the solutions are known or suspected of containing essential singularities in the form of cusps, which appear to
preclude existence beyond a limited time. 2. Does the mixing zone have a constant width or grow as some power of t? Usually the power of t is known with some level of assurance, but the coefficient
in front of the power and the dimensionless groups of variables it depends upon may not be known, depending on the specific situation. 3. Mode splitting, coupling, merging and stretching are the
important ingredients of the dynamics of mixing layer chaos. Theories for the rates governing these events are needed. 4. Distributions of length scales within the mixing zone are needed. Stable
statistical measures of quantities which are reproducible, both experimentally and computationally are needed. Point measures of solution variables are not useful in the description of chaos, while
statistical correlation functions have proven useful in the study of turbulence, for example.
5. Does the idea of fractal dimension, or of a renormalization group fixed point have a value in this context? 6. Chaotic mixing layers are very sensitive to numerical error and difficult to
compute. An analysis of the accuracy of numerical methods for these problems would be very useful. For the same reason, comparison to experiment is important. 5. Stochastic Phenomena. The issues to
be discussed here are similar to those raised in §4. The main difference is that stochastic phenomena do not always concern mixing and, whether or not they concern mixing, they do not have to
be concentrated in or caused by instabilities of thin layers. To illustrate this point, as well as to introduce the next section, we refer to the problem of determining constitutive relations
and properties of real materials. It is well known that the atomic contribution to material strength will give properties of pure crystals, which are very different from (normal) real materials.
Common materials are not pure crystals, but have defects in their crystal lattice structure, impurities, voids, microfractures and domain walls, each of which can be modeled on a statistical basis,
in terms of a density. Similarly, the heterogeneities in a petroleum reservoir occur on many length scales. Some heterogeneities are not accessible for measurement, and can be inferred on a
statistical basis. Others, such as the vertical behavior in the vicinity of a well bore, can be measured at a very fine scale, but it is not practical to use this detail in a computation, so again a
statistical treatment is called for. Weather forecasting data also illustrates the point that the available data may be too fine grained to be usable in a practical sense, and averaged data,
including the statistical variability of averaged data may be a more useful level of description of the problem. 6. Equations of State. The equation of state problem extends beyond the fluid
equilibrium thermodynamic equations of state, to elastic moduli, constitutive relations, yield surfaces, rate laws, reaction rates and other material dependent descriptions of matter needed to
complete the definition of conservation laws. It is the portion of the conservation law which is not specified from the first principles of physics on the basis of conservation of mass, momentum,
etc. The comments of this section apply as well to the transport coefficients, which are the coefficients of the higher order terms which are added to the conservation laws, such as the coefficients
of viscosity, diffusion, thermal conductivity, etc. There are two aspects to this problem. The first is: given the equation of state, to determine its consequences for the solution of the
conservation laws, the nonlinear wave structure, and the numerical algorithms. This problem is the topic of §2. The second problem is to determine the equation of state itself. With the increasing
accuracy of continuum computations, we may be reaching a point where errors in the equation of state could be the dominant factor in limiting the overall validity of a computation. Equations of state
originate in the microphysics of subcontinuum length scales, and their specification draws on subjects such as statistical physics, many body theory and quantum mechanics at a fundamental level.
Thus an explanation is needed to justify the inclusion of this question in an article oriented towards a continuum mathematics audience. Although the equations of state originate in the
subcontinuum length scales, for many purposes the problems do not stay there. In many cases, there are important intermediate structures which have a profound influence on the equation of state, and
which are totally continuum in nature. This is exactly the point of the two previous sections, which we are now repeating using different language. Thus, for example, one could use a continuum theory
to study cracks in an elastic body, and then, in the spirit of statistical physics, combine the theories of individual cracks or groups of them in interaction, to give an effective theory for the
strength of a material with a given state of micro-crack formation. In other words, important aspects, and in a number of cases, the most important aspects, of the determination of the equation of
state are problems of continuum science. To further illustrate the point being made, consider the example of petroleum reservoir simulation. Here the relative permeability and the porosity are
basic material response functions, in the sense of the equation of state as discussed above. Measurements can be made on well core samples, typically about six inches long. This defines the length
scale of the microscopic physics for this problem. (We do not enter into the program of predicting core sample response functions from physics and rock properties at the scale of rock pores, i.e. the
truly microscopic physics of the problem.) The next measurable length scale is the inter-well spacing, about one quarter mile. However, information is needed on intermediate scales by the
computations. On the basis of statistics and geology, one reconstructs plausible patterns of heterogeneity for the intermediate scales. This is then used to correct the measured relative permeability
functions. The modified relative permeability functions are known as pseudo-functions, and they are supposed to contain composite information concerning both the intermediate heterogeneities and the
permeabilities as measured from core samples. This range of questions concerning the scale up of predictions and measurements from the microscopic to the macroscopic levels is of basic importance to
petroleum reservoir engineering and is an area of considerable current activity. 7. Two Dimensional Wave Structures. The wave interaction problem is a scattering problem [20]. The data for a Riemann
problem is by definition scale invariant, and thus defines the origin as a scattering center. At positive times, elementary waves (defined by the intersection of two or more one dimensional wave
fronts) propagate away from the scattering center. The elementary waves are joined by the one dimensional wave fronts which, through their intersections, define these elementary waves. At large
distance from the scattering center, the solution is determined from the solution of one dimensional Riemann problems. Going to the reduced variables $\xi = x/t$, $\eta = y/t$, the time derivatives are eliminated
from the equations, and in the new variables, the system is hyperbolic at least for large radii, with the radially inward direction being timelike. It has known Cauchy data (at large radii). However,
in general there are elliptic regions at smaller radii, when the solution is considered in the reduced variables, or partially elliptic regions, where
some but not all of the characteristics are complex. Analysis of any but the simplest problems of this type in two dimensions will require the type of functional analysis estimates and
convergence studies which are needed for the analysis of general data in one space dimension. The study of a single elementary wave uses ideas similar to those found in the study of one dimensional
Riemann problems, with the distinction that here the nonlinearities and state space complications tend to be more severe. In this context, the wave curves are known as shock polars, and the analysis
of a single elementary wave involves the intersection of various shock polars, one for each one dimensional wave front belonging to the elementary wave. The intersections may be nonunique, or may fail
to exist, indicating that given wave configurations may exist only over limited regions of parameter space, and that the possibility of non-uniqueness is more of a problem for higher dimensional wave
theory than it is in typical one dimensional wave interactions. As in the earlier sections, non-uniqueness, admissibility, entropy conditions and internal structure are closely related topics. Not
very much is known about the internal structure of higher dimensional elementary waves, and so we indicate two approaches which might be fruitful. Characteristic energy methods were developed by Liu
[36] and extended by Chern and Liu [12] to study large time asymptotic limits and to develop the theory of diffusion waves in one space dimension. The proof of convergence of the Navier-Stokes
equation to the Euler equation [30] also uses characteristic energy methods, as well as an analysis of an initial layer, and the evolution of an initial shock wave discontinuity as Navier-Stokes
data. For the purposes of the present discussion, we note that within the initial layer, there are three nonlinear waves, which are geometrically distinct, but still in interaction. The mechanism of
their interaction and time scale for the duration of the initial layer is set by the diffusive (parabolic) transport of information between the distinct waves. These initial layer and characteristic
energy techniques may be useful in two dimensions for the study of internal structure of two dimensional elementary waves. The classic approach to internal structure for a single wave in one
dimension is through the analysis of ODE trajectories which describe the traveling wave in state space. To apply this method to Riemann problems it is necessary to join such curves, each nearly equal
to a trajectory for a single such traveling wave. In the approximation for which each one dimensional wave is exactly a traveling wave or jump discontinuity, the method of intersecting shock polars
gives a geometric construction of the solution. The method of formal matched asymptotic expansions has also proved useful for the study of two dimensional wave interactions. Here the matching is
used to join the distinct one dimensional waves, while the formal asymptotic expansions describe the single waves in the approximation of zero wave strength (the acoustic limit). This method was
recently applied to the study of the kink mode instability in a shear layer discontinuity at high Mach number. The kink mode wave pattern was known from shock polar analysis, see [10,11,33] and from
computations [48]. The expansions showed the instability of the unperturbed shear flow and thus gave a theory of the initiation of this wave configuration from
an unperturbed shear flow state [2]. An extension of this analysis concerned the bifurcation diagram of the shock polars [3]. Matched asymptotic expansions have also been used in a rigorous
theoretical analysis for the large time limit in one space dimension [27]. On this basis, we mention asymptotic methods for use in mathematical proofs, in the study of two dimensional wave
interaction problems. The above techniques succeed in joining one dimensional elementary waves in regions where the solution is slowly changing and the waves themselves are widely separated. The
problem we pose has waves meeting at a point, so the juncture occurs where the solution is rapidly changing. For this reason one should not rule out the occurrence of new phenomena. A solved problem
for the interaction of viscous waves is the shock interaction with a viscous boundary layer [42]. This interaction produces a lambda shock, which is a structure which would not be predicted either
from the inviscid theory of a shock interacting with a boundary, or from the viscous theory of a single shock wave. A one dimensional analog problem with similar mathematical difficulties would be to
understand the internal structure (viscous shock layers) associated with the crossing point of two shock waves. The questions discussed in the previous sections will all be important for higher
dimensional wave interactions as well. In addition, we pose a few specific questions. 1. Generalize the classification of [19] for two dimensional elementary waves to
general equations of state, as formulated by [40]. 2. Prove an existence theorem for the oblique reflection of a shock wave by a ramp. The case of regular reflection is easiest and is the proper
starting point. Weak waves can be assumed if this is helpful. A more detailed discussion of this problem, including ideas for the construction of an iteration scheme to prove existence, is presented
in [19]. The major interest in this problem derives from unresolved differences between proposed bifurcation criteria or possible nonuniqueness for the overlap region in which both regular reflection
and Mach reflection are possible on the basis of simple shock polar analyses. 3. Determine bifurcation criteria for two dimensional elementary waves. There is a large amount known concerning this
problem. See [29] for background information and a deeper discussion of this area. A recent paper of Grove and Menikoff involves bifurcations in non localized wave interactions arising from
noncentered rarefaction waves [28], an issue which is part of the general bifurcation problem. 4. What is the role of scale invariance symmetry breaking for higher dimensional elementary waves? 5.
The correct function space for a general existence theory for higher dimensional conservation laws depends on the equation of state, because local singularities are allowed, and occur in centered
(cylindrical) waves. The order of the allowed singularity, and the $L_p$ space it belongs to is limited by the equation of state. This relation has not been worked out, and so the existence theory and
large time asymptotics for radial solutions would be of interest.
References
1. A. Aavatsmark, To Appear, "Capillary Energy and Entropy Condition for the Buckley-Leverett Equation," Contemporary Mathematics.
2. M. Artola and A. Majda, 1987, "Nonlinear Development of Instabilities in Supersonic Vortex Sheets," Physica D 28, pp. 253-281.
3. M. Artola and A. Majda, 1989, "Nonlinear Kink Modes for Supersonic Vortex Sheets," Phys. Fluids.
4. J. B. Bell, J. A. Trangenstein, and G. R. Shubin, 1986, "Conservation Laws of Mixed Type Describing Three-Phase Flow in Porous Media," SIAM J. Appl. Math. 46, pp. 1000-1017.
5. G. Caginalp, 1986, "An Analysis of a Phase Field Model of a Free Boundary," Archive for Rational Mechanics and Analysis 92, pp. 205-245.
6. G. Caginalp, 1986, "The Role of Microscopic Anisotropy in the Macroscopic Behavior of a Phase Field Boundary," Ann. Phys. 172, pp. 136-146.
7. G. Caginalp, To Appear, Phase Field Models: Some Conjectures on Theorems for their Sharp Interface Limits.
8. G. Caginalp, To Appear, Stefan and Hele-Shaw Type Models as Asymptotic Limits of the Phase Field Equations.
9. G. Caginalp, To Appear, "The Dynamics of a Conserved Phase Field System: Stefan-like, Hele-Shaw and Cahn-Hilliard Models as Asymptotic Limits," IMA J. Applied Math.
10. Tung Chang and Ling Hsiao, 1988, The Riemann Problem and Interaction of Waves in Gas Dynamics (John Wiley, New York).
11. Guiqiang Chen, 1987, "Overtaking of Shocks of the Same Kind in the Isentropic Steady Supersonic Plane Flow," Acta Math. Sinica 7, pp. 311-327.
12. I-Liang Chern and T.-P. Liu, 1987, "Convergence to Diffusion Waves of Solutions for Viscous Conservation Laws," Comm. in Math. Phys. 110, pp. 503-517.
13. F. Furtado, 1989, "Stability of Nonlinear Waves for Conservation Laws," New York University Thesis.
14. F. Furtado, Eli Isaacson, D. Marchesin, and B. Plohr, To Appear, Stability of Riemann Solutions in the Large.
15. X. Garaizar, 1989, "The Small Anisotropy Formulation of Elastic Deformation," Acta Applicandae Mathematica 14, pp. 259-268.
16. X. Garaizar, 1989, Private Communication.
17. C. Gardner, J. Glimm, O. McBryan, R. Menikoff, D. H. Sharp, and Q. Zhang, 1988, "The Dynamics of Bubble Growth for Rayleigh-Taylor Unstable Interfaces," Phys. of Fluids 31, pp. 447-465.
18. H. Gilquin, 1989, "Glimm's Scheme and Conservation Laws of Mixed Type," SIAM Jour. Sci. Stat. Computing 10, pp. 133-153.
19. J. Glimm, C. Klingenberg, O. McBryan, B. Plohr, D. Sharp, and S. Yaniv, 1985, "Front Tracking and Two Dimensional Riemann Problems," Advances in Appl. Math. 6, pp. 259-290.
20. J. Glimm and D. H. Sharp, 1986, "An S Matrix Theory for Classical Nonlinear Physics," Foundations of Physics 16, pp. 125-141.
21. J. Glimm and David H. Sharp, 1987, "Numerical Analysis and the Scientific Method," IBM J. Research and Development 31, pp. 169-177.
22. J. Glimm, 1988, "The Interactions of Nonlinear Hyperbolic Waves," Comm. Pure Appl. Math. 41, pp. 569-590.
23. J. Glimm, Jan 1988, "The Continuous Structure of Discontinuities," in Proceedings of Nice Conference.
24. J. Glimm and X. L. Li, 1988, "On the Validation of the Sharp-Wheeler Bubble Merger Model from Experimental and Computational Data," Phys. of Fluids 31, pp. 2077-2085.
25. J. Glimm, X. L. Li, R. Menikoff, D. H. Sharp, and Q. Zhang, To Appear, A Numerical Study of Bubble Interactions in Rayleigh-Taylor Instability for Compressible Fluids.
26. J. Glimm, To Appear, "Scientific Computing: von Neumann's Vision, Today's Realities and the Promise of the Future," in The Legacy of John von Neumann, ed. J. Impagliazzo (Amer. Math. Soc., Providence).
27. J. Goodman and X. Xin, To Appear, Viscous Limits for Piecewise Smooth Solutions to Systems of Conservation Laws.
28. J. W. Grove and R. Menikoff, 1988, "The Anomalous Reflection of a Shock Wave through a Material Interface," in preparation.
29. L. F. Henderson, 1988, "On the Refraction of Longitudinal Waves in Compressible Media," LLNL Report UCRL-53853.
30. D. Hoff and T.-P. Liu, To Appear, "The Inviscid Limit for the Navier-Stokes Equations of Compressible, Isentropic Flow with Shock Data," Indiana J. Math.
31. H. Holden, 1987, "On the Riemann Problem for a Prototype of a Mixed Type Conservation Law," Comm. Pure Appl. Math. 40, pp. 229-264.
32. H. Holden and L. Holden, To Appear, "On the Riemann Problem for a Prototype of a Mixed Type Conservation Law II," Contemporary Mathematics.
33. Ling Hsiao and Tung Chang, 1980, Acta Appl. Math. Sinica 4, pp. 343-375.
34. P.-T. Kan, 1989, "On the Cauchy Problem of a 2 x 2 System of Nonstrictly Hyperbolic Conservation Laws," NYU Thesis.
35. B. Keyfitz, To Appear, "Criterion for Certain Wave Structures in Systems that Change Type," Contemporary Mathematics.
36. T.-P. Liu, 1985, "Nonlinear Stability of Shock Waves for Viscous Conservation Laws," Memoir, AMS 328, pp. 1-108.
37. T.-P. Liu, 1987, "Hyperbolic Conservation Laws with Relaxation," Comm. Math. Phys. 108, pp. 153-175.
38. T.-P. Liu and X. Xin, To Appear, Stability of Viscous Shock Wave Associated with a System of Nonstrictly Hyperbolic Conservation Laws.
39. A. Majda and V. Roytburd, To Appear, "Numerical Study of the Mechanisms for Initiation of Reacting Shock Waves," SIAM J. Sci. Stat. Comp.
40. R. Menikoff and B. Plohr, 1989, "Riemann Problem for Fluid Flow of Real Materials," Rev. Mod. Phys. 61, pp. 75-130.
41. R. Menikoff, 1989, Private Communication.
42. R. von Mises, 1958, Mathematical Theory of Compressible Fluid Flow (Academic Press, New York).
43. R. L. Rabie, G. R. Fowles, and W. Fickett, 1979, "The Polymorphic Detonation," Phys. of Fluids 22, pp. 422-435.
44. K. I. Read, 1984, "Experimental Investigation of Turbulent Mixing by Rayleigh-Taylor Instability," Physica 12D, pp. 45-48.
45. D. H. Sharp and J. A. Wheeler, 1961, "Late Stage of Rayleigh-Taylor Instability," Institute for Defense Analyses.
46. M. Shearer, 1987, "Loss of Strict Hyperbolicity in the Buckley-Leverett Equations of Three Phase Flow in a Porous Medium," in Numerical Simulation in Oil Recovery, ed. M. Wheeler (Springer Verlag, New York).
47. Z. Tang and T. C. T. Ting, 1987, "Wave Curves for the Riemann Problem of Plane Waves in Simple Isotropic Elastic Solids," Int. J. Eng. Science 25, pp. 1343-1381.
48. P. Woodward, 1985, "Simulation of the Kelvin-Helmholtz Instability of a Supersonic Slip Surface with a Piecewise Parabolic Method," Proc. INRIA Workshop on Numerical Methods for Euler Equations.
49. J. A. Zufiria, 1988, "Vortex-in-Cell Simulation of Bubble Competition in Rayleigh-Taylor Instability," Preprint.
50. J. A. Zufiria, 1988, "Bubble Competition in Rayleigh-Taylor Instability," Phys. of Fluids 31, pp. 440-446.
THE GROWTH AND INTERACTION OF BUBBLES IN RAYLEIGH-TAYLOR UNSTABLE INTERFACES
JAMES GLIMM^{a,b,c}, XIAO LIN LI^{c,d}, RALPH MENIKOFF^{e,f}, DAVID H. SHARP^{e,f} AND QIANG ZHANG^{c,g}
Abstract. The dynamic behavior
of Rayleigh-Taylor unstable interfaces may be simplified in terms of dynamics of fundamental modes and the interaction between these modes. A dynamic equation is proposed to capture the dominant
behavior of single bubbles and spikes in the linear, free fall and terminal velocity stages. The interaction between bubbles, characterized by the process of bubble merger, is studied by
investigating the motion of the outer envelope of the bubbles. The front tracking method is used for simulating the motion of two compressible fluids of different density under the influence of
gravity. Key words. Bubble, Rayleigh-Taylor Instability, Chaotic Flow. AMS(MOS) subject classifications. 76-04, 76N10, 76T05, 76E30
1. Introduction. The Rayleigh-Taylor instability is a fingering instability between two fluids with different density. Although the system is in equilibrium when the light fluid supports the heavy
fluid by a flat interface with its normal direction parallel to the direction of gravity or external forces, such equilibrium is unstable under the influence of these forces. Any small perturbation
will drive the system out of this unstable equilibrium state. Then an instability develops and bubbles and spikes are formed. A bubble is a portion of the light fluid penetrating into the heavy fluid
and a spike is a portion of the heavy fluid penetrating into the light fluid. At a later stage of the instability, spikes may pinch off to form droplets.
The problem of mixing of two fluids under the influence of gravity was first investigated by Rayleigh [1] and later by Taylor [2]. Since then, various methods have been used to study this classical
problem, such as nonlinear integral equations [3,4], boundary integral techniques [5,6], conformal mapping [7], modeling [8,9], vortex-in-cell methods [10,11], high order Godunov methods [12], front
tracking [13,14,15] etc. Most of this work has been carried out in the limit of incompressible fluids or in the limit of single component systems. (The other component is a vacuum.) For a review of
Rayleigh-Taylor instability and its applications to science and engineering, see reference [16]. We present here the results of our study on the development of single mode Rayleigh-Taylor
instability, i.e. the development of spikes and bubbles, and on the interactions between the bubbles in compressible fluids.
a Department of Applied Mathematics and Statistics, SUNY at Stony Brook, Stony Brook, NY 11794-3600.
b Supported in part by the Applied Mathematical Sciences Program of the DOE, grant DE-FG02-88ER25053.
c Supported in part by the NSF Grant DMS-8619856.
d Department of Applied Mathematics, New Jersey Institute of Technology, Newark, NJ 07102.
e Theoretical Division, Los Alamos National Laboratory, Los Alamos, NM 87545.
f Supported by the U.S. Department of Energy.
g Courant Institute of Mathematical Sciences, New York University, New York, NY 10012.
For two dimensional compressible, inviscid fluids, the motion of the fluids is governed by the two dimensional Euler equations,

$$\frac{\partial \rho}{\partial t} + \frac{\partial (\rho u)}{\partial x} + \frac{\partial (\rho v)}{\partial z} = 0,$$
$$\frac{\partial (\rho u)}{\partial t} + \frac{\partial (\rho u^2 + P)}{\partial x} + \frac{\partial (\rho u v)}{\partial z} = 0,$$
$$\frac{\partial (\rho v)}{\partial t} + \frac{\partial (\rho u v)}{\partial x} + \frac{\partial (\rho v^2 + P)}{\partial z} = \rho g,$$
$$\frac{\partial (\rho e)}{\partial t} + \frac{\partial [\rho u (e + PV)]}{\partial x} + \frac{\partial [\rho v (e + PV)]}{\partial z} = \rho v g,$$

where u is the x-component of the velocity, v is the z-component of the velocity, e is the specific total energy, P is pressure, V is specific volume and g is gravity. Here we have assumed that the gravity points in the positive z direction. Our systems are characterized by two dimensionless quantities, the Atwood number $A = (\rho_h - \rho_l)/(\rho_h + \rho_l)$ and the compressibility $M^2 = g\lambda/c_h^2$, and by the equation of state. Here $\rho_h$ is the density of the heavy fluid, $\rho_l$ is the density of the light fluid, $\lambda$ is the wavelength of the perturbation and $c_h$ is the speed of sound in the heavy fluid. We used the polytropic equation of state $e = PV/(\gamma - 1)$ with $\gamma = 1.4$ in our simulations. A range of density ratios and compressibilities were studied. The numerical data on single mode systems were analyzed by using an ODE which models
the entire motion of the bubble or spike. The results on single mode systems provide a basis for the study of the interaction between bubbles of different modes. We observed that, in chaotic flow,
the magnitude of the terminal velocity of a large bubble exceeds the value for the corresponding single mode system due to the interaction between the bubbles. A superposition hypothesis is proposed
to capture the leading order correction to the bubble velocity. Our simulations show agreement between the superposition hypothesis and numerical results. The agreement is better in systems with high density
ratio and low compressibility than in systems with low density ratio or high compressibility. The cause of such phenomena will be discussed. We use the front tracking method to study the motion of a
periodic array of bubbles and spikes (i.e. single mode system) and to study the interactions between bubbles of different modes. The front tracking method contains a one dimensional moving grid
embedded in the two dimensional computational grid. It preserves sharp discontinuities and provides high resolution around the areas of interest, i.e. nearby and on the interface between two mixing
fluids. 2. Motion of single mode bubbles and spikes. When two fluids are separated by a flat interface with its normal vector parallel to the direction of gravity or external forces, the solution of
the Euler equations is an exponentially stratified distribution of density and pressure along the direction of gravity or external forces. For systems with small deviations from such a flat
interface, the Euler equations can be linearized in terms of the amplitude of the perturbation [14,17]. When Fourier analysis is applied to the perturbation, the Fourier modes do not couple with each
other in the linearized equations. An analytic solution exists for the linearized Euler equations. In our simulation, we use the solution of the linearized Euler equations to initialize our system
and the full Euler equations with front tracking to update the evolution of the system. In this section, we consider the single mode system, which is a periodic array of bubbles and spikes, or
equivalently a single bubble or spike with periodic boundary conditions. The top and bottom of the computational domain are reflecting boundaries. When a bubble or a spike emerges from a small
sinusoidal perturbation on a flat interface, it follows the stages of linear growth, free fall and terminal velocity. For a single component system, the asymptotic behavior of the spike is free fall.
In the linear regime, the dynamics of the system is mainly governed by the linearized Euler equations. The velocity grows exponentially with time. The exponential growth rate $\sigma$ is determined by a transcendental equation derived from the linearized Euler equations. In the free fall regime, the acceleration reaches a maximum absolute value, which we call the renormalized gravity $g_R$. The velocity varies linearly with time in the free fall regime. In the terminal velocity regime, the velocity approaches a limiting value (terminal velocity $v_\infty$) with a decay rate $b$. A comparison of the numerical results and the asymptotic behavior of a spike in each regime is given in Fig. 1. Here we use a dimensionless acceleration, a dimensionless velocity, a dimensionless length and a dimensionless time $gt/c_h$.
The entire motion of the bubble and spike may be described by an ODE,

$$\frac{dt}{dv} = \frac{1}{\sigma v} + \left[ \frac{1}{g_R} - \left( \frac{1}{\sigma} + \frac{1}{b} \right) \frac{1}{v_\infty} \right] + \frac{1}{b (v_\infty - v)},$$

which has the solution

$$t - t_0 = \frac{1}{\sigma} \ln \left( \frac{v_t}{v_0} \right) + \left[ \frac{1}{g_R} - \left( \frac{1}{\sigma} + \frac{1}{b} \right) \frac{1}{v_\infty} \right] (v_t - v_0) - \frac{1}{b} \ln \left( \frac{v_\infty - v_t}{v_\infty - v_0} \right).$$
Each term on the right hand side of the above expression has a clear physical meaning. The first term is the contribution from the linear regime; the second term is that of the free fall regime and the third term is the contribution from the asymptotic regime. Extensive validation of this model has been performed for a range of Atwood numbers A and compressibilities M. The A and M dependence of the parameters $\sigma$, $g_R$, $b$ and $v_\infty$ have been explored in [18]. In Fig. 2 we show an example of the comparison between the results of numerical simulation of the full two dimensional Euler equations and the results from fitting the solution given above. In Figs. 3 and 4, we show the interface at successive time steps and density and pressure contour plots for systems with A = 1/5 and A = 0.01, with $M^2 = 0.5$. For systems
with small Atwood number A, the interface consists of two interpenetrating fluids of similar shape. Secondary instabilities appear along the sides of the spike. (See Fig. 3.) As $A \to 0$, the pattern of the two fluids will be symmetric with phase difference $\pi$. For high density ratio systems, the spike is thinner with less roll up shed off the edge of its tip. (See Fig. 4.) For systems of high
compressibility, the velocity of the bubble or spike becomes supersonic relative to the sound speed in the heavy material at the late times, but it remains subsonic relative to the sound speed in the light material. The effects of grid size, the remeshing frequency, the amplitude of perturbation and boundary effects at the top and bottom of the computational domain have been tested and studied. We refer to reference [14] for the details of these studies.

[Figure 1: plots of $v_{spike}/c_h$ and $\dot v_{spike}/g$ versus $gt/c_h$.]
Figure 1. The comparison of the spike velocity and the spike acceleration of the numerical result to its linear and large time asymptotic behavior for D = 2, $M^2 = 0.5$ and $\gamma = 1.4$. The solid lines are the numerical results obtained by using an 80 by 640 grid in a computational domain 1 x 8.

[Figure 2: plots of $v_{spike}/c_h$ and $v_{bubble}/c_h$ versus $gt/c_h$, comparing the numerical result with the fitting result.]
Figure 2. Plots of spike velocity and bubble velocity versus time are compared with the best three parameter fit to the solution of the ODE superimposed, for the values A = 1/3, $M^2 = 0.5$, $\gamma = 1.4$. The numerical results are obtained by using an 80 by 640 grid in a computational domain 1 x 8.
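The single mode model is easy to exercise numerically. The sketch below (Python) tabulates the closed form solution for $t - t_0$ as a function of the velocity; the values of $\sigma$, $g_R$, $b$ and $v_\infty$ are illustrative assumptions, not values fitted to the simulations reported here.

    import numpy as np

    # Parameters of the single mode model: growth rate sigma, renormalized
    # gravity g_R, decay rate b, terminal velocity v_inf.  All values are
    # illustrative assumptions.
    sigma, g_R, b, v_inf = 2.0, 1.0, 3.0, 0.5
    v0 = 1.0e-3                            # small initial velocity

    def t_of_v(v):
        """Closed form t - t0 as a function of the velocity v."""
        lin = np.log(v / v0) / sigma                                     # linear regime
        free = (1.0 / g_R - (1.0 / sigma + 1.0 / b) / v_inf) * (v - v0)  # free fall
        asym = -np.log((v_inf - v) / (v_inf - v0)) / b                   # approach to v_inf
        return lin + free + asym

    for v in np.linspace(v0, 0.999 * v_inf, 7):
        print(f"v = {v:.4f}   t - t0 = {t_of_v(v):.4f}")

Inverting this table numerically gives $v(t)$, which can then be fit to simulation data in the manner of Fig. 2.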
[Figure 3 panels are labeled $gt/c_h = 0$, 3.2, 4.2, 5.3.]
Figure 3: Plots of the interface position, density and pressure contours for A = 1/5, $M^2 = 0.5$, $\gamma = 1.4$ in a computational domain 1 x 6 with a 40 by 240 grid. Only the upper two thirds of the computational region is shown in the plot because nothing of interest occurs in the remainder of the computation. (a) The interface position for successive time steps. (b) The density contour plot. (c) The pressure contour plot.
From a dimensional argument, the terminal velocity of the bubble should be proportional to $\sqrt{\lambda g}$, i.e. $v_\infty = c_1 \sqrt{\lambda g}$, where $c_1$ is the constant of proportionality; it is a function of the dimensionless parameters A, M and $\gamma$ only. In Fig. 5 we plot $c_1$ for a range of Atwood numbers A and compressibilities M. It shows that $c_1$ has a strong dependence on M and, for a given value of $M^2$, the dependence on A is approximately $\sqrt{A}$ in systems with low compressibility. Since we used the same value (1.4) for $\gamma$ in all of our simulations, the dependence of $c_1$ on $\gamma$ is not explored in this study.
[Figure 4 panels are labeled $gt/c_h = 0$, 1.1, 1.6, 2.6.]
Figure 4: Plots of the interface position, density and pressure contours for A = 0.01, $M^2 = 0.5$, $\gamma = 1.4$ in a computational domain 1 x 10 with a 20 by 200 grid. Only the upper four fifths of the computational region is shown in the plot because nothing of interest occurs in the remainder of the computation. (a) The interface position for successive time steps. (b) The density contour plot. (c) The pressure contour plot.
[Figure 5: plot of $c_1$ versus the Atwood number A for $M^2 = 0$, $M^2 = 0.2$ and $M^2 = 0.5$.]
Figure 7: The plot of bubble velocities vs. time for the two bubble merger simulation. The result shows that the small bubble is accelerated at the beginning and is then decelerated after about $gt/c_h = 4.2$. The small bubble is washed downstream after its velocity is reversed. The large bubble is under constant acceleration. The smooth curves represent the bubble motion as predicted by the superposition hypothesis.
The superposition hypothesis has been compared with the experimental data of Read [19] and with the results of our numerical simulations of the full Euler equations. The relative error between superposition theory and the results of numerical simulations or experimental data is less than 20% for systems with large A and $M^2 \le 0.1$, and about 30% for systems with small density ratio or large compressibility. In the latter case, the superposition principle is valid only for a finite time interval. This time interval can be understood as resulting from a nonlinearity in the bubble interaction due to density stratification [15].
Figure 8: The interface evolution of a five bubble simulation. The compressibility in this case is $M^2 = 0.1$ and the Atwood number is A = 1/11. The velocity analysis showed that the superposition model is applicable to the largest bubble within an error of 15%.
In Fig. 6, we show the interface between two fluids at successive times in a two bubble merger process. The comparison of the result of the superposition hypothesis and the numerical result of Fig. 6 is given in Fig. 7. The behavior of the small bubble velocity clearly indicates the contribution from the envelope. Initially, the mode bubble velocity dominates the total velocity since the envelope has a small growth rate due to its long wavelength. When contributions from the single bubble and the envelope have the same magnitude but opposite signs, the bubble stops accelerating. After that point, the velocity of the envelope dominates the total velocity. Then the small bubble decelerates and is washed downstream quickly. Similar behavior shows up in a simulation for a system of five bubbles. (See Figs. 8 and 9.)
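The qualitative content of this velocity reversal can be illustrated schematically. In the sketch below (Python), the small bubble's total velocity is modeled as the sum of a fast growing single mode term and a slower, oppositely signed envelope term; the saturating tanh profiles and all numbers are assumptions chosen only to reproduce the behavior just described, not the authors' bubble merger model.

    import numpy as np

    # Schematic superposition for the small bubble: total velocity =
    # single-mode term + envelope term.  Functional forms and numbers
    # are illustrative assumptions.
    sig_mode, v_mode_inf = 4.0, 0.10       # fast, short wavelength mode
    sig_env, v_env_inf = 1.0, 0.30         # slow, long wavelength envelope

    t = np.linspace(0.0, 6.0, 601)
    v_mode = v_mode_inf * np.tanh(sig_mode * t)    # single mode bubble velocity
    v_env = -v_env_inf * np.tanh(sig_env * t)      # envelope carries it back
    v_tot = v_mode + v_env

    i_rev = np.argmax(v_tot < 0.0)                 # first reversal of the velocity
    print(f"small bubble velocity reverses near t = {t[i_rev]:.2f}")

The total velocity first rises while the mode term dominates, stalls when the two contributions cancel, and then reverses as the envelope term takes over.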
[Figure 9: left plot, dimensionless bubble heights versus $(gt/c_h)^2$; right plot, dimensionless bubble velocities versus $gt/c_h$.]
Figure 9: The left plot displays dimensionless bubble heights vs. dimensionless $t^2$ in a simulation with 5 initial bubbles. The Atwood number in this case is A = 1/11, and the compressibility is $M^2 = 0.1$. The right picture shows the dimensionless velocity vs. dimensionless $t$ in the same case. The superposition model of the bubble velocity is valid up to $gt/c_h = 0.9$.
By dimensional arguments, one
expects that the position of the bubble will be proportional to time t. However, for chaotic flow, the radius of a large bubble will increase due to interactions between the bubbles. Consequently, the terminal velocity of the large bubble increases. By taking this into account, one can show that the position of the bubble is proportional to $t^2$, i.e. $z = \alpha A g t^2$. Read reported a range of values for $\alpha$ in his experiments [19], with $\alpha = 0.06$ being a fairly typical value. Values of $\alpha$ in the range of 0.04-0.05 and 0.05-0.06 were reported respectively by Youngs [12] and by Zufiria [10] on the basis of their numerical simulations. In our study, we found that $\alpha$ is not a constant. $\alpha$ falls in the range 0.055-0.065 at early stages of interaction and in the range 0.038-0.044 at late stages of the simulations [15]. In Fig. 9, the slope of the large bubble curves corresponds to the value of $\alpha$. We observe that the reduction of $\alpha$ from about 0.06 to about 0.04 is due to the multi-connectivity of the interface in the deep chaotic regime. In Youngs' numerical simulations, the interface between two fluids was not tracked [12]. Therefore effective multi-connectivity occurred during early stages of his simulations. We propose this as a possible explanation for the small values of $\alpha$ which he observed.

Figure 10: Plots of interfaces in the random simulation of Rayleigh-Taylor instability. The density ratio is A = 1/3 and the compressibility is $M^2 = 0.1$. The acceleration of the bubble envelope is in good agreement with the experiment of Read for 1-2 generations of bubble merger. The acceleration decreases after this time due to the multiphase connectivity, which is different in the exactly two dimensional computation and the approximately two dimensional experiments.

The discrepancy between the value of $\alpha$
observed at late times in our numerical simulations and the value observed in experiments [19] results from the difference between exact two dimensional numerical simulations and an approximately two
dimensional experiment. For example, the ratio of thickness to width is 1:6 in Read's experiments [19]. The computationally isolated segments of fluids in the x-z plane may be connected in the
third dimension (y direction) in experiments. Such discrepancies may be resolved in three dimensional calculations which will provide a more realistic approximation to the experimental conditions.
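For a concrete sense of scale, the envelope height $z = \alpha A g t^2$ can be tabulated directly. The short sketch below (Python) uses the early and late stage values of $\alpha$ found in our simulations, with A = 1/3 as in the random simulation and illustrative units in which g = 1.

    # Envelope height z = alpha * A * g * t**2 for the early-stage and
    # late-stage values of alpha quoted in the text (units with g = 1).
    A, g = 1.0 / 3.0, 1.0
    for alpha in (0.06, 0.04):
        for t in (1.0, 2.0, 3.0):
            print(f"alpha = {alpha:.2f}  t = {t:.0f}  z = {alpha * A * g * t**2:.3f}")

At any fixed time the late stage value of $\alpha$ yields an envelope roughly one third lower than the early stage value.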
The interface configuration of a random system at the initial and final times of simulation is shown in Fig. 10. We see that the small structures (bubbles) merge into large structures. Due to the
exponential stratification of the density distribution of the unperturbed fluid, the effective Atwood number decreases as the bubble moves into the heavy fluid. The reduction of Atwood number results
in a non-monotonicity of the bubble velocity. A turnover of bubble velocity is observed in our numerical simulation. Since such turnover phenomena have not been taken into account in the single mode
theory, the superposition theory is not applicable when the effective Atwood number has been reduced substantially. To get a better understanding of the phenomenon of velocity turnover in a single
mode system and the failure of the superposition hypothesis in a multi-mode system, we use the initial density distribution of light and heavy fluid to approximate the dynamic effective Atwood number $A_{\mathrm{effective}}$. For a flat interface, the density distribution is

$$\rho_i(z) = \rho_i(0) \exp\left( -\frac{\gamma g z}{c_i^2} \right), \qquad i = l, h.$$

When a bubble reaches the position z, we approximate the effective Atwood number as

$$A_{\mathrm{effective}}(z) = \frac{\rho_h(z) - \rho_l(z)}{\rho_h(z) + \rho_l(z)} = \frac{(1 + A) \exp\left( -\gamma M^2 \frac{2A}{1+A} \frac{z}{\lambda} \right) - (1 - A)}{(1 + A) \exp\left( -\gamma M^2 \frac{2A}{1+A} \frac{z}{\lambda} \right) + (1 - A)}.$$

For a single mode system, the turnover phenomenon should occur before the effective Atwood number $A_{\mathrm{effective}}$ vanishes. For a multi-mode system, the superposition theory is applicable as long as $A_{\mathrm{effective}} \approx A = A_{\mathrm{effective}}(z = 0)$. In Fig. 11, we plot the approximate effective Atwood number vs. $z/\lambda$. Since $A_{\mathrm{effective}}$ decreases more rapidly in a system of small density ratio or large compressibility, the superposition theory fails at small values of $z/\lambda$ in these systems.
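A short calculation makes the z dependence explicit. The sketch below (Python) evaluates the expression above and locates the depth at which $A_{\mathrm{effective}}$ first vanishes, for parameter pairs taken from the simulations discussed in this paper ($\gamma = 1.4$ throughout).

    import numpy as np

    # Effective Atwood number along the bubble trajectory, from the
    # expression above; gamma = 1.4 as in the simulations.
    def A_eff(z_over_lam, A, M2, gamma=1.4):
        e = np.exp(-gamma * M2 * (2.0 * A / (1.0 + A)) * z_over_lam)
        return ((1.0 + A) * e - (1.0 - A)) / ((1.0 + A) * e + (1.0 - A))

    for A, M2 in ((1.0 / 3.0, 0.1), (1.0 / 11.0, 0.1), (1.0 / 3.0, 0.5)):
        z = np.linspace(0.0, 50.0, 5001)
        z0 = z[np.argmax(A_eff(z, A, M2) <= 0.0)]   # depth where A_eff vanishes
        print(f"A = {A:.3f}  M2 = {M2}  A_eff vanishes near z/lambda = {z0:.1f}")

The vanishing depth shrinks rapidly as the compressibility $M^2$ grows, consistent with the early failure of the superposition theory in highly compressible systems.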
Figure 11: The plot of the approximate effective Atwood number as the bubble reaches position z. $A_{\mathrm{effective}}$ decreases more rapidly in the system with small initial Atwood number or large compressibility than in the system with large Atwood number and small compressibility. The decrease of the effective Atwood number is the source of the turnover phenomenon in the single mode system and of the failure of superposition theory in the multi-mode system.
One should not confuse the turnover of the bubble velocity in a single mode system with the turnover of the velocity of the small bubble in the multi-mode system. The former is due to the stratified density distribution and the latter is due to the interactions between bubbles, i.e. the contribution of the envelope velocity to the total velocity of the small bubble.
4. Acknowledgement. We would like to thank the Institute for Mathematics and its Applications for providing us with time on a CRAY-2 for portions of our study of the single mode problem.
REFERENCES
[1] LORD RAYLEIGH, Investigation of the Character of the Equilibrium of an Incompressible Heavy Fluid of Variable Density, Scientific Papers, Vol. II (Cambridge Univ. Press, Cambridge, England, 1900), p. 200.
[2] G. I. TAYLOR, The instability of liquid surfaces when accelerated in a direction perpendicular to their planes. I, Proc. R. Soc. London Ser. A 201, 192 (1950).
[3] G. BIRKHOFF AND D. CARTER, Rising Plane Bubbles, J. Math. Mech. 6, 769 (1957).
[4] P. R. GARABEDIAN, On steady-state bubbles generated by Taylor instability, Proc. R. Soc. London A 241, 423 (1957).
[5] G. R. BAKER, D. I. MEIRON AND S. A. ORSZAG, Vortex simulation of the Rayleigh-Taylor instability, Phys. Fluids 23, 1485 (1980).
[6] D. I. MEIRON AND S. A. ORSZAG, Nonlinear Effects of Multifrequency Hydrodynamic Instabilities on Ablatively Accelerated Thin Shells, Phys. Fluids 25, 1653 (1982).
[7] R. MENIKOFF AND C. ZEMACH, Rayleigh-Taylor Instability and the Use of Conformal Maps for Ideal Fluid Flow, J. Comput. Phys. 51, 28 (1983).
[8] D. H. SHARP AND J. A. WHEELER, Late Stage of Rayleigh-Taylor Instability, Institute for Defense Analyses (1961).
[9] J. GLIMM AND X. L. LI, Validation of the Sharp-Wheeler Bubble Merger Model from Experimental and Computational Data, Phys. of Fluids 31, 2077 (1988).
[10] JUAN ZUFIRIA, Vortex-in-Cell Simulation of Bubble Competition in Rayleigh-Taylor Instability, Phys. Fluids 31, 440 (1988).
[11] G. TRYGGVASON, Numerical Simulation of the Rayleigh-Taylor Instability, Journal of Computational Physics 75, 253 (1988).
[12] D. L. YOUNGS, Numerical Simulation of Turbulent Mixing by Rayleigh-Taylor Instability, Physica D 12, 32 (1984).
[13] J. GLIMM, O. MCBRYAN, R. MENIKOFF AND D. H. SHARP, Front Tracking Applied to Rayleigh-Taylor Instability, SIAM J. Sci. Stat. Comput. 7, 177 (1987).
[14] C. L. GARDNER, J. GLIMM, O. MCBRYAN, R. MENIKOFF, D. H. SHARP AND Q. ZHANG, The dynamics of bubble growth for Rayleigh-Taylor unstable interfaces, Phys. Fluids 31, 447 (1988).
[15] J. GLIMM, R. MENIKOFF, X. L. LI, D. H. SHARP AND Q. ZHANG, A Numerical Study of Bubble Interactions in Rayleigh-Taylor Instability for Compressible Fluids, to appear.
[16] D. H. SHARP, An Overview of Rayleigh-Taylor Instability, Physica D 12, 3 (1984).
[17] I. B. BERNSTEIN AND D. L. BOOK, Effect of compressibility on the Rayleigh-Taylor instability, Phys. Fluids 26, 453 (1983).
[18] QIANG ZHANG, A Model for the Motion of Spike and Bubble, to appear.
[19] K. I. READ, Experimental Investigation of Turbulent Mixing by Rayleigh-Taylor Instability, Physica D 12, 45 (1984).
FRONT TRACKING, OIL RESERVOIRS, ENGINEERING SCALE PROBLEMS AND MASS CONSERVATION
JAMES GLIMM,* BRENT LINDQUIST† AND QIANG ZHANG‡
Abstract. A critical analysis is given of the mechanisms for mass
conservation loss for the front tracking algorithm of the authors and co-workers in the context of two phase incompressible flow in porous media. We describe the resolution to some of the
non-conservative aspects of the method, and suggest methods for dealing with the remainder. Key words. front tracking, mass conservation AMS(MOS) subject classifications. 76T05, 65M99, 35L65
1. Introduction. Two phase, incompressible flow in porous media is described by a set of PDEs consisting of a subsystem of hyperbolic equations, which describe conservation of the fluid components
that thermodynamically combine into the two distinct flowing phases, coupled to a subsystem of equations of elliptic type. The parametric functions in these equations describe the physical properties
of the reservoir (petrophysical data) and the physical/thermodynamic properties of the flowing fluids (petrofluid data). Engineering scale problems involve the use of tabulated petrophysical and
petrofluid data applicable to real reservoir fields. Such data includes discontinuous rock properties in addition to smooth variations.
We adopt an IMPES type solution method for this set of equations; namely the two subsystems are treated as parametrically coupled, and each subsystem is solved in sequence using highly adapted
methods. For the hyperbolic subsystem we use the front tracking algorithm of the authors and co-workers; for the elliptic subsystem we use finite elements. In the original form of the method the
solution conserves mass only in the limit of arbitrarily small numerical discretization. We have performed a critical analysis to understand the mechanisms of conservation loss and present here a
brief discussion of our conclusions as well as corrections that have been or are in the process of being implemented. Our goal is a front tracking method for flow in porous media that is conservative
on all length scales of numerical discretization.
*Department of Applied Mathematics and Statistics, SUNY at Stony Brook, Stony Brook, NY, 11794-3600. Supported in part by the Army Research Office, grant DAAL03-89-K0017; the National Science Foundation, grant DMS-8619856; and the Applied Mathematical Sciences subprogram, U.S. Department of Energy, contract No. DE-FG02-88ER25053.
†Department of Applied Mathematics and Statistics, SUNY at Stony Brook, Stony Brook, NY, 11794-3600. Supported in part by the Applied Mathematical Sciences subprogram, U.S. Department of Energy, contract No. DE-FG02-88ER25053.
‡Courant Institute of Mathematical Sciences, New York University, 251 Mercer St., New York, NY, 10012. Supported in part by the Applied Mathematical Sciences subprogram, U.S. Department of Energy, contract No. DE-FG02-88ER25053.
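Schematically, the IMPES type coupling described in the introduction alternates an elliptic pressure solve with a hyperbolic saturation update at frozen velocity. The sketch below (Python) shows only this control flow; solve_pressure and advance_saturation are hypothetical placeholders for the finite element solve and the front tracked update, not interfaces from the authors' code.

    # Sequential (IMPES type) coupling of the two subsystems.  The two
    # callables are hypothetical placeholders: solve_pressure returns the
    # pressure field and total velocity for the current saturation, and
    # advance_saturation updates the saturation at frozen velocity.
    def impes_step(s, dt, solve_pressure, advance_saturation):
        P, v = solve_pressure(s)               # elliptic subsystem
        s_new = advance_saturation(s, v, dt)   # hyperbolic subsystem
        return s_new, P, v

    def simulate(s0, dt, nsteps, solve_pressure, advance_saturation):
        s = s0
        for _ in range(nsteps):
            s, P, v = impes_step(s, dt, solve_pressure, advance_saturation)
        return s

Because the two subsystems are only parametrically coupled, a conservation defect in either solve propagates into the other, which is why the items identified below must be examined together.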
1.1 The system of equations. Consider as example the flow of two immiscible, incompressible fluid phases. The system of equations which describe the flow of these two phases in a porous medium is

(1.1a)  $a(x)\,\phi(x)\,\dfrac{\partial s}{\partial t} + \nabla \cdot \left[ a(x)\, \vec v(x)\, f(s) \right] = 0,$

(1.1b)  $\nabla \cdot \left[ a(x)\, \vec v(x) \right] = 0, \qquad \vec v = -\lambda(s, x)\, K(x) \cdot \nabla P.$

Equation (1.1a) is a single equation representing the conservation of volume of the two incompressible phases, phase 1 occupying fraction s of the available pore volume in the rock and phase 2 occupying 1 - s of the available pore volume. In general a fraction $s_c$ of phase 1, and a fraction $s_r$ of phase 2, are inextricably bound to the rock, therefore s varies between $s_c$ and $1 - s_r$. A region of the reservoir in which s is constant and equal to one of its limiting values is a region of single phase flow. In a region in which s varies smoothly, and lies strictly within its bounding limits, both phases are flowing. The discontinuity waves that occur in the solution of (1.1a) describe discontinuous transitions in s. The first of equations (1.1b) expresses the incompressibility of the flowing fluids; the second, Darcy's law, relates the total fluid velocity $\vec v$ to the gradient of the pressure field P in the reservoir. In (1.1a), $f(s)\,\vec v(x)$ is the fraction of the total fluid velocity carried by phase 1. For simplicity of presentation, we neglect gravitational terms in (1.1) though they are included in the analysis. For simplicity, we also neglect point source and sink terms which describe point injection and production sites (wells) for the fluid phases, and appear, especially in two dimensional calculations, on the left hand sides of (1.1a) and the first of (1.1b). The
effects of other neglected terms such as surface tension, chemical reactions, compressibility, etc. present in more complex flows are not included in our analysis. The other parameters in (1.1)
specify the petrophysical data (PPD) and petrofluid data of the reservoir:
a(x) is a geometrical factor accounting for volume effects not specifically accounted for by the independent spatial variables in (1.1):
1) a(x) is the cross-sectional area of a 1 dimensional reservoir.
2) a(x) is the thickness in the third dimension for a two dimensional reservoir.
3) a(x) = 1 for a fully three dimensional calculation.
$\phi(x)$ is the porosity (volume fraction of pore space) of the rock medium.
K(x) is the rock permeability tensor describing the connectedness of the geometrical pathways through the rock pores.
$\lambda(s, x)$ is the total relative fluid transmissibility describing how the presence of phase 1 affects the flow of phase 2 and vice-versa. It has explicit x dependence as its functional form may differ
according to the local rock type. In an engineering scale problem, this data is usually specified in tabulated form and may contain information on the location of sharp transitions across faults,
layer structures, and barrier regions.
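For concreteness, the hyperbolic flux F(s, x) = a(x)v(x)f(s) of (1.1a) might be evaluated as in the following sketch. The quadratic (Corey-type) relative permeability model and all parameter values are illustrative assumptions; in practice λ(s, x) comes from tabulated field data as described above.

```python
import numpy as np

def fractional_flow(s, s_c=0.2, s_r=0.2, mobility_ratio=2.0):
    """Fractional flow f(s) of phase 1.

    Uses quadratic (Corey-type) relative permeabilities purely for
    illustration; the paper instead takes the transmissibility from
    tabulated field data.
    """
    # Normalize saturation to the mobile range [s_c, 1 - s_r].
    se = np.clip((s - s_c) / (1.0 - s_c - s_r), 0.0, 1.0)
    m1 = mobility_ratio * se**2      # mobility of phase 1
    m2 = (1.0 - se)**2               # mobility of phase 2
    return m1 / (m1 + m2)

def flux(s, a, v):
    """Hyperbolic flux F(s, x) = a(x) v(x) f(s) appearing in (1.1a)."""
    return a * v * fractional_flow(s)
```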
1.2 Conservation form for hyperbolic equations. Consider the system of conservation laws

(1.2)   s_t + F(s)_x = 0.

The conservative formulation for a finite difference scheme is based on the integral (weak) formulation of (1.2),

(1.3)   ∫_{x_{i−1/2}}^{x_{i+1/2}} (s_t + F(s)_x) dx = 0,

over a numerical grid block centered at x_i, and can be expressed in the form

(1.4)   S_i^{n+1} = S_i^n − (Δt/Δx)(G_{i+1/2} − G_{i−1/2}),

where S is the volume integrated mass in the mesh block centered at x_i. The numerical flux G, defined over a stencil of p + q + 2 grid blocks, must satisfy the consistency condition

G(S, ..., S) = F(S),
and, more trivially, but important for our considerations, the requirement (1.5) that the numerical flux be continuous (single valued) at each cell boundary.

1.3 Conservation form for elliptic equations. DEFINITION. A solution v⃗ of the elliptic system (1.1b) is conservative with respect to a grid of lines G if it satisfies ∮ v⃗·n̂ dℓ = 0 for every closed path consisting of lines of G. A solution v⃗ of the elliptic system (1.1b) is conservative in a region Ω if it satisfies ∮ v⃗·n̂ dℓ = 0 for every closed path in Ω.
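Both notions of conservation above are discrete statements that interior contributions cancel in pairs. The sketch below illustrates this for the hyperbolic update (1.4): the interior fluxes telescope, so the total mass can change only through the two boundary fluxes. The random data is purely illustrative.

```python
import numpy as np

def conservative_update(S, G, dt, dx):
    """One step of the conservative scheme (1.4).

    S : cell averages S_i^n, shape (N,)
    G : numerical fluxes G_{i+1/2} at the N + 1 cell boundaries
    """
    return S - (dt / dx) * (G[1:] - G[:-1])

# Telescoping-sum check of discrete conservation.
S = np.random.rand(50)
G = np.random.rand(51)
dt, dx = 0.1, 1.0
S_new = conservative_update(S, G, dt, dx)
assert np.isclose(dx * (S_new.sum() - S.sum()), -dt * (G[-1] - G[0]))
```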
1.4 The solution method and mass conservation. The front tracking scheme of the authors and co-workers [7,8] for solving (1.1a) consists of a conservative scheme of the form (1.4) defined on a regular rectangular, two dimensional grid G_H, in conjunction with moving one dimensional grids (curves) to track the evolving discontinuity surfaces. The propagation of the tracked curves (the front solution) is achieved via spatially local Riemann problem solutions in the direction normal to the curve. The method also takes into account flow of fluid tangential to the discontinuity curves. The solution away from the tracked curves (the interior solution) is obtained using the grid G_H with an upwind scheme of the form (1.4). These front and interior schemes are coupled, the tracked curves providing 'boundary values' for the interior solution and the Riemann problem data taking into account interior solution values.

The elliptic system (1.1b) is solved by combining the two equations into a single elliptic equation for the pressure field P, and solving by finite elements [7,8,11]. The finite element mesh G_E is a mixture of rectangles and triangles, whose edges match all discontinuity surfaces in the solution and in the PPD. The finite elements are standard: tensor product
Lagrangian basis functions on the rectangles, and triangle basis elements. The velocity field is obtained by analytic differentiation of the basis function representation of the pressure field.

Five items have been identified as responsible for loss of mass conservation of the method. These items are
1) the discretization of the medium properties,
2) the physical limits for the solution variable, s_c ≤ s ≤ 1 − s_r,
3) the implementation of the conservation form (1.4) near physical discontinuities in the medium properties (faults, layers and barriers),
4) the conservative properties of the velocity field v⃗,
5) the tracking of moving fluid discontinuities.
As these items interconnect, it is difficult to state precisely their relative order of importance; in our test example the first three issues are more crucial for mass conservation than the last two. In the remainder of this paper we consider each of these five items separately.

2. Discretization of the medium properties. It is
important in the front tracking method, for both mass conservation and resolution of the phase discontinuity behavior, that the PPD be represented in a smooth (C⁰) fashion away from faults rather than in a piecewise constant (block centered) manner. Consider the first order Engquist-Osher scheme for (1.4). It has the form (in one space dimension)
(2.1)   G_{i−1/2}(·) = ½[F_{i−1}(·) + F_i(·)] − ½ ∫_{S_{i−1}}^{S_i} |∂F/∂S| dS.

The unspecified arguments (·) in (2.1) include the explicit x dependence of F required for (1.1a), F(S, x) = a(x)v(x)f(S). Using block centered PPD, a logical choice might be F_{i−1}(·) = F(S_{i−1}, x_{i−1}), F_i(·) = F(S_i, x_i). However, with cell centering, the requirement (1.5) of cell boundary continuity on the numerical fluxes implies that, for intervals (S_{i−1}, S_i) over which ∂F/∂S changes sign, the evaluation of the last term in (2.1) be done as

∫_{S_{i−1}}^{S(x_{i−1/2})} |∂F(S(x), x)/∂S| dS + ∫_{S(x_{i−1/2})}^{S_i} |∂F(S(x), x)/∂S| dS.

This in turn requires a map of the interval (x_{i−1}, x_i) onto (S_{i−1}, S_i) due to the discontinuity in the PPD at x_{i−1/2}. While it is possible to devise ways to achieve this, such a choice is equivalent to an ad hoc smoothing of the data.
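A minimal sketch of the Engquist-Osher flux in the half-sum form of (2.1) follows. The explicit x dependence is frozen at the cell boundary here, and the Buckley-Leverett-type flux is chosen purely for illustration; neither choice is taken from the paper.

```python
import numpy as np
from scipy.integrate import quad

def engquist_osher_flux(s_left, s_right, F, dF):
    """Engquist-Osher numerical flux in the half-sum form of (2.1):

        G = (F(s_left) + F(s_right))/2
            - (1/2) * integral_{s_left}^{s_right} |F'(s)| ds

    F, dF are callables for the flux and its derivative.
    """
    visc, _ = quad(lambda s: abs(dF(s)), s_left, s_right)
    return 0.5 * (F(s_left) + F(s_right)) - 0.5 * visc

# Example: Buckley-Leverett-type flux f(s) = s^2 / (s^2 + (1-s)^2).
F = lambda s: s**2 / (s**2 + (1 - s)**2)
dF = lambda s: 2 * s * (1 - s) / (s**2 + (1 - s)**2)**2
print(engquist_osher_flux(0.1, 0.9, F, dF))
```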
Further, the use of block centered PPD requires the specification of greater amounts of data in order to perform calculations on refined meshes. While the specification of such additional data can be automated, this again results in an ad hoc smoothing method that is intimately coupled to the grid used in the solution of (1.1a). In addition, not all PPD remains smooth as the numerical discretization length Δx → 0; these physical discontinuities are an inherent aspect of the PPD that must be discretized accurately on all length scales. The use of block centered PPD can also play a role in introducing spurious instabilities for front tracking algorithms. In Figure 1 we show a tracked curve passing through two mesh blocks. If point P_1 on the tracked curve is propagated using the PPD from mesh block B_1, and P_2 is propagated using the PPD from B_2, a kink will develop in the front, which under physically unstable flow regimes will grow.

Figure 1  A jump discontinuity in the PPD, as in the specification of piecewise constant data, between mesh blocks B_1 and B_2 can result in numerically based discontinuous propagation behavior of the two points P_1 and P_2 on the tracked curve.

It is relatively easy to provide a representation of the PPD that is continuous in the appropriate regions, and resolves the discontinuous structure as well. We illustrate an automatic discretization method which achieves this. The method has the additional feature that it discretizes the PPD on a grid that is independent of the grids G_H and G_E. This allows a representation of the PPD that can be held fixed while mesh refinement studies are done for the solution methods used on (1.1a) and (1.1b). While the idea behind this discretization method is not new [12,14], we reiterate that smooth representation of the PPD is necessary for use in conjunction with the front tracking method.
This discretization method is illustrated in Figure 2. Figure 2a depicts the two dimensional areal plan of an inclined reservoir bed. The reservoir contains two fault lines F_1 and F_2. The two dimensional slice follows the local inclination of the middle of the reservoir bed; three vertical plans of the bed are depicted in Figure 2b. PPD was specified from field readings at the corners (black points in Figure 2c) of a rectangular grid G_R.

Figure 2  a) An areal view (x vs y) of a reservoir field having two fault lines F_1 and F_2. b) Three vertical plans (x vs z) through the reservoir field. c) A demonstration of the placement (dark circles) of given field petrophysical data. d) An enlargement of a region in c) showing a possible choice of points where additional field data is required in order to fit the fault and boundary structure.

Figure 2e  A tessellation of the geophysical structure of the reservoir to produce C⁰ continuity of the petrophysical data.

Figure 3  Comparison of algorithmically smoothed reservoir thickness data with field measured data. The numerical smoothing algorithm used field data defined on a rectangular 9 by 12 grid and knowledge of the fault locations to produce a smooth approximation to the reservoir thickness.

To obtain
the required representation of this data, additional PPD is required along each side of the fault lines, and along the boundaries of the computational domain. The unshaded points in Figure 2d (a close up of a small area of Figure 2c) demonstrate one possible placement for specification of this additional data. A tessellation T of the grid G_R into a mesh of rectangles and triangles is achieved by triangulating those rectangles of G_R that are cut by faults, or lie next to the computational boundary, in such a manner that
- the faults are coincident with triangle sides,
- triangle nodes lie either at the corners of G_R, on the fault lines, or on the boundaries.
Such a tessellation is shown in Figure 2e. C⁰ smoothness of the PPD away from the discontinuities is then achieved by employing, for example, linear (bilinear) interpolation on the triangles (rectangles) of T. The efficacy of such a discretization is demonstrated in Figure 3. Figure 3b shows initially specified contours of a(x) for the reservoir under discussion in Figure 2. PPD were specified on the corners of the rectangular 9 by 12 grid G_R of Figure 2c. Data on each side of the fault lines and at the boundaries of the reservoir were obtained by constant extrapolation from the closest point of G_R. The resultant piecewise continuous discretization of the data is shown as a contour plot in Figure 3a. In spite of the coarseness of the grid G_R, the resultant piecewise continuous discretization of the PPD on T agrees extremely well with the measured data in the large area A_3 and in A_2. The representation in the small triangular region A_1 is not as good. However, in this particular calculation, the active region of the reservoir was constrained (by specification of the rock permeability values) to lie only in A_3, so no effort was made to improve the representation of the data in A_1. While T is useful for providing interpolation of the PPD, it is inappropriate to compute numerical
derivatives of this data directly from the linear/bilinear representation it provides (which would result in piecewise constant/linear derivatives). This is due to the extreme aspect ratios that may develop for some triangles. Rather, derivatives such as those required to compute the local gravitational strength (neglected in (1.1)) can be obtained by usual finite differences. Figure 4 illustrates this. The gradient dS/dx of some petrophysical quantity S at the point P_0 can be obtained by central differencing over a distance of 2h (Figure 4a). The values of S at P_{−1} and P_1 are obtainable by interpolation on the tessellation T. This finite difference scheme must be modified near faults and boundaries in an appropriate one-sided manner, as illustrated in Figures 4b, c and d. In Figure 4b, the centered difference at P_0 is based on an irregular stencil of length h_1 + h_2. If P_0 lies exactly on a horizontal fault as in Figure 4c, two derivatives are required, one on each side of the fault. If the fault kinks, it may be necessary to resort to a difference based on a triangle for one of the sides of the fault, as illustrated in Figure 4d.
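The irregular-stencil difference of Figure 4b can be sketched as follows; here interp stands for any point evaluation of the PPD, for example the linear/bilinear interpolation on T, and the function name is an assumption of the sketch.

```python
def central_difference(interp, x0, h_left, h_right):
    """Approximate dS/dx at x0 from an interpolated field S(x).

    interp  : callable returning S at a point (e.g. interpolation on T)
    h_left  : distance to the left sample point  (h_1 in Figure 4b)
    h_right : distance to the right sample point (h_2 in Figure 4b)

    For h_left == h_right == h this reduces to the usual centered
    difference (S(x0 + h) - S(x0 - h)) / (2h).
    """
    s_minus = interp(x0 - h_left)
    s_plus = interp(x0 + h_right)
    return (s_plus - s_minus) / (h_left + h_right)
```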
Figure 4  A centered finite difference stencil a) used to compute derivatives (here we illustrate d/dx) of the petrophysical data must be modified in appropriate one-sided ways b), c) and d) in the vicinity of discontinuities and the reservoir boundary.

3. Physical solution limits. For incompressible flow the fluid volume fraction s, and hence the numerical integrated volume fraction S, are bounded above and below, s_c ≤ s ≤ 1 − s_r. Furthermore, functional evaluations may be defined only for s (S) lying within this bounded domain. The numerical scheme (1.4), or its
two dimensional extension to irregularly shaped domains, while conservative, provides no guarantee that the numerical solution will remain within these bounds. In fact (1.4) only guarantees conservation if the numerical flux function G is definable for every numerical value of S generated. In practice only a finite extension of the domain [s_c, 1 − s_r] is required. Given petrophysical and petrofluid data based on tabulated experimental data, even limited extension is usually impractical. One is then forced to truncate the solution whenever it reaches its limiting values. In order to maintain mass conservation, this truncated mass must be reintroduced into the solution in a physically realistic manner. We discuss a fully two-dimensional, unsplit version of (1.4) which includes the reintroduction of truncated mass. (Directionally split schemes are less preferred as truncated mass must be stored each directional sweep, and the excess masses reintroduced after the final
sweep.)

Figure 5  Schematic apportioning of truncated mass into downstream (indicated by flux direction F) mesh blocks.
Consider a rectangular mesh block ij removed from faults or moving fronts. Let S̄_ij^{n+1} denote the solution obtained for ij by applying (1.4) in an unsplit, two dimensional form. Then

(3.1)   S_ij^{n+1} = { s_c,           if S̄_ij^{n+1} < s_c,
                       1 − s_r,       if S̄_ij^{n+1} > 1 − s_r,
                       S̄_ij^{n+1},   otherwise.
Let ε_ij ≡ S̄_ij^{n+1} − S_ij^{n+1} represent the clipped mass for block ij that must be restored to the solution. For ε_ij > 0 (< 0), we apportion the clipped mass into appropriate downstream (upstream) blocks in proportion to the carrying capacities of these blocks. This is indicated schematically in Figure 5. If no appropriate downstream (upstream) blocks are available, or they have insufficient carrying capacity, any unallocated clipped mass is accumulated until it can be distributed. For mesh blocks cut by faults, a clipped mass is calculated for each polygonal area into which the mesh block is cut by the fault. Distribution of the clipped mass again takes place, but now into the appropriate polygon areas. The algorithm consists of two passes over the mesh. On the first pass the ε_ij are stored; on the second the clipped mass is distributed. If the number of mesh blocks having ε ≠ 0 is dense in the mesh (i.e. on the average, any given mesh block lies downstream from several mesh blocks containing excess mass), the apportioning of the ε_ij becomes a constrained optimization problem amongst coupled mesh blocks. If, however, the number of mesh blocks in which ε_ij ≠ 0 is sparse, such that downstream mesh blocks receiving mass are in one-to-one correspondence with mesh blocks having excess mass, the distribution of this mass can be done by a direct sweep through the mesh, treating one mesh block at a time. Based on our early experience we expect the number of mesh blocks containing truncated mass to be relatively sparse; therefore we have implemented the latter, simpler scheme for distributing the truncated mass.
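The following one-dimensional sketch illustrates the two-pass clip-and-redistribute idea. The actual implementation works on the two-dimensional mesh, handles fault polygons, and accumulates any unallocated mass; none of that is modeled here, and all names are hypothetical.

```python
import numpy as np

def clip_and_redistribute(S_bar, velocity, s_c=0.2, s_r=0.2):
    """Clip cell values to [s_c, 1 - s_r] and push the clipped mass
    one cell downstream: a 1-D sketch of the two-pass algorithm of
    Section 3.
    """
    lo, hi = s_c, 1.0 - s_r
    S = np.clip(S_bar, lo, hi)
    eps = S_bar - S                      # pass 1: store clipped mass
    for i, e in enumerate(eps):          # pass 2: distribute it
        if e == 0.0:
            continue
        j = i + 1 if velocity[i] > 0 else i - 1   # downstream neighbor
        if 0 <= j < len(S):
            capacity = hi - S[j] if e > 0 else S[j] - lo
            S[j] += np.sign(e) * min(abs(e), capacity)
    return S
```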
4. The conservative form in the vicinity of faults. With the PPD smoothly discretized on a fixed grid, we now turn to the solution of (1.1a) in the vicinity of faults. A common implementation of (3.1) in the vicinity of faults is to stair-case the faults to conform to the
grid G_H, the stair-casing becoming finer as G_H is refined. However this choice leads to spurious bending of the tracked discontinuity surfaces and the potential for spurious fingering in unstable flow regimes. This is illustrated in Figure 6, where a tracked interface traveling obliquely to a stair-cased fault encounters a series of corners; the front movement around these corners produces the spurious bending.

Figure 6  Bending of a tracked propagating discontinuity wave D(t_i), (t_i > t_j for i > j), traveling obliquely to a fault line represented in a stair-case fashion.

One is then constrained to exact representation of the faults (as in Figure 7), which is achieved in the front tracking scheme by representing them as tracked, unmoving waves. The conservative scheme (3.1) must be modified to handle those mesh blocks of G_H cut by such faults. An appropriately volume averaged solution value must be stored for each of the separate regions produced (Figure 7), and (3.1) is modified in the obvious way as a sum of fluxes flowing normally through each of the sides of the irregularly shaped polygons thus formed. The problem one is now forced to solve is the restriction due to the CFL condition, which reduces the allowed maximum timestep by the ratio of the smallest area of all such polygons formed to that of the regular rectangles of G_H,

min{A_polygon} / A_regular rectangle.

Figure 7  Modification of grid and solution representation required for exact representation of fault lines.

Several
authors (LeVeque [9,10], Chern and Colella [5], Berger and LeVeque [1]) have treated this problem for the Euler equations of compressible fluid dynamics in order to overcome the CFL restrictions. It is a common feature of such conservative interface methods to allow excess entropy production, resulting in shock wall heating, slip line heating, or fluid mixing and entrainment in compressible fluid flow. The approach we take here is similar to that of Chern and Colella, in that conservation is restored by placement of mass in adjacent cells when CFL limits are encountered. See §3.

5. Conservative properties of v⃗. As the subsystem (1.1a) contains the non-hyperbolic variable v⃗, it is necessary that this velocity field be conservative with respect to the grid G_H to ensure that an algorithm of the form (3.1) remains conservative when applied to (1.1a). However, in order to avoid an undesirable coupling between the grids G_H and G_E, it is desirable that v⃗ be conservative in the complete computational domain, i.e. v⃗ must be divergence free everywhere.
The finite element method currently in use for front tracking calculations [7,8,11] does not have this conservative property. The velocity field it produces is, in general, not conservative in the region of computation, and can have spurious source/sink regions, especially near corners and boundaries. Raviart and Thomas [13] have developed a mixed finite element method for solving

(5.1a)   ∇·v⃗ = f,
(5.1b)   v⃗ = −a ∇u.

Solutions for v⃗ and u are developed in two separate spaces, V_h and U_h, of polynomial elements. Through judicious choice of the properties of the basis functions in these two spaces the numerical solution v⃗ solves (5.1a) exactly. Chavent, Jaffre et al. [2,3,4] and Douglas, Ewing and Wheeler [6] have adapted this mixed finite element approach to two phase incompressible flow in two dimensions. This body of work is characterized by also solving (1.1a) by a Galerkin procedure. We are in the process of implementing this mixed finite element method for the solution of (1.1b) in combination with front tracking for the solution of (1.1a).

Figure 8  Schematic illustration of issues to be dealt with in deriving a conservative formulation for propagating tracked waves.
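Whether a given discrete velocity field is conservative in the sense of §1.3 can be checked directly. The sketch below sums the outward normal fluxes over each cell of a face-based velocity field; the data layout is an assumption of the sketch, not the layout used by the finite element code.

```python
import numpy as np

def cell_divergences(vx, vy, dx, dy):
    """Net outward flux of a face-based velocity field over every cell.

    vx : normal velocities on vertical faces,   shape (nx + 1, ny)
    vy : normal velocities on horizontal faces, shape (nx, ny + 1)

    A field is conservative with respect to the grid (Section 1.3)
    when every entry returned here vanishes; summing cells gives the
    same statement for any closed path made of grid lines.
    """
    return (vx[1:, :] - vx[:-1, :]) * dy + (vy[:, 1:] - vy[:, :-1]) * dx

# A uniform flow is trivially divergence free on the grid.
nx, ny = 8, 8
vx = np.ones((nx + 1, ny))
vy = np.zeros((nx, ny + 1))
assert np.allclose(cell_divergences(vx, vy, 0.1, 0.1), 0.0)
```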
6. Tracking of moving fronts. The tracked curves are propagated [7,8] in a non-conservative manner. As mentioned in §1.4, these one dimensional grids are composed of piecewise linear bonds. The movement of the tracked curve is achieved by propagating the end points of each bond via information from Riemann solutions. Figure 8a illustrates the propagation of a bond of length Δℓ. The movement of the bond's two end points results in the movement of an amount of mass along the entire bond. This method of propagating the front will conserve mass only in the limit Δℓ → 0. We are in the process of investigating different approaches for achieving a conservative front propagation algorithm. The most likely approach would be one consistent with the integral formulation (1.3). One such bond oriented version is indicated by the integration path (dashed line) shown in Figure 8b. However, the front propagation is not usually as straightforward as suggested by Figure 8b. One possible complication is depicted in Figure 8c. Further complications exist at points where two or more tracked fronts join (Figure 8d), or when separate tracked curves interact. In addition, the details of the coupling of a conservative approach for the front to the method (3.1) used in obtaining the solution away from the front remain to be worked out. Preferably, any proposed conservative scheme for the fronts should be extendible to systems and compressible flow.
7. Example calculations. Figure 9 shows the results of a calculation for the areal plan of the reservoir field shown in Figure 2. The PPD (top of formation, a, φ, rock permeability K) were discretized according to §2, based upon a 9 by 12 rectangular grid G_R. Local gravitational strengths were calculated using finite differences as discussed in §2. The hyperbolic equation (1.1a) was solved using front tracking. The faults F_1 and F_2 were represented as tracked, unmoving waves. Water was injected at constant rate into well I1, and fluid pumped at constant rates from wells P1 and P2. The interface between the resultant two phase, water swept region and the single phase, undisturbed oil region was tracked. The solution in the region away from the tracked discontinuities was calculated using the first order Engquist-Osher scheme on a 9 by 12 regular rectangular grid. Since the Raviart-Thomas based mixed finite element method has not as yet been completely installed, (1.1b) was solved with the original finite element method described in §1.4. Linear/bilinear basis functions were used on the mesh G_E. This mesh adapts to the moving interfaces, hence it changes each timestep. There is no correlation between the grids G_E and G_H. Figure 9 shows the tracked phase discontinuity interface at selected times during the first 33 years of the calculation. Figure 10 shows the percentage mass balance error for the water component as a function of time. The mass balance error is defined as

(7.1)   E_M ≡ (M_present − M_initial − M_injected + M_produced) / M_injected,

where M represents water mass. Note that E_M is a 'forgiving' dimensionless measure of mass balance error since, as M_injected typically increases in time, E_M can decrease in spite of an increase in the absolute magnitude of the numerator of (7.1). The calculation was performed using the mass conservation corrections discussed in §§2, 3 and 4. The velocity field obtained was not conservative (§5) over the entire domain of the calculation, and no correction was applied for the front movement (§6).
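The error measure (7.1) is trivial to compute; the numbers below are hypothetical and serve only to illustrate the 'forgiving' normalization.

```python
def mass_balance_error(m_present, m_initial, m_injected, m_produced):
    """Water mass balance error E_M of (7.1).

    Dimensionless; a growing m_injected can shrink E_M even while the
    absolute error in the numerator grows.
    """
    return (m_present - m_initial - m_injected + m_produced) / m_injected

# Hypothetical numbers for illustration only (not from the paper):
print(mass_balance_error(1.04e6, 1.00e6, 1.0e5, 5.0e4))  # -> -0.1
```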
Figure 9  Calculation for the reservoir field described in Figures 2d and 3. The phase discontinuity delineating the two phase region swept by the injected water from the single phase, undisplaced oil region is shown at selected times.

Figure 10  Water mass balance error E_M versus time (years) for the calculation of Figure 9.

The resultant mass balance error after 33 years of simulation time is ≈ −6%. The mass balance error is indeterminate at t = 0 as the denominator of (7.1) goes to zero. The initial mass balance errors are dominated by three things: 1) the nonconservative front propagation scheme (§6), 2) the lack of correction of mass balance errors for the scheme (1.4) in mesh blocks cut by the initial phase discontinuity interface (which encloses an area smaller than the size of a grid block for a range of initial times), 3) the inability of the finite element method implementation to resolve the velocity field around the point source. This last cause is the most critical and has long been of problematic concern in reservoir simulation (see for example the treatments in [4] and [6]). An analytic treatment of the velocity divergence in the vicinity of wells has been included in this calculation, but match-up with the finite element solution is problematic. The initial error is also amplified by the smallness of the denominator. At late times, the error is primarily due to the nonconservative use of the velocity field.
Acknowledgements. The authors wish to thank Statoil, Norway for supplying the realistic petrophysical and petrofluid field data used in this study, and for their support of the development of front tracking for reservoir calculations. We also gratefully acknowledge the continuing support of the Institute for Energy Technology, Norway.

REFERENCES

[1] M. Berger and R. J. LeVeque, An adaptive cartesian mesh algorithm for the Euler equations in arbitrary geometry, AIAA 89-1930, 9th Computational Fluid Dynamics Conference, Buffalo, NY, June 1989.
[2] G. Chavent and J. Jaffre, Mathematical Models and Finite Elements for Reservoir Simulation, North Holland, Amsterdam, 1986.
[3] G. Chavent, G. Cohen, J. Jaffre, M. Dupuy, and I. Ribera, Simulation of two dimensional waterflooding by using mixed finite elements, Soc. Pet. Eng. J., 24 (1984), pp. 382-389.
[4] G. Chavent, J. Jaffre, R. Eymard, D. Guerillot, and L. Weill, Discontinuous and mixed finite elements for two-phase incompressible flow, SPE 16018, 9th SPE Symposium on Reservoir Simulation, San Antonio.
[5] I-L. Chern and P. Colella, A conservative front tracking method for hyperbolic conservation laws, J. Computational Physics (to appear).
[6] J. Douglas, Jr., R. E. Ewing, and M. F. Wheeler, The approximation of the pressure by a mixed method in the simulation of miscible displacement, R.A.I.R.O. Analyse numérique, 17 (1983), pp. 17-33.
[7] J. Glimm, E. Isaacson, D. Marchesin, and O. McBryan, Front tracking for hyperbolic systems, Adv. Appl. Math., 2 (1981), pp. 91-119.
[8] J. Glimm, W. B. Lindquist, O. McBryan, and L. Padmanabhan, A front tracking reservoir simulator, five-spot validation studies and the water coning problem, SIAM Frontiers in Appl. Math., 1 (1983), pp. 107-135.
[9] R. J. LeVeque, Large time step shock-capturing techniques for scalar conservation laws, SIAM J. Numer. Anal., 19 (1982), pp. 1091-1109.
[10] R. J. LeVeque, A large time step generalization of Godunov's method for systems of conservation laws, SIAM J. Numer. Anal., 22 (1985), pp. 1051-1073.
[11] O. McBryan, Elliptic and hyperbolic interface refinement, in Boundary Layers and Interior Layers - Computational and Asymptotic Methods, J. Miller (ed.), Boole Press, Dublin, 1980.
[12] L. Padmanabhan, Chevron Oil Field Research, private communication.
[13] P. A. Raviart and J. M. Thomas, A mixed finite element method for second order elliptic problems, in Mathematical Aspects of Finite Element Methods, Lecture Notes in Mathematics 606, Springer-Verlag, New York, 1977, pp. 292-315.
[14] Y. Sharma, Cray Research Inc., private communication.
J. M. GREENBERG*

Introduction. In this note we shall examine special collisionless solutions to the four velocity Broadwell equations. These solutions are new and seem to have gone unnoticed by other investigators who have worked in this area. These solutions are apparently stable; that is, in numerical simulations they appear as the asymptotic state of the evolving system. The basic quantities of interest are the particle densities r, l, u, and d; r(x, y, t) represents the number of particles per unit area at (x, y) at time t travelling with velocity w e₁. The densities l, u, and d have a similar interpretation except that the particles travel with velocities −w e₁, w e₂, and −w e₂ respectively. The evolution equations for the densities are

(1.7)
  r_t + w r_x = e,
  l_t − w l_x = e,
  u_t + w u_y = −e,
  d_t − w d_y = −e,

where the collision term e is given by

(1.8)   e = k(ud − rl).
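A minimal explicit upwind discretization of (1.7)-(1.8) on a periodic square grid is sketched below. It is offered only to make the structure of the system concrete and is not the scheme used for the simulations reported here.

```python
import numpy as np

def broadwell_step(r, l, u, d, w, k, dt, dx):
    """One explicit upwind step for the four-velocity Broadwell
    system (1.7)-(1.8) on a periodic grid (x along axis 0, y along
    axis 1, equal spacing dx in both directions).
    """
    coll = k * (u * d - r * l)   # collision term e of (1.8)
    c = w * dt / dx
    r = r - c * (r - np.roll(r, 1, axis=0)) + dt * coll
    l = l + c * (np.roll(l, -1, axis=0) - l) + dt * coll
    u = u - c * (u - np.roll(u, 1, axis=1)) - dt * coll
    d = d + c * (np.roll(d, -1, axis=1) - d) - dt * coll
    return r, l, u, d
```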
Dimensional consistency implies that the constant k in (1.8) carries the dimensions of (density × time)⁻¹. In the one space dimensional problem studied in [2] the data was supported in −a < x < a, outside of which the densities satisfy free streaming equations (r_t + r_x = 0 for a < x < a + t and t > 0, and similarly for the others), and thus the interaction region was the strip −a < x < a and t > 0. The principal result of [2] was that in the interval −a < x < a:

lim_{t→∞} (r(x,t), l(x,t), (ud)(x,t)) = (0, 0, 0),

lim_{t→∞} u(x,t) = max(u(x,0) − d(x,0), 0),

and

(1.14)   lim_{t→∞} d(x,t) = max(0, d(x,0) − u(x,0)).
Additional exponential decay estimates were obtained for the more customary form of the Broadwell equations, where now

ρ = r + 2c + l.

The equations (1.7) and (1.8) reduce to this system on the manifold u ≡ d ≡ c. In this note we shall confine our attention to special collisionless solutions of the full two-dimensional system (1.7) and (1.8). That such solutions were possible was suggested by computations we performed on (1.7) and (1.8) for a variety of initial and boundary conditions. These computations suggested that the long time behavior of the system was characterized by such solutions and motivated our trying to establish that the system did in fact support such solutions. The collisionless solutions are nonconstant, positive solutions to (1.7) and (1.8) which satisfy the additional identity ud − rl = 0. In section 2 we demonstrate that (1.7) and (1.8) do indeed support such solutions and in section 3 we shall demonstrate how these solutions emerge and characterize the long time behavior of the system. It should be noted that these collisionless solutions are valid for both the collision term considered in our simulations, namely e = (ud − rl)/ρ, and the more customary collision term e = (ud − rl).

2. Collisionless Solutions. In this section we shall exhibit a class of nonconstant, positive solutions to (1.7) and (1.8) which satisfy the additional constraint:
positive solutions to (1. 7) and (1.8) which satisfy the additional constraint:
ud - rl == O.
(2.1) Such solutions must be of the form
rex, y, t) = R (x - t, y)} 1
= Ll(X +t,y)
= U1 (x,y -
d(x,y,t) = D 1 (x,y+t)
or equivalently
r(x,y,t) = R2(x+ y -t,x_ y _t)} l(x,y,t)=L2(x+y+t,x-y+t) .
u(x, y, t) = U2(X + y - t, x - y + t) d(x,y,t) = D2(x + y + t,x - Y - t)
The fact that (2.1) must hold implies that R 2,L 2,U2, and D2 must satisfy
+ y + t, x - Y + t) Y + t)D 2(x + y + t, x -
R2(X + Y - t, x - y - t)L 2(x = U2(x
+y -
t, x -
Y - t).
If we let

R₃ = ln R₂,  L₃ = ln L₂,  U₃ = ln U₂,  D₃ = ln D₂,

then (2.4) is equivalent to

(2.6)   R₃(x + y − t, x − y − t) + L₃(x + y + t, x − y + t) = U₃(x + y − t, x − y + t) + D₃(x + y + t, x − y − t).

Moreover, if we let

θ₁ = x + y − t,  θ₂ = x − y − t,  θ₃ = x + y + t,  θ₄ = x − y + t,

and insist that R₃, L₃, U₃, and D₃ satisfy

(2.8)   R₃(θ₁, θ₂) + L₃(θ₃, θ₄) = U₃(θ₁, θ₄) + D₃(θ₃, θ₂)

for all θ₁, θ₂, θ₃, and θ₄, then the functions R₃, L₃, U₃, and D₃ will also satisfy (2.6). The last relation implies that if R₃, L₃, U₃, and D₃ are C², then each has vanishing mixed second partial derivatives, or equivalently that

  R₃ = f₁(θ₁) + f₂(θ₂),
  L₃ = f₃(θ₃) + f₄(θ₄),
  U₃ = f₅(θ₁) + f₆(θ₄),
  D₃ = f₇(θ₃) + f₈(θ₂).

In order that the functions f₁, ..., f₈ satisfy (2.8), we must also have

f₅ = f₁,  f₆ = f₄,  f₇ = f₃,  f₈ = f₂.
If we now let

F₁ = exp(f₁),  F₂ = exp(f₂),  F₃ = exp(f₃),  F₄ = exp(f₄),

then the collisionless solution, (2.3), reduces to

(2.13)
  r(x,y,t) = F₁(x + y − t) F₂(x − y − t),
  l(x,y,t) = F₃(x + y + t) F₄(x − y + t),
  u(x,y,t) = F₁(x + y − t) F₄(x − y + t),
  d(x,y,t) = F₃(x + y + t) F₂(x − y − t),

and the density, ρ = r + l + u + d, is given by

ρ = (F₁(x + y − t) + F₃(x + y + t)) (F₂(x − y − t) + F₄(x − y + t)),

where F₁, F₂, F₃, F₄ are arbitrary positive functions. The solutions given by (2.13) satisfy no obvious boundary conditions.
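The family (2.13) is easy to verify numerically: for any positive choices of F₁ through F₄ (the ones below are arbitrary), ud − rl vanishes identically and each density is constant along its characteristic.

```python
import numpy as np

# Arbitrary positive functions F1..F4 for the check.
F1 = lambda a: 1.0 + np.exp(-a**2)
F2 = lambda b: 2.0 + np.sin(b)**2
F3 = lambda a: 1.5 + 1.0 / (1.0 + a**2)
F4 = lambda b: 1.0 + np.cos(b)**2

def fields(x, y, t):
    """Evaluate the collisionless solution (2.13)."""
    r = F1(x + y - t) * F2(x - y - t)
    l = F3(x + y + t) * F4(x - y + t)
    u = F1(x + y - t) * F4(x - y + t)
    d = F3(x + y + t) * F2(x - y - t)
    return r, l, u, d

x, y, t = 0.3, -0.7, 2.0
r, l, u, d = fields(x, y, t)
assert np.isclose(u * d - r * l, 0.0)            # collisionless: ud - rl = 0
# r is constant along x - t = const, y = const:
assert np.isclose(fields(x + 0.5, y, t + 0.5)[0], r)
```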
Of somewhat more interest are collisionless solutions which satisfy

r(0, y, t) = l(0, y, t) and r(1, y, t) = l(1, y, t),  0 < y < 1,

u(x, 0, t) = d(x, 0, t) and u(x, 1, t) = d(x, 1, t),  0 < x < 1.

In simulation 1 we take λ = 3, while in simulation 2 we take u₁ = 5, u₂ = 0.2, and λ = 3. For simulation 2 the density is initially constant and equal to 7.2. For each simulation we show the initial stages of the motion at
times t from 0 to 1.75 in increments of .25 and the latter stages at times t from 46 to 49.75 in increments of .25. We also show for each simulation four summary diagnostics graphs which demonstrate that the solutions being computed converge to collisionless solutions of the type described in Section 2 (see (2.20)).

Each summary graph has the following layout. In the upper left hand corner is a graph of r(0, 0, t) versus time over the interval 46 ≤ t ≤ 50; in the upper right hand corner is a graph of ρ(0, 0, t)/4 = (r(0, 0, t) + l(0, 0, t) + u(0, 0, t) + d(0, 0, t))/4 versus time over the interval 46 ≤ t ≤ 50; in the bottom left hand corner we show

error(t) ≡ max[|r(0, 0, t) − l(0, 0, t)|, |r(0, 0, t) − u(0, 0, t)|, |r(0, 0, t) − d(0, 0, t)|]

versus time over 46 ≤ t ≤ 50, and finally in the bottom right hand corner we show

maxcollision(t) = max_{(x,y)} |(ud − rl)(x, y, t)|

versus time over 46 ≤ t ≤ 50. It is the structure of the last graph which demonstrates that our solutions have converged to the collisionless waves described in (2.20).

REFERENCES
[1] T. Platkowski and R. Illner, Discrete Velocity Models of the Boltzmann Equation: A Survey of the Mathematical Aspects of the Theory, SIAM Review, 30 (1988), pp. 213-255.
[2] J. M. Greenberg and L. L. Aist, Decay Theorems for the Four Velocity Broadwell Equations, submitted to Arch. Rat. Mech. and Anal.
SIMULATION I: solution frames at times t = 0.00 to 1.75 in increments of 0.25, frames at times t = 46.25 to 49.75 in increments of 0.25, and the four summary diagnostics graphs (r(0,0,t), ρ(0,0,t)/4, error(t), and maxcollision(t)) over 46 ≤ t ≤ 50.

SIMULATION II: solution frames and summary diagnostics graphs at the same times.
ANOMALOUS REFLECTION OF A SHOCK WAVE AT A FLUID INTERFACE*

JOHN W. GROVE† AND RALPH MENIKOFF‡

Abstract. Several wave patterns can be produced by the interaction of a shock wave with a fluid interface. We focus on the case when the shock passes from a medium of high to low acoustic impedance. Curvature of either the shock front or contact causes the flow to bifurcate from a locally self-similar quasi-stationary shock diffraction to an unsteady anomalous reflection. This process is analogous to the transition from a regular to a Mach reflection when the reflected wave is a rarefaction instead of a shock. These bifurcations have been incorporated into a front tracking code that provides an accurate description of wave interactions. Numerical results for two illustrative cases are described: a planar shock passing over a bubble, and an expanding shock impacting a planar contact.

Key words. anomalous reflection, front tracking

AMS(MOS) subject classifications. 76-06, 76L05
1. Introduction.

The collision of a shock wave with a fluid interface produces a variety of complicated wave diffractions [1,2,12]. In the simplest case these consist of pseudostationary self-similar waves that can be described by solutions to Riemann problems for the supersonic steady-state Euler equations. In more complicated cases, and in particular when one or both of the colliding waves is curved, these regular diffraction patterns can bifurcate into complex composites of individual wave interactions between the scattered waves. The purpose of this analysis is to understand the particular bifurcation behavior of the collision of a shock in a dense fluid with an interface between the dense fluid and a much lighter fluid. Two basic cases will be considered: the collision of a shock in water with a bubble of air, and the diffraction of a cylindrically expanding underwater shock wave with the water's surface. It will be seen that initially these interactions produce regular shock diffractions with reflected Prandtl-Meyer waves. Subsequently these regular waves bifurcate to form anomalous waves that are analogous to non-centered Mach reflections whose reflected waves are rarefactions. We will describe a method to include this analysis into a front tracking numerical method that allows enhanced resolution computations of these interactions.

*This article is a condensed version of reference [9] which will appear elsewhere.
†Department of Applied Mathematics and Statistics, State University of New York at Stony Brook, Stony Brook, NY 11794. Supported in part by the U. S. Army Research Office, grant no. DAAL03-89-K-0017.
‡Theoretical Division, Los Alamos National Laboratory, Los Alamos, NM 87545. Supported by the U. S. Department of Energy.
2. The Equations of Motion.

In the absence of heat conduction and viscosity, the fluid flow is governed by the Euler equations that describe the laws of conservation of mass, momentum and energy respectively:

(2.1)
  ∂_t ρ + ∇·(ρq) = 0,
  ∂_t(ρq) + ∇·(ρq ⊗ q) + ∇P = ρg,
  ∂_t(ρℰ) + ∇·(ρq(ℰ + PV)) = ρq·g.

Here, ρ is the mass density, q is the particle velocity, g is the gravitational acceleration, ℰ = ½|q|² + E is the total specific energy, E is the specific internal energy, and P is the pressure. Gravity will be neglected since the interactions considered here occur on short time-scales. The equilibrium thermodynamic pressure P(V, E), where V = 1/ρ is the specific volume, is referred to as the equation of state and describes the fluid properties.
It is well known that system (2.1) is hyperbolic, and the characteristic modes correspond to the propagation of sound waves and fluid particles through the medium. The sound waves propagate in all directions from their source with a velocity c with respect to the fluid, where the sound speed c satisfies c² = ∂P/∂ρ at constant entropy. Another important measure of sound propagation is the Lagrangian sound speed or acoustic impedance given by ρc.
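For orientation, the sketch below evaluates c and the impedance ρc for a stiffened polytropic equation of state, which is one of the equations of state used in the numerical examples of this paper. The parameter values are common textbook choices for water and air, not values taken from the paper.

```python
import math

def sound_speed_stiffened(p, rho, gamma=4.4, p_inf=6.0e8):
    """Sound speed c for a stiffened polytropic equation of state,
    c^2 = gamma * (P + P_inf) / rho; P_inf = 0 recovers the ordinary
    polytropic gas. Parameter values here are illustrative choices
    often quoted for water.
    """
    return math.sqrt(gamma * (p + p_inf) / rho)

# Acoustic impedance rho * c for water vs. air at 1 atm:
z_water = 1000.0 * sound_speed_stiffened(1.01e5, 1000.0)
z_air = 1.2 * sound_speed_stiffened(1.01e5, 1.2, gamma=1.4, p_inf=0.0)
print(z_water / z_air)   # water's impedance is thousands of times larger
```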
3. Elementary Wave Nodes and the Supersonic Steady State Riemann Problem. An elementary wave node is a point of interaction between two waves that is both stationary and self-similar [7]. It can be shown [6, 13 pp. 405-409] that there are four basic elementary nodes. These are the crossing of two shocks moving in opposite directions (cross node), the overtaking of one shock by another moving in the same direction (overtake node), the splitting of a shock wave due to interaction with other waves or boundaries to produce a Mach reflection (Mach node), and the collision of a shock with a fluid interface (diffraction node). All of these waves are characterized by the solution of a Riemann problem for a steady state flow, where the data is provided by the states behind the interacting waves. We will primarily be concerned with the last of these interactions, but bifurcations in this node will lead to the production of all of the other elementary nodes. For a stationary planar flow, system (2.1) reduces to a 4 × 4 system that is hyperbolic in the restricted variables provided the Mach number M = |q|/c is greater than one, i.e., the flow is supersonic. The streamlines or particle trajectories define the time-like direction. The hyperbolic modes in this case are associated with two families of sound waves, and a doubly linearly degenerate characteristic family. If θ and q are the polar coordinates of the particle velocity q, then the sonic waves have characteristic directions with polar angles θ ± A, where A is the Mach angle, sin A = M⁻¹. Waves of these families are either stationary shock waves or steady state centered rarefaction waves, also called Prandtl-Meyer waves. Waves of the degenerate family are a combination of a contact discontinuity and a vortex sheet across which the pressure and flow direction θ are continuous while the other variables may experience jumps. Following the general analysis of systems of hyperbolic conservation laws
[14], we see that the wave curve for a sonic wave family consists of two branches corresponding to either a shock or a simple wave. The shock branch is commonly called a shock polar [4, pp. 294-317] and actually forms a closed and bounded loop where the two sonic families meet at the point where the stationary shock is normal to the incoming flow. If we let the state ahead of the wave be denoted by the subscript 0, a straightforward derivation of the Rankine-Hugoniot equations for the system (2.1) shows that the thermodynamics of the states on either side of the shock are related by the Hugoniot equation

(3.1)   E = E₀ + ((P + P₀)/2)(V₀ − V).

A similar derivation applied to the steady state Euler equations shows that the flow velocities on either side of a stationary oblique shock satisfy

(3.2)   ½q² + H = ½q₀² + H₀,

where H = E + PV is the specific enthalpy. The jump in the flow direction is given by

(3.3)   tan(θ − θ₀) = ± [(P − P₀) / (ρ₀q₀² − (P − P₀))] cot β.

Here β is the angle between the incoming streamline and the shock wave, and is given by sin β = u/q₀, where u = V₀m is the wave speed of the shock wave with respect to the fluid ahead and m is the mass flux across the shock, m² = −ΔP/ΔV. The difference between the flow direction on either side of the shock is called the turning angle of the wave.
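Equation (3.3) is directly computable. The following sketch evaluates the positive branch of the turning angle from given upstream data; all inputs are hypothetical and the function name is an assumption of the sketch.

```python
import math

def turning_angle(p, p0, rho0, q0, beta):
    """Flow deflection across a stationary oblique shock, eq. (3.3):

        tan(theta - theta0) = (P - P0) / (rho0 q0^2 - (P - P0)) * cot(beta)

    p, p0  : downstream and upstream pressures
    rho0   : upstream density
    q0     : upstream flow speed
    beta   : shock angle in radians
    Returns the positive branch of the turning angle (radians).
    """
    dp = p - p0
    return math.atan(dp / (rho0 * q0**2 - dp) / math.tan(beta))
```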
The same analysis when applied to the simple wave curves shows that the entropy is constant inside a Prandtl-Meyer wave. The flow speed and flow direction are related by (3.2), where H = H(P, S₀), and

(3.4)   θ = θ₀ ∓ ∫_{P₀}^{P} (cos A / (ρcq)) dP.

In analogy to the shock polar defined by (3.1)-(3.3) we will call this locus of states a rarefaction polar. It is easily checked that the two branches of (3.4) are respectively associated with the θ ± A characteristic directions in the sense of Lax. Similarly it can be shown [8] that for most equations of state, the two branches of (3.3) are also associated with the θ ± A characteristics in the sense of Lax, provided the state downstream from the shock is supersonic. Since θ and P are constant across waves of the degenerate middle family, the Riemann problem for a stationary two-dimensional flow can be
solved by finding the intersection of the projections of the wave curves in the θ-P phase plane. There are two major differences between the solution to the Riemann problem for a stationary flow and that of a one-dimensional unsteady flow. The Mach number behind the shock wave is given by

M² = (m/(ρc))² (1 + (ρ/ρ₀)² cot²β).

For most equations of state [15], m < ρc and m is a monotone function of the pressure along the shock Hugoniot. Thus if β is sufficiently close to π/2 the flow behind the shock will be subsonic and the steady Euler equations cease to be hyperbolic. The second difference is that for a normal angle of incidence, the turning angle through the shock is zero. This means that the two branches of the shock polar meet at this point, forming a closed and bounded loop. These two issues together imply a loss of existence and uniqueness for the solution to the two dimensional stationary Riemann problem. The
resolution is that a bifurcation occurs from a stationary solution to a time dependent solution of the full two dimensional Euler equations. The actual shape and properties of the shock and rarefaction polars depend on the equation of state. We will make no use of a specific choice of equation of state in our analysis, but we will need to assume that the equation of state satisfies appropriate conditions to guarantee that the shock polar has a unique point at which the state behind the shock becomes sonic, and a unique local extremum in the turning angle. These conditions are satisfied by most ordinary equations of state, and in particular by the polytropic and stiffened polytropic equations of state used in the numerical examples.

4. Anomalous Reflection.
As was mentioned in the introduction, the simplest case of shock diffraction is that in which the flow near a point of diffraction is scale invariant and pseudostationary. This will be the case provided the flow is sufficiently supersonic when measured in a frame that moves with the point [8]. Then the data behind the incoming waves define Riemann data for the downstream scattering of the interacting waves. A representative shock polar diagram for a regular shock diffraction producing a reflected Prandtl-Meyer wave is shown in Fig. 1. Diffractions of these types have been studied experimentally by several investigators [1,2,11,12], as well as numerically [3,8]. Longer time simulations of the resulting surface instabilities in the fluid interface (called the Richtmyer-Meshkov instability [16,19]) are found in [8,17,20]. One of the interferograms, Fig. 14 of [12], shows an irregular wave pattern that corresponds to what we call an anomalous reflection. In this wave the angle between the incident shock and the material interface is such that the state behind the shock has become subsonic. We consider the perturbation of a regular shock diffraction that produces a reflected Prandtl-Meyer wave. Suppose that initially the state behind the incident shock is close to but slightly below the sonic point on the incident shock polar.
FIG. 1. A sketch of the wave pattern and polar diagrams for a regular shock-contact diffraction that produces a reflected rarefaction wave. The polar diagram shows the incident wave shock polar, the reflected wave rarefaction polar, and the transmitted wave shock polar, together with the stream direction.
We allow the incident angle to increase while keeping the other variables constant so that the state behind the incident shock passes above the sonic point. Such a situation might occur as a shock diffracts through a bubble, as illustrated in Fig. 2. When this happens, the solution can no longer be self-similar since a Prandtl-Meyer wave can only occur in a supersonic flow. Instead the reflected wave begins to overtake and interact with the incident shock, Fig. 2c. This interaction dampens and curves the incident shock near its base on the fluid interface, allowing the flow immediately behind the node to return to a supersonic condition. The single point of interaction bifurcates into a degenerate overtake node where the leading edge of the reflected rarefaction overtakes the incident shock, and a sonic diffraction node at the fluid interface. This interaction is a two-dimensional version of the one-dimensional overtaking of a shock by a rarefaction. The composite configuration is in many ways analogous to a regular Mach reflection. In this case the reflected wave is a Prandtl-Meyer wave and instead of a single point of Mach reflection the interaction is spread over the region where the rarefaction interacts with the incident shock. The "Mach" stem can be regarded as the entire region from the point where the incident shock is overtaken by the rarefaction to its base on the fluid interface. If we allow the incident angle to increase further we will eventually see a second bifurcation in the solution, Fig. 2d. As the material interface continues to diverge from the incident shock, the Mach number near the trailing edge of the reflected rarefaction continues to decrease. The characteristics behind the incident shock are almost parallel to the shock interface near the base of the anomalous reflection. The flow there becomes nearly one-dimensional and the rarefaction wave eventually overtakes the incident shock. If there is a great difference in the acoustic impedance between the two materials, as in the numerical cases studied here, this second bifurcation will occur as the strength of the incident shock at the fluid interface reduces to zero. The now non-centered rarefaction breaks loose from the fluid interface and begins to propagate away. This second configuration is also analogous to a Mach
reflection. Here the Mach node corresponds to the interaction region between the rarefaction and incident shock, while the Mach stem is the degenerate wave portion from the trailing edge of the rarefaction to the fluid interface.

FIG. 2. The collision of a shock wave in water with an air bubble: (a) time 0.0 μsec, (b) time 0.15 μsec (regular diffraction), (c) time 0.6 μsec, (d) time 1.0 μsec (anomalous reflection). The fluids ahead of the shock are at normal conditions of 1 atm. pressure, with the density of water 1 g/cc and air 0.0012 g/cc. The pressure behind the incident shock is 10 Kbar with a shocked water density of 1.195 g/cc. The grid is 60 × 60, with scale bar 10 Δx = 10 Δy.
5. The Tracking of the Anomalous Reflection Wave. The qualitative discussion of the anomalous reflection in the previous section can be incorporated into a front tracking code to give an enhanced resolution of the interaction. The tracking of a regular shock diffraction was described in [8]. The first step in the propagation is the computation of the velocity of the diffraction node with respect to the computational (lab) reference frame. Suppose at time t the node is located at point p₀₀. The node position at time t + dt is found by computing the intersection between the two propagated segments of the incident waves. If this new node position is p₀, then the node velocity is given by (p₀ − p₀₀)/dt. This velocity defines the Galilean transformation into a frame where the node is at rest. When the state behind the incident shock is supersonic in this frame, it together with the state on the opposite side of the fluid interface provide data for a supersonic steady state Riemann problem whose solution determines the outgoing waves. The outgoing tracked waves are then modified to incorporate this solution. A bifurcation will occur if the calculated node velocity is such that the state behind the incident shock is subsonic in the frame of the node. If the reflected wave is a Prandtl-Meyer wave this will result in an anomalous reflection. The front tracking implementation of this bifurcation is a straightforward application of the analysis described in the previous section. First the leading edge of the reflected rarefaction is allowed to break loose from the diffraction node. The intersection p₁ between the propagated rarefaction leading edge and the incident shock is computed and a new overtake node is installed at p₁ by disconnecting the rarefaction leading edge from the diffraction node and connecting it to p₁. If this reflected rarefaction edge is untracked, then p₁ is found by calculating the characteristic through the old node position corresponding to the state behind the incident shock and computing the intersection of its propagated position with the propagated incident shock. This characteristic makes the Mach angle A with the streamline through the node. Since the bifurcation occurs between times t and t + dt, M ≥ 1 at time t and A is real. This wave moves with sound speed in its normal direction. In this case no new overtake node is tracked.
We are now ready to compute the states and position of the point of shock diffraction after the bifurcation. As was mentioned previously, the rarefaction expands onto the incident shock, causing it to weaken. This in turn slows down the node, causing the incident shock to curve into the fluid interface. The diffraction node will slow down to the point where the state immediately behind the node becomes sonic. After this the configuration near the node can be computed using the regular case analysis. The adjusted propagated node position is computed as follows; see Fig. 3. For each number s sufficiently small, let p(s) be the point on the propagated material interface that is located a distance s from p₀ when measured along the curve, the positive direction being oriented away from the node into the region ahead of the incident shock. Let β(s) be the angle between the tangent vector to the material interface at p(s) and the directed line segment between the points p(s) and p₁. Let v(s) be the node velocity found by moving the diffraction node to position p(s), and let q(s) be the velocity of the flow ahead of the incident shock in the frame that moves with velocity v(s) with respect to the lab frame. The mass flux across this shock is given by

m(s) = ρ₀ |q(s)| sin β(s).

Given m(s) and the state ahead of the incident shock, the state behind the shock and hence its Mach number M(s) can be found. The new node position is given by p(s*), where s* is the root of the equation M(s*) = 1. Finally, the state behind the incident shock with mass flux m(s*), together with the state on the opposite side of the contact, are used as data for a steady state Riemann problem whose solution supplies the states and angles of the transmitted shock, the trailing edge of the reflected rarefaction, and the downstream material interface.
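The search for s* with M(s*) = 1 is a one-dimensional root-finding problem. The bisection sketch below is a stand-in for whatever solver the code actually uses; the toy Mach-number function is hypothetical.

```python
def sonic_point(mach_of_s, s_max, tol=1e-10):
    """Locate s* with M(s*) = 1 by bisection.

    mach_of_s : callable s -> M(s), the Mach number behind the
                incident shock when the diffraction node is moved to
                p(s); assumed subsonic at s = 0 and supersonic at
                s = s_max.
    """
    lo, hi = 0.0, s_max
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if mach_of_s(mid) < 1.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Toy example where M(s) = 0.9 + s, so s* = 0.1:
print(sonic_point(lambda s: 0.9 + s, s_max=1.0))
```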
FIG. 3. A diffraction node initially at p₀₀ bifurcates into an anomalous reflection. The predicted new node position at p₀ yields a Mach number of 0.984 behind the incident shock. The leading edge of the reflected Prandtl-Meyer wave breaks away from the diffraction node to form an overtake node at p₁. The propagated position of the diffraction node is adjusted to return the flow to sonic behind the node.
The subsequent propagation of the anomalous reflection node is performed in the same way. The bifurcation repeats itself as more of the reflected rarefaction propagates up the incident shock. The
leading edge of the reflected rarefaction wave that connects to the diffraction node is not tracked after the first bifurcation. The secondary bifurcations that occur when the trailing edge of the
rarefaction overtakes the incident shock are detected in a couple of ways. If the incident shock is sufficiently weak, i.e., the normal shock Mach number is close to 1, then it is possible for the
numerically calculated upstream Mach number to be less than one. This is a purely numerical effect since physically the upstream state is always supersonic. However in nearly sonic cases such
numerical undershoot can occur. If such a situation is detected the trailing edge of the reflected rarefaction wave is disengaged from the anomalous reflection node and installed at a new overtake
node on the incident shock. The residual shock strength for the portion of the incident shock behind the rarefaction wave is small and the diffraction node at the material interface reduces to the
degenerate case of a sonic signal diffracting through a material interface.
The second way in which the secondary bifurcation is detected occurs when the trailing edge of the rarefaction overtakes the shock. Here a new intersection between the incident shock and the trailing
edge characteristic is produced. As before the tracked characteristic is disengaged from the diffraction node and a new overtake node is installed at the point of intersection. The residual shock
strength at the node is non-zero so the diffraction at the material interface produces an additional expansion wave behind the original one. This new expansion wave is not tracked. It is possible to
make a few remarks about the amount of tracking required for these problems. Since the front tracking method is coupled to a finite difference method for the solution away from the tracked interface
(the interior solver), there is always an option between tracking a wave or allowing it to be captured. Of course capturing can result in a considerable loss in resolution in the waves as compared to
tracking [5], but it will also simplify the resolution of the interactions. The secondary bifurcations described above are only tracked when the trailing edge of the reflected Prandtl-Meyer wave is
tracked. The current algorithm is structured so that at a minimum the two interacting incoming waves are tracked. At this extreme none of the outgoing waves are tracked and no explicit bifurcations
in the tracked interface occur. More commonly, the material interface separates different fluids and so must be tracked on both sides of the interaction.
Also, instabilities in the finite difference approximation can affect the accuracy of the solution near the node, especially for stiff materials such as water. Tracking the additional waves seems to
considerably reduce these problems. Tracking also allows the use of a much coarser grid, which is important when the diffraction occurs in a small but important zone of a larger simulation. It allows
the entire region of diffraction to extend over only a fraction of a grid block. These remarks show that the amount of tracking is problem dependent, and a compromise can be made between the
increased accuracy and stability of front tracking, and the simplicity of a capturing algorithm.
6. Numerical Examples. Fig. 4 shows a series of frames documenting the collision of a 10 Kbar shock wave with a bubble of air in water. Note in this case the trailing edge of the reflected Prandtl-Meyer wave is not tracked. The states ahead of the incident shock are at one atmosphere pressure and standard temperature. Under these conditions, water is about a thousand times as dense as air. During the initial stage of the interaction regular diffraction patterns are produced. In less than half of a microsecond an anomalous reflection has formed, and by one microsecond the trailing edge of the rarefaction has also overtaken the incident shock. It is interesting to note that this interaction causes the bubble to collapse into itself. Long time simulations are expected to show the initial bubble split, and the resulting bubbles going into oscillation as they are overcompressed and then expand. This process is important in the transfer of energy as a shock passes through a bubbly fluid. The first diffraction considerably dampens the shock, and much of this energy will eventually be returned to the shock wave in the form of compression waves generated by the expanding bubbles.
FIG. 4. Log(1 + pressure) contours for the collision of a shock wave in water with an air bubble: (a) time 0.0, (b) time 0.15 μsec (regular diffraction, followed by anomalous reflection at later times). The fluids ahead of the shock are at normal conditions of 1 atm. pressure, with the density of water 1 g/cc and air 0.0012 g/cc. The pressure behind the incident shock is 10 Kbar with a shocked water density of 1.195 g/cc. The tracked interface is shown in a dark line. The grid is 60 × 60.
Fig. 5 shows the diffraction of an expanding underwater shock wave through the water's surface. Initially a ten Kbar cylindrically expanding shock wave with a radius of one meter is placed two meters
below the water's surface. The interior of the shock wave contains a bubble of hot dense gas. The states exterior to the shock are ambient at one atmosphere pressure and normal temperature. A
gravitational acceleration of one g has been added in this case, but due to the rapid time scale on which the diffractions occur the effect of gravity is negligible. Here the entire reflected
Prandtl-Meyer wave is captured rather than tracked. The pressure contour plots show that by six milliseconds an anomalous reflection has developed as indicated in the blowup of Fig. 5b shown in Fig.
6. Another interesting feature of this problem is the acceleration of the bubble inside the shock wave by the reflected rarefaction wave. This causes the bubble to rise much faster than it would
under just gravity. When the bubble reaches the surface it expands into the atmosphere leading to the formation of a kink in the transmitted shock wave between the region ahead of the surfacing
bubble, and the rest of the wave. This kink is an untracked example of the elementary wave called the cross node where two oblique shocks collide.
[Fig. 5 panels: (a) time 0.0 msec; (b) time 6.0 msec.]
... under more restrictive hypotheses on $W_0$ and the state functions $p$ and $T$. We have:
THEOREM 2. Assume that $p$ and $T$ satisfy the conditions of a "near ideal gas"; that is, in addition to conditions (8), $p_v$ should be negative and $|T_v|$ should be sufficiently small that, for values $(v, e) \in K$, the quantity $Q$ appearing in (6) is negative. Let $\bar W = [\bar v, \bar u, \bar e]^T$ be a constant vector with $(\bar v, \bar e) \in K'$ ($K'$ as in Theorem 1), and let $W_0 = [v_0, u_0, e_0]^T$ be Cauchy data satisfying:
(a) $(v_0(x), e_0(x)) \in K'$ a.e.;
(b) $v_0 - \bar v \in L^2 \cap L^1 \cap BV$, $u_0 - \bar u \in L^2 \cap BV$, $e_0 - \bar e \in L^2 \cap L^1$;
(c) the $L^1$, $L^2$, and $BV$ norms indicated in (b) are sufficiently small.
Then the Cauchy problem (1)-(2) has a weak solution defined for all $t > 0$.
Theorem 2 is proved by deriving time-independent estimates for the local solution $W$ and its various derivatives, starting
from the entropy equality (9). Here $S$ is the physical entropy, defined by $S_v = p/T$ and $S_e = 1/T$; (9) then follows directly from (1) and from the jump condition (3). Time-independent $L^2$ bounds for $v - \bar v$ and $e - \bar e$ are then obtained from (9) by expanding $S$ about its value at $(\bar v, \bar e)$ and controlling the first order terms via the hypothesis $v_0 - \bar v, \, e_0 - \bar e \in L^1$. These estimates, together
with the smallness conditions, then enable us to bound various higher derivatives, so as to obtain time-independent pointwise bounds for $v$ and $e$. We remark that the condition $W_0(-\infty) = W_0(+\infty)$ in Theorem 2b is an essential one. Indeed, any global analysis of solutions of (1)-(2) is likely to include information about the asymptotic behavior of the solution, and this behavior can be quite complicated when $W_0(-\infty) \ne W_0(+\infty)$. One result in this direction is that of Hoff and Liu [3], in which we obtain both the asymptotic behavior ($t \to \infty$) and the strong inviscid limit ($\epsilon \to 0$) of solutions of the isentropic/isothermal version of (1) with Riemann shock data. The proof of Theorem 2 is given in [2], which also includes the following result concerning continuous
dependence on initial data:
THEOREM 3. In addition to the hypotheses of Theorem 2, assume that $e_0 \in BV$ and that $p$ and $T$ satisfy the conditions of an ideal gas, $pv = \mathrm{const}\cdot T$ and $T = T(e)$. Then the solutions constructed in Theorem 2 depend continuously on their initial values in the sense that, if $W_i = [v_i, u_i, e_i]^T$, $i = 1, 2$, are solutions of (1) described in Theorem 2, then, given a time $t_0$, there is a constant $C$ such that, for $t \in [0, t_0]$,

(10) $\|W_2(\cdot, t) - W_1(\cdot, t)\|_{L^2} + \sup_{b-a=1} \mathrm{Var}_{[a,b]}\,[v_2 - v_1](\cdot, t) \le C\Big(\|W_2(\cdot, 0) - W_1(\cdot, 0)\|_{L^2} + \sup_{b-a=1} \mathrm{Var}_{[a,b]}\,[v_2 - v_1](\cdot, 0)\Big).$

$C$ depends on $t_0$, $K$, and on upper bounds for the norms in Theorem 2b of the solutions $W_1$ and $W_2$.
We remark that the local variation of $v_2 - v_1$ is included in the norm in (10) in order to deal with terms arising from the differencing of $(u_x/v)_x$ and $(T_x/v)_x$. On
the other hand, given that $v_1$ and $v_2$ are discontinuous variables, it would no doubt be useful to prove continuous dependence in the $L^2$ norm alone. Finally, we point out that the existence, regularity, and continuous dependence results of Theorems 1-3 can be effectively employed in the design and rigorous analysis of algorithms for the numerical computation of solutions of (1)-(2). Indeed, Roger Zarnowski [5] has applied the present analysis to prove convergence of certain finite difference approximations to discontinuous solutions of the isentropic/isothermal version of (1). His scheme can be implemented under mesh conditions essentially equivalent to the usual CFL conditions for the corresponding hyperbolic equations ($\epsilon = 0$ in (1)); and he proves that, for piecewise smooth initial data, the error is bounded by $\Delta x^{1/6}$ in the norm of (10). Observe that, while the convergence rate is somewhat low, the topology is quite strong, dominating the sup norm of the discontinuous variable $v$.
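The details of Zarnowski's scheme are in [5] and are not reproduced here; the sketch below only illustrates the general shape of such a difference approximation for the isentropic system in Lagrangian coordinates, $v_t - u_x = 0$, $u_t + p(v)_x = \varepsilon(u_x/v)_x$, with a hypothetical pressure law, grid, and initial jump.

```python
import numpy as np

# Illustrative explicit difference scheme for the 1-D isentropic
# Navier-Stokes equations in Lagrangian coordinates:
#   v_t - u_x = 0,   u_t + p(v)_x = eps*(u_x/v)_x,   p(v) = v**(-gamma).
# A sketch only, NOT the scheme of Zarnowski [5]; all values are hypothetical.
gamma, eps = 1.4, 0.01
N = 400
dx = 1.0 / N
x = np.linspace(-0.5, 0.5, N)

v = np.where(x < 0.0, 1.0, 0.8)    # specific volume, Riemann-type jump
u = np.where(x < 0.0, 0.5, 0.0)    # velocity

def ddx(f):                         # centered difference, periodic ends
    return (np.roll(f, -1) - np.roll(f, 1)) / (2.0 * dx)

dt = 0.2 * min(dx, dx * dx / eps)   # crude CFL / diffusion restriction
for _ in range(1000):
    p = v ** (-gamma)
    ux = ddx(u)
    v = v + dt * ux                              # v_t = u_x
    u = u + dt * (-ddx(p) + eps * ddx(ux / v))   # u_t = -p_x + eps*(u_x/v)_x
```

A convergence proof of the kind described above would of course require the specific upwinding and mesh conditions of [5].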
REFERENCES
[1] DAVID HOFF, Discontinuous solutions of the Navier-Stokes equations for compressible flow, (to appear in Arch. Rational Mech. Anal.).
[2] DAVID HOFF, Global existence and stability of viscous, nonisentropic flows, (to appear).
[3] DAVID HOFF AND TAI-PING LIU, The inviscid limit for the Navier-Stokes equations of compressible, isentropic flow with shock data, (to appear in Indiana Univ. Math. J.).
[4] DAVID HOFF AND JOEL SMOLLER, Solutions in the large for certain nonlinear parabolic systems, Ann. Inst. Henri Poincaré, Analyse Non Linéaire, 2 (1985), pp. 213-235.
[5] ROGER ZARNOWSKI AND DAVID HOFF, A finite difference scheme for the Navier-Stokes equations of one-dimensional, isentropic, compressible flow, (to appear).
NONLINEAR GEOMETRICAL OPTICS JOHN K. HUNTER* Abstract. Using asymptotic methods, one can reduce complicated systems of equations to simpler model equations. The model equation for a single, genuinely nonlinear, hyperbolic wave is Burgers equation. Reducing the gas dynamics equations to a Burgers equation leads to a theory of nonlinear geometrical acoustics. When diffractive effects are included, the model equation is the ZK or unsteady transonic small disturbance equation. We describe some properties of this equation, and use it to formulate asymptotic equations that describe the transition from regular to Mach reflection for weak shocks. Interacting hyperbolic waves are described by a system of Burgers or ZK equations coupled by integral terms. We use these equations to study the transverse stability of interacting sound waves in gas dynamics.
O. Introduction. Geometrical Optics is the name of an asymptotic theory for wave motions. It is based on the assumption that the wavelength of the wave is much smaller than any other characteristic
lengthscales in the problem. These lengthscales include: the radius of curvature of nonplanar wavefronts; the lengthscale of variations in the wave medium; and the propagation distances over which
dissipation, dispersion, diffraction, or nonlinearity have a significant effect on the wave. When this assumption is satisfied, we say that the wave is a short, or high frequency, wave. For short
waves, the wave energy propagates along a set of curves in space-time called rays. This is one reason why geometrical optics is such a powerful method: it reduces a problem in several space
dimensions to a one dimensional problem. For a single weakly nonlinear hyperbolic wave, this one dimensional problem is the inviscid Burgers equation (1.6), as we explain in section 1. When
diffraction effects are important in some part of the wave field, one must modify the straightforward theory of geometrical optics. For linear waves, this modified theory is called the geometrical
theory of diffraction. In section 2, we analyze the diffraction of weakly nonlinear waves. One obtains the ZK equation (2.2), which is a two dimensional Burgers equation. Unfortunately, little is
known about the ZK equation, and this makes it difficult to develop a nonlinear geometrical theory of diffraction. As an example, we use the ZK equation to formulate asymptotic equations which
describe the transition from regular to Mach reflection for weak shocks. Unlike linear waves, nonlinear waves interact and produce new waves. For multiple waves, nonlinear geometrical optics leads to
a coupled system of Burgers equations (3.3). In section 3, we formulate asymptotic equations (3.6) which describe the diffraction of interacting waves. We use these equations to study the transverse
stability of interacting sound waves in gas dynamics. Keller [18] reviews linear geometrical optics. Other reviews of geometrical optics for weakly nonlinear hyperbolic waves are given by Nayfeh
[27], Majda [24], and Hunter [13]. *Department of Mathematics, Colorado State University, Fort Collins, CO 80523. Present Address: Department of Mathematics, University of California, Davis, CA
1. Single Waves. 1.1 The eikonal and transport equations. We consider a hyperbolic system of conservation laws in $N + 1$ space-time dimensions,

(1.1) $\sum_{i=0}^{N} f^i(x, u)_{x_i} = 0.$
Short wave solutions of (1.1) are solutions which vary rapidly normal to a set of wavefronts $\psi(x) = $ constant. We call $\psi$ the phase of the short wave. We look for small amplitude, short wave solutions of (1.1), with an asymptotic approximation of the form (1.2). The amplitude in (1.2) is of the order of the wavelength. We choose this particular scaling because it allows a balance between weakly nonlinear and nonplanar effects. Multiple scale methods [14] show that the phase in (1.2) satisfies the eikonal equation associated with the linearized version of (1.1), namely

(1.3) $\det\Big(\sum_{i=0}^{N} A^i(x)\,\psi_{x_i}\Big) = 0.$

In (1.3), $A^i(x) = \nabla_u f^i(x, 0)$. We denote left and right null-vectors of the matrix in (1.3) by $\ell(x, \nabla\psi)$ and $r(x, \nabla\psi)$ respectively. Associated with the phase is an $N$-parameter family of rays or bicharacteristics. The rays are curves in space-time with equation $x = X(s; \beta)$, where

$\frac{dX_i}{ds} = \ell \cdot A^i r.$

Here, $s \in \mathbb{R}$ is an arclength parameter along a ray, while $\beta \in \mathbb{R}^N$ is constant on a ray. We assume that the transformation between space-time coordinates $x$ and ray coordinates $(s, \beta)$ is smooth and invertible. This assumption is not true at caustics, and then the simple ansatz in (1.2) does not provide the correct asymptotic solution. Instead, diffractive effects must be included (see section 2 and [23], [15], [13]). The explicit form of the asymptotic solution (1.2) is
(1.4), where the scalar function $a(\theta, x)$ is called the wave amplitude. The dependence of $a$ on $\theta$ describes the wave-form. For oscillatory wavetrains, $a$ is a periodic or an almost periodic function of $\theta$; for pulses, $a$ is compactly supported in $\theta$; for wavefronts, the derivative of $a$ with respect to $\theta$ jumps across $\theta = 0$, etc. The dependence of $a$ on $x$ describes modulation effects such as the increase in amplitude caused by focusing and the nonlinear steepening of the wave-form.
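As a concrete illustration of rays, the sketch below integrates the ray equations in Hamiltonian form for the scalar dispersion relation $\omega = c(x, y)|k|$, a standard special case of the bicharacteristic equations above; the linearly varying sound speed is a made-up profile.

```python
import numpy as np

# Hamiltonian ray tracing for a scalar wave with dispersion relation
# omega = c(x, y)*|k|; the sound-speed profile is a made-up illustration.
def c(x, y):
    return 1.0 + 0.3 * y            # speed increases with y, so rays bend

def ray_rhs(x, y, kx, ky):
    kn = np.hypot(kx, ky)
    # dx/dt = dH/dk,  dk/dt = -dH/dx,  with H = c(x, y)*|k|
    return (c(x, y) * kx / kn, c(x, y) * ky / kn, 0.0, -0.3 * kn)

x, y, kx, ky = 0.0, 0.0, 1.0, 0.2   # initial point and wave vector
dt = 0.01
path = []
for _ in range(500):                 # forward Euler, enough for a picture
    dx, dy, dkx, dky = ray_rhs(x, y, kx, ky)
    x, y, kx, ky = x + dt * dx, y + dt * dy, kx + dt * dkx, ky + dt * dky
    path.append((x, y))
```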
Multiple scale methods also imply that the wave amplitude satisfies a nonlinear transport equation,

(1.5) $a_s + M a a_\theta + Q a = 0.$

In (1.5), $\partial/\partial s$ is a derivative along a ray,

$\frac{\partial}{\partial s} = \sum_{i=0}^{N} \ell \cdot A^i r \, \frac{\partial}{\partial x_i}.$

The coefficient $M$ measures the strength of the wave's quadratically nonlinear self-interaction, and is given by

$M(s, \beta) = \sum_{i=0}^{N} \psi_{x_i}\, \ell \cdot \nabla_u^2 f^i(x, 0) \cdot (r, r).$

$M$ is nonzero for genuinely nonlinear waves and $M$ is zero for linearly degenerate waves. The coefficient $Q$ describes the growth or decay of the amplitude due to focusing of the wave and nonuniformities in the medium. It is given by

$Q = \sum_{i=0}^{N} \ell \cdot \frac{\partial}{\partial x_i}\big(A^i r\big).$

Since $r$ depends on $\nabla\psi$, $Q$ involves second derivatives of $\psi$. It is therefore unbounded near caustics, where the curvature of the wavefronts is infinite. There is one Burgers equation (1.5) for each ray. Solving them, together with appropriate initial data obtained from initial, boundary, or matching conditions, gives $a(\theta, s, \beta)$. Finally, evaluating $\theta$ at $\epsilon^{-1}\psi(x)$ in the result gives the asymptotic
solution (1.4). The transport equation (1.5) can be reduced to a standard form by the change of variables

$u = E^{-1}(s, \beta)\, a(s, \beta, \theta), \qquad \bar x = \theta, \qquad \bar t = \int_0^s M(s', \beta)\, E(s', \beta)\, ds',$

where

$E(s, \beta) = \exp\Big[-\int_0^s Q(s', \beta)\, ds'\Big].$

We assume that $M \ne 0$. The result is that $u(\bar x, \bar t; \beta)$ satisfies

(1.6) $u_{\bar t} + u u_{\bar x} = 0.$
Thus, (1.6) is the canonical asymptotic equation for a genuinely nonlinear, hyperbolic wave. We remark that if weak viscous effects are included, then, instead of (1.6), one obtains a generalized Burgers equation,

(1.7) $u_{\bar t} + u u_{\bar x} = \nu\, u_{\bar x \bar x}.$

The viscosity $\nu$ is constant only for plane waves in a uniform medium. In that case, (1.7) can be solved explicitly by the Cole-Hopf transformation [32]. If $\nu$ is not constant, then (1.7) cannot be solved explicitly, and numerical or perturbation [28] methods are required.
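For the constant-viscosity case just mentioned, the Cole-Hopf solution can be evaluated by direct quadrature. The following sketch does so for a made-up smooth step; the quadrature grid and initial data are arbitrary illustrative choices, not anything from [32].

```python
import numpy as np

# Cole-Hopf evaluation of the solution of u_t + u*u_x = nu*u_xx for
# constant viscosity nu.  u0 is a made-up smooth step.
nu = 0.05

def u0(y):
    return 0.5 * (1.0 - np.tanh(y))      # smooth 1 -> 0 transition

y = np.linspace(-20, 20, 4001)
U0 = u0(y)
# cumulative integral of u0 from the left end (additive constant cancels)
Phi = np.concatenate(([0.0], np.cumsum(0.5 * (U0[1:] + U0[:-1]) * np.diff(y))))

def u(x, t):
    G = Phi + (x - y)**2 / (2.0 * t)
    w = np.exp(-(G - G.min()) / (2.0 * nu))   # shift exponent for stability
    return np.trapz((x - y) / t * w, y) / np.trapz(w, y)

print(u(0.5, 1.0))
```

For variable $\nu(\bar t\,)$ one would instead march (1.7) numerically, as the text notes.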
1.2 Nonlinear geometrical acoustics. Sound waves in a compressible fluid are a fundamental physical application of the above ideas. The resulting theory is called nonlinear geometrical acoustics
(NGA). For reviews of NGA, see [7], [8], [9].
The equations of motion of an inviscid, compressible fluid are

(1.8) $\rho_t + \operatorname{div}(\rho u) = 0,$
$(\rho u)_t + \operatorname{div}(\rho u \otimes u + pI) = \rho f,$
$\big[\rho\big(\tfrac{1}{2} u\cdot u + e\big)\big]_t + \operatorname{div}\big[\rho u\big(\tfrac{1}{2} u\cdot u + e\big) + p u\big] = 0.$

Here, $\rho$ is the fluid density, $p$ is the pressure, $e$ is the specific internal energy, and $u$ is the fluid velocity. We include a given body force $f(x, t)$ and we neglect any heat sources. For simplicity, we consider a polytropic gas for which

$e = \frac{1}{\gamma - 1}\,\frac{p}{\rho}.$

Here, the constant $\gamma > 1$ is the ratio of specific heats. Similar results are obtained for general equations of state. Suppose that

$\rho = \rho_0(x, t), \quad p = p_0(x, t), \quad u = u_0(x, t)$

is a given smooth solution of (1.8). We denote the corresponding sound speed by $c = c_0(x, t)$. The NGA solution for a sound wave propagating through this medium is given by (1.9). Here, we define the local frequency $\omega$, the wavenumber $k$, and the Doppler-shifted frequency $\Omega$ by $\omega = -\psi_t$, $k = \nabla\psi$, $\Omega = \omega - u_0 \cdot k$.
...
$u < \xi + \eta^2/4$: (2.22) hyperbolic; $\xi + \eta^2/4 < u$: (2.22) elliptic.
However, (2.22)-(2.25) does not model all features of the gas dynamics problem. Complex Mach reflection and double Mach reflection are not observed for weak shocks, so (2.22)-(2.25) is unlikely to describe those phenomena. A simple local analysis shows that regular reflection is impossible for $0 < a < 2^{1/2}$. We can approximate a regularly reflected solution near the point where the incident and reflected shocks meet the wedge by a piecewise constant solution,

$u = 0, \quad v = 0, \qquad x > a y + V t, \quad y > 0;$
$u = 1, \quad v = -a, \qquad -\beta y + V t < x < a y + V t, \quad y > 0;$
$u = u_L, \quad v = 0, \qquad x < -\beta y + V t, \quad y > 0.$

The jump conditions (2.19) imply that

$u_L = 1 + \frac{a}{\beta},$

where $\beta$ is a solution of (2.26). The reflected shock is admissible if $\beta > 0$. Equation (2.26) has two positive roots for $\beta$ when $a > 2^{1/2}$. The equation has no positive roots when $0 < a < 2^{1/2}$. One
interesting explicit solution of (2.22) can be obtained from (2.14) with

$u(x, 0) = \pm|x|^{1/2}, \quad x < 0; \qquad u(x, 0) = 0, \quad x \ge 0.$

Taking the minus sign, the corresponding solution for $u$ and $v$ is

(2.27) $u = v = 0, \quad p > 0,$

with an expansion fan in $p < 0$. Taking the plus sign, the solution is

(2.28) $u = 1 + (1 - p)^{1/2}$ behind the shock, $u = v = 0$ ahead of it.

Here, $p = \xi + \eta^2/4$. Equation (2.27) describes an outgoing cylindrical expansion wave; (2.28) describes an outgoing cylindrical shock. Equation (2.22), with different boundary conditions, also arises as
a description of weak shocks at a singular ray [10], [12], [34]. This equation may serve as a model equation for two dimensional Riemann problems in general. 3. Diffraction of Interacting Waves.
3.1 Diffraction of interacting waves. The ZK equation is a generalization of Burgers equation that includes diffraction effects. Interacting hyperbolic waves are described asymptotically by a system
of Burgers equations coupled by integral terms. In this section, we generalize these equations to include diffraction. The result is a coupled system of ZK equations. An asymptotic theory for weakly
nonlinear interacting hyperbolic waves is developed in [16], [25]. We shall briefly describe that theory in the simplest case. We consider a hyperbolic system of conservation laws in one space
dimension,

(3.1) $u_t + f(u)_x = 0.$

Suppose that there are three interacting periodic waves which satisfy the resonance condition

$\omega_1 + \omega_2 + \omega_3 = 0, \qquad k_1 + k_2 + k_3 = 0, \qquad \omega_j = \lambda_j k_j, \quad j = 1, 2, 3.$
Here, $\omega_j$ and $k_j$ are the frequency and wavenumber of the $j$th wave, and $\lambda_j$ is the linearized wave velocity. The asymptotic solution for the interacting waves is then

(3.2) $u = \epsilon \sum_{j=1}^{3} a_j\big[\epsilon^{-1}(k_j x - \omega_j t),\, t\big]\, r_j + O(\epsilon^2),$

as $\epsilon \to 0$ with $x, t = O(1)$. In (3.2), $r_j$ is a right eigenvector of $\nabla_u f(0)$ associated with the eigenvalue $\lambda_j$, and the wave amplitudes $a_j(\theta, t)$ are $2\pi$-periodic functions of the phase variable $\theta$.
The amplitudes solve the following system of integro-differential equations,

(3.3) $a_{jt}(\theta, t) + M_j a_j(\theta, t)\, a_{j\theta}(\theta, t) + \frac{\Gamma_j}{2\pi} \int_0^{2\pi} a_p(-\theta - \xi, t)\, a_{q\theta}(\xi, t)\, d\xi = 0,$

where $(j, p, q)$ runs through cyclic permutations of $(1, 2, 3)$. The coefficients are

$M_j = \nabla_u \lambda_j(0) \cdot r_j = P_j \cdot \nabla_u^2 f(0) \cdot (r_j, r_j), \qquad \Gamma_j = P_j \cdot \nabla_u^2 f(0) \cdot (r_p, r_q).$

Here, $P_j$ is a left eigenvector of $\nabla_u f(0)$ associated with $\lambda_j$. It is normalized so that $P_j \cdot r_j = 1$. To analyze the effects of wave diffraction, we consider a two dimensional version of (3.1), namely

(3.4) $u_t + f(u)_x + g(u)_y = 0.$
For simplicity, we assume that (3.4) is isotropic, meaning that it is invariant under $(x, y) \to O(x, y)$ for all orthogonal transformations $O$. The rays associated with the phase $\phi_j = k_j x - \omega_j t$ are then $\phi_j = $ constant, $y = $ constant. Thus, the transverse variable $\psi = y$ is constant on the rays associated with each phase $\phi_j$. Complications arise when the transverse variable is not constant on all sets of rays. This case may occur for anisotropic waves, and we shall not consider it further here.
The generalization of (3.2) that includes weak diffraction in the $y$-direction is then

(3.5) $u = \epsilon \sum_{j=1}^{3} a_j\big[\epsilon^{-1}(k_j x - \omega_j t),\, \epsilon^{-1/2} y,\, t\big]\, r_j + O(\epsilon^{3/2}).$

The amplitudes $a_j(\theta, \eta, t)$ satisfy

(3.6) $\partial_\theta \Big\{ a_{jt}(\theta, \eta, t) + M_j a_j(\theta, \eta, t)\, a_{j\theta}(\theta, \eta, t) + \frac{\Gamma_j}{2\pi} \int_0^{2\pi} a_p(-\theta - \xi, \eta, t)\, a_{q\theta}(\xi, \eta, t)\, d\xi \Big\} + \frac{1}{2}\, a_{j\eta\eta}(\theta, \eta, t) = 0.$
For solutions which are independent of $\eta$, (3.6) reduces to (3.3), after an integration with respect to $\theta$. For a single wave ($a_2 = a_3 = 0$), it reduces to the ZK equation for $a_1$.
3.2 Transverse stability of interacting waves in gas dynamics. There are three wave-fields in one dimensional gas dynamics. They are the left- and right-moving sound waves, and a stationary entropy wave. According to the asymptotic theory described in section 3.1, the entropy wave decouples from the sound waves. Consequently, the system of three equations in (3.3) reduces to a pair of equations for the sound
wave amplitudes. These equations describe the resonant reflection of sound waves off a periodic entropy perturbation. After rescaling to remove inessential coefficients, the equations are

(3.7) $u_t + u u_x + \frac{1}{2\pi} \int_0^{2\pi} K(x - \xi)\, v(\xi, t)\, d\xi = 0, \qquad v_t + v v_x - \frac{1}{2\pi} \int_0^{2\pi} K(-x + \xi)\, u(\xi, t)\, d\xi = 0.$

In (3.7), $K$ is a known kernel, which is proportional to the derivative of the entropy wave amplitude. The dependent variables $u(x, t)$ and $v(x, t)$ are proportional to the amplitudes of the right-moving and the left-moving sound waves. The sound-wave amplitudes and the kernel are $2\pi$-periodic, zero-mean functions of the phase variable, $x$. These equations are derived in [25], and they are further analyzed in [26].
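A direct numerical experiment with (3.7) is easy to set up. The sketch below advances the system, as reconstructed above, pseudo-spectrally for $K(x) = \sin x$, where the convolution involves only the first Fourier mode; resolution, time step, and initial data are arbitrary illustrative choices.

```python
import numpy as np

# Pseudo-spectral time stepping for system (3.7) with K(x) = sin x.
# Short-time forward Euler, no dealiasing: a sketch, not production code.
N = 256
x = 2.0 * np.pi * np.arange(N) / N
ik = 1j * np.fft.fftfreq(N, d=1.0 / N)   # i * integer wavenumbers

def Ksin(f):
    # (1/2pi) * int_0^{2pi} sin(x - xi) f(xi) d xi  (uniform-grid mean)
    return np.sin(x) * np.mean(f * np.cos(x)) - np.cos(x) * np.mean(f * np.sin(x))

def d_dx(f):
    return np.fft.ifft(ik * np.fft.fft(f)).real

u = 0.1 * np.cos(x)                      # zero-mean initial data
v = 0.1 * np.sin(x)
dt = 1e-3
for _ in range(1000):
    u, v = (u + dt * (-u * d_dx(u) - Ksin(v)),
            v + dt * (-v * d_dx(v) - Ksin(u)))
```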
Pego [29] found an explicit smooth travelling wave solution of (3.7), in the special case of a sinusoidal kernel, $K(x) = \sin x$. His solution is

(3.8) $u = u_0(x - ct) = \sigma[c + b f(x - ct; \alpha)], \qquad v = v_0(x - ct) = \sigma[c + b f(x - ct; \sigma\alpha)].$

In (3.8), $\sigma \in \{-1, +1\}$, $\alpha \in [0, 1]$,

(3.9) $f(\theta; \alpha) = [1 + \alpha \cos\theta]^{1/2},$

and (3.10) defines the constants $b$ and $c$ in terms of the integrals $\int_0^{2\pi} (1 + \alpha\cos\theta)^{1/2}\, d\theta$ and $\int_0^{2\pi} \cos\theta\, (1 + \alpha\cos\theta)^{1/2}\, d\theta$.
There are two families of travelling waves (3.8), depending on the choice of $\sigma$. They exist only up to a finite amplitude. The wave of maximum amplitude, corresponding to $\alpha = 1$, has a corner in its crest or trough. We shall show that the waves with $\sigma = +1$ are unstable to transverse perturbations when $\alpha$ is small and when $\alpha$ is close to one. We remark that the stability of these waves to one dimensional perturbations has not been studied. Our stability analysis is essentially the same as the use of the KP equation to study the transverse stability of KdV solitons [1], [17]. The
generalization of (3.7), with $K(x) = \sin x$, that includes weak diffraction is

(3.11) $\partial_x\{u_t + u u_x + Xv\} + u_{yy} = 0, \qquad \partial_x\{v_t + v v_x + Xu\} + v_{yy} = 0,$

where

(3.12) $Xu(x, y, t) = \frac{1}{2\pi} \int_0^{2\pi} \sin(x - \xi)\, u(\xi, y, t)\, d\xi.$

The choice $K(x) = \sin x$ simplifies some of the subsequent algebra. However, transverse perturbations of interacting waves for general kernels $K$ can be analyzed in a similar way. Let $T$ denote translation by $\pi$ in $x$. Then $TX = XT = -X$. The change of variables $u \to -u$, $v \to -Tv$, $x \to -x$, $y \to y$, $t \to t$ maps the solution in (3.8) with $\sigma = -1$ onto the solution with $\sigma = +1$, and it transforms (3.11) to

(3.13) $\partial_x\{u_t + u u_x + Xv\} - u_{yy} = 0, \qquad \partial_x\{v_t + v v_x + Xu\} - v_{yy} = 0.$
We shall consider solutions of (3.11) or (3.13) with $u = v$. (This assumption does not alter the final result.) It therefore suffices to consider transverse perturbations of $u = u_0(x - ct)$, where

(3.14) $\partial_x\{u_t + u u_x + Xu\} + \sigma u_{yy} = 0.$

We seek an expansion for long-wavelength transverse perturbations of the travelling wave solution (3.8) in the form (3.15). In (3.15), the multiple scale variables are evaluated at $\theta = x - ct - \cdots$,
where $f$ is given in (3.9). These integrals are functions of the amplitude parameter $\alpha$, and they are related by

$J = -\frac{1}{3\alpha^2}\big[(\alpha^2 + 2)H - 2M\big].$

In addition, from (3.10),

$b = -\frac{2}{3\alpha^2}\big[(\alpha^2 - 1)H + M\big], \qquad c = -bM.$
All these functions can be expressed in terms of complete elliptic integrals of the first and second kinds. In particular,

$H = \frac{2}{\pi}(1 + \alpha)^{-1/2}\, K\!\left(\frac{2\alpha}{1 + \alpha}\right), \qquad M = \frac{2}{\pi}(1 + \alpha)^{1/2}\, E\!\left(\frac{2\alpha}{1 + \alpha}\right).$
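These expressions are straightforward to evaluate. In the sketch below, SciPy's ellipk and ellipe are used with their parameter convention $m = k^2$; identifying $H$ and $M$ with the period averages of $f^{-1}$ and $f$, used as a cross-check, is an assumption consistent with the formulas above.

```python
import numpy as np
from scipy.special import ellipk, ellipe

# Evaluate H and M as reconstructed above.  ellipk/ellipe take the
# *parameter* m = k^2, which is consistent with the cross-check below.
def H(alpha):
    return (2.0 / np.pi) / np.sqrt(1 + alpha) * ellipk(2 * alpha / (1 + alpha))

def M(alpha):
    return (2.0 / np.pi) * np.sqrt(1 + alpha) * ellipe(2 * alpha / (1 + alpha))

# Cross-check against period averages of f^-1 and f, f = (1+a*cos)^(1/2)
# (this identification of H and M is an assumption):
alpha = 0.5
th = np.linspace(0.0, 2.0 * np.pi, 200001)
print(H(alpha), np.trapz((1 + alpha * np.cos(th))**-0.5, th) / (2 * np.pi))
print(M(alpha), np.trapz((1 + alpha * np.cos(th))**0.5, th) / (2 * np.pi))
```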
The solution of (3.27) is (3.31), which gives the remaining coefficient as

$\frac{b(J - b)}{J^2 + bH - HJ}.$

Equations (3.26) and (3.31) are the explicit solution of (3.21). Using (3.26), (3.28), and (3.30) in (3.23) implies (3.32). For general values of $\alpha$, (3.32) must be evaluated numerically. Here, we
shall calculate the growth rate for small amplitude waves ($\alpha \to 0$) and waves close to the limiting wave ($\alpha \to 1$). For small amplitudes, (3.28) and (3.29) imply (3.33). Then, using (3.29)-(3.33), we find that

(3.34) $\lambda^2 = -\frac{3\sigma}{32}\,\alpha^{-2} + O(1).$

It follows that small amplitude travelling waves (3.8) with $\sigma = +1$ are unstable, while those with $\sigma = -1$ are linearly stable to long transverse perturbations.
For large amplitudes, standard asymptotic expansions of complete elliptic integrals imply that

(3.35) $H = \frac{2^{1/2}}{2\pi}\, \log\!\left(\frac{32}{1 - \alpha}\right) + O(1)$ as $\alpha \to 1^-$.

After some algebra, (3.29)-(3.32) and (3.35) show that

(3.36) $\lambda^2 = -\frac{3\sigma\pi}{32}\left(\frac{3\pi^2}{\pi^2 - 16}\right) + O(1).$

Thus, the wave with $\sigma = +1$ is also unstable at large amplitudes, while the wave with $\sigma = -1$ is stable.
Acknowledgements. This work was supported in part by the Institute for Mathematics and its Applications with funds provided by the NSF, and by the NSF under Grant Number DMS-8810782.
REFERENCES
[1] ABLOWITZ, M.J., AND SEGUR, H., Solitons and the Inverse Scattering Transform, SIAM, Philadelphia (1981).
[2] BAMBERGER, A., ENQUIST, B., HALPERN, L., AND JOLY, P., Parabolic wave equations and approximations in heterogeneous media, SIAM J. Appl. Math., 48 (1988), pp. 99-128.
[3] CATES, A., Nonlinear diffractive acoustics, fellowship dissertation, Trinity College, Cambridge, unpublished (1988).
[4] CHANG, T., AND HSIAO, L., The Riemann Problem and Interaction of Waves in Gas Dynamics, Longman, Avon (1989).
[5] COLE, J.D., AND COOK, L.P., Transonic Aerodynamics, Elsevier, Amsterdam (1986).
[6] CRAMER, M.S., AND SEEBASS, A.R., Focusing of a weak shock at an arete, J. Fluid Mech., 88 (1978), pp. 209-222.
[7] CRIGHTON, D.G., Model equations for nonlinear acoustics, Ann. Rev. Fluid Mech., 11 (1979), pp. 11-33.
[8] CRIGHTON, D.G., Basic theoretical nonlinear acoustics, in Frontiers in Physical Acoustics, Proc. Int. School of Physics "Enrico Fermi", Course 93 (1986), North-Holland, Amsterdam.
[9] HAMILTON, M.F., Fundamentals and applications of nonlinear acoustics, in Nonlinear Wave Propagation in Mechanics, ed. T.W. Wright, AMD-77 (1986), pp. 1-28.
[10] HARABETIAN, E., Diffraction of a weak shock by a wedge, Comm. Pure Appl. Math., 40 (1987), pp. 849-863.
[11] HORNUNG, H., Regular and Mach reflection of shock waves, Ann. Rev. Fluid Mech., 18 (1986), pp. 33-58.
[12] HUNTER, J.K., Transverse diffraction and singular rays, SIAM J. Appl. Math. (1986).
[13] HUNTER, J.K., Hyperbolic waves and nonlinear geometrical acoustics, in Transactions of the Sixth Army Conference on Applied Mathematics and Computing, Boulder CO (1989), pp. 527-569.
[14] HUNTER, J.K., AND KELLER, J.B., Weakly nonlinear high frequency waves, Comm. Pure Appl. Math., 36 (1983), pp. 547-569.
[15] HUNTER, J.K., AND KELLER, J.B., Caustics of nonlinear waves, Wave Motion, 9 (1987), pp. 429-443.
[16] HUNTER, J.K., MAJDA, A., AND ROSALES, R.R., Resonantly interacting weakly nonlinear hyperbolic waves, II: several space variables, Stud. Appl. Math., 75 (1986), pp. 187-226.
[17] KADOMTSEV, B.B., AND PETVIASHVILI, V.I., On the stability of a solitary wave in a weakly dispersing media, Sov. Phys. Doklady, 15 (1970), pp. 539-541.
[18] KELLER, J.B., Rays, waves and asymptotics, Bull. Am. Math. Soc., 84 (1978), pp. 727-750.
[19] KODAMA, Y., Exact solutions of hydrodynamic type equations having infinitely many conserved densities, IMA Preprint #478 (1989).
[20] KODAMA, Y., AND GIBBONS, J., A method for solving the dispersionless KP hierarchy and its exact solutions II, IMA Preprint #477 (1989).
[21] KUZNETSOV, V.P., Equations of nonlinear acoustics, Sov. Phys. Acoustics, 16 (1971), pp. 467-470.
[22] LIGHTHILL, M.J., On the diffraction of a blast I, Proc. R. Soc. London Ser. A, 198 (1949), pp. 454-470.
[23] LUDWIG, D., Uniform asymptotic expansions at a caustic, Comm. Pure Appl. Math., 19 (1966), pp. 215-250.
[24] MAJDA, A., Nonlinear geometrical optics for hyperbolic systems of conservation laws, in Oscillation Theory, Computation, and Methods of Compensated Compactness, IMA Volume 2, Springer-Verlag, New York (1986), pp. 115-165.
[25] MAJDA, A., AND ROSALES, R.R., Resonantly interacting hyperbolic waves, I: a single space variable, Stud. Appl. Math., 71 (1984), pp. 149-179.
[26] MAJDA, A., ROSALES, R.R., AND SCHONBEK, M., A canonical system of integro-differential equations in resonant nonlinear acoustics, Stud. Appl. Math., 79 (1988), pp. 205-262.
[27] NAYFEH, A., A comparison of perturbation methods for nonlinear hyperbolic waves, in Singular Perturbations and Asymptotics, eds. R. Meyer and S. Parter, Academic Press, New York (1980), pp. 223-276.
[28] NIMMO, J.J.C., AND CRIGHTON, D.G., Nonlinear and diffusive effects in nonlinear acoustic propagation over long ranges, Phil. Trans. Roy. Soc. London Ser. A, 384 (1986), pp. 1-35.
[29] PEGO, R., Some explicit resonating waves in weakly nonlinear gas dynamics, Stud. Appl. Math., 79 (1988), pp. 263-270.
[30] STURTEVANT, B., AND KULKARNY, V.A., The focusing of weak shock waves, J. Fluid Mech., 73 (1976), pp. 1086-1118.
[31] TIMMAN, R., in Symposium Transsonicum, ed. K. Oswatitsch, Springer-Verlag, Berlin (1964), p. 394.
[32] WHITHAM, G.B., Linear and Nonlinear Waves, Wiley, New York (1974).
[33] ZABOLOTSKAYA, E.A., AND KHOKHLOV, R.V., Quasi-plane waves in the nonlinear acoustics of confined beams, Sov. Phys. Acoustics, 15 (1969), pp. 35-40.
[34] ZAHALAK, G.I., AND MYERS, M.K., Conical flow near singular rays, J. Fluid Mech., 63 (1974), pp. 537-561.
[35] JOHNSON, R.S., Water Waves and Korteweg-deVries Equations, J. Fluid Mech., 97 (1980), pp. 701-719.
GEOMETRIC THEORY OF SHOCK WAVES* TAI-PING LIU† Abstract. Substantial progress has been made in recent years on shock wave theory. The present article surveys the exact mathematical theory of the behavior of nonlinear hyperbolic waves and raises open problems. Key words. Conservation laws, nonlinear wave interactions, dissipation and relaxation. AMS(MOS) subject classifications. 76N15, 35L65
1. Introduction. A large class of nonlinear waves in mechanics, gas dynamics, fluid mechanics and the kinetic theory of gases are nonlinear hyperbolic waves, in that the behavior of these waves is governed in a basic way by certain a-priori determined characteristic values. Of these the most important ones are the shock waves. The strong nonlinear nature of shock waves makes the theory
interesting and rich. Because most physical models carrying hyperbolic waves are not scalar but systems, waves of different characteristic families interact. Understanding this nonlinear coupling of
waves is the essence of the theory for hyperbolic conservation laws, which is described in the next section. With the inclusion of a dissipative mechanism, as in the compressible Navier-Stokes
equations, we have viscous conservation laws. The inclusion is due to the importance of dissipative mechanisms in the study of shock layer, initial and boundary layers, and wave interactions. It is
also to check the validity of hyperbolic conservation laws through the zero dissipation limits. These issues are considered in Section 3. The phenomenon of relaxation occurs in many physical
situations: gas dynamics with thermo-non-equilibrium, elasticity with fading memory, kinetic theory of gases, etc. Conservation laws with relaxation are in some sense a more singular perturbation of
hyperbolic conservation laws than that of viscous conservation laws. This and the dual nature of hyperbolicity and parabolicity for a relaxation system are explained in Section 4. Conservation laws
with reaction and diffusion may be highly unstable. In Section 5 a class of such systems originating from the study of nozzle flow is described. Nonstrictly hyperbolic conservation laws are important
in the study of MHD, elasticity and multiphase flows. Behavior of waves for such a system with or without the effect of damping or dissipation is discussed in Section 6. Finally several concluding
remarks are made in the last section.
2. Hyperbolic Conservation Laws. Hyperbolic conservation laws

(2.1) $\frac{\partial u}{\partial t} + \frac{\partial f(u)}{\partial x} = 0$
*This paper was written while the author was visiting the Institute for Mathematics and Its Applications, University of Minnesota, Minneapolis, MN 55455. †Courant Institute, New York University, 251 Mercer Street, New York, NY 10012.
are the basic model for shock wave theory. In this section we assume that the system is strictly hyperbolic, i.e. $f'(u)$ has real and distinct eigenvalues $\lambda_1(u) < \lambda_2(u) < \cdots < \lambda_n(u)$. The compressible Euler equations have this property. Each characteristic value $\lambda_i(u)$ carries a family of waves. The interactions of these waves are the essence of the theory for (2.1). For systems of two conservation laws the interactions of waves of different characteristic families are weaker because of the existence of the Riemann invariants for two equations. Mainly because of this, the behavior of solutions whose initial data oscillate around a constant has been studied only for two conservation laws, Glimm-Lax [4], which are genuinely nonlinear in the sense of Lax [8]. For general systems for which each characteristic field is either genuinely nonlinear or linearly degenerate, the large-time behavior of solutions with small total variation has been studied satisfactorily, Liu [11]. The
regularity and large-time behavior for general systems not necessarily genuinely nonlinear are studied in Liu [12], though with no rate of convergence to asymptotic states. The above results are
obtained through a principle of nonlinear superposition, Glimm [3], Glimm-Lax [4] and Liu
[10]. There is no satisfactory uniqueness theory for any physical system. The best result so far is that of DiPerna [2], which shows that for genuinely nonlinear two conservation laws a piecewise
smooth solution with no compression wave is unique within the class of solutions of bounded total variation. The problem is important because of the need for the entropy condition for the hyperbolic
shock wave theory. Existence theory for large data has been obtained for certain two conservation laws using the theory of compensated compactness, see articles on the theory in this volume. The
problem makes sense in general only for certain physical systems, since it is easy to construct systems for which the Riemann problem with large data is not solvable. It would be interesting to study the
problem for the compressible Euler equations. Study of interaction of rarefaction waves, Lin [9], has led us to conjecture that when the data does not yield vacuum immediately then the solution does
not contain vacuum and is of bounded local variation for any positive time.
3. Viscous Conservation Laws. Consider the viscous conservation laws

(3.1) $u_t + f(u)_x = (B(u) u_x)_x.$

For physical systems, the viscosity matrix $B(u)$ is in general not positive. The system then becomes hyperbolic-parabolic and not uniformly parabolic. This has the effect that discontinuities in the
initial data may not be smoothed out. A more important hyperbolic character of the system comes from the nonlinearity of the flux function $f(u)$; it is present even if $B(u)$ is positive. The behavior of a nonlinear wave can be detected through the characteristic values. Viscous shock waves are compressive and therefore are stable in a different sense from the expansion waves, Liu [13], Liu-Xin [15], and references therein. Diffusion waves are at worst weakly expansive and quite stable, Chern-Liu [1]. It is therefore natural to study these waves through a hyperbolic-parabolic technique, Liu
[13]. The technique needs to be refined to study the stability of a general wave pattern consisting of both compression and expansion waves.
200 The central question for (3.1) is to understand the behavior of a general flow as the strength of the viscosity matrix B( u) varies, in particular when it tends to zero. The interesting case, of
course, is when the corresponding inviscid solution for (2.1) is not smooth. Hoff-Liu [6] solves the problem for a single shock wave. The interaction of initial and shock layers is studied there. For a viscosity matrix that is not positive, discontinuities in the data propagate into the solution, [6] and references therein. Recently Goodman-Xin [5] used the technique of matched asymptotic expansion and
characteristic-energy method to show that for a given piecewise smooth inviscid solution there exists a sequence of viscous solutions converging to the given inviscid solution. While the inviscid
solution in [5] is more general than that of [6], [5] does not address the formation and interaction of shock waves, and the initial data for viscous solutions are not fixed. The difference between the results in these works provides interesting problems. One possible way to make further progress in this area is to refine and generalize the techniques in [13].
4. Relaxation. In many physical
situations, effects of nonequilibrium, delay, memory and relaxation are important. The mathematical models usually take the form of hyperbolic conservation laws with integral or lower-order
terms. Such an effect has a partial smoothing property much like the viscous conservation laws except that it is weaker and does not smooth out strong shock waves. Although equations in various
physical situations, e.g. gas dynamics with thermo-non-equilibrium, elasticity with fading memory, kinetic theory of gases, take different forms, there are common features, see the mathematical
analysis for a simple model in [14] and physical models in the references therein. Mathematical models in phase transition and multi-phase flow are often ill-posed. It is hoped that the inclusion of the nonequilibrium would make the equations well-posed. When this succeeds, the question is then to study the stability and instability of nonlinear waves. These, however, remain challenges for future research. As mentioned above, hyperbolic conservation laws with relaxation have some dissipative behavior not dissimilar to viscous conservation laws. However, the former is less parabolic than the latter. There is a hierarchy of hyperbolicity and parabolicity. Mathematical analysis suitable to study the general nonlinear behavior of these various degrees of hyperbolicity and parabolicity of these physical models remains far from complete. Here we mention a specific problem. Consider a simple model as in [14],

(4.1) $\frac{\partial v}{\partial t} + \frac{\partial f}{\partial x}(v, w) = 0, \qquad \frac{\partial w}{\partial t} + \frac{\partial g}{\partial x}(v, w) = h(v, w),$

where $v$ represents the conserved quantity and $w$ the relaxing quantity, and $h(v, w)$ may take the form

$h(v, w) \sim \frac{w^*(v) - w}{\tau(v)},$

$\tau(v) > 0$ the relaxation time and $w^*(v)$ the equilibrium value for $v$.
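The dissipative character of relaxation can be seen in a short computation. The sketch below integrates a linear instance of (4.1) by splitting: exact transport in characteristic variables plus exact integration of the stiff source term. The fluxes $f = w$, $g = a^2 v$ and the equilibrium $w^*(v) = cv$, $|c| < a$, are made-up illustrative choices; as $\tau \to 0$ the computed $v$ should approach a solution of $v_t + (cv)_x = 0$.

```python
import numpy as np

# Splitting sketch for the relaxation system (4.1) with linear fluxes:
#   v_t + w_x = 0,   w_t + a^2*v_x = (c*v - w)/tau   (illustrative choices).
a, c, tau = 1.0, 0.5, 1e-3
N = 400
dx = 1.0 / N
dt = 0.4 * dx / a
x = np.linspace(0.0, 1.0, N, endpoint=False)
v = np.where(np.abs(x - 0.3) < 0.1, 1.0, 0.0)
w = c * v                                   # start at equilibrium

def dx_upwind(f, speed):                    # first-order upwind, periodic
    return (f - np.roll(f, 1)) / dx if speed > 0 else (np.roll(f, -1) - f) / dx

for _ in range(400):
    # transport step in characteristic variables (speeds +a and -a)
    r, s = w + a * v, w - a * v
    r -= dt * a * dx_upwind(r, +1.0)        # r_t + a*r_x = 0
    s += dt * a * dx_upwind(s, -1.0)        # s_t - a*s_x = 0
    v, w = (r - s) / (2.0 * a), (r + s) / 2.0
    # stiff relaxation step, integrated exactly
    w = c * v + (w - c * v) * np.exp(-dt / tau)
```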
One may view the system as a perturbation of the equilibrium conservation law

(4.2) $\frac{\partial v}{\partial t} + \frac{\partial}{\partial x} f(v, w^*(v)) = 0.$

The question is to show that solutions of (4.1) tend to solutions of (4.2) in the limit of the relaxation time $\tau(v) \to 0^+$. Because the perturbation (4.1) of (4.2) is more singular than that of (3.1) of (2.1), the theory of compensated compactness has not been shown to work.
5. Convective Reaction-Diffusion. When reaction is present in viscous conservation laws, the system becomes a system of convective reaction-diffusion equations, that is, a viscous conservation law with a reaction source term.
Such a model occurs in important physical situations such as combustion. Nonlinear waves for the system can be highly unstable, see articles on combustion in this volume. In [7] a model arising from the theory for gas flow through a nozzle is studied. It turns out that the inviscid theory offers a guide for the stability and instability of waves for such a viscous model. The theory for nozzle flow, see [7] and references therein, provides the mathematical basis for the occurrence of unstable waves for gas flows. For nozzle flow, instability results from the geometric effect of the nozzle. In
combustion the reactions are chemical and have highly unstable effects. Mathematical study of such behavior remains largely a challenge.
6. Nonstrict Hyperbolicity. Studies of conservation laws which are nonstrictly hyperbolic are centered mostly on the important Riemann problem, see articles in this volume. The effects of viscosity on the behavior of overcompressive waves have been studied, see the article of Liu and Xin in the proceedings of the last IMA workshop on equations of mixed type. Recently overcompressive shock waves have been shown to occur in MHD and nonlinear elasticity, see also articles in the aforementioned volume. It would be interesting to study the effects of viscosity for these systems as well as for other systems with crossing shocks. 7.
Concluding Remarks. We have seen several types of equations which carry shock waves; and there are more. The classification of these equations into hyperbolic, parabolic or mixed types tells part of the story. Coupling of different modes of waves, and dissipation induced by nonlinearity, relaxation, viscosity, etc., are also important elements in shock wave theory. It is important to recognize the elementary modes of a general flow whenever possible. Even though the progress so far has been very substantial, many more fundamental questions remain to be answered. One hopeful sign is
that several different approaches are available now. The present article emphasizes the geometric approach to shock waves. Undoubtedly new progress will be made based on the basic understanding of
the available techniques illustrated in the articles in this volume.
REFERENCES
[1] CHERN, I.-L., AND LIU, T.-P., Convergence to diffusion waves of solutions for viscous conservation laws, Comm. Math. Phys., 110 (1987), pp. 503-517.
[2] DIPERNA, R., Uniqueness of solutions to hyperbolic conservation laws, Indiana U. Math. J., 28 (1979), pp. 244-257.
[3] GLIMM, J., Solutions in the large for nonlinear systems of equations, Comm. Pure Appl. Math., 18 (1965), pp. 95-105.
[4] GLIMM, J., AND LAX, P.D., Decay of solutions of nonlinear hyperbolic conservation laws, Amer. Math. Soc. Memoir, No. 101 (1970).
[5] GOODMAN, J., AND XIN, Z., (preprint).
[6] HOFF, D., AND LIU, T.-P., The inviscid limit for the Navier-Stokes equations of compressible isentropic flow with shock data, Indiana U. Math. J. (1989).
[7] HSU, S.-B., AND LIU, T.-P., Nonlinear singular Sturm-Liouville problem and application to transonic flow through a nozzle, Comm. Pure Appl. Math. (1989).
[8] LAX, P.D., Hyperbolic systems of conservation laws, II, Comm. Pure Appl. Math., 10 (1957), pp. 537-566.
[9] LIN, L., On the vacuum state for the equation of isentropic gas dynamics, J. Math. Anal. Appl., 120 (1987), pp. 406-425.
[10] LIU, T.-P., Deterministic version of Glimm scheme, Comm. Math. Phys., 57 (1977), pp. 135-148.
[11] LIU, T.-P., Linear and nonlinear large time behavior of general systems of hyperbolic conservation laws, Comm. Pure Appl. Math., 30 (1977), pp. 767-798.
[12] LIU, T.-P., Admissible solutions of hyperbolic conservation laws, Amer. Math. Soc. Memoir, No. 240 (1981).
[13] LIU, T.-P., Nonlinear stability of shock waves for viscous conservation laws, Amer. Math. Soc. Memoir, No. 328 (1985).
[14] LIU, T.-P., Hyperbolic conservation laws with relaxation, Comm. Math. Phys., 108 (1987), pp. 153-175.
[15] LIU, T.-P., AND XIN, Z., Nonlinear stability of rarefaction waves for compressible Navier-Stokes equations, Comm. Math. Phys., 118 (1988), pp. 415-466.
Abstract. In fluid flows one can often identify surfaces that correspond to special features of the flow. Examples are boundaries between different phases of a fluid or between two different fluids,
slip surfaces, and shock waves in compressible gas dynamics. These prominent features of fluid dynamics present formidable challenges to numerical simulations of their mathematical models. The
essentially nonlinear nature of these waves calls for nonlinear methods. Here we present one such method which attempts to explicitly follow (track) the dynamic evolution of these waves (fronts).
Most of this exposition will concentrate on one particular implementation of such a front tracking algorithm in two space dimensions, where the fronts are one-dimensional curves. This is the code associated
with J. Glimm and many co-workers.
Introduction. In fluid flows one can often identify surfaces of co-dimension one that correspond to prominent features in the flow. Examples are boundaries between different phases of a fluid or
between two different fluids, slip surfaces, shock curves in compressible gas dynamics. All such surfaces are characterized by significant changes in the flow variables over length scales small
compared to the flow scale. For example in oil reservoirs the oil banks have a size of 10 meters compared to an average length scale of 10 kilometers; or in compressible gas dynamics shock waves have
a width of $10^{-5}$ cm compared to a length scale of 10 cm. The dynamics of such waves may be influenced by their internal structures. Whereas for shock waves the speed depends on the asymptotic states
to the left and right, for two dimensional detonation waves the speed depends also on the chemistry and curvature, [B], [J]. There are situations where it is necessary to take these physical aspects
of the flow into account when doing a numerical simulation. A simple model for nonlinear wave propagation is Burgers' equation

$u_t + u u_x = \nu u_{xx},$

where the state variable $u$ is convected with characteristic speed $u$ and diffused with viscosity $\nu$. Because of the dependence of the characteristic speed on the state variable one obtains a focusing effect that leads to the formation of shock waves. Consider initially a wave of length $L$ (see Fig. 1). The monotone decreasing part of the wave will steepen such that in a thin layer the solution rapidly decreases from a value $u_l$ to $u_r$. The width $w$ of this layer is about $\nu/(u_l - u_r)$, and this layer moves with speed $s = \frac{1}{2}(u_l + u_r)$. If $w \ll L$, we may approximate the layer by a jump from $u_l$ to $u_r$ and consider the inviscid limit by neglecting $\nu$ to obtain the inviscid Burgers equation $u_t + u u_x = 0$.
*Department of Applied Mathematics, University of Heidelberg, Im Neuenheimer Feld 294, D-6900 Heidelberg, Germany. †Department of Applied Mathematics and Statistics, SUNY at Stony Brook, Stony Brook, NY 11794.
Fig. 1 The evolution of the initial data (left) under $u_t + u u_x = \nu u_{xx}$
is given on the right.
Now data as in Fig. 1 leads to jumps in the solution, where the Rankine-Hugoniot conditions govern the relationship between the speed of the jump and its left and right asymptotic states. When computing such a flow with very small viscosity $\nu$, suppose we represent the state variables associated with points on a fixed underlying grid with spacing $\Delta x$. In this framework we would like to contrast two numerical methodologies: shock capturing and shock tracking. In the shock capturing methods $\nu$ is replaced by a numerical viscosity $\nu_{num} \gg \nu$. The width of a shock layer is then $w_{num} = \nu_{num}/(u_l - u_r) \approx 3\Delta x$, so that captured waves are resolved accurately only when they are weak. In a shock tracking method an additional moving grid point is introduced which serves as a marker for the shock position. The algorithm has to
update its position and the asymptotic left and right states on the underlying fixed grid. To move the shock point, analytic information about it is necessary. Shock tracking corresponds to
replacing v by zero, so it is best for strong waves and gives a high resolution on relatively coarse grids. The front tracking principle, which is not limited to conservation laws or to shocks, is
that a lower dimensional grid gets fit to and follows the significant features in the flow. This is coupled with a high quality interior scheme to capture the waves that are not tracked. In the
following we talk only about front tracking in two space dimensions. First we describe tracking of a single wave and mathematical issues arising from this. Next we discuss tracking wave interactions
and its mathematical issues. Then follows a section describing the data structure of a front tracking code. After a few numerical examples we give a conclusion.
Front tracking applied to a single wave. Suppose we consider an expanding cylindrical shock wave for a certain time interval. Say this is modeled by the two dimensional Euler equations for polytropic gas dynamics where the outstanding feature of the flow is a shock wave with smooth flow in front of and behind it. If the numerical simulation requires a high level of resolution on a moderate size grid,
front tracking lends itself to this problem. To this end a one dimensional grid is fitted to the shock wave and follows its dynamic evolution. The smooth flow is captured using an underlying two
dimensional grid, where in each time step an initial-boundary value problem is solved in each smooth component of the flow field. The front is represented by a finite number of points along the
curve, which carry with them physical data, in this case the left and right states and the fact that it is a hydrodynamic shock wave. Say the underlying grid is Cartesian, which carries the associated state variables at each grid point. Each timestep consists of a front propagation and an interior update.
THE CURVE PROPAGATION is achieved by locally, at each curve point, rewriting the equation in a rotated coordinate system, normal and tangential to the front:

$u_t + n\cdot\big((n \cdot \nabla) f(u)\big) + s\cdot\big((s \cdot \nabla) f(u)\big) = 0.$

This then gets solved through dimensional splitting. The normal step reduces to a one dimensional Riemann problem, if one approximates the data to the left and right of the shock by constants.
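For a scalar model the exact similarity solution of this Riemann problem is elementary; the sketch below is a Burgers stand-in for the gas-dynamics Riemann solver used in the normal step (the function name is illustrative).

```python
# Exact similarity solution u(x/t) of the Riemann problem for the inviscid
# Burgers equation u_t + (u^2/2)_x = 0 -- a scalar stand-in for the
# gas-dynamics Riemann solver used in the normal propagation step.
def burgers_riemann(ul, ur, xi):
    """Value of the self-similar solution at xi = x/t."""
    if ul > ur:                       # shock; Rankine-Hugoniot speed
        s = 0.5 * (ul + ur)
        return ul if xi < s else ur
    # rarefaction fan between the characteristic speeds ul and ur
    if xi <= ul:
        return ul
    if xi >= ur:
        return ur
    return xi

# e.g. states straddling a tracked point, sampled on the t-axis (xi = 0):
print(burgers_riemann(1.0, 0.0, 0.0))   # shock moving right at s = 0.5 -> 1.0
```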
Fig. 2 A second order scheme for the normal propagation of a hydrodynamic shock wave, [CG].
This normal step can be made into a second order scheme in the following way [CG], see Fig. 2:
- first solve a Riemann problem to obtain the speed and approximate states at $t = t_1$;
- follow the characteristics from the left and right states at $t = t_1$ back to $t = t_0$ and use the data at their feet to obtain updated left and right states at $t = t_1$;
- finally solve a Riemann problem at $t = t_1$ to improve the states and speed there.
After the normal step has been implemented at all points representing the shock curve, the tangential step, which
propagates surface waves, is done by a one dimensional finite difference scheme on each side of the front. Because points on the front may move too far apart (or too close together) during
propagation, a routine which redistributes the points along the curve is sometimes useful. One has to be cautious though, because this routine stabilizes the curve which may tend to become unstable
due to physical or numerical effects. THE INTERIOR SCHEME. The underlying principle is to solve an initial-boundary value problem on both sides of the front (the front is a moving boundary), and to
never use states on the opposite side of the front. Away from the front this is readily achieved by using any finite difference scheme compatible with the resolution one needs in the interior. Near
the front an algorithm which is consistent with the underlying partial differential equation has yet to be worked out. The following recipe has been implemented successfully (see Fig. 3): suppose the
stencil gets cut off by the front. Use the states at the nearest crossing point (obtained through linear interpolation from the front states) and place them at the missing stencil points.
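A one dimensional caricature of this recipe, with made-up names and a front located between two grid points, looks as follows; the actual code works on two dimensional stencils and interpolates the crossing states along the front.

```python
import numpy as np

# 1-D sketch of the stencil-filling recipe: grid points on the other side
# of a tracked front are replaced by the front state belonging to this
# side, so the interior stencil never reads across the front.
def filled_stencil(u, side, i, front_pos, left_state, right_state, dx):
    """Return (u[i-1], u[i], u[i+1]) for updating point i on `side`."""
    vals = []
    for j in (i - 1, i, i + 1):
        same_side = (j * dx < front_pos) == (side == 'left')
        if same_side:
            vals.append(u[j])
        else:
            vals.append(left_state if side == 'left' else right_state)
    return tuple(vals)

u = np.array([1.0, 1.0, 1.0, 0.2, 0.2])
print(filled_stencil(u, 'left', 2, front_pos=2.5, left_state=1.0,
                     right_state=0.2, dx=1.0))    # -> (1.0, 1.0, 1.0)
```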
Fig. 3 A five point centered stencil near the front, where the states on the front are assigned to the two grid points on the opposite side of the front. So far two papers have addressed the
front-interior coupling problem in two space dimensions: [CC] suggest and implement a coupling which is conservative for gas dynamics. [KZ] have formulated a class of front tracking schemes for which
they show stability. Mathematical issues related to this. In the previous section we saw that this approach leads to the study of one dimensional Riemann problems. This is a
special Cauchy problem of the type

$u_t + f(u)_x = 0, \qquad u(x, 0) = \begin{cases} u_L, & x < 0, \\ u_R, & x > 0. \end{cases}$

Since the equation and initial data are invariant under the scaling

$(x, t) \to (ax, at), \quad a > 0,$
we may expect scale invariant solutions. These are well understood e.g. for the scalar equation and for gas dynamics. There is a considerable research effort trying to understand the Riemann
solutions of more complicated models. One example is the 2 x 2 systems with quadratic flux functions studied by various authors, e.g. [IM], [IT]. New interesting mathematical phenomena arise:
- non-classical waves
- non-contractible discontinuous waves, i.e. it is not possible to decrease the wave strength to zero while following a connected branch of the wave curve
- open existence and uniqueness questions.
Another example is Riemann solvers for equations describing conservation of mass, momentum and energy in real materials. Their effects on the wave structure have been studied, [MP]. In another approach the equation of state is tabulated (the SESAME code at Los Alamos). Scheuermann used this for a Riemann solver by preprocessing the data. Finally we mention certain waves where
the internal structure of the waves plays a role. Whereas, say, for shock waves of isentropic gas dynamics the two jump equations plus the three pieces of information given by the impinging characteristics determine the four state variables on both sides of the shock together with its speed, for transitional shock waves not enough information impinges through the characteristics and one needs
information from the internal structure in order to determine speed and states. The structure depends sensitively on the viscosity used in the parabolic approximation. These waves thus present a
danger for finite difference schemes, which introduce their own brand of viscosity which is different for different schemes. Here a tracking algorithm which mimics the structure with a Riemann
solver lends itself naturally to this problem. The front tracking method described so far could also be applied to more complex flow patterns than the expanding cylindrical shock wave by simply tracking a single front and capturing all other phenomena using a high quality interior scheme. An example is the Euler equations coupled with complex chemistry used to model the flow around a hypersonic projectile [Zhu]. Here the hydrodynamic bow shock is tracked and the flow, with most of the chemistry concentrated right behind this shock, is captured. This is an example where tracking of the bow shock is necessary.
Wave interaction. One can also track interacting waves. To illustrate this consider a planar shock wave impinging on a curved ramp (Fig. 4), giving rise first to a regular and then to a Mach reflection. This is an example of how new curves may arise. For hydrodynamic shock waves this bifurcation may arise through the intersection of shocks with each other or with other "curves", or
through compressive waves ("blow up" of the smooth solution). If one wants to incorporate these phenomena into a front tracking algorithm it is necessary to understand them mathematically. For
example in the case of the planar shock impinging onto the wedge, one needs a criterion which gives for given shock strength the ramp-angle when a bifurcation from regular to Mach reflection occurs.
If one wants to track all the waves, the algorithm needs to have this criterion built in.
Fig. 4 A planar shock impinges onto a wedge, and, depending on the shock strength and wedge angle, gives rise either to a regular reflection (left) or a Mach reflection (right). In the latter the
reflected point has lifted off the wall to become a "triple point" from which a "Mach stem" connects to the wall. This is an example of a two dimensional Riemann problem. In general, at the meeting
point of more than two curves, if one approximates the curves by rays and the states nearby by constant states, these nodes are examples of two dimensional Riemann problems. As in the one dimensional case, this is scale invariant Cauchy data,

$(x, y, t) \to (ax, ay, at), \quad a > 0,$

giving rise to a self similar solution

$u = u\left(\frac{x}{t}, \frac{y}{t}\right).$
Thus front tracking may lead to two dimensional Riemann problems. Mathematical issues related to this. There has been some progress on studying the qualitative behavior of two dimensional Riemann
problems. For the equations of compressible inviscid, polytropic gas dynamics, in analogy to the one dimensional Riemann problem which is resolved by elementary waves, one expects that the two
dimensional Riemann problem will evolve into a configuration containing several two dimensional elementary waves. To this end these elementary waves were completely classified [GK]; some of them
can already be found in [L]. For the scalar two dimensional conservation law the two dimensional Riemann
problem could be solved much further. For

$u_t + f(u)_x + g(u)_y = 0$

with $f = g$ it was solved in [W] ($f$ convex), [L1], [L2] ($f$ with one inflection point), [KO] ($f$ with any number of inflection points). For $f \ne g$, [W] ($f$ close to $g$, $f$ convex) and [KO], [CH] ($f$ convex, $g$ with one
inflection point) gave solutions.
Numerical implementation. This knowledge of two dimensional Riemann problems has been used in front tracking codes to some extent. The classification of elementary waves for gas dynamics gave a list of the generic nodes one can expect there, that is, all generic meeting points of shock waves, contact discontinuities and centered rarefaction waves. The tracking of a node is the numerical solution of a subcase of the full Riemann problem: one has to determine the velocity and states associated with one specific elementary wave. For gas dynamics this has been done [GK], [G1], [G2]. For the scalar two dimensional conservation law the resolution of the two dimensional Riemann problem caused by the crossing of two shocks has been implemented. Whereas in [K] the point is to solve the interaction of two scalar waves quite accurately, in [GG] the emphasis is on following scalar wave interaction within a complicated topology of curves in a robust fashion without an unacceptable proliferation of subcases. An approximate numerical solution to a general two dimensional Riemann problem was implemented by approximating the flux functions by piecewise
linear functions [R].
Computer science issues related to front tracking. Here we briefly describe a package of subroutines which provides facilities for defining, building and modifying decompositions of the plane into disjoint components separated by curves. It is worth noting that ideas from conceptual mathematics, symbolic computation and computer science have been utilized, thereby going beyond the usual numerical analysis framework, see [GM].
Fig. 5 The front tracking representation of a Mach reflection.

Taking the Mach reflection example (Fig. 4), we illustrate in Fig. 5 the representation of this particular flow. The front consists of piecewise linear curves; at the endpoints of each linear piece we have associated quantities like states and wave types. Given this interface, the plane is decomposed into disjoint components. An integer component value is associated with each such component. Given any point (x, y) in the plane, the component value can be recovered. The underlying grid and possible interpolating grids near the front allow the definition of associated state variables in the interior. There is a recursive data structure. It consists of

POINT, which denotes the position of a grid point on the curve;
BOND, which denotes the piece of the curve between two adjacent points; it is given by a start and an end point and has a pointer to the next and previous bond;
CURVE, denoting usually a piece of the interface homotopic to an interval. A curve is a doubly-linked list of bonds, given by a start and an end node (see below). It has a pointer to the first and last bond;
NODE, which is the position of a point on the interface where more than two curves meet. Its position is given together with a list of in and out curves;
INTERFACE, which is a list of nodes and curves.
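This description maps naturally onto linked data structures. The following is a minimal sketch, assuming C; all type and field names are hypothetical illustrations, not the identifiers of the actual package described in [GM].

    /* Minimal sketch of the front tracking interface hierarchy.
       All names are illustrative, not those of the actual package. */

    typedef struct Point {
        double x, y;                     /* position of a grid point on a curve */
    } Point;

    typedef struct Bond {
        Point *start, *end;              /* piece of curve between two adjacent points */
        struct Bond *prev, *next;        /* doubly-linked list along the curve */
    } Bond;

    typedef struct Node Node;

    typedef struct Curve {
        Bond *first, *last;              /* pointers to the first and last bond */
        Node *start, *end;               /* the nodes the curve runs between */
        int wave_type;                   /* physics dressing, e.g. shock or contact */
    } Curve;

    struct Node {
        Point position;                  /* point where more than two curves meet */
        Curve **in_curves, **out_curves; /* lists of in and out curves */
        int num_in, num_out;
    };

    typedef struct Interface {
        Curve **curves; int num_curves;  /* the interface is a list of ... */
        Node  **nodes;  int num_nodes;   /* ... curves and nodes */
    } Interface;

With such a layout, the physics of a particular problem is attached by widening the Point and Curve records (left and right states, wave type) exactly as the text describes.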
Then there are routines that operate on the interface structure. There are routines that allocate and delete the above structures, then those which add these to the interface, and routines that split and join bonds and curves, all needed, for example, when there is a change in topology. Also one can traverse a list of the above structures. The code has purposely been set up in such a way that this interface data structure can be dressed with the physics of a given problem containing curves. For gas dynamics one would associate with each point a left and right state, with each curve the wave type, and at the node the state in each sector, in order to have the setup for the Riemann problem. This whole structure now needs routines which allow the interface to propagate from one timestep to the next. This is done by first moving the interface, which means moving bonds and nodes. Next the interior is updated. Then one has to handle possible interactions and bifurcations. These have to be detected, classified (they could be tangles of curves or two dimensional Riemann problems) and then resolved. There is also a routine which redistributes points on the interface, in case they become too close together or too far apart.
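As one concrete illustration of the last routine, here is a sketch of a redistribution pass over a single curve, again with hypothetical names (join_bonds and split_bond stand in for whatever split/join primitives the package provides); it uses the structures sketched above.

    /* Sketch: keep point spacing on a curve between lo and hi.
       join_bonds and split_bond are assumed primitives of the package. */
    #include <math.h>

    extern void join_bonds(Curve *c, Bond *b, Bond *next);  /* delete shared point */
    extern void split_bond(Curve *c, Bond *b);              /* insert a midpoint */

    static double bond_length(const Bond *b) {
        double dx = b->end->x - b->start->x;
        double dy = b->end->y - b->start->y;
        return sqrt(dx * dx + dy * dy);
    }

    void redistribute_curve(Curve *c, double lo, double hi) {
        for (Bond *b = c->first; b != NULL; b = b->next) {
            double len = bond_length(b);
            if (len < lo && b->next != NULL)
                join_bonds(c, b, b->next);   /* points too close together */
            else if (len > hi)
                split_bond(c, b);            /* points too far apart */
        }
    }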
Numerical examples. We shall give four examples out of many that have been calculated over the years with the code. Fig. 6 shows regular and Mach reflection [GK]. Fig. 7 shows an underwater explosion [G2]. Fig. 8 shows Rayleigh-Taylor instability [FG]. Fig. 9 shows an example from oil reservoir modelling [GG].
Fig. 6 On the left, the numerical simulation of regular reflection, where the incident shock has Mach number 2.05 and the wedge angle is 63.4°. The calculation was performed on an 80 by 20 grid. The picture shows lines of constant density inside the bubble formed by the reflected shock. On the right, the numerical simulation of a Mach reflection, where the incident shock has Mach number 2.03 and the wedge angle is 27°. Inside the bubble formed by the reflected shock the calculated lines of constant density are shown. The calculations were performed on a 60 by 40 grid. In both cases there is excellent agreement with experiments.
(Fig. 7 panels: (a) time 0.0 msec, (b) time 7.5 msec, (c) time 15.0 msec, (d) time 50.0 msec; scale 20Δx = 20Δy.)
Fig. 7 An underwater expanding shock wave diffracting through the water's surface. The internal pressure is 100 kbars and the initial radius 1 meter, placed 10 meters below the water's surface. The tracked front, in dark lines, is superimposed over lines of constant pressure. The grid is 60 by 120.
(Fig. 8 panels at times t = 0, 12, 18, 24, showing light fluid and heavy fluid; scale 10Δx = 10Δy.)
Fig. 8 Two compressible fluids of different densities, with gravitational forces (here pointing upward) pushing the lighter fluid into the heavy one. The interface is initialized by 14 bubbles with different wavelengths and initial amplitude 0.01. The density ratio is 10. The interface between these fluids is unstable and leads to a mixing layer, with bubbles of light fluid rising in the heavy fluid.
(Fig. 9 panels (a)-(d) at steps 0, 40, 80, 240.)
Fig. 9 A horizontal cross section of an oil reservoir modeled by the Buckley-Leverett equations. Water gets injected at 19 injection wells (crossed squares), displacing the oil in the porous medium, and oil gets extracted at 12 producing wells (open squares). Plots of the fronts between water and oil are shown. The frontal mobility ratio for water displacing oil is 1.33.

Conclusion. It should have become clear that this numerical approach forces one to think hard about the underlying physics and mathematics. If one is successful at penetrating the problem at hand, front tracking can give the correct simulation with very high resolution.

REFERENCES
BUKIET, The effect of curvature on detonation speed, SIAM J. Appl. Math. 49 (1989).
[CH] CHANG, HSIAO, The Riemann problem and interaction of waves in gas dynamics, John Wiley, New York, 1989.
CHERN, COLELLA, A conservative front tracking method for hyperbolic conservation laws, J. Comp. Phys. (1989).
CHERN, GLIMM, McBRYAN, PLOHR, YANIV, Front tracking for gas dynamics, J. Comp. Phys. 62 (1986).
[FG] FURTADO, GLIMM, GROVE, LI, LINDQUIST, MENIKOFF, SHARP, ZHANG, Front tracking and the interaction of nonlinear hyperbolic waves, NYU preprint (1988).
[GM] GLIMM, McBRYAN, A computational model for interfaces, Adv. Appl. Math. 6 (1985).
[GK] GLIMM, KLINGENBERG, McBRYAN, PLOHR, SHARP, YANIV, Front tracking and two dimensional Riemann problems, Adv. Appl. Math. 6 (1985).
[GG] GLIMM, GROVE, LINDQUIST, McBRYAN, TRYGGVASON, The bifurcation of tracked scalar waves, SIAM J. Sci. Stat. Comp. 9 (1988).
[G1] GROVE, The interaction of shock waves with fluid interfaces, Adv. Appl. Math. (1990).
[G2] GROVE, Anomalous reflection of a shock wave at fluid interfaces, Los Alamos preprint LA-UR 89-778 (1989).
ISAACSON, MARCHESIN, PLOHR, TEMPLE, The classification of solutions of quadratic Riemann problems I, MRC Report (1985).
ISAACSON, TEMPLE, The classification of solutions of quadratic Riemann problems II, III, to appear, SIAM J. Appl. Math.
JONES, Asymptotic analysis of an expanding detonation, NYU DOE report (1987).
[KO] KLINGENBERG, OSHER, Nonconvex scalar conservation laws in one and two space dimensions, Proc. 2nd Int. Conf. Nonlin. Hyp. Probl., ed. Ballmann, Jeltsch, Vieweg Verlag (1989).
KLINGENBERG, ZHU, Stability of difference approximations for initial boundary value problems applied to two dimensional front tracking, Proc. 3rd Int. Conf. on Hyp. Problems, ed. Gustafsson (1990).
[L] LANDAU, LIFSHITZ, Fluid Mechanics, Addison Wesley (1959).
[L1] LINDQUIST, The scalar Riemann problem in two space dimensions, SIAM J. Anal. 17 (1986).
[L2] LINDQUIST, Construction of solutions for two dimensional Riemann problems, Adv. Hyp. PDE and Math. with Appl. 12A (1986).
MENIKOFF, PLOHR, Riemann problem for fluid flow of real materials, Los Alamos preprint LA-UR-8849 (1988).
[R] RISEBRO, The Riemann problem for a single conservation law in two space dimensions, preprint, Freiburg, Germany, May 1988.
[W] WAGNER, The Riemann problem in two space dimensions for a single conservation law, SIAM J. Math. Anal. 14 (1983).
[Zhu] ZHU, CHEN, WARNATZ, Some computed results of nonequilibrium gas flow with a complete model, SFB 123 Heidelberg University preprint 530 (July 1989).
Introduction. It is evident from the lectures at this meeting that the subject of systems of hyperbolic conservation laws is flourishing as one of the prototypical examples of the modern mode of
applied mathematics. Research in this area often involves strong and close interdisciplinary interactions among diverse areas of applied mathematics including (1) Large (and small) scale computing
(2) Asymptotic modelling (3) Qualitative modelling (4) Rigorous proofs for suitable prototype problems combined with careful attention to experimental data when possible. In fact, the subject is
developing at such a rapid rate that new predictions of phenomena through a combination of theory and computations can be made in regimes which are not readily accessible to experimentalists.
Pioneering examples of this type of interaction can be found in the papers of Grove, Glaz, and Colella in this volume as well as the recent work of Woodward, Artola, and the author ([1], [2], [3],
[4], [5], [6]). In this last work, novel mechanisms of nonlinear instability in supersonic vortex sheets have been documented and explained very recently through a sophisticated combination of
numerical experiments and mathematical theory. Here I will discuss my own perspective on several open problems in the field of hyperbolic conservation laws which involve the interaction of ideas in
modern applied mathematics. Since the audience at the meeting consisted largely of specialists in nonlinear P.D.E. and analysis, I will mostly emphasize open problems which represent rigorous proofs
for prototype situations. I will concentrate on open problems in three different areas: 1) Self-similar patterns in shock diffraction; 2) Oscillations for conservation laws; 3) New phenomena in
conservation laws with source terms. In the first section, I will give the compressible Euler equations as the prototypical example of a system of conservation laws in several space variables and
then describe several approximations such as isentropic, potential flows which yield other related hyperbolic conservation laws. I will also discuss the nature of these approximations in
multi-dimensions. This material may not be well known to the reader and provides background material for some of the open problems discussed in the remaining sections. Each of the remaining three sections is devoted to my own perspective on the open problems in the three areas mentioned earlier.*

*Department of Mathematics and Program in Applied and Computational Mathematics, Princeton University, Princeton, NJ 08544; partially supported by grants N.S.F. DMS-8702864, A.R.O. DAAL03-89-K-0013, O.N.R. N00014-89-J-1044.

It was clear from my lectures during the meeting that I regard the mathematical problems associated with turbulence and
vorticity amplification and concentrations as extremely important for future research but they are not emphasized here. The interested reader can consult some of my other research/expository articles
(see [7], [8]) for my perspective on these topics.

Section 1: The Compressible Euler Equations and Related Conservation Laws. A general m × m system of conservation laws in N space variables is given by

(1.1)    u_t + Σ_{j=1}^N (F_j(u))_{x_j} = S(u).

The functions F_j(u), 1 ≤ j ≤ N, are the nonlinear fluxes and S(u) the source terms. These are smooth nonlinear mappings from an open subset of R^m to R^m. For convenience in notation, we have suppressed any explicit dependence on (x, t) of the coefficients in (1.1). The prototypical example of a system of homogeneous conservation laws is the system of N + 2 conservation laws in N space variables given by the compressible Euler equations, expressing conservation of mass, momentum, and total energy:

(1.2)    ρ_t + div m = 0
         m_t + div(m ⊗ m/ρ) + ∇p = 0
         E_t + div((E + p) m/ρ) = 0.

In (1.2), ρ is the density with 1/ρ = τ, the specific volume, v = (v₁, …, v_N) is the fluid velocity with ρv = m the momentum vector, p is the scalar pressure, and E = ½(m·m)/ρ + ρe(τ, p) is the total energy, with e the internal energy, a given function of (τ, p) defined through thermodynamic considerations. For an ideal gas, e = pτ/(γ − 1) with γ > 1, the specific heat ratio. The notation a ⊗ b denotes the tensor product of two vectors. It is well known that for smooth solutions of (1.2) the entropy S(ρ, E) is conserved along fluid particle trajectories, i.e.

(1.3)    DS/Dt = 0,   where   D/Dt = ∂/∂t + Σ_{j=1}^N v_j ∂/∂x_j.
The first simpler system of conservation laws which emerges as an approximation for solutions of the compressible Euler equations is probably well known to the reader. If the entropy is initially a uniform constant and the solution remains smooth, then (1.3) implies that the energy equation can be eliminated. Furthermore, under standard assumptions on the equation of state, the pressure can be regarded as a function of density and entropy, p(ρ, S). Thus, with constant initial entropy S₀, the smooth solution of (1.2) satisfies the system of N + 1 conservation laws in N space variables given by the equations for isentropic compressible flow:

(1.4)    ρ_t + div m = 0
         m_t + div(m ⊗ m/ρ) + ∇p(ρ, S₀) = 0.
For an ideal gas law, p(ρ) = A(S₀) ρ^γ with γ > 1. I remind the reader that solutions of the system in (1.4) are a genuine approximation to solutions of the system in (1.2) once shock waves form, since the entropy increases along a shock to third order in wave strength for solutions of the compressible Euler equations, while in (1.4) the entropy is constant. Next, I present a conservation law which involves a further approximation to solutions of (1.2) beyond the isentropic approximation from (1.4); this approximation is well known in transonic aerodynamics and is called the equation for time-dependent potential flow. First I consider smooth solutions of (1.4) that are irrotational, i.e.

(1.5)    curl v = 0.

With ω = curl v defining the vorticity, the vorticity in a smooth solution of the 3-D Euler equations from (1.2) satisfies

(1.6)    Dω/Dt = (ω·∇)v − ω div v + ρ^{−2} ∇ρ × ∇p,

where p = p(ρ, S). The general formula in (1.6) is readily verified by taking the curl of the momentum equation and using vector identities. One immediate consequence of the equations in (1.6) and (1.3) is that a smooth solution of compressible 3-D Euler which is both isentropic and irrotational at time t = 0 remains isentropic and irrotational for all later times as long as this solution stays smooth; thus, the condition in (1.5) is reasonable for smooth solutions. Next, for smooth irrotational solutions of the equations for isentropic compressible flow, I will integrate the N momentum equations in (1.4) through Bernoulli's law. With the condition curl v = 0, the N momentum equations in (1.4) assume the form

(1.7)A)    v_t + ∇(½|v|² + h(ρ)) = 0,

where h(ρ) satisfies

(1.7)B)    h'(ρ) = p_ρ(ρ, S₀)/ρ > 0.

On a simply connected space region, the condition curl v = 0 implies that

(1.8)    v = ∇φ,

so that (1.7)A) determines the density from the potential through the formula (1.9), with D = (∂_t, ∇).

In the single wave expansion (3.2) of nonlinear geometric optics, the phase φ satisfies the eikonal equation (3.3), and its speed is one of the m wave speeds of the linearization at u₀. The amplitude σ solves a nonlinear
transport equation (3.4). The operator D appearing in (3.4) is a first order operator, namely the linear transport operator of geometric optics, given by differentiation along the bicharacteristic rays associated with φ; it has the form (3.5), with a(x, t), b(x, t), c(x, t) determined from φ by explicit formulas. In the case of single propagating waves, provided b ≠ 0 (which is always true for a genuinely nonlinear wave), there are elementary changes of variable which reduce (3.4) to the inviscid Burgers equation,

(3.6)    σ_s + (½σ²)_θ = 0.

The advantage of utilizing geometric optics as an asymptotic tool in understanding phenomena in the complex general multi-D system in (3.1) is now evident. The solutions of (3.6) are known explicitly and provide general quantitative asymptotic approximations for (3.1) through the equations in (3.2)-(3.5).
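To make the last point concrete, here is a small self-contained numerical illustration (my own sketch, not code from any of the cited papers): a first-order Godunov scheme for the inviscid Burgers equation (3.6) on a periodic grid, which captures the shocks that form from smooth data.

    /* First-order Godunov scheme for sigma_s + (sigma^2/2)_theta = 0, periodic. */
    #include <stdio.h>
    #include <math.h>

    #define N  200
    #define PI 3.14159265358979

    static double godunov_flux(double ul, double ur) {
        /* exact Riemann solver for the convex flux f(u) = u*u/2 */
        if (ul <= ur) {                              /* rarefaction */
            if (ul > 0.0) return 0.5 * ul * ul;
            if (ur < 0.0) return 0.5 * ur * ur;
            return 0.0;                              /* sonic point */
        }
        return (ul + ur > 0.0) ? 0.5 * ul * ul       /* shock, speed (ul+ur)/2 */
                               : 0.5 * ur * ur;
    }

    int main(void) {
        double u[N], unew[N], dx = 1.0 / N, t = 0.0, tend = 0.5;
        for (int i = 0; i < N; i++)
            u[i] = sin(2.0 * PI * (i + 0.5) * dx);   /* smooth periodic data */
        while (t < tend) {
            double cmax = 1e-12, dt;
            for (int i = 0; i < N; i++)
                if (fabs(u[i]) > cmax) cmax = fabs(u[i]);
            dt = 0.9 * dx / cmax;                    /* CFL condition */
            if (t + dt > tend) dt = tend - t;
            for (int i = 0; i < N; i++) {
                double fl = godunov_flux(u[(i + N - 1) % N], u[i]);
                double fr = godunov_flux(u[i], u[(i + 1) % N]);
                unew[i] = u[i] - dt / dx * (fr - fl);
            }
            for (int i = 0; i < N; i++) u[i] = unew[i];
            t += dt;
        }
        for (int i = 0; i < N; i++)                  /* profile with shocks formed */
            printf("%g %g\n", (i + 0.5) * dx, u[i]);
        return 0;
    }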
Obviously, it is an important theoretical problem to justify nonlinear geometric optics for discontinuous solutions of conservation laws. The only rigorous work thus far on this topic is that by Diperna and the author ([27]) for a class of systems in a single space variable, which I will discuss briefly below. An outstanding and very important but accessible open problem is the following

Problem #1: Provide a rigorous justification of the single wave geometric optics expansion in (3.2) for discontinuous initial data.

I believe that this problem is ripe for solution for the following reasons: at this meeting, G. Metivier (see his paper in this volume) has announced the existence of shock front solutions of (3.1) for a uniform time T independent of ε as ε ↓ 0; the existence and structure of the discontinuous approximating solutions of (3.2)-(3.5) is completely understood in the genuinely nonlinear case because the discontinuous solutions of (3.6) are well known; the errors between the approximate and exact solution can probably be estimated through an appropriate multi-dimensional generalization of the estimates utilizing geometric measure theory developed by Diperna and the author ([27]) for systems in a single space variable. Next, I turn to important open
problems regarding the new phenomena which occur when one attempts to build multi-wave approximations for geometric optics with the form

(3.7)    u^ε ≅ u₀ + ε Σ_{p=1}^m σ_p(x, t, φ_p/ε) r_p.

In linear geometric optics the wave patterns in (3.7) superimpose and each amplitude solves the corresponding single wave transport equation of geometric optics. When does this happen in the nonlinear case? The formal asymptotic theory ([28], [29]) predicts that

(3.8)    the single wave patterns of nonlinear geometric optics described in (3.2)-(3.5) superimpose and are non-resonant provided that the amplitudes {σ_p(x, t, θ)}_{p=1}^m have compact support in θ.

The only systematic rigorous justification for discontinuous solutions for geometric optics has been developed by Diperna and the author ([27]) in this non-resonant case in a single space variable. The main theorem from [27] requires the hypothesis that the initial amplitudes {σ_p(x, θ)}_{p=1}^m at time t = 0 have compact support (thus, are non-resonant) and that all m wave fields are distinct and genuinely nonlinear. Under these assumptions Diperna and the author prove that

(3.9)    sup_{t > 0} ‖u^ε(·, t) − u^ε_GO(·, t)‖_{L¹} = o(ε),

where u^ε(x, t) is the solution of the conservation laws with the same initial data constructed by Glimm's method. Here ‖·‖_{L¹} is the L¹ norm. Thus, even for discontinuous initial data, geometric optics is valid uniformly for all time in this non-resonant situation; this is a surprising result! Incidentally, one immediate corollary of this theorem is that, for small amplitude initial data of compact support with size ε, the discontinuous solutions of the isentropic flow equations in (1.4) in one space dimension and the discontinuous solutions of the potential flow equations in (1.10) in one space dimension with the same initial data

(3.10)    agree within o(ε) in the L¹ norm for all time.
This result provides a rigorous justification for some of the approximations described in Section 1 in the special case of a single space variable. Since the isentropic flow equations in (1.4) and the potential flow equations in (1.10) have the same smooth solutions, it is an exercise to check that these two equations have the same single wave expansions for nonlinear geometric optics. With this fact, the corollary in (3.10) follows immediately from (3.9) and the triangle inequality. Some interesting and accessible open problems generalizing the results stated in (3.9) are described at the end of the author's survey article ([24]). I return to the general multi-wave expansions of geometric optics and ask whether new phenomena occur when the non-resonance conditions from (3.8) are no longer satisfied. The answer is yes. Recent research of Hunter, Rosales, and the author ([29], [30]), employing a systematic development of nonlinear geometric optics, reveals more complex effects beyond (3.8); general periodic or almost periodic wave trains do not superimpose but instead interact resonantly. The eikonal equations in (3.3) remain the same, but the amplitudes {σ_p(x, t, θ)}_{p=1}^m no longer solve simple decoupled transport equations like those in (3.4); in fact the different amplitudes resonantly exchange energy through nonlinear interaction and solve a coupled system of quasi-linear integro-differential equations provided that m ≥ 3. Applications to the equations of compressible fluid flow from (1.2) and (1.4) are developed in detail in the above papers in both a single and several space dimensions. As regards the 3 × 3 system from (1.2) describing compressible fluid flow in one space variable, the resonant nonlinear interaction of small amplitude sound waves with small amplitude entropy waves produces additional sound waves which resonantly interact. After some elementary changes of variables, the two sound wave amplitudes σ^±(θ, t)
satisfy the coupled system of resonant equations
(3.11)    σ⁺_t + (½(σ⁺)²)_θ + ∫₀¹ k(θ − y) σ⁻(y, t) dy = 0
          σ⁻_t + (½(σ⁻)²)_θ − ∫₀¹ k(−θ − y) σ⁺(y, t) dy = 0,
where I assume in (3.11) that σ⁺, σ⁻, and k are periodic with period one. The kernel k is a multiple of a rescaled derivative of the initial entropy perturbation; the asymptotics predicts that the entropy perturbation does not change to leading order in time. Recent papers ([31], [32]) which combine small scale numerical computation and several exact solutions reveal surprising new phenomena in the solutions of (3.11) through resonant wave interaction. Thus, the formal predictions from geometric optics for periodic wave trains at small amplitudes for 3 × 3 compressible fluid flow involve surprising new phenomena.
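Readers who wish to reproduce such computations can start from a very crude fractional step discretization of (3.11), sketched below: the Burgers part is advanced by the Godunov scheme of the earlier sketch and the convolution coupling by forward Euler with direct quadrature. The sign convention follows the reconstruction of (3.11) printed above and should be checked against [31] before serious use; all names and parameter choices are my own.

    /* Fractional step sketch for the resonant system (3.11) on a 1-periodic grid. */
    #include <math.h>

    #define M 128

    extern double godunov_flux(double ul, double ur);  /* as in the Burgers sketch */

    static void burgers_step(double u[M], double dt, double dx) {
        double un[M];
        for (int i = 0; i < M; i++) {
            double fl = godunov_flux(u[(i + M - 1) % M], u[i]);
            double fr = godunov_flux(u[i], u[(i + 1) % M]);
            un[i] = u[i] - dt / dx * (fr - fl);
        }
        for (int i = 0; i < M; i++) u[i] = un[i];
    }

    /* One step: sp = sigma^+, sm = sigma^-, k = kernel sampled at j/M. */
    void resonant_step(double sp[M], double sm[M], const double k[M], double dt) {
        double dx = 1.0 / M, cp[M], cm[M];
        for (int i = 0; i < M; i++) {                /* convolutions by quadrature */
            cp[i] = cm[i] = 0.0;
            for (int j = 0; j < M; j++) {
                cp[i] += k[((i - j) % M + M) % M] * sm[j] * dx;  /* k(th-y) s^-(y) */
                cm[i] += k[((-i - j) % M + M) % M] * sp[j] * dx; /* k(-th-y) s^+(y) */
            }
        }
        burgers_step(sp, dt, dx);                    /* transport half-steps */
        burgers_step(sm, dt, dx);
        for (int i = 0; i < M; i++) {                /* coupling half-steps */
            sp[i] -= dt * cp[i];                     /* "+ integral" in (3.11) */
            sm[i] += dt * cm[i];                     /* "- integral" in (3.11) */
        }
    }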
The open problems which I suggest next are motivated by these new phenomena. Since conservation laws without source terms are scale invariant, I propose some open problems
for the rigorous justification of nonlinear geometric optics for the resonant case by considering solutions of the M × M system of conservation laws in a single space variable

(3.12)A)    u_t + F(u)_x = 0

with small amplitude periodic initial data

(3.12)B)    u^ε(x, 0) = u₀ + ε ũ₀(x).

Here u₀ ∈ R^M is a constant and ũ₀(x) is a function with period one, i.e. ũ₀(x + 1) = ũ₀(x).
Problem #2: For a general system of conservation laws, let u^ε denote the weak solution with initial data in (3.12)B) and let u^ε_GO denote the corresponding approximation from nonlinear geometric optics (involving resonant wave interaction for m ≥ 3 in general). Show that there is a time T(ε) with εT(ε) → ∞ as ε → 0 so that

(3.13)    max_{0 ≤ t ≤ T(ε)} ‖u^ε − u^ε_GO‖_{L¹} ≤ o(ε),

where L¹ denotes the L¹ norm of a one-periodic function in x.

I make several remarks on this problem. For m = 2, where the resonant effects are absent, and for a pair of genuinely nonlinear conservation laws, the estimate in (3.13) has been proved in [27] with T(ε) = O(ε⁻²); it would be interesting to know if this is sharp. Furthermore, there is an improved geometric optics formal approximation for large times due to Cehelsky and Rosales (see [33]) which accounts for accumulating phase shifts from wave interactions, and this geometric optics approximation u^ε_GO should be used in Problem #2. In fact, the result of Diperna and the author for periodic waves for pairs of conservation laws does not utilize this more refined geometric optics approximation with phase shift corrections for long times. An interesting and much more accessible technical problem than Problem #2 is to assess whether, through the use of this refined geometric optics approximation for m = 2, the time of validity T(ε) becomes significantly larger than T(ε) = O(ε⁻²). One of the reasons that the work in [27] for the periodic case is restricted to m = 2 is that general existence theorems for small periodic initial data for conservation laws following Glimm's work are unknown for m ≥ 3. A straightforward repeat of Glimm's proof shows that the solution u^ε of the system of conservation laws in (3.12)A) with general initial data exists for times of order O(ε⁻²). I conjecture that for a general system of conservation laws with genuinely nonlinear and linearly degenerate wave fields, this crude result is sharp; my conjecture is based on the unstable nature of solutions of the resonant asymptotic equations for a particular example system discussed in [30]. It would be very interesting to find out whether this conjecture is correct. On the other hand, I believe that there is global existence for the 3 × 3 system of compressible fluid flow, (1.2), for
small amplitude periodic initial data as given in (3.12)B). This I list as

Problem #3: Show that for the specific 3 × 3 system of compressible fluid flow, Glimm's method yields the global existence of solutions for general small amplitude periodic initial data.

I believe that Problem #2 is too difficult to attack in full generality; the special and important case of 3 × 3 gas dynamics is already extremely interesting. For emphasis I state this as
Problem #4: For the 3 × 3 system of compressible fluid flow, let u^ε_GO denote the resonant geometric optics approximation for the initial data in (3.12)B) given through (3.11) (see [30]), but including the large time phase shift corrections of Cehelsky-Rosales ([33]). Let u^ε be the weak solution of (3.12)A) with the same initial data, which exists for times of order ε⁻². Find a time interval T(ε) with εT(ε) → ∞ as ε → 0 so that u^ε_GO differs from u^ε by o(ε) on that time interval.
I remind the reader that from my earlier comments the full solution of Problem #3 is not needed to study Problem #4, and any progress on Problem #4 would be very interesting.

Large Oscillations. This section involves the study of existence of solutions via the weak topology and the propagation of large amplitude oscillations for systems of conservation laws

    u_t + F(u)_x = 0,   u(x, 0) = u₀(x),

with large amplitude initial data u₀. The use of the weak topology and the method of compensated compactness was introduced by Tartar ([34]) and applied to scalar conservation laws. Diperna ([35],
[36]) carried out Tartar's program for pairs of conservation laws provided that both wave fields are genuinely nonlinear; in this case strong convergence was deduced from the a priori weak convergence, so that no oscillations propagate. Rascle and D. Serre (see [37], [38], and Serre's paper in this volume) have studied pairs of conservation laws which are not genuinely nonlinear; for example, for a general nonlinear wave equation, they show that oscillations propagate but the nonlinear terms in the equations still converge and define a weak solution in the limit. Given all of the phenomena deduced via geometric optics for propagation and interaction of oscillations at small amplitudes for m ≥ 3, it is not surprising that the propagation of large amplitude oscillations and the use of the weak topology provide difficult questions for systems of conservation laws with m ≥ 3. The most important and most accessible of these problems regards propagation of oscillations for 3 × 3
compressible fluid flow. In Lagrangian mass co-ordinates, these equations have the fonn Tt Vt
Gv2 where (3.16)
= 0
+ Px = 0
+ e(T,p)) t + (pv)x = 0
is the specific volume; the interval energy e is given by
pT e---,),-1'
for an ideal gas law. The first remark is that large amplitude oscillations do propagate in solutions of (3.15). Consider the rapidly oscillating exact solution sequence
defined by contact discontinuities, i.e.

(3.17)    (τ^ε, v^ε, p^ε)(x, t) = (τ₀(x/ε), v₀, p₀),

where v₀, p₀ are fixed constants and τ₀ is a fixed positive 1-periodic function. Large amplitude oscillations propagate for this equation because the weak limit of τ₀(x/ε) has a non-trivial Young measure, but the velocity and pressure converge strongly. Nevertheless, the weak limit is a solution of the equations in (3.15).
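To spell out the weak limit computation behind this remark (an elementary calculation, not taken from [34]-[36]):

\[
\tau_0\!\left(\tfrac{x}{\epsilon}\right) \rightharpoonup \bar{\tau} = \int_0^1 \tau_0(y)\,dy,
\qquad
G\!\left(\tau_0\!\left(\tfrac{x}{\epsilon}\right)\right) \rightharpoonup \int_0^1 G(\tau_0(y))\,dy \neq G(\bar{\tau})
\quad \text{in general},
\]

which is precisely the statement that the Young measure is non-trivial. Since v^ε ≡ v₀ and p^ε ≡ p₀ are constant and, for an ideal gas, e(τ, p₀) = p₀τ/(γ − 1) is linear in τ, every term of (3.15) passes to the weak limit, so (τ̄, v₀, p₀) is again a (constant) solution.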
The conjectured behavior is that these examples provide the worst possible situation. I present this as

Problem #5: Let (τ^ε, v^ε, p^ε) be a sequence of weak solutions of the compressible fluid equations in (3.15). Assume the uniform bounds 0 < τ₋ ≤ τ^ε ≤ τ₊, 0 < p₋ ≤ p^ε ≤ p₊, |v^ε| ≤ V, and, as ε → 0, that (τ^ε, v^ε, p^ε) converges weakly to (τ, v, p). Is it true that (v^ε, p^ε) converges strongly to (v, p)?

Both C.S. Morawetz and D. Serre are currently working on this problem. In fact, Serre has remarked that if the conjecture in Problem #5 is true, then

(3.18)    for an ideal gas law, the limit is a weak solution of the equations for compressible flow.
With (3.16), the result in (3.18) is an easy exercise for the reader. Nevertheless, I have some doubts that this conjecture is true; some high quality numerical simulations could generate some
important insight here.
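In that spirit, the following is the skeleton of such a numerical experiment (a minimal sketch of my own, not a scheme from the literature): evolve (3.15) for an ideal gas by a Lax-Friedrichs scheme from rapidly oscillating data in τ and monitor whether oscillations appear and persist in v and p. All parameter values are arbitrary choices.

    /* Lax-Friedrichs experiment for (3.15): oscillatory tau, gentle v, p = 1. */
    #include <stdio.h>
    #include <math.h>

    #define N     2000
    #define GAMMA 1.4
    #define PI    3.14159265358979

    typedef struct { double tau, v, E; } State;     /* E = v*v/2 + e(tau,p) */

    static double pressure(State s) {
        return (GAMMA - 1.0) * (s.E - 0.5 * s.v * s.v) / s.tau;
    }

    static void flux(State s, double f[3]) {        /* fluxes of (3.15) */
        double p = pressure(s);
        f[0] = -s.v; f[1] = p; f[2] = p * s.v;
    }

    int main(void) {
        static State u[N], un[N];
        double dx = 1.0 / N, dt = 0.2 * dx, t = 0.0, tend = 0.2;
        for (int i = 0; i < N; i++) {
            double x = (i + 0.5) * dx;
            u[i].tau = 1.0 + 0.5 * sin(2.0 * PI * 40.0 * x); /* 40 oscillations */
            u[i].v   = 0.05 * sin(2.0 * PI * x);             /* one low sound mode */
            u[i].E   = 0.5 * u[i].v * u[i].v
                     + 1.0 * u[i].tau / (GAMMA - 1.0);       /* p = 1 initially */
        }
        while (t < tend) {
            for (int i = 0; i < N; i++) {
                int l = (i + N - 1) % N, r = (i + 1) % N;
                double fl[3], fr[3];
                flux(u[l], fl); flux(u[r], fr);
                un[i].tau = 0.5*(u[l].tau + u[r].tau) - 0.5*dt/dx*(fr[0] - fl[0]);
                un[i].v   = 0.5*(u[l].v   + u[r].v  ) - 0.5*dt/dx*(fr[1] - fl[1]);
                un[i].E   = 0.5*(u[l].E   + u[r].E  ) - 0.5*dt/dx*(fr[2] - fl[2]);
            }
            for (int i = 0; i < N; i++) u[i] = un[i];
            t += dt;
        }
        double vmin = 1e30, vmax = -1e30, pmin = 1e30, pmax = -1e30;
        for (int i = 0; i < N; i++) {
            double p = pressure(u[i]);
            if (u[i].v < vmin) vmin = u[i].v;
            if (u[i].v > vmax) vmax = u[i].v;
            if (p < pmin) pmin = p;
            if (p > pmax) pmax = p;
        }
        printf("oscillation of v: %g, of p: %g\n", vmax - vmin, pmax - pmin);
        return 0;
    }

Of course, the numerical diffusion of such a first-order scheme damps exactly the fine-scale oscillations at issue, so any serious test would need high resolution or a higher order method; the sketch only fixes the setup.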
Section 4: New Phenomena in Conservation Laws with Source Terms. In this section, I briefly discuss my perspective on conservation laws with source terms. I focus on solutions of systems in a single
space variable with the form

(4.1)    u_t + F(u)_x = S(u).
T.P. Liu has been the principal contributor to the study of conservation laws with a special class of source terms with x dependence which model physical problems such as the averaged duct equations
for one dimensional fluid flow. He has discussed the stability and large time asymptotic behavior for a large class of problems with source terms. The interested reader can consult Liu's paper in
this volume for a detailed list of references. Here I will discuss open problems for conservation laws with source terms which do not satisfy the hypotheses of Liu's work - the equations of reacting
gas flow are a prototypical example. I will discuss some of the new phenomena that occur for these systems with source terms which have been discovered recently through numerical, asymptotic, and
qualitative modelling and then I suggest some accessible open problems motivated by these new phenomena. I emphasize the phenomena for the compressible Euler equations of reacting gas flow as an
example of (4.1) although I am confident that similar phenomena are likely to occur for the hyperbolic conservation laws with suitable source terms arising in multi-phase flow, retrograde materials,
and other applications.
The compressible Euler equations for a reacting gas with simplified one-step irreversible kinetics are given by

(4.2)    ρ_t + (ρv)_x = 0
         (ρv)_t + (ρv² + p)_x = 0
         (ρE)_t + (ρvE + pv)_x = q₀ K(T) ρZ
         (ρZ)_t + (ρvZ)_x = −K(T) ρZ,

where E = ½v² + e is the energy density, Z is the mass fraction of fuel, e = pτ/(γ − 1), q₀ > 0 is the heat release, and T = γp/ρ is the temperature. In the discussion below we assume either the Arrhenius form for the rate function K(T),

(4.3)A)    K(T) = K exp(−E₊/T),

or the ignition temperature law

(4.3)B)    K(T) = K for T ≥ T_i,   K(T) = 0 for T ≤ T_i,

where T_i is a fixed reference ignition temperature. In (4.3), K is the rate constant, while in (4.3)A), E₊ is the non-dimensional activation energy. An important practical problem for both safety and
enhanced combustion regarding the system in (4.2) is the initiation of detonation. Detonation waves are travelling wave solutions of (4.2) which have the structure of an ordinary fluid dynamic shock followed by chemical reaction; these exact solutions are readily determined by quadrature of a single O.D.E. (see Fickett-Davis [39], Majda [40]) and are called Z-N-D waves; the Z-N-D wave moving with the slowest velocity is called the C-J (Chapman-Jouguet) wave. The problem of initiation involves an initial flow field with a small region of hot gas kept at constant volume and velocity. The main issue in initiation is whether this perturbation will grow into a fully-developed Z-N-D wave, in which case there is transition to detonation, or whether this perturbation will die out as time evolves and the chemical reaction will be quenched, so that there is failure. Both experimental data and detailed numerical computations display many complex features in examples illustrating both failure and initiation. The recent paper by V. Roytburd and the author, [41], contains a discussion and documentation of these phenomena as well as a large list of background references. While these phenomena in initiation are becoming understood through a combination of experiments and numerical computation, the rigorous theory of such phenomena for solutions of the equations in (4.2) seems beyond reach. In fact, a very interesting preliminary open problem is the following

Problem #1: Establish the existence of solutions for the reacting Euler equations in (4.2) for appropriate initial data by a modification of Glimm's method.

At the meeting, D. Wagner (personal communication) announced some major progress toward solving Problem #1.
Next, I present an interesting qualitative-quantitative model for high Mach number combustion, and then I indicate some beautiful prototype problems for the initiation of detonation which are accessible in this model. It is not surprising, given the complexity of the phenomena described in the preceding paragraph, that there is an interest in simpler models which qualitatively mimic some of the features in solutions of (4.2) in various regimes. One such model for high Mach number combustion was proposed and studied by the author in [42] and then derived by Rosales and the author [40], [43], in a slightly modified form, from the equations of reacting gas flow in (4.2) through nonlinear geometric optics as a quantitative asymptotic limiting equation of (4.2). This qualitative-quantitative model for high Mach number combustion is the following 2 × 2 system:

(4.4)    u_t + (½u²)_x = q₀ K(u) Z
         Z_x = K(u) Z,

where the rate function K(u) has either of the forms in (4.3). The function Z is the mass fraction of reactant, and the function u appearing in (4.4) is the amplitude of an acoustic wave moving to the right; when the reaction terms vanish, so that ZK(u) ≡ 0, we get Burgers equation as expected from the theory of geometric optics sketched in (3.2)-(3.6). The coordinate x appearing in (4.4) is not physical space but instead a suitable space-time distance to the reaction zone. Thus, the natural data for (4.4) is a signalling problem: u₀(x) and Z₀(t) are prescribed with

(4.5)    u(x, t)|_{t=0} = u₀(x)   and   lim_{x→∞} Z(x, t) = Z₀(t).
For simplicity in exposition, we assume below that Z₀(t) ≡ 1. From my discussion above, it is evident that the model equations in (4.4) retain the nonlinear interactions between the right-moving sound wave and combustion but ignore all other multi-wave interactions that are present in solutions of (4.2). The model equations in (4.4) have a transparent analogue of Z-N-D waves and C-J waves (see [40], [42]) and also an analogue of the initiation problem. To mimic the initiation problem, I take the ignition temperature form from (4.3)B) for the rate function K(u) in (4.4) and consider a pulse in the initial data for u given by

(4.6)    u₀(x) = u₀⁺ for 0 ≤ x ≤ d,   u₀(x) = 0 otherwise,

with u₀⁺ > u_i, where u_i is the ignition value in the model. The initial data in (4.6) is the analogue in the model of the hot spot mentioned earlier in the initiation problem. The solution of (4.4), (4.5) with the initial data in (4.6) was studied by Roytburd and the author through numerical computations in the paper [44] from I.M.A. Volume 12. Also, numerical
solutions of initiation with (4.4), (4.5), (4.6) were compared with simulations of the full reacting gas system in (4.2) and as expected the solutions of (4.4)-(4.6) have good qualitative agreement
with those in (4.2) provided the initiation process in solutions of (4.2) does not involve complex
multi-wave gas dynamic interactions. In [44], Roytburd and the author found that, depending on the parameters u₀⁺, d for the initial data, the heat release q₀, and the rate constant K in (4.3)B), the solution of (4.4) either was quenched and tended rapidly to zero, so that there was failure, or grew (sometimes in a highly non-monotone fashion) to a fully-developed C-J wave, so that strong initiation occurs. The equations in (4.4) have both the attenuating effects on u of the spreading of rarefaction waves and the amplifying effects of exothermic heat release, which compete to produce either outcome. A discussion of these competing effects is given in [44]. The main rigorous prototype problem which I propose in this section is the following

Problem #2: For fixed K, q₀, and u_i, characterize those initial data u₀ given in (4.6) such that either 1) the asymptotic solution of (4.4) as t → ∞ is a C-J wave, or 2) the solution tends rapidly to zero as t → ∞.

I remark that the global existence of solutions for (4.4), (4.5) has been established by V. Roytburd (unpublished) through a constructive proof utilizing finite difference schemes. I believe that Problem #2 demonstrates very interesting new phenomena and also is extremely accessible to rigorous analysis. One natural strategy would be to implement a version of the random choice scheme for the equations in (4.4) together with an appropriate version of Liu's wave tracing ideas to assess the ultimate growth or failure of the wave pattern.
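Before attacking the rigorous program, the competition between rarefaction spreading and heat release is easy to watch numerically. The sketch below is my own fractional step discretization of (4.4)-(4.6) with the ignition law (4.3)B) (not the scheme of [44], and not a random choice method): a Godunov step for the Burgers part, exact integration of Z_x = K(u)Z inward from the right boundary, and an Euler step for the source. All parameter values are arbitrary.

    /* Fractional step sketch for the model (4.4)-(4.6), ignition law (4.3)B). */
    #include <stdio.h>
    #include <math.h>

    #define N 800

    static double K_of(double u, double ui, double kconst) {
        return (u >= ui) ? kconst : 0.0;             /* ignition temperature law */
    }

    static double gflux(double ul, double ur) {      /* Godunov flux for u*u/2 */
        if (ul <= ur) {
            if (ul > 0.0) return 0.5 * ul * ul;
            if (ur < 0.0) return 0.5 * ur * ur;
            return 0.0;
        }
        return (ul + ur > 0.0) ? 0.5 * ul * ul : 0.5 * ur * ur;
    }

    int main(void) {
        double q0 = 2.0, kconst = 4.0, ui = 0.5;     /* q0, K, u_i: arbitrary */
        double u0p = 1.0, d = 1.0;                   /* pulse height and width */
        double L = 20.0, dx = L / N, t = 0.0, tend = 10.0;
        double u[N], Z[N], un[N];
        for (int i = 0; i < N; i++) {                /* pulse data (4.6) */
            double x = (i + 0.5) * dx;
            u[i] = (x <= d) ? u0p : 0.0;
        }
        while (t < tend) {
            double cmax = 1e-6, dt;
            for (int i = 0; i < N; i++)
                if (fabs(u[i]) > cmax) cmax = fabs(u[i]);
            dt = 0.5 * dx / cmax;
            if (t + dt > tend) dt = tend - t;
            Z[N - 1] = 1.0;                          /* Z -> Z0 = 1 on the right */
            for (int i = N - 2; i >= 0; i--)         /* Z_x = K(u) Z, exactly */
                Z[i] = Z[i + 1] * exp(-K_of(u[i], ui, kconst) * dx);
            for (int i = 0; i < N; i++) {            /* Godunov step + source */
                double ul = (i > 0)     ? u[i - 1] : u[0];
                double ur = (i < N - 1) ? u[i + 1] : u[N - 1];
                un[i] = u[i] - dt / dx * (gflux(u[i], ur) - gflux(ul, u[i]))
                      + dt * q0 * K_of(u[i], ui, kconst) * Z[i];
            }
            for (int i = 0; i < N; i++) u[i] = un[i];
            t += dt;
        }
        double umax = 0.0;
        for (int i = 0; i < N; i++)
            if (u[i] > umax) umax = u[i];
        printf("max u at t = %g is %g (growth suggests initiation, decay failure)\n",
               t, umax);
        return 0;
    }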
I end this section with some additional comments regarding the equations of reacting gas flow as a source of new phenomena in conservation laws with source terms. Those familiar with homogeneous
hyperbolic conservation laws know that shock waves in these systems are asymptotically stable at large times as t → ∞. The analogues of shock wave solutions for (4.2) are the Z-N-D travelling waves
mentioned earlier. It is both an experimental and numerically documented fact that in appropriate regimes of heat release, overdrive, and reaction rate, the Z-N-D waves lose their stability to
time-dependent wave patterns with either regular or sometimes even chaotic pulsations. These facts and a corresponding asymptotic theory together with numerical calculations are mentioned in the
paper by V. Roytburd in this volume. I would like to mention here that such effects cannot be found in solutions of the equations in (4.4); the full multi-wave structure of the gas dynamic equations
in (4.2) is needed to produce these pulsation instabilities. An asymptotic analysis by Roytburd and the author, to appear in a forthcoming publication, confirms this.

Concluding Remarks: I have
presented several problems in the modern applied mathematics of hyperbolic conservation laws. I have emphasized phenomena for the equations of compressible flow in several space variables. However, I
believe that many of the phenomena and problems which I discuss here also have analogues in other applications such as dynamic nonlinear elasticity, magneto fluid dynamics, and multi-phase flow. I
would like to thank Harland Glaz for the use of two of his graphs and also for interesting conversations regarding Section 2 of this paper.
REFERENCES

[1] M. ARTOLA AND A. MAJDA, Nonlinear development of instabilities in supersonic vortex sheets I: the basic kink modes, Physica 28D (1988), pp. 253-281.
[2] M. ARTOLA AND A. MAJDA, Nonlinear development of instabilities in supersonic vortex sheets II: resonant interaction among kink modes, S.I.A.M. J. Appl. Math. (in press, to appear in 1989).
[3] M. ARTOLA AND A. MAJDA, Nonlinear kink modes for supersonic vortex sheets, Physics of Fluids A (in press, to appear in 1989).
[4] P. WOODWARD, in Numerical Methods for the Euler Equations of Fluid Dynamics, eds. Angrand, Dervieux, Desideri, and Glowinski, S.I.A.M., 1985.
[5] P. WOODWARD, in Astrophysical Radiation Hydrodynamics, eds. K.H. Winkler and M. Norman, Reidel, 1986.
[6] P. WOODWARD AND K.H. WINKLER, Simulation and visualization of fluid flow in a numerical laboratory, preprint, October 1988.
[7] A. MAJDA, Vorticity and the mathematical theory of incompressible fluid flow, Comm. Pure Appl. Math. 39 (1986), pp. S187-220.
[8] A. MAJDA, Mathematical Fluid Dynamics: The Interaction of Nonlinear Analysis and Modern Applied Math, Centennial Celebration of A.M.S., Providence, RI, August 1988 (to be published by A.M.S. in 1990).
[9] R. COURANT AND K. FRIEDRICHS, Supersonic Flow and Shock Waves, Springer-Verlag, New York, 1949.
[10] C.S. MORAWETZ, The mathematical approach to the sonic barrier, Bull. Amer. Math. Soc. 6, #2 (1982), pp. 127-145.
[11] A. MAJDA, Compressible Fluid Flow and Systems of Conservation Laws in Several Space Variables, Appl. Math. Sciences 53, Springer-Verlag, New York, 1984.
[12] W. HAYES, The vorticity jump across a gas dynamic discontinuity, J. Fluid Mech. 2 (1957), pp. 595-600.
[13] A. MAJDA AND E. THOMANN, Multi-dimensional shock fronts for second order wave equations, Comm. P.D.E. 12 (1987), pp. 777-828.
[14] H. GLAZ, P. COLELLA, I.I. GLASS, AND R. DESCHAMBAULT, A detailed numerical, graphical, and experimental study of oblique shock wave reflections, Lawrence Berkeley Report, April 1985.
[15] M. VAN DYKE, An Album of Fluid Motion, Parabolic Press, Stanford, 1982.
[16] C.S. MORAWETZ, On the non-existence of continuous transonic flows past profiles, I, II, III, Comm. Pure Appl. Math. 9 (1956), pp. 45-68; 10 (1957), pp. 107-132; 11 (1958), pp. 129-144.
[17] J. HUNTER AND J.B. KELLER, Weak shock diffraction, Wave Motion 6 (1984), pp. 79-89.
[18] E. HARABETIAN, Diffraction of a weak shock by a wedge, Comm. Pure Appl. Math. 40 (1987), pp. 849-863.
[19] D. JONES, P. MARTIN, AND C. THORNHILL, Proc. Roy. Soc. London A 209 (1951), pp. 238-247.
[20] J.B. KELLER AND A.A. BLANK, Diffraction and reflection of pulses by wedges and corners, Comm. Pure Appl. Math. 4 (1951), pp. 75-94.
[21] J. HUNTER, Hyperbolic waves and nonlinear geometrical acoustics, in Proceedings of 6th Army Conference on Applied Mathematics and Computations, Boulder, CO, May 1988 (to appear).
[22] D.G. CRIGHTON, Basic theoretical nonlinear acoustics, in Frontiers in Physical Acoustics, Proc. Int. School of Physics Enrico Fermi, Course 93, North-Holland, Amsterdam, 1986.
[23] D.G. CRIGHTON, Model equations for nonlinear acoustics, Ann. Rev. Fluid Mech. 11 (1979), pp. 11-33.
[24] A. MAJDA, Nonlinear geometric optics for hyperbolic systems of conservation laws, in Oscillation Theory, Computation, and Methods of Compensated Compactness, IMA Volume 2, Springer-Verlag, New York, 1986, pp. 115-165.
[25] A. MAJDA AND R. ROSALES, Nonlinear mean field-high frequency wave interactions in the induction zone, S.I.A.M. J. Appl. Math. 47 (1987), pp. 1017-1039.
[26] R. ALMGREN, A. MAJDA AND R. ROSALES, Rapid initiation through high frequency resonant nonlinear acoustics, submitted to Combustion Sci. and Tech., July 1989.
[27] R. DIPERNA AND A. MAJDA, The validity of nonlinear geometric optics for weak solutions of conservation laws, Commun. Math. Phys. 98 (1985).
[28] J.K. HUNTER AND J.B. KELLER, Weakly nonlinear high frequency waves, Comm. Pure Appl. Math. 36 (1983), pp. 543-569.
[29] J.K. HUNTER, A. MAJDA, AND R.R. ROSALES, Resonantly interacting weakly nonlinear hyperbolic waves, II: several space variables, Stud. Appl. Math. 75 (1986), pp. 187-226.
[30] A. MAJDA AND R.R. ROSALES, Resonantly interacting weakly nonlinear hyperbolic waves, I: a single space variable, Stud. Appl. Math. 71 (1984), pp. 149-179.
[31] A. MAJDA, R. ROSALES, M. SCHONBEK, A canonical system of integro-differential equations arising in resonant nonlinear acoustics, Studies Appl. Math. (1989, to appear).
[32] R. PEGO, Some explicit resonating waves in weakly nonlinear gas dynamics, Stud. Appl. Math. (1989, to appear).
[33] P. CEHELSKY AND R. ROSALES, Resonantly interacting weakly nonlinear hyperbolic waves in the presence of shocks: a single space variable in a homogeneous time independent medium, Stud. Appl. Math. 74 (1986), pp. 117-138.
[34] L. TARTAR, Compensated compactness and applications to partial differential equations, in Research Notes in Mathematics, Nonlinear Analysis and Mechanics: Heriot-Watt Symposium, Vol. 4, R. Knops, ed., Pitman, London, 1979.
[35] R. DIPERNA, Convergence of approximate solutions to conservation laws, Arch. Rat. Mech. Anal. 82 (1983), pp. 27-70.
[36] R. DIPERNA, Convergence of the viscosity method for isentropic gas dynamics, Comm. Math. Phys. 91 (1983), pp. 1-30.
[37] M. RASCLE AND D. SERRE, Compacité par compensation et systèmes hyperboliques de lois de conservation, Applications, C.R.A.S. 299 (1984), pp. 673-679.
[38] D. SERRE, La compacité par compensation pour les systèmes hyperboliques non linéaires de deux équations à une dimension d'espace, J. Math. Pures et Appl. 65 (1986), pp. 423-468.
[39] W. FICKETT AND W. DAVIS, Detonation, Univ. California Press, Berkeley, 1979.
[40] A. MAJDA, High Mach number combustion, in Reacting Flows: Combustion and Chemical Reactors, AMS Lectures in Applied Mathematics 24, 1986, pp. 109-184.
[41] A. MAJDA AND V. ROYTBURD, Numerical study of the mechanisms for initiation of reacting shock waves, submitted to S.I.A.M. J. Sci. Stat. Computing, May 1989.
[42] A. MAJDA, A qualitative model for dynamic combustion, S.I.A.M. J. Appl. Math. 41 (1981), pp. 70-93.
[43] R. ROSALES AND A. MAJDA, Weakly nonlinear detonation waves, S.I.A.M. J. Appl. Math. 43 (1983), pp. 1086-1118.
[44] A. MAJDA AND V. ROYTBURD, Numerical modeling of the initiation of reacting shock waves, in Computational Fluid Mechanics and Reacting Gas Flows, B. Engquist et al., eds., I.M.A. Volumes in Mathematics and Applications, Vol. 12, 1988, pp. 195-217.
[45] A. MAJDA AND R. ROSALES, A theory for spontaneous Mach stem formation in reacting shock fronts, I: the basic perturbation analysis, S.I.A.M. J. Appl. Math. 43 (1983).
STABILITY OF MULTI-DIMENSIONAL WEAK SHOCKS

GUY METIVIER*

Abstract. In this paper we discuss the stability of weak shocks for a class of multi-dimensional systems of conservation laws containing Euler's equations of gas dynamics; we study the well-posedness of the linearized problem, and study the behaviour of the L² estimates when the strength of the shock approaches zero.

AMS(MOS) subject classifications. 35L65, 76L05, 35L50.
1. Introduction. In this lecture, we are concerned with the linearized stability of multi-dimensional weak shocks. Let us first recall that A. Majda has defined the notion of "uniform stability" for shock front solutions, viewed as solutions of free boundary mixed hyperbolic problems; this stability condition is the natural "uniform Lopatinski condition" for the linearized problem. However, the analysis in [Ma 1] relies on the fact that the front of the shock is non-characteristic while, for weak shocks, the front is "almost" characteristic, i.e. the boundary matrix has a small eigenvalue; in fact the estimates given in [Ma 1] blow up when the strength of the shock tends to zero. In this context, our main goal is to make a detailed study of the behaviour of the L² estimates that are valid for the linearized equations when the strength of the shock tends to zero. Another interesting point we get as a by-product of our analysis is that, in rather general circumstances, any weak shock that satisfies Lax' shock conditions is uniformly stable (this was already noted in [Met 1] for 2 × 2 systems). The details of the proofs are given in [Met 3].
2. Equations of shocks. Let us consider a system of conservation laws:

(2.1)    Σ_{j=0}^n ∂_j f_j(u) = 0,

where the space-time variables are called x = (x₀, …, x_n) and the unknowns u = (u₁, …, u_N). The functions f_j are supposed to be C^∞ on the open set Ω ⊂ R^N, and, denoting by A_j the Jacobian matrix of f_j, the quasi-linear form of (2.1) is:

(2.2)    Σ_{j=0}^n A_j(u) ∂_j u = 0.

The typical example we keep in mind all along this paper is Euler's system of gas dynamics:

(2.3)    ∂_t ρ + div(ρv) = 0
         ∂_t(ρv) + div(ρ v ⊗ v) + grad p = 0
         ∂_t(ρE) + div(ρEv + pv) = 0,

*IRMAR, URA 0305 CNRS, Université de Rennes I, Campus de Beaulieu, 35042 Rennes Cedex, France.
with ρ the density, p the pressure, v the velocity and E = ½|v|² + e(ρ, s); the unknowns are u = (ρ, v, s), s being the entropy; as usual, we assume that p together with the temperature T are given functions of (ρ, s), which satisfy the second law of thermodynamics: de = T ds + p ρ⁻² dρ. Going back to the general notations (2.2), we will always assume that the system is symmetric hyperbolic with respect to the time variable t = x₀ (for instance assuming that it admits a strictly convex entropy), that is:

ASSUMPTION 1. There is a matrix S(u), which depends smoothly on u, such that all the matrices S A_j are symmetric, with S A₀ positive definite.

A shock front solution of (2.1) is, to begin with, a piecewise smooth weak solution u which is discontinuous across a hypersurface Σ, say of equation φ(x) = 0; the restrictions u± of u to each side of Σ are smooth solutions of (2.1), and asserting that u is a weak solution is equivalent to the Rankine-Hugoniot jump conditions:

(2.4)    Σ_{j=0}^n ∂_jφ [f_j(u)] = 0 on Σ,

where [f] denotes the jump of the function f across Σ.
Recall from [Lax] the following lemma, which allows the construction of planar shock fronts (solutions where u⁺ and u⁻ are constant and Σ is the hyperplane of equation σt = x·ξ):

LEMMA 1. Let λ(u, ξ) be, for u ∈ R^N and ξ ∈ R^n\{0}, a simple eigenvalue of

(2.5)    Σ_{j=1}^n ξ_j A₀⁻¹(u) A_j(u).

Then there is a (Rankine-Hugoniot) "curve" of solutions to the jump equations:

(2.6)    σ{f₀(u⁺) − f₀(u⁻)} = Σ_{j=1}^n ξ_j {f_j(u⁺) − f_j(u⁻)}

(u⁺ = U(ε, u⁻, ξ), σ = Σ(ε, u⁻, ξ), ε being the parameter on the curve, |ε| remaining small) such that:

(2.7)    u⁺ = u⁻ + ε r(u⁻, ξ) + O(ε²)
         σ = λ(u⁻, ξ) + ½ ε r·d_uλ(u⁻, ξ) + O(ε²),

where r(u, ξ) denotes a right eigenvector associated with the eigenvalue λ.
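As an elementary consistency check (a standard Taylor expansion, not reproduced from [Lax]), write u⁺ = u⁻ + ε r + O(ε²) in the jump equations (2.6):

\[
f_j(u^+) - f_j(u^-) = \epsilon\, A_j(u^-)\, r(u^-, \xi) + O(\epsilon^2),
\]

so that, after dividing (2.6) by ε,

\[
\sigma\, A_0(u^-)\, r = \sum_{j=1}^{n} \xi_j\, A_j(u^-)\, r + O(\epsilon),
\]

i.e. r is, to leading order, an eigenvector of Σ_j ξ_j A₀⁻¹(u⁻)A_j(u⁻) with eigenvalue σ = λ(u⁻, ξ) + O(ε); matching the next order terms produces the coefficient ½ ε r·d_uλ in the expansion of σ.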
3. Structure of the problem. The starting point is to consider the equations (2.2) for u±, together with the jump condition (2.4), as a free boundary value problem, and in this context the boundary matrix (the coefficient of the normal derivative to Σ in (2.1)) plays an important role:

(3.1)    M(u, dφ) = Σ_{0≤j≤n} ∂_jφ A_j(u).
The first requirement is:
a) the front Σ is non-characteristic. That means that both matrices M(u⁺, dφ) and M(u⁻, dφ) are invertible. When looking at the example (2.7), this is well known to be equivalent to a genuine non-linearity assumption, i.e. d_uλ(u, ξ)·r(u, ξ) ≠ 0, in which case we impose the standard normalization:

(3.2)    d_uλ(u, ξ)·r(u, ξ) = 1.

Also recall that, when the eigenvalue is linearly degenerate (d_uλ(u, ξ)·r(u, ξ) = 0), one falls into the completely different category of contact discontinuities, which in the multi-D case are not yet understood. The next thing to check is
b) the number of boundary conditions. For that purpose one has to look at the number of characteristics impinging on the boundary (the number of positive and negative eigenvalues of M(u±, dφ)); note that, here, we have N boundary conditions for 2N + 1 unknowns (u⁺, u⁻ and φ). In fact, requiring that our problem possesses the right number of boundary conditions leads to the familiar Lax' shock conditions ([Lax]). In the example (2.7), with the normalization (3.2), that means we need restrict ourselves to the case (3.3).

1. For ε > 0 small enough, the problem (6.1) (6.2) is uniformly stable as defined by A. Majda. Such a fact was already noticed for 2 × 2 systems in the appendix of [Met 1].
2. Estimate (7.3) just makes
precise the dependence on ε in the estimates given by A. Majda ([Ma 1]). In particular, existence of solutions (v, φ) for the problem (6.1) (6.2) with data (f, g) in L² follows from [Ma 1], as well as estimates and existence in domains {t < T}.
3. The reader might be worried by the term ε⁻¹ᐟ²|g|₀,γ in the right hand side of (7.3). Indeed, in the Rankine-Hugoniot condition (2.4) each term is O(ε), so in the forthcoming applications, linearizing (2.4) will yield a term g which will contain a factor ε in front of it.
8. Several reductions. In the last three sections we shall give a few indications concerning the proof of theorem 1, assuming for simplicity that the eigenvalue λ under consideration is the smallest one. First, one can perform several reductions:
a) localize estimate (7.3), making use of a partition of unity.
b) after a (local) change of variables, one can assume that condition (6.9), λ(u±, d'φ) − ∂₀φ = ∓ε b±, holds not only on x_n = 0, but also on both sides ±x_n ≥ 0.
c) next, one can diagonalize the boundary matrix, getting a problem of the following form:

(8.1)    J± ∂_n v± + Σ_{j=0}^{n−1} B_j^± ∂_j v± = f±

(8.2)    J⁺ Γv⁺ = M Γv⁻ + ε X φ + g.

Thanks to assumption 1, the matrices B_j can be assumed to be symmetric, and B₀ positive definite. The next lemma is a consequence of (6.8) and of assumption 3, but it is crucial in the understanding of the structure of the problem:

LEMMA 2. b_j = −ε a⁺b⁺/2 × (first column of B_j) + O(ε), and X is elliptic.
Remark. It is a good exercise to go to the limit in the boundary conditions (8.2) (assuming that g = εh). Indeed, in the first row, one can factor out ε, and the limit is of the form (8.3); and the limit of the N − 1 other equations is simply (8.4). (8.3) is nothing but the linearized equation of the eikonal equation corresponding to the limit problem of sound waves mentioned in section 4, while (8.4) are the natural transmission conditions for the linearized equations of sound waves. In these conditions, weak shocks appear as singular perturbations of sound waves, the perturbation being singular in two aspects: first, the boundary becomes non-characteristic, and second, the boundary conditions become elliptic with respect to φ.
d) Denoting by v₁ [resp. ṽ] the first component of v [resp. the vector of the N − 1 last components], theorem 1 is a consequence of the following more precise estimates:
THEOREM 2. Under the same circumstances as in theorem 1, one has:

(8.5)    γ¹ᐟ²|v|₀,γ + ε¹ᐟ²|Γv₁|₀,γ + |Γṽ|₀,γ + γε¹ᐟ²|φ|₀,γ + ε|φ|₁,γ ≤ C{γ⁻¹ᐟ²|f|₀,γ + ε⁻¹ᐟ²|g₁|₀,γ + |g̃|₀,γ}.
e) Because λ is the smallest eigenvalue, the problem lying on the side x_n ≤ 0 is symmetric hyperbolic and well posed without any boundary condition, so that (8.6) holds for γ large enough.
f) A direct analysis of the boundary conditions shows that:

(8.7)    γε¹ᐟ²|φ|₀,γ + ε|φ|₁,γ ≤ C{ε¹ᐟ²|Γv₁|₀,γ + |Γṽ|₀,γ} + C{ε⁻¹ᐟ²|g₁|₀,γ + |g̃|₀,γ}.

g) Therefore, it suffices to provide an estimate for v±, and in fact, because of (8.8), it suffices to give an estimate of Γv⁺. More precisely, forgetting the +'s in (8.1) and writing w for v⁺, we consider the following problem:

(8.9)    J ∂_n w + Σ_{j=0}^{n−1} B_j ∂_j w = f

(8.10)   J Γw = ε X φ + g,

and it remains to prove an estimate of the form:

(8.11)   ε¹ᐟ²|Γw₁|₀,γ + |Γw̃|₀,γ ≤ C{γ⁻¹ᐟ²|f|₀,γ + |g̃|₀,γ + ε⁻¹ᐟ²|g₁|₀,γ + γ⁻¹ᐟ²|w|₀,γ} + C{ε + γ⁻¹}{ε¹ᐟ²|Γw₁|₀,γ + |Γw̃|₀,γ + γε¹ᐟ²|φ|₀,γ + ε|φ|₁,γ}.
Indeed, with (8.6), (8.7) and (8.8), estimate (8.5) follows immediately if γ is large enough and ε > 0 is small.

9. Symmetrizors. As usual, theorem 2 is proved by using suitable symmetrizors and "integrations by parts", but, as shown in [Ma 1], the nature of the boundary conditions (8.2) or (8.10) forces us to introduce pseudo-differential symmetrizors; however, there is a difficulty due to the lack of smoothness of the coefficients, and the classical calculus does not apply. To overcome this, there exists a convenient modification of the pseudo-differential calculus which was introduced by J.M. Bony ([Bo]), and which he called the "para-differential" calculus. In fact, we need a version "with parameter γ" of the calculus, similar to the one which was used in [Met 1]. We will not enter into the details here, referring the reader to [Met 3] for a precise description of the calculus and also for a complete proof of the theorems. Instead, we would like to explain a little what happens at the symbolic level, and for that purpose, say that the coefficients (u⁻, u⁺, B, X) are constant; for instance the reader may think of (6.1) (6.2) or (8.1) (8.2) or (8.9) (8.10) as the linearized equations of (2.2) (2.4) around a weak planar shock. In that case, a natural way to study (8.9) (8.10) is to perform a partial Fourier-Laplace transform with respect to the tangential variables y = (t, y') = (t, x₁, …, x_{n−1}). Let us call η = (τ, η') the dual variables; as usual in this kind of problem, τ is complex, with Im τ = −γ < 0, and η' remains real. So, after this transformation we are led to the following system:

(9.1)    J D_n w + P w = f

(9.2)    J Γw = ε X φ + g,
where D_n = −i∂_n, and P and X are matrices which depend linearly on η. P has a block decomposition with a scalar entry μ in the upper left corner and a block P' of dimension N − 1, X = (1, 0, …, 0) + O(ε|η|), and P and X are real when γ = 0; moreover:

(9.3)    ∂_τP is positive definite, and in particular ∂_τμ > 0.
The following fact is a consequence of assumption 3, and implies that X is elliptic as stated in lemma 2: there is c > 0 such that (9.4) holds. Moreover, the O(ε|η|) term in X can be neglected, because it only yields error terms in the right hand side of (8.11); so in the sequel we just drop it. The construction of the symmetrizor S = S(η) relies on the following formula, which holds as soon as SJ is hermitian:

(9.5)    (SJ Γw, Γw) + 2 ∫₀^{+∞} (Im(SP) w, w) dx_n = 2 ∫₀^{+∞} Im (S f, w) dx_n,

where (·,·) denotes the scalar product on C^N and ‖·‖₀ the L² norm on [0, +∞). Classically, two facts are needed (see [Kr] or [Ch-Pi]):

(9.6)    Im(SP) ≥ c γ for some constant c > 0, and:

(9.7)    (SJ w, w) ≥ c|w|² − C|g|² whenever w satisfies the boundary condition Jw = εXφ + g.

Now, the choice of S depends on whether |μ| ≥ δ|η| or |μ| ≤ δ|η|.

Case I: |μ| ≥ δ|η|. In that case, it suffices to take the standard symmetrizor S = −Id: one has
* because of (9.3), Im(SP) ≥ c(−Im τ) = cγ for some constant c > 0;
* the boundary term (SJ w, w) is bigger than

    c|w₁|² + |w̃|² − 2|g + εμφ|² ≥ c|w₁|² + |w̃|² − 4|g|² − C|εμφ|² ≥ c|w₁|² + |w̃|² − 4|g|² − Cε²|w₁|² − C|g₁|²

(because |a| ≤ C|μ|), which certainly implies (9.7) if ε is small enough.

Case II: |μ| ≤ δ|η|, with δ small. The main ingredient is the following one:
LEMMA 3. There are invertible matrices W(η) and V(η) such that W J V = diag(−ε, 1) ⊕ J″ and Π := W P V = Π' ⊕ Π″, where J″ and Π″ are of dimension N − 2, Π is real when τ is real, and the upper left entry of the 2 × 2 block Π' is μ̃ = μ + O(δ|η|).

Setting v = V w and w = (w₁, w₂, w″) = (w', w″), we see that (9.1) decouples into:

(9.8)    D_n w″ + Π″ w″ = f″

(9.9)    diag(−ε, 1) D_n w' + Π' w' = f'.

Furthermore, neglecting O(ε|η|φ) terms, the boundary conditions also decouple:

(9.10)   Γw″ = g″

(9.11)   −ε Γw₁ = ε μ̃ φ + g₁,   Γw₂ = ε σ φ + g₂,

where σ denotes the off-diagonal entry of Π'.
The study of (9.8) (9.10) is easy, and in fact we can skip it because, as said in section 8, it suffices to provide estimates for the traces, and this is trivial from (9.10). So it suffices to study the 2 × 2 system (9.9) (9.11); the first step is to solve (9.9) with the boundary condition Γw₂ = g₂, and, subtracting this solution from the solution of (9.9) (9.11), one reduces oneself to the case where g₂ = 0. Eliminating φ in (9.11) leads to a boundary condition of the form (9.12). Now, it remains to get estimates for the traces of solutions of systems like (9.9) (9.12), and this will be performed in the next and last section.
10. The 2 × 2 problem. Let us stop for a while to give a typical and very simple example of the system (9.9), (9.12), which may help the reader to understand the problem. It is a differential example in which the normal variable and the space-time tangential variables are respectively called x and (t, y) ∈ R²; the equations in the half-space x > 0 are

(10.1)  ∂ₓw₁ + ∂ₜw₁ + ∂_y w₂ = f₁,   ∂ₓw₂ + ∂ₜw₂ + ∂_y w₁ = f₂,

and the boundary condition (10.2) is prescribed on x = 0. In that case the matrix P of (9.1) is simply the 2 × 2 symbol given in (10.3).
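For this model the symbol can be computed directly; the following two-line derivation is our own sketch (it assumes the transform conventions recalled at the beginning of §9), not part of the original text. Taking the Fourier-Laplace transform in (t, y), with dual variables (τ, η) and Im τ = −γ < 0, (10.1) becomes

\[
\partial_x \hat w + i \begin{pmatrix} \tau & \eta \\ \eta & \tau \end{pmatrix} \hat w = \hat f,
\]

so that here P has entries (τ, η; η, τ), with eigenvalues τ ± η; in particular P is real exactly when γ = 0, in agreement with (9.3).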
Let us go back now to the general system (9.9), (9.12); however, for simplicity, we drop the tildes from the notation, and we set w = (w₁, w₂). The first idea is to use a new weight function: in (10.4) we introduce a weighted unknown z, where a > 0 is a small parameter to be determined. (9.9) is then transformed into (10.5), while the boundary condition is unchanged, (10.6). It is important to note that ‖f̃‖ ≤ ‖f‖ (because a ≥ 0), and that it is equivalent to estimate the traces of z or those of w. In order to do so, we introduce the symmetrizer (10.7), with q = η⁻¹μ; recall that we are working in the domain |μ| ≤ δ|η| by (9.4), and hence |q| ≤ δ.
With this choice, it is clear that SJ is hermitian, and that (10.8) holds. Therefore, if the parameter δ is small enough, condition (10.6) implies (10.9). On the other hand, the interior term Im(SP) involves, besides entries of size O(γ), the quantity

m = ½ Im { 2 q̄ μ − 2 μ (1 + ρ) |q|² + ρ μ }.

We now remark that the condition |μ| ≤ δ|η| implies that |q| ≤ δ; because the symbol is real when γ = 0 and |q| is small when δ is small, we see that if δ and ρ are small enough, then Im m ≥ c ρ⁻¹, and therefore (10.10) holds. With (10.8), we conclude that, if a is small enough, then (10.11) holds. At last, we note the trivial estimate:

(10.12)  |(f, z)| ≤ { |f₁| + ρ^{1/2}|f₂| + |q f₂| } · { |z₁| + ρ^{−1/2}|z₂| + ρ^{−1}|z₂| } ≤ C ‖f‖₀ { |z₁|² + ρ^{−1}|z₂|² + ρ^{−1}|q z₂|² }^{1/2}.
With a formula similar to (9.5), we see that estimates (10.10), (10.11), and (10.12) are exactly what we need in order to conclude and get an estimate for z and its traces. In fact, because the symmetrizer (10.7) is singular as ρ → 0, the actual calculus with operators is slightly more complicated than the calculus on symbols we have sketched above (several terms have a ρ⁻¹ coefficient). In particular, the remainders deserve careful attention; but again, we refer the reader to [Met 3] for complete proofs.

REFERENCES

[Al] S. Alinhac, Existence d'ondes de rarefaction pour des systemes quasi-lineaires multidimensionnels, Comm. in Partial Diff. Equ., 14 (1989), pp. 173-230.
[Bo] J.-M. Bony, Calcul symbolique et propagation des singularites pour les equations aux derivees partielles non lineaires, Ann. Sc. E.N.S., 14 (1981), pp. 209-246.
[Ch-Pi] J. Chazarain and A. Piriou, Introduction a la theorie des equations aux derivees partielles, Bordas (Dunod), Paris, 1981; English translation: Studies in Math. and its Applications, vol. 14, North Holland (1982).
[Co-Me] R. Coifman and Y. Meyer, Au dela des operateurs pseudo-differentiels, Asterisque 57 (1978).
[Kr] H. O. Kreiss, Initial boundary value problems for hyperbolic systems, Comm. Pure and Appl. Math., 23 (1970), pp. 277-298.
[Lax] P. Lax, Hyperbolic systems of conservation laws, Comm. on Pure and Appl. Math., 10 (1957), pp. 537-566.
[Ma1] A. Majda, The stability of multidimensional shock fronts, Memoirs of the Amer. Math. Soc., no. 275 (1983).
[Ma2] A. Majda, The existence of multidimensional shock fronts, Memoirs of the Amer. Math. Soc., no. 281 (1983).
[Met 1] G. Metivier, Interaction de deux chocs pour un systeme de deux lois de conservation en dimension deux d'espace, Trans. Amer. Math. Soc., 296 (1986), pp. 431-479.
[Met 2] G. Metivier, Ondes soniques, Seminaire E.D.P. Ecole Polytechnique, annee 1987-88, expose no. 17; preprint, to appear in J. Math. Pures et Appl.
[Met 3] G. Metivier, Stability of weak shocks, preprint.
NONLINEAR STABILITY IN NON-NEWTONIAN FLOWS*

J. A. NOHEL†‡, R. L. PEGO†# AND A. E. TZAVARAS†##
1. Introduction. In this paper, we discuss recent results on the nonlinear stability of discontinuous steady states of a model initial-boundary value problem in one space dimension for incompressible, isothermal shear flow of a non-Newtonian fluid between parallel plates located at x = ±1, driven by a constant pressure gradient. The non-Newtonian contribution to the shear stress is assumed to satisfy a simple differential constitutive law. The key feature is a non-monotone relation between the total steady shear stress and the steady shear strain rate, which results in steady states having, in general, discontinuities in the strain rate. We explain why every solution tends to a steady state as t → ∞, and we identify the steady states that are stable; more details and proofs will be presented in [8].
We study the system

(1.1)  v_t = S_x,   S := T + f x,   T := σ + v_x,
(1.2)  σ_t + σ = g(v_x),

on [0,1] × [0,∞), with f a fixed positive constant. We impose the boundary conditions

(1.3)  S(0,t) = 0,   v(1,t) = 0,   t ≥ 0,

and the initial conditions

(1.4)  v(x,0) = v₀(x),   σ(x,0) = σ₀(x);

accordingly, S(x,0) = S₀(x) := σ₀(x) + v₀ₓ(x) + f x. The function g : R → R is assumed to be smooth, odd, and such that ξ g(ξ) > 0 for ξ ≠ 0. In the context of shear flow, v, the velocity of the fluid in the channel, and T, the shear stress, are connected through the balance of linear momentum (1.1). The shear stress T is decomposed into a non-Newtonian contribution σ, evolving in accordance with the simple differential constitutive law (1.2), and a viscous contribution v_x. The coefficients of density and Newtonian viscosity are taken to be 1, without loss of generality.
*Supported by the U.S. Army Research Office under Grants DAAL03-87-K-0036 and DAAL03-88-K-0185, the Air Force Office of Scientific Research under Grant AFOSR-87-0191, the National Science Foundation under Grants DMS-8712058, DMS-8620303, DMS-8716132, and an NSF Postdoctoral Fellowship (Pego).
†Center for the Mathematical Sciences, University of Wisconsin-Madison, Madison, WI 53705. ‡Also Department of Mathematics.
#Department of Mathematics, University of Michigan. ##Also Department of Mathematics.
The flow is assumed to be symmetric about the centerline of the channel. Symmetry dictates the following compatibility restrictions on the initial data:

(1.5)  v₀(1) = 0,   v₀ₓ(0) = 0,   σ₀(0) = 0;

they imply that σ(0,t) = v_x(0,t) = 0, and symmetry is preserved for all time.

The system (1.1)-(1.4) admits steady state solutions (v(x), σ(x)) satisfying

(1.6)  σ(x) + v_x(x) + f x = 0,   σ(x) = g(v_x(x)),   equivalently  w(−v_x(x)) = f x,

on the interval [0,1], where w(ξ) := g(ξ) + ξ. In case the function w is not monotone, there may be multiple values of v_x(x) that satisfy (1.6) for some x's, thus leading to steady velocity profiles with jumps in the steady velocity gradient v_x. Our objective is to study the stability of such steady velocity profiles; we also study well-posedness and the convergence of solutions of (1.1)-(1.4) to steady states as t → ∞.
Fig. 1: The graph of a representative w(ξ).

Fig. 2: Velocity profile with a kink; w(−v_x(x)) = f x.
For simplicity, the function w(ξ) is assumed to have a single loop. The graph of a representative w(ξ) is shown in Fig. 1; in the figure, m and M stand for the levels of the bottom and top of the loop, respectively. Our results and techniques can be easily generalized to cover the case when w(ξ) has a finite number of loops. Steady state velocity profiles are constructed as follows: First solve w(u(x)) = f x for each x ∈ [0,1], where u = −v_x. This equation admits a unique solution for 0 ≤ f x < m or f x > M, and three solutions for m < f x < M; let u(x), 0 ≤ x ≤ 1, be a solution. Setting σ(x) = g(−u(x)) and v(x) = ∫ₓ¹ u(s) ds, the pair (v(x), σ(x)) satisfies (1.6) and (1.3) for a.e. x ∈ [0,1] and gives rise to a steady state. Clearly, if f < m there is a unique smooth steady state; if m < f < M, there is a unique smooth velocity profile and a multitude of profiles with kinks; finally, if f > M, all steady state velocity profiles have kinks. An example of a velocity profile with kinks is shown in Fig. 2.
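To make the construction concrete, here is a small Python sketch (ours, not the paper's: the constitutive function, the forcing, and the kink location below are all hypothetical illustrative choices). It selects the low strain-rate branch near the centerline and the high strain-rate ("spurt") branch near the wall:

```python
import numpy as np
from scipy.optimize import brentq

# Illustrative data: g(xi) = A*xi/(1+xi^2) is smooth, odd, xi*g(xi) > 0;
# for A > 8 the function w(xi) = xi + g(xi) is non-monotone, with one loop.
A = 20.0
def g(xi): return A * xi / (1.0 + xi ** 2)
def w(xi): return xi + g(xi)

# Ends of the decreasing branch: w'(xi) = 0 <=> s^2 + (2-A)s + (1+A) = 0, s = xi^2.
s1, s2 = sorted(np.roots([1.0, 2.0 - A, 1.0 + A]).real)
xi1, xi2 = np.sqrt(s1), np.sqrt(s2)
M_top, m_bot = w(xi1), w(xi2)        # top and bottom levels of the loop

f = 10.0                             # pressure gradient, with m_bot < f < M_top
x_star = 0.95                        # hypothetical kink location: f*x_star > m_bot

def root(y, lo, hi):
    """Solve w(u) = y for u in the bracket [lo, hi]."""
    return brentq(lambda u: w(u) - y, lo, hi)

x = np.linspace(0.0, 1.0, 2001)
u = np.array([root(f * xi, 0.0, xi1) if xi <= x_star   # low strain-rate branch
              else root(f * xi, xi2, 100.0)            # spurt branch
              for xi in x])

sigma = g(-u)                                # steady non-Newtonian stress
dx = x[1] - x[0]
v = np.flip(np.cumsum(np.flip(u))) * dx      # v(x) ~ int_x^1 u(s) ds, v(1) ~ 0
```

Varying x_star over (m_bot/f, 1) reproduces the multitude of kinked profiles mentioned above; for f < m_bot the low branch is the only choice, and the profile is smooth.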
Problem (1.1)-(1.4) captures certain key features of a class of viscoelastic models that have been proposed to explain the occurrence of "spurt" phenomena in non-Newtonian flows. Specifically, for a particular choice of the function g in (1.2), the system under study has the same steady states as the more realistic systems studied in [6] and [7]; the latter, derived from a three-dimensional setting that is restricted to one-dimensional shearing motions, produce non-monotone steady shear stress vs. strain-rate relations of the type shown in Fig. 1. The phenomenon of spurt was apparently first observed by Vinogradov et al. [13] in the flow of highly elastic and very viscous non-Newtonian fluids through capillaries or slit-dies. It is associated with a sudden increase in the volumetric flow rate occurring at a critical stress that appears to be independent of the molecular weight. It has been proposed by Hunter and Slemrod [5], using techniques of conservation laws, and more recently by Malkus, Nohel, and Plohr [6], [7], using numerical simulation and extensive analysis of suitable approximating dynamic problems (motivating the present work), that spurt phenomena may be explained by differential constitutive laws that lead to a non-monotone relation of the total steady shear stress versus the steady shear strain rate. In this framework, the increase of the volumetric flow rate corresponds to jumps in the strain rate when the driving pressure gradient exceeds a critical value. We conjecture that our stability result discussed in Sec. 3 below can be extended to these more complex problems.
2. Preliminaries. In this section, we discuss preliminary results that are essential for presenting the stability result; further details and proofs can be found in [8].

A. Well-Posedness. We use abstract techniques of Henry [4] to study global existence of classical solutions for smooth initial data of arbitrary size, and also existence of almost classical, strong solutions with discontinuities in the initial velocity gradient and in the stress components. The latter result allows one to prescribe discontinuous initial data of the same type as the discontinuous steady states studied in this paper. Existence results of this type are established in [8] for a general class of problems that serve as models for shearing flows of non-Newtonian fluids; the total stress is decomposed into a Newtonian contribution and a finite number of stress relaxation components, viewed as internal variables that evolve in accordance with differential constitutive laws frequently used by rheologists (for discussion, formulation and results, see [11], [7], and also the Appendix in [8]). Existence of classical solutions may also be obtained by using an approach based on the Leray-Schauder fixed point theorem (cf. Tzavaras [12] for existence results for a related system). Other existence results were obtained by Guillope and Saut [2], and for models in more than one space dimension in [3]. As a consequence of the general theory, one obtains two global existence results (see Theorems 3.1, 3.2, 3.5, and Corollary 3.4 in [8]):

(i) the existence of a unique classical solution (v(x,t), σ(x,t)) of (1.1)-(1.5) on [0,1] × [0,∞) for initial data (v₀(x), σ₀(x)), not restricted in size, that satisfy: S₀(x) := v₀ₓ(x) + σ₀(x) + f x ∈ H^s[0,1] for some s > 3/2, with S₀(0) = 0, v₀(1) = S₀ₓ(1) = 0, and σ₀ ∈ C¹[0,1], where H^s denotes the usual interpolation space;

(ii) the existence and uniqueness of a strong, "semi-classical" solution of (1.1)-(1.5), obtained by a different choice of function spaces, for initial data (v₀(x), σ₀(x)) that satisfy: S₀(x) ∈ H¹[0,1] with S₀(0) = 0 and v₀(1) = S₀ₓ(1) = 0.

Result (ii) yields solutions in which σ and v_x may be discontinuous in x, but S_x and v_t are continuous, and σ is C¹ as a function of t for every x. Thus all derivatives appearing in the system may be interpreted in a classical sense as long as the equation is kept in conservation form. A result of this type was obtained by Pego in [10] for a different problem by a similar argument.
B. A Priori Bounds and Invariant Sets. To discuss global boundedness of solutions, let (σ, v) be a classical solution on an arbitrary time interval, and note that the system (1.1)-(1.5) is endowed with the differential energy identity

(2.1)  d/dt { ∫₀¹ ½ v² dx + ∫₀¹ [W(v_x) + f x v_x] dx } + ∫₀¹ [v_t² + v_{xt}²] dx = 0.

The function W(ξ) := ∫₀^ξ w(ζ) dζ plays the role of a stored energy function; by the assumption on g, W is not convex. This fact is the main obstacle in the analysis of stability.

(i) Boundedness of S. Since ξ g(ξ) > 0, it follows that ∫₀^ξ g(ζ) dζ ≥ 0 for ξ ∈ R, and W(ξ) satisfies the lower bound

(2.2)  W(ξ) ≥ ½ ξ².

Standard energy estimates based on (2.1) and (2.2), coupled with integration of (1.1) with respect to x, yield a global a priori bound for S:

(2.3)  |S(x,t)| ≤ C,   0 ≤ x ≤ 1,   0 ≤ t < ∞,

where C is a constant depending only on the data but not on t.
(ii) Invariant Sets for a Related ODE. Control of S enables us to take advantage of the special structure of Eq. (1.2) and determine suitable invariant regions. For this purpose, it is convenient to introduce the quantity s := σ + f x. Then Eqs. (1.2), (1.3) readily imply that s satisfies

(2.4)  ṡ + s + g(s − S) = f x.

For a fixed x, it is convenient to view Eq. (2.4) as an ODE with forcing term S(x,·). Also, observe that at a steady state (σ, v_x) one has ṡ = 0, and consequently

s = −v_x

is an equilibrium solution of (2.4) (with S ≡ 0). If S ≡ 0 in (2.4), the hypothesis concerning g implies that the ODE admits positively invariant intervals for each fixed x. We sketch how this property is preserved in the presence of the a priori control of S provided by (2.3); more delicate bounds are essential in the proof of stability in Sec. 3. To fix ideas, let t₀ > 0 be given, and assume that

(2.5)  |S(x,t)| ≤ ρ,   0 ≤ x ≤ 1,   0 ≤ t ≤ t₀,

for some ρ > 0. For x fixed in [0,1], we use the notation s(t) := s(x,t) and conveniently rewrite (2.4) as

(2.6)  ṡ + w(s − S(t)) = f x − S(t).
We state the following result on invariant intervals; its proof is obvious.

Proposition 2.1. Let S satisfy the uniform bound (2.5) for 0 ≤ t ≤ t₀. For fixed x, 0 ≤ x ≤ 1, assume there exist s₋, s₊ such that s₋ < s₊ and

(2.7)  w(s₋ − A) < f x − A   for all |A| ≤ ρ,
(2.8)  w(s₊ − A) > f x − A   for all |A| ≤ ρ.

Then the compact interval [s₋, s₊] is positively invariant for the ODE (2.6) on the time interval 0 ≤ t ≤ t₀.
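A quick numerical sanity check of Proposition 2.1 (our illustration, not part of the paper, reusing the model w from the earlier sketch): verify (2.7)-(2.8) by brute force over the shifts |A| ≤ ρ and then integrate (2.6) with a forcing of amplitude ρ.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Same illustrative w as in the previous sketch.
A = 20.0
w = lambda xi: xi + A * xi / (1.0 + xi ** 2)

fx, rho = 10.0, 0.3                    # the value f*x and the bound rho of (2.5)
S = lambda t: rho * np.sin(3.0 * t)    # any continuous forcing with |S| <= rho

# Brute-force verification of (2.7)-(2.8) over a grid of shifts |a| <= rho:
a = np.linspace(-rho, rho, 201)
s_minus, s_plus = -20.0, 20.0
assert np.all(w(s_minus - a) < fx - a)     # (2.7)
assert np.all(w(s_plus - a) > fx - a)      # (2.8)

# Integrate (2.6): s' = fx - S(t) - w(s - S(t)); trajectories starting in
# [s_minus, s_plus] must remain there (Proposition 2.1).
rhs = lambda t, s: fx - S(t) - w(s - S(t))
for s0 in (s_minus, 0.0, s_plus):
    sol = solve_ivp(rhs, (0.0, 50.0), [s0], max_step=0.05)
    assert s_minus - 1e-6 <= sol.y.min() and sol.y.max() <= s_plus + 1e-6
print("invariance verified for [%g, %g]" % (s_minus, s_plus))
```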
Invariant intervals are generated by solution sets of the inequalities (2.7) and (2.8), as functions of ρ and x. In particular, since lim_{ξ→±∞} w(ξ) = ±∞, given any x and ρ one easily determines s₀₊ large and positive and s₀₋ large and negative such that if s₋ < s₀₋ and s₊ > s₀₊, then s₋ and s₊ satisfy (2.7) and (2.8), respectively, and the compact interval [s₋, s₊] is positively invariant for the ODE (2.6). More discriminating choices of invariant intervals occur if one restricts attention to small values of ρ; the analysis becomes more delicate. For a function w(ξ) with a single loop, the most interesting case arises when the levels f x − ρ, f x, and f x + ρ each intersect the graph of w(ξ) at three distinct points. Referring to Fig. 3, the abscissae of the points of intersection are denoted by (α₋, β₋, γ₋), (α₀, β₀, γ₀), and (α₊, β₊, γ₊), respectively. It turns out that for x fixed and ρ small enough, there are discriminating invariant intervals of the type shown in Fig. 3. However, in contrast to the large invariant intervals discussed in the previous paragraph, the more discriminating ones degenerate as we approach the top or bottom of the loop (when x varies). For the stability of discontinuous steady states in Sec. 3, it is crucial to construct compact invariant intervals that are of uniform length (see Corollary 2.2 in [8]). The latter is accomplished by taking ρ sufficiently small and by avoiding the top and bottom of the loop in Fig. 3. Of specific interest is the situation in which s(x) is a piecewise smooth solution of

(2.9)  w(s(x)) = f x,
Fig. 3: Invariant intervals.

defined on [0,1] and admitting jump discontinuities at a finite number of points x₁, …, xₙ in [0,1]. Recall that s(x) is a steady solution of the ODE (2.6)
corresponding to the steady state (σ, v_x). In addition, suppose that s(x) takes values in the monotone increasing parts of the curve w(ξ) and that it avoids jumping at the top or bottom of the loop, i.e.,

(2.10)  w′(s(x)) ≥ c₀ > 0,

for some constant c₀. A delicate construction in [8] yields compact, positively invariant intervals for (2.6), of uniform length, centered around s(x) at each x ∈ [0,1] \ {x₁, …, xₙ}.

(iii) Boundedness of σ and v_x. As an easy application of Sec. 2(ii), choose a compact interval [s₋, s₊] that is positively invariant for (2.6) and valid for all x ∈ [0,1]. By virtue of the global bound (2.3) satisfied by S(x,t), we conclude that

(2.11)  |s(x,t)| ≤ C,   0 ≤ x ≤ 1,   t ≥ 0,

which, in turn, using (1.3) and (2.11), implies

(2.12)  |v_x(x,t)| ≤ C,   0 ≤ x ≤ 1,   t ≥ 0,

for some constant C depending only on the data. The definition of s also implies that σ is uniformly bounded.
(iv) Convergence to steady states. Let (v(x,t), σ(x,t)) be a classical solution of (1.1)-(1.5) defined on [0,1] × [0,∞). We discuss the behavior of this solution as t → ∞. The first result indicates that S = σ + v_x + f x converges to its equilibrium value.

Proposition 2.2. Under the assumptions of the existence theorems,

(2.13)  lim_{t→∞} S(x,t) = 0,   uniformly for x ∈ [0,1].

The proof is a consequence of Sobolev embedding applied to the following a priori estimate, derived from the system (1.1)-(1.4) by standard techniques:

∫₀¹ S_x²(x,t) dx ≤ C,   0 ≤ t < ∞,

where C is a positive constant depending only on the data. Use of (2.13) enables us to identify the limiting behavior of solutions of (2.4) as t → ∞. The following result is analogous to Lemma 5.5 in Pego [10]; its elementary proof is given in Lemma 4.2 of [8].

Proposition 2.3. Let s(x,·) ∈ C¹[0,∞) be the solution of (2.6), where S(x,·) is continuous and satisfies (2.13), 0 ≤ x ≤ 1. Then s(x,·) converges to s∞(x) as t → ∞, and s∞(x) satisfies

(2.17)  w(s∞(x)) = f x.
In view of the shape of w(ξ) = ξ + g(ξ), equation (2.17) has one solution for 0 ≤ f x < m or f x > M, and three solutions for m < f x < M. Let (v(x,t), σ(x,t)) be a classical solution of (1.1)-(1.4) on [0,1] × [0,∞). Recalling the definition of s, Proposition 2.3 implies

(2.18)  lim_{t→∞} σ(x,t) = σ∞(x) := s∞(x) − f x.

Also, combining (1.1), (2.13) and (2.18) yields

(2.19)  lim_{t→∞} v_x(x,t) = lim_{t→∞} (S(x,t) − s(x,t)) = −s∞(x),

and

(2.20)  σ∞(x) + lim_{t→∞} v_x(x,t) + f x = 0.

Finally, noting that

(2.21)  v(x,t) = −∫ₓ¹ v_x(y,t) dy,

we conclude that v∞(x) is Lipschitz continuous and satisfies

(2.22)  lim_{t→∞} v(x,t) = v∞(x) := ∫ₓ¹ s∞(y) dy.

We conclude that any solution of (1.1)-(1.4) converges to one of the steady states. If 0 ≤ f < m, then there is a unique smooth steady state, which is the asymptotic limit of any solution. However, if m < f, then there are multiple steady states and thus a multitude of possible asymptotic limits. In Sec. 3 we identify the stable steady states. Also note from (2.20) that in a discontinuous steady state, the discontinuities in σ and v_x cancel.

Observe that in case w(ξ) is monotone, the above arguments yield that every solution converges to the unique steady state. Moreover, the above results can be routinely generalized to the case where the function w(ξ) has multiple loops, provided the graph of w has no horizontal segments.

3. Stability of Steady States.
The purpose is to study the stability of velocity profiles with kinks. To fix ideas, let (v(x), σ(x)) be a steady state of (1.1)-(1.3) such that v(x) has a finite number of kinks located at the points x₁, …, xₙ in (0,1); accordingly, v_x(x) and σ(x) have a finite number of jump discontinuities at the same points. Recall that, if we set

(3.1)  u(x) = −v_x(x),   x ∈ [0,1], x ≠ x₁, …, xₙ,

then w(u(x)) = f x and σ(x) = g(−u(x)).

Given smooth initial data (v₀(x), σ₀(x)), there is a unique smooth solution (v(x,t), σ(x,t)) of (1.1)-(1.4). As t → ∞, the solution converges to one of the steady states, not a priori identifiable. We now restrict attention to initial data that are close to (v(x), σ(x)), except on the union U of small subintervals centered around the points x₁, …, xₙ. U can be thought of as the location of transition layers separating the smooth branches of the steady state. Roughly speaking, it turns out that the steady state is "asymptotically stable" under smooth perturbations that are close in energy, provided (v(x), σ(x)) takes values in the monotone increasing parts of w(ξ); the stable solutions are local minimizers of an associated energy functional (see (3.8) below). The interesting problem of finding the domain of attraction of a stable steady solution appears to be a difficult task. Our main result is:
Theorem 3.1. Let (v(x), σ(x)) be a steady state solution as described above, satisfying

(3.2)  w′(v_x(x)) ≥ c₀ > 0,   x ∈ [0,1], x ≠ x₁, …, xₙ,

for some positive constant c₀. If the measure of U is sufficiently small, there is a positive constant δ₀ depending on U such that, if δ < δ₀, then for any initial data (v₀(x), σ₀(x)) satisfying

sup |S₀(x)| < δ,   ∫₀¹ v_t²(x,0) dx < ½ δ²,

and |v₀ₓ(x) − v_x(x)| < δ outside U, …

To insure that the construction produces the desired property, this part of the analysis makes crucial use of invariant intervals of the ODE (2.6) that are of uniform length, as discussed in Sec. 2(ii) above.
REFERENCES

1. G. Andrews and J. Ball, "Asymptotic Stability and Changes of Phase in One-Dimensional Nonlinear Viscoelasticity," J. Diff. Eqns. 44 (1982), pp. 306-341.
2. C. Guillope and J.-C. Saut, "Global Existence and One-Dimensional Nonlinear Stability of Shearing Motions of Viscoelastic Fluids of Oldroyd Type," Math. Mod. Numer. Anal., 1990. To appear.
3. C. Guillope and J.-C. Saut, "Existence Results for Flow of Viscoelastic Fluids with a Differential Constitutive Law," Math. Mod. Numer. Anal., 1990. To appear.
4. D. Henry, Geometric Theory of Semilinear Parabolic Equations, Lecture Notes in Mathematics, vol. 840, Springer-Verlag, New York, 1981.
5. J. Hunter and M. Slemrod, "Viscoelastic Fluid Flow Exhibiting Hysteretic Phase Changes," Phys. Fluids 26 (1983), pp. 2345-2351.
6. D. Malkus, J. Nohel, and B. Plohr, "Dynamics of Shear Flow of a Non-Newtonian Fluid," J. Comput. Phys., 1989. To appear.
7. D. Malkus, J. Nohel, and B. Plohr, "Analysis of New Phenomena in Shear Flow of Non-Newtonian Fluids," in preparation, 1989.
8. J. Nohel, R. Pego, and A. Tzavaras, "Stability of Discontinuous Steady States in Shearing Motions of Non-Newtonian Fluids," Proc. Roy. Soc. Edinburgh, Series A, 1989. Submitted.
9. A. Novick-Cohen and R. Pego, "Stable Patterns in a Viscous Diffusion Equation," preprint, 1989. Submitted.
10. R. Pego, "Phase Transitions in One-Dimensional Nonlinear Viscoelasticity: Admissibility and Stability," Arch. Rational Mech. and Anal. 97 (1987), pp. 353-394.
11. M. Renardy, W. Hrusa, and J. Nohel, Mathematical Problems in Viscoelasticity, Pitman Monographs and Surveys in Pure and Applied Mathematics, Vol. 35, Longman Scientific & Technical, Essex, England, 1987.
12. A. Tzavaras, "Effect of Thermal Softening in Shearing of Strain-Rate Dependent Materials," Arch. Rational Mech. and Anal. 99 (1987), pp. 349-374.
13. G. Vinogradov, A. Malkin, Yu. Yanovskii, E. Borisenkova, B. Yarlykov, and G. Berezhnaya, "Viscoelastic Properties and Flow of Narrow Distribution Polybutadienes and Polyisoprenes," J. Polymer Sci., Part A-2 10 (1972), pp. 1061-1084.
A NUMERICAL STUDY OF SHOCK WAVE REFRACTION AT A CO₂/CH₄ INTERFACE†

ELDRIDGE GERRY PUCKETT‡

Abstract. This paper describes the numerical computation of a shock wave refracting at a gas interface. We study a plane shock in carbon dioxide striking a plane gas interface between the carbon dioxide and methane at angle of incidence αᵢ. The primary focus here is the structure of the wave system as a function of the angle of incidence for a fixed (weak) incident shock strength. The computational results agree well with the shock polar theory for regular refraction, including accurately predicting the transition between a reflected expansion and a reflected shock. They also yield a detailed picture of the transition from regular to irregular refraction and the development of a precursor wave system. In particular, the computations indicate that for the specific case studied the precursor shock weakens to become a band of compression waves as the angle of incidence increases in the irregular regime.

Key words. shock wave refraction, conservative finite difference methods, Godunov methods, compressible Euler equations

AMS(MOS) subject classifications. 35L65, 65M50, 76L05
1. The Problem. In this work we consider a plane shock wave striking a plane gas interface at angle of incidence 0° < αᵢ < 90°. This is a predominantly two dimensional, inviscid phenomenon, which we model using the two dimensional compressible Euler equations, with the incident shock wave and gas interface initially represented by straight lines.

Figure 1: A diagram of the problem.

†Work performed under the auspices of the U.S. Department of Energy at the Lawrence Livermore National Laboratory under contract number W-7405-ENG-48 and partially supported by the Applied Mathematical Sciences subprogram of the Office of Energy Research under contract number W-7405-Eng-48 and the Defense Nuclear Agency under IACRO 88-873.
‡Applied Mathematics Group, Lawrence Livermore National Laboratory, Livermore, CA 94550.
A diagram of the problem is shown in figure 1. The shock wave travels from right to left in the incident gas, striking the interface from the right. This causes a shock wave to be transmitted into the transmission gas and a reflected wave to travel back into the incident gas. The reflected wave can be a shock, an expansion, or a band of compression waves. Depending on the strength of the incident shock, the angle of incidence, and the densities and sound speeds of the two gases, these three waves may appear in a variety of distinct configurations. In the simplest case the incident, transmitted, and reflected waves all meet at a single point on the interface and travel at the same speed along the interface. This is known as regular refraction. A diagram depicting regular refraction appears in figure 2.
Figure 2: Regular refraction.

When the sound speed of the incident gas is less than that of the transmission gas, the refraction is called slow-fast. In this case the transmitted wave can break away from the point of intersection with the incident and reflected waves and move ahead of them, forming what is known as a precursor. The incident shock can also form a stem between its intersection with the interface and its intersection with the reflected wave, similar to the well known phenomenon of Mach reflection. When the sound speed of the incident gas is greater than that of the transmission gas, the refraction is called fast-slow. In this case the transmitted shock will lean back toward the interface. In this paper we restrict ourselves to the study of a specific sequence of slow-fast refractions. See Colella, Henderson, & Puckett [1] for a description of our work with fast-slow refraction. For the purposes of modeling this phenomenon on a computer, we assume the two
gases are ideal and that each gas satisfies a γ-law equation of state,

p = A ρ^γ.

Here p is the pressure, ρ is the density, γ is the ratio of specific heats, and the coefficient A depends on the entropy but not on p and ρ. Note that γ is constant for each gas, but different gases will have different γ. Given these assumptions the problem depends on the following four parameters: the angle of incidence αᵢ, the ratio of molecular weights of the two gases μᵢ/μₜ, the ratio of the γ for the two gases γᵢ/γₜ, and the inverse incident shock strength ξᵢ = p₀/p₁, where p₀ (respectively p₁) is the pressure on the upstream (respectively downstream) side of the shock. In this paper we consider the case when the incident gas is CO₂, the transmission gas is CH₄, the inverse incident shock strength is ξᵢ = 0.78, and only the angle of incidence αᵢ is allowed to vary. Thus γᵢ = 1.288, γₜ = 1.303, μᵢ = 44.01, μₜ = 16.04, and the incident shock Mach number is 1.1182.
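As a consistency check (ours, not in the paper), the quoted Mach number follows from the normal-shock pressure relation for a γ-law gas applied to the incident shock:

\[
M_i^2 = 1 + \frac{\gamma_i + 1}{2\gamma_i}\left(\frac{1}{\xi_i} - 1\right)
      = 1 + \frac{2.288}{2.576}\left(\frac{1}{0.78} - 1\right) \approx 1.2505,
\qquad M_i \approx 1.118.
\]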
For this choice of parameters we find three distinct wave systems, depending on αᵢ. These are: (i) regular refraction with a reflected expansion, (ii) regular refraction with a reflected shock, and (iii) irregular refraction with a transmitted precursor. These wave systems appear successively, in the order listed, as αᵢ increases monotonically from head-on incidence at αᵢ = 0° to glancing incidence at αᵢ = 90°. In this paper we examine this sequence of wave patterns computationally, much as one would design a series of shock tube experiments. This particular case has been extensively studied both experimentally and theoretically by Abd-el-Fattah & Henderson [2]. This has enabled us to compare our results with their laboratory experiments, thereby providing us with a validation of the numerical method. See Colella, Henderson, & Puckett [1, 3] for a detailed comparison of our numerical results with the experiments of Abd-el-Fattah & Henderson. Once we have validated the numerical method in this manner we can use it to study the wave patterns in a detail heretofore impossible due to the limitations of schlieren photography and other experimental flow visualization techniques.
Early work on the theory of regular refraction was done by Taub [4] and Polachek & Seeger [5]. Subsequently Henderson [6] extended this work to irregular refractions, although a complete theory of irregular refraction still remains to be found. More recently, Henderson [7, 8] has generalized the definition of shock wave impedance given by Polachek & Seeger for the refraction of normal shocks. Experiments with shock waves refracting in gases have been done by Jahn [9], Abd-el-Fattah, Henderson & Lozzi [10], and Abd-el-Fattah & Henderson [2, 11]. More recently, Reichenbach [12] has done experiments with shocks refracting at thermal layers, and Haas & Sturtevant [13] have studied refraction by gaseous cylindrical and spherical inhomogeneities. Earlier, Dewey [14] reported on precursor shocks from large scale explosions in the atmosphere. Some multiphase experiments have also been done: Sommerfeld [15] has studied shocks refracting from pure air into air containing dust particles, while Gvozdeva et al. [16] have experimented with shocks passing from air into a variety of foam plastics. Recent numerical work on shock wave refraction includes Grove & Menikoff [17], who examined anomalous refraction at interfaces between air and water, and Picone et al. [18], who studied the Haas & Sturtevant experiments at Air/He and Air/Freon cylindrical and spherical interfaces. Fry & Book [19] have considered refraction at heated layers, while Glowacki et al. [20] have studied refraction at high speed sound layers and Sugimura, Tokita & Fujiwara [21] have examined refraction in a bubble-liquid system.
2. The Shock Polar Theory.

2.1 A Brief Introduction to the Theory. In this section we present a brief introduction to the theory of regular refraction. This theory is a straightforward extension of von Neumann's theory for regular reflection (von Neumann [22]) and is most easily understood in terms of shock polars. The theory is predicated on the observation that oblique shocks turn the flow. Consider a stationary oblique shock. If we call the angle by which the flow is turned δ (see figure 3), then δ is completely determined by the upstream state (p₀, ρ₀, u₀, v₀) and the shock strength p/p₀, where p denotes the post-shock pressure.

Figure 3: An oblique shock turns the flow velocity towards the shock.

For a γ-law gas the equation governing this relation has the form

(2.1)  tan δ = F(p/p₀; M_s, γ),

where M_s is the freestream Mach number upstream of the shock (e.g., see Courant & Friedrichs [23]). If we now allow the shock strength to vary and plot log(p/p₀) versus the turning angle δ, we obtain the graph shown in figure 4, commonly referred to as a shock polar.
Figure 4: A shock polar.

Recall that, by definition, in regular refraction the incident, transmitted, and reflected waves all meet at a single point on the interface. We now assume that these waves are locally straight lines in a neighborhood of this point and (for the moment) that the reflected wave is a shock. Each of these shocks will turn the flow by some amount, say δᵢ, δₜ, and δᵣ respectively (figure 5), and each of these angles will satisfy (2.1) with the appropriate choice of M_s, γ, and p/p₀.
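A polar like figure 4 is easy to reproduce numerically; the sketch below is ours (it assumes the standard γ-law oblique-shock relations, which we take to be the content of (2.1), parametrized by the shock angle β) and plots log(p/p₀) against δ for the incident-gas data of Section 1:

```python
import numpy as np
import matplotlib.pyplot as plt

def shock_polar(M, gamma, n=400):
    """Turning angle delta (degrees) and log pressure ratio along the polar
    of a gamma-law gas with upstream (freestream) Mach number M.

    Standard oblique-shock relations, with shock angle beta:
      p/p0   = 1 + 2*gamma/(gamma+1) * (M^2 sin^2(beta) - 1),
      tan(d) = 2*cot(beta)*(M^2 sin^2(beta) - 1)/(M^2*(gamma+cos(2 beta))+2).
    """
    beta = np.linspace(np.arcsin(1.0 / M), np.pi / 2, n)  # sonic to normal
    msb = (M * np.sin(beta)) ** 2
    p_ratio = 1.0 + 2.0 * gamma / (gamma + 1.0) * (msb - 1.0)
    tan_d = (2.0 / np.tan(beta)) * (msb - 1.0) \
            / (M ** 2 * (gamma + np.cos(2.0 * beta)) + 2.0)
    return np.degrees(np.arctan(tan_d)), np.log(p_ratio)

# Incident polar for CO2 at the Mach number quoted in Section 1:
delta, logp = shock_polar(1.1182, 1.288)
plt.plot(delta, logp)
plt.plot(-delta, logp)      # the mirror branch (flow turned the other way)
plt.xlabel('turning angle delta (degrees)')
plt.ylabel('log(p/p0)')
plt.show()
```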
Figure 5: The shock polar theory for regular refraction is based on the fact that the flow must be parallel to the gas interface both above and below the intersection of the shocks. Thus δₜ = δᵢ + δᵣ. All shocks are assumed to be locally straight in a neighborhood of this intersection.
Furthermore, since the interface is a contact discontinuity, we must have

(2.2)  pₜ = p₂,
(2.3)  δₜ = δᵢ + δᵣ,

where the latter condition follows from the fact that the flow is parallel to the interface both upstream and downstream of the intersection of the incident, transmitted and reflected shocks. Note that the interface is, in general, deflected forward downstream of this intersection.

The problem now is as follows. Given the upstream state on both sides of the interface, (p₀, ρ₀ᵢ, u₀ᵢ, v₀ᵢ) and (p₀, ρ₀ₜ, u₀ₜ, v₀ₜ), the inverse incident shock strength ξᵢ, and the angle of incidence αᵢ, determine all other states. Let (p₁, ρ₁, u₁, v₁) denote the state downstream of the incident shock (upstream of the reflected shock), and let (pₜ, ρₜ, uₜ, vₜ) and (p₂, ρ₂, u₂, v₂) denote the states downstream of the transmitted and reflected shocks, respectively. For certain values of the given data this information is sufficient to completely determine all of the unknown states, although not necessarily uniquely. For example, one can derive a 12th degree polynomial in the transmitted shock strength pₜ/p₀ from (2.1)-(2.3), which for regular refraction has as one root the observed transmitted shock strength (Henderson [8]). The other roots either do not appear in laboratory experiments or are complex, and hence not physically meaningful. Note that knowledge of the transmitted shock strength pₜ/p₀ is sufficient to determine all of the other states.
Figure 6: Each intersection of the transmitted shock polar and the reflected shock polar represents a possible wave configuration for regular refraction.

The physically meaningful roots of this polynomial may also be found by plotting the shock polars for the three waves in a common coordinate system. An example is shown in figure 6. Note that we have scaled the reflected shock strength p₂/p₁ by p₁/p₀ and translated δᵣ by δᵢ. Thus the plot of the reflected shock polar is given by log(p₂/p₀) = log(p₂/p₁) + log(p₁/p₀) versus δᵣ + δᵢ. This causes the base of the reflected shock polar (p₂ = p₁) to coincide with the image of the incident shock on the incident shock polar, (δᵢ, p₁/p₀), labeled i in the figure. In this shock polar diagram any intersection of the transmitted and reflected shock polars represents a physically meaningful solution to the problem, i.e., a pair of downstream states (pₜ, ρₜ, uₜ, vₜ) and (p₂, ρ₂, u₂, v₂) such that all of the states satisfy the appropriate shock jump conditions and the boundary conditions (2.2)-(2.3). Note that more than one such intersection may exist. For example, in figure 6 there are two, labeled A₁ and A₂. It is also the case that for some values of the initial data (p₀, ρ₀ᵢ, u₀ᵢ, v₀ᵢ), (p₀, ρ₀ₜ, u₀ₜ, v₀ₜ), ξᵢ and αᵢ, the transmitted and reflected shock polars do not intersect. It is interesting to inquire whether the existence of such an intersection exactly coincides with the occurrence of regular refraction in laboratory experiment. We will discuss this point further below.
We can extend the shock polar theory to include reflected waves which are centered expansions by adjoining to the reflected shock polar the appropriate rarefaction curve for values of p₂ < p₁. Let q = √(u² + v²) denote the magnitude of the flow velocity, c the sound speed, and define the Mach angle μ by μ = sin⁻¹(1/M), where M = q/c is the local Mach number of the flow. Then this rarefaction curve is given by

δ₂ = δ₁ ± ∫_{p₂}^{p₁} (cos μ / (q c ρ)) dp

(see Grove [24]). This curve is sometimes referred to as a rarefaction polar. The sign determines which branch of the shock polar is being extended. In figure 6 the branch corresponding to a negative turning angle δᵣ has been plotted with a dotted line and labeled with a c. The intersection of this curve with the transmitted shock polar has been labeled ε₁. In some cases there may be two intersections. Each intersection represents a wave system in which the state (p₁, ρ₁, u₁, v₁) is connected to the state (p₂, ρ₂, u₂, v₂) across a centered rarefaction. Such systems are also found to occur in laboratory experiments (e.g., Abd-el-Fattah & Henderson [2]).

2.2 A Shock Polar Sequence. In this section we present the shock polar diagrams for the CO₂/CH₄ refraction with ξᵢ = 0.78. The
data was chosen as specified in Section 1, with only the angle αᵢ allowed to vary. In figure 7 we present four shock polar diagrams. These correspond to the two types of regular refraction, namely regular refraction with a reflected expansion (RRE) and regular refraction with a reflected shock (RRR), the transition between these two, and the transition between regular and irregular refraction. The polars are labeled Mᵢ, Mₜ, and Mᵣ, which represent the freestream Mach numbers upstream of the incident, transmitted, and reflected waves, respectively. To the right of each shock polar diagram is a small diagram of the wave system, in which the initial interface is denoted by an m and the deflected interface by a D.
In each of the shock polar diagrams the tops of the incident and reflected polars have not been plotted, in order to allow us to focus on the intersections which are of interest. As stated above, the image of the incident shock on the incident shock polar is labeled i. This point corresponds to the base of the reflected shock polar. The intersection of the incident shock polar with the transmitted shock polar has been labeled A₁.

Figure 7a: αᵢ = 27°.

Figure 7b: αᵢ = 32.0592°.

In figure 7a) we plot the polars for αᵢ = 27°. Here we have only plotted the reflected rarefaction polar c and its intersection with the transmitted shock polar ε₁, not the reflected shock polar. There still exist two solutions A₁ and A₂ with a reflected shock, but ε₁ is the solution observed in the laboratory [2]. If we now continuously increase the angle αᵢ, the points i and A₁ move towards each other until they coincide at αᵢ ≈ 32.0592°. Here there is no need for a reflected shock or expansion, since δᵢ = δₜ. The shock polar diagram for this value of αᵢ

is a gradient, we can take the base solution as p₀ = p₀(x) = φ, τ₀ = τ₀(x) = V(φ) and v₀ = 0. Introducing u = (p − p₀, v)ᵀ as the set of variables, the hypotheses of Theorem 3.1 are satisfied and the system is isotropic. Actually, strict hyperbolicity fails except when N = 1, but the acoustical modes are simple with c± = ±c₀. See Remark 3.7 and Example 2.1.

4. The basic expansion (single wavefront case). In this section we will consider a small variation of the basic expansion in §3 that allows us to deal
with the propagation of single wavefronts, in particular weak shocks. Basically, the only difference lies in which requirements we enforce on the θ dependence of the expansion. Here we remove the oscillatory condition and replace it by the requirement that definite limits should be achieved as θ → ±∞.¹⁰ Unfortunately, in this case it is generally not possible to impose a sublinearity condition, as in (3.7), on u₂, and the expansion is generally non-uniform, which (as we will see) forces the need to use matched asymptotic expansion techniques (see [KC]) to complete the expansion. For a discussion of the meaning of this non-uniformity, see Remark 4.3. We start again with the ansatz (3.1), and up to equation (3.18) everything is the same as in §3. The only difference is in the interpretation of equation (3.12). Since the mean-zero condition is now meaningless, the splitting into v and σ is now made unique

¹⁰Obviously, this is the natural condition in a situation where a narrow transition layer (shock region) connects two different states.
by requiring (see Remark 3.4 for notation) ℓ₀ · A₀ · v = 0. Thus σ r₀ includes the full corresponding component of u₁, not just the oscillatory part as in (3.12). We also introduce

(4.1)  σ± = lim_{θ→±∞} σ,   v± = lim_{θ→±∞} v,

so that + and − indicate limits ahead (θ > 0) and behind (θ < 0) of the wave.
Now, clearly, the arguments leading from (3.18) to (3.19) and so on do not apply. We consider three cases:

CASE 1. σ₊ = σ₋. This case is actually a particular instance of the situation considered in §3, as we can see by redefining σ and v as in (4.2). Then we are in the situation of §3 when in (3.6) f₀ = 0. In this case the expansion will be uniform (u₂ bounded) provided σ_new is integrable; otherwise it will be semi-uniform, with u₂ merely sublinear in θ.

CASE 2. σ₊ ≠ σ₋, and (4.3) holds. This is a rather exceptional situation that can occur, for example, if all the Aⱼ, 0 ≤ j ≤ N, are constant and φ = k·x − ωt is a plane wave with k and ω constant. Then r₀ is also constant, and (4.3) will apply if, for example, σ_Δ is constant; Case 1 corresponds to σ_Δ = 0. In this case the expansion can also be made uniform (u₂ bounded in θ) or semi-uniform (u₂ sublinear in θ), depending on the rate of convergence of σ to σ± as θ → ±∞. In any event, it is quite clear that the "solvability" conditions in (3.18) that guarantee a solution u₂ with the proper behavior are now the Limit Equations (4.4), where v± are defined in (4.1); these two systems are consistent because of (4.3),
together with the Layer Equation (4.5), where Γ = Γ(x, t; v) = −ℓ₀ · Σⱼ ∂_{xⱼ}(Aⱼ · v) and the rest is as in (3.21). Thus, with very small and obvious modifications, all the considerations in §3, from Remark 3.9 to the end, apply here. Obviously, as θ → ±∞, (4.5) yields

(4.6)  dσ±/dt + Δσ± = Γ,

which also follows from (4.4) upon multiplication by ℓ₀ and use of (4.1).

CASE 3. σ₊ ≠ σ₋ and (4.3) cannot be enforced. Clearly, from (3.18), (4.5) and thus (4.6) must still apply. On the other hand, (4.4) cannot be enforced, at least not on both the (+) and (−) sides. No other condition can be imposed on (3.18), and we are faced with the fact that ‖u₂‖ will grow secularly as |θ| when |θ| → ∞. Thus the expansion is nonuniform and is formally valid only in a narrow zone |εθ| = O(1).

It becomes elliptic as the dependent variables enter inside the parabola v² + 4u < 0. Thus the aforementioned linear manifolds (straight lines in this 2 × 2 case) cover only the domain v² + 4u ≥ 0. An important
consequence of this remark is that the formula (2.8) will not be usable for constructing entropies inside the elliptic zone. We shall thus restrict to the hyperbolic one. The Riemann invariants are clearly the two (real) roots w and z of the quadratic equation X² + vX − u = 0; we assume w ≥ z. Then E_a = (w − a)₊(a − z)₊ and, in analogy with (2.8), E_a is again an entropy, so that the following formulae define an entropy and its flux for any choice of a bounded measure m:

E = ∫ (w − a)(a − z) dm(a),
F = ∫ (v − a)(w − a)(a − z) dm(a),

the integrals extending over z ≤ a ≤ w. We shall keep in mind that v = −w − z. Defining the functions f, g, h, k (of w) as the antiderivatives of a^p dm(a), 0 ≤ p ≤ 3, we rewrite E and F as
E(w, z) = −wz f(w) + (w + z) g(w) − h(w),
F(w, z) = (w + z) wz f(w) − (w² + wz + z²) g(w) + k(w).
The definition of f, g, h and k allows us to introduce a function T(w) satisfying the following four equalities:

f = T‴,
g = wT‴ − T″,
h = w²T‴ − 2wT″ + 2T′,
k = w³T‴ − 3w²T″ + 6wT′ − 6T.
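These four relations are mutually consistent; differentiating each one (our check, not in the text) gives

\[
f' = T^{(4)},\qquad g' = w\,T^{(4)},\qquad h' = w^2\,T^{(4)},\qquad k' = w^3\,T^{(4)},
\]

so each is indeed an antiderivative of a^p dm(a), p = 0, 1, 2, 3, with dm(a) = T^{(4)}(a) da.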
So we get an infinite family of entropy-flux pairs, parametrized by a real function T of one variable:

E = (w − z) T″(w) − 2T′(w),
F = (z − w)(z + 2w) T″(w) + 6w T′(w) − 6T(w).

The above formula actually does not give all the entropies of our system, because w and z did not play the same role in our calculations. We need to supplement it by the symmetric formula depending on a real function S(z), so that the general entropy is

(2.10)  E = (w − z)(T″(w) − S″(z)) − 2T′(w) − 2S′(z).

It turns out that this formula gives all the entropies of the system, as can be checked by hand using the entropy equation

((2w + z) E_w)_z = ((2z + w) E_z)_w.
Conversely, (2.10) does not give any information about entropies in the elliptic zone, except in the case where S = T is a polynomial. Then the formula makes sense and defines a smooth function on the whole plane. For instance, the choice S = T = X⁵/10 gives E = v⁴ + 6v²u + 6u².

REMARK. In the formula (2.8), the special entropy (λᵢ·U − cᵢ)₊ appears to be an extremal one in the cone of convex entropies, so that the entropy associated to a measure m will be convex if and only if m is a non-negative measure. This fact relates to the scalar example, where the entropies |u − k| or (u − k)₊, used by Kruzkhov [27] and Tartar [4], generate all the convex functions of one variable by means of the integrals ∫ (u − k)₊ dm(k). Coming back to the more explicit formula (2.10), we get a convex entropy if and only if T = −S and T⁽⁴⁾ is non-negative. The condition T = −S comes from the fact that (w − a)₊(a − z)₊ is not convex, so that we have to apply formula (2.8).
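The polynomial example above can be checked directly (our computation): with T = S = X⁵/10, T′(X) = X⁴/2 and T″(X) = 2X³, so (2.10) gives

\[
E = (w - z)(2w^3 - 2z^3) - w^4 - z^4 = w^4 + z^4 - 2w^3 z - 2w z^3,
\]

while, using w + z = −v and wz = −u (w and z being the roots of X² + vX − u = 0),

\[
v^4 + 6v^2 u + 6u^2 = (w+z)^4 - 6(w+z)^2 wz + 6w^2 z^2 = w^4 + z^4 - 2w^3 z - 2w z^3,
\]

so the two expressions agree.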
The next subsection is devoted to the compensated compactness theory, applied to this system; we shall pay attention to the elliptic zone.

b) Compensated compactness with an elliptic zone. The compensated compactness theory is a tool which has been powerful in the study of the convergence of the artificial viscosity method for the Cauchy problem

u_t + f(u)_x = ε u_xx.
Figure 4.8b: The evolution of y_pp(p, t) at the same times as in Figure 4.8a.

Figures 4.8a and 4.8b show the behavior of x_pp(p, t) and y_pp(p, t), respectively, from t = 1.4 to t = 1.6 at intervals of 0.025. Again, the difference in behavior between x_pp and y_pp is observed as t → t_c⁻. In particular, x_pp(p, t) appears to be diverging at p = π, in the form of an infinite jump discontinuity, while y_pp(p, t) is plainly not diverging, though its derivative is seemingly becoming infinite. To make the difference in behavior clearer, Figures 4.9a and 4.9b show x_ppp(p, t) and y_ppp(p, t) at the same times. Both (x_ppp(π, t))⁻¹ and (y_ppp(π, t))⁻¹ appear to be approaching zero simultaneously at some finite time, which through extrapolation is estimated to be t_c ≈ 1.615 ± 0.01. Further, the behavior of y_ppp(p, t) in the neighborhood of p = π suggests that y_pp(p, t) is approaching a finite jump discontinuity as t → t_c⁻.
Figure 4.9a: The evolution of x_ppp(p, t) at the same times as in Figure 4.8a.

Figure 4.9b: The evolution of y_ppp(p, t) at the same times as in Figure 4.8a.
Figure 4.10a: The evolution of β_x(k), fit using formula (4.5), from t = 1.4 to 1.6 at intervals of 0.05. The range over which the fit is approximately 5/2 (dashed line) increases as time increases.

Figure 4.10b: The evolution of β_y(k), fit using formula (4.5), at the same times as Figure 4.10a. Dashed lines are at 5/2 and 3. The approach to 3 corresponds to increasing times.

This behavior is consistent with that of the spectrum. Figures 4.10a and 4.10b
show β_x and β_y, respectively, from t = 1.4 to t = 1.6 at intervals of 0.025, using the fit in (4.5). The difference in the behavior of these fits is apparent. β_x is still well fit by a value of 2.5, which would yield a divergent second derivative, while the fit to β_y shows a transition away from 2.5. Indeed, the last time shown, t = 1.6, suggests that there is a transition from an algebraic decay of k⁻⁵ᐟ² for ŷ_k(t) to a k⁻³ decay. Such an algebraic decay at the singularity time would be consistent with y_pp(p, t_c) containing a step discontinuity.
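For concreteness, a fit of this type can be sketched as follows (our illustration; we assume the form fit in (4.5) is the usual asymptotic ansatz |û_k| ≈ C k^(−β) e^(−αk), fitted by least squares in log variables over a band of wavenumbers):

```python
import numpy as np

def fit_spectrum(uk, kmin=10, kmax=200):
    """Fit |u_k| ~ C * k**(-beta) * exp(-alpha*k) on kmin <= k <= kmax.

    Least squares on log|u_k| = log C - beta*log k - alpha*k;
    returns (C, beta, alpha). uk[k] is the k-th Fourier amplitude.
    """
    k = np.arange(kmin, kmax + 1)
    y = np.log(np.abs(uk[kmin:kmax + 1]))
    A = np.column_stack([np.ones(k.size), -np.log(k), -k.astype(float)])
    (logC, beta, alpha), *_ = np.linalg.lstsq(A, y, rcond=None)
    return np.exp(logC), beta, alpha

# Synthetic test: a spectrum with beta = 5/2 (Moore's value) and alpha = 0.05
# is recovered by the fit.
k = np.arange(1, 257)
uk = np.concatenate([[1.0], k ** -2.5 * np.exp(-0.05 * k)])
print(fit_spectrum(uk))   # ~ (1.0, 2.5, 0.05)
```

Letting the fitting window slide in k would produce the k-dependent quantities α_x(k), β_x(k) shown in the figures.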
Figure 4.11: The evolution of α_x(k), fit using formula (4.5), at the same times as Figure 4.10a.

Figure 4.11 shows α_x at the same times. It is clear that it is becoming more difficult to resolve the sheet, as evidenced by the noisiness in the fit, but α_x still shows the tendency towards zero as the singularity time is approached. While there are differences in the singular behavior of x(p, t) and y(p, t), respectively, no such difference in their singularity times is evident; α_y is not shown, as its values are practically identical to those of α_x. Choosing k = 32 as a representative of the k-independent portion of the α_x fit, Figure 4.12 shows its behavior as a function of time. The approach to zero is obvious. The dashed line is the extrapolation to zero at t = 1.615.
Figure 4.12: α_x(k = 32) as a function of time. The dashed portion is an extrapolation to an approximate singularity time of t_c ≈ 1.615.

While there are differences between the analytic prediction and the numerical results, the manner in which vortex sheets become singular does not change. At a point along the sheet, there is a very rapid compression of the vorticity through the Lagrangian motion of the marker particles. It is because the vorticity must remain confined to the sheet, without the additional degrees of freedom of smoother vorticity distributions, that this compression leads to the appearance of singularities. This is illustrated by the behavior of the true vortex sheet strength ω, shown in Figure 4.13 from t = 1.3 to t = 1.6 at intervals of 0.05, as a function of the signed arclength s of the sheet from p = π. ω gives the jump in tangential velocity across the sheet, and is given initially by

ω(p, t = 0) = −1 + ½ cos(p)

(dashed). Around the local extremum at p = π (s = 0), ω(p, t) becomes concentrated, increasing in amplitude. Moore's analysis predicts that at t = t_c, ω(p, t) is finite, with a square-root cusp at p = π. As the singularity in y(p, t) remains of the form predicted by Moore, and that in x(p, t) only weakens, this conclusion is unchanged.
Figure 4.13: The evolution of ω(s, t), the true vortex sheet strength, from t = 1.3 to 1.6 at intervals of 0.05. The dashed graph is the initial true vortex sheet strength. Increasing amplitude corresponds to increasing time.

Section 5. Concluding remarks. The intent of this work was to examine the generality of Moore's analysis, presumably valid only for small amplitude data, in predicting the form of vortex sheet singularities. By studying in detail the evolution of a large amplitude, entire initial condition, it was found that Moore's analysis was valid in predicting the behavior of the sheet at times well away from the singularity time, but near the singularity time the sheet behavior underwent a transition leading to a change of form in the nascent singularity. A more complete study (Shelley 1989) is to be given elsewhere. This work does not address the possible existence and nature of the vortex sheet after the singularity time, but has instead focused on gaining precise information on the form of the singularity. As has also been observed by Krasny, no convergence of the numerical solution was observed after the singularity time. Of course, the spectral accuracy of the MPVA is lost in the presence of singularities. At these later times, the motion becomes dominated by grid scale interactions, and is apparently chaotic. It appears that mollification of some sort is necessary to numerically study behavior past the singularity time (Krasny 1986b; Baker & Shelley 1989). Such studies indicate that the solution, if it exists, may have the form of a doubly branched spiral. It is known that measure-valued solutions exist globally for vortex sheet initial data, but the notion of such a solution is so general that it gives little information about its specific nature. The scaling of vorticity concentrations in the study of thin vortex layers by Baker & Shelley (1989) suggests that the vortex sheet may
actually exist as a classical weak solution after the singularity time (DiPerna & Majda 1987). Explicit singular solutions have been constructed by Caflisch & Orellana (1988), with γ(p) = 1, which have the form

z(p, t) = p + s₀ + r,   s₀ = ε(1 − i) [ (1 − e^{(t−ip)/2})^{1+ν} − (1 − e^{−(t+ip)/2})^{1+ν} ],

where ε is small, r is a correction term, and ν > 0. ν = 1/2 would give the spatial structure of Moore's singularity at t = 0. The singularity found in this work is not of this form, though it is quite possible that such a singularity could be constructed analytically.

Acknowledgements. The author would like to thank G. R. Baker, R. E. Caflisch, A. Majda, D. I. Meiron and S. A. Orszag for useful discussions. This work was partially supported under contracts ONR/DARPA N00014-86-K-0759 and ONR N00014-82-C-0451. Some of the computations were carried out on the Cray X-MP at Argonne National Laboratory.
REFERENCES

Baker, G. R., McCrory, R. L., Verdon, C. P. & Orszag, S. A., Rayleigh-Taylor instability of fluid layers, J. Fluid Mech. 178 (1987), 161.
Baker, G. R., Meiron, D. I. & Orszag, S. A., Vortex simulations of the Rayleigh-Taylor instability, Phys. Fluids 23 (1980), 1485.
Baker, G. R., Meiron, D. I. & Orszag, S. A., Generalized vortex methods for free-surface flow problems, J. Fluid Mech. 123 (1982), 477.
Baker, G. R. & Shelley, M. J., Boundary integral techniques for multi-connected domains, J. Comp. Phys. 64 (1986), 112.
Baker, G. R. & Shelley, M. J., On the connection between thin vortex layers and vortex sheets, to appear in J. Fluid Mech. (1989).
Caflisch, R. & Lowengrub, J., Convergence of the vortex method for vortex sheets, to appear (1988).
Caflisch, R. & Orellana, O., Long time existence for a slightly perturbed vortex sheet, Comm. Pure Appl. Math. XXXIX (1986), 807.
Caflisch, R. & Orellana, O., Singular solutions and ill-posedness of the evolution of vortex sheets, (1988).
Duchon, J. & Robert, R., Global vortex sheet solutions to Euler equations in the plane, to appear in Comm. PDE (1989).
DiPerna, R. & Majda, A., Concentrations in regularizations for 2-D incompressible flow, Comm. Pure Appl. Math. 40 (1987), 301.
Ebin, D., Ill-posedness of the Rayleigh-Taylor and Helmholtz problems for incompressible fluids, Comm. Pure Appl. Math. (1988).
Krasny, R., A study of singularity formation in a vortex sheet by the point-vortex approximation, J. Fluid Mech. 167 (1986a), 65.
Krasny, R., Desingularization of periodic vortex sheet roll-up, J. Comp. Phys. 65 (1986b), 292.
Longuet-Higgins, M. S. & Cokelet, E. D., Proc. R. Soc. Lond. A 350 (1976), 1.
Majda, A., Vortex dynamics: numerical analysis, scientific computing, and mathematical theory, in Proc. of the First Intern. Conf. on Industrial and Applied Math., Paris (1987).
McGrath, F. J., Nonstationary plane flow of viscous and ideal fluids, Arch. Rat. Mech. Anal. 27 (1967), 329.
Meiron, D. I., Baker, G. R. & Orszag, S. A., Analytic structure of vortex sheet dynamics. 1. Kelvin-Helmholtz instability, J. Fluid Mech. 114 (1982), 283.
Moore, D. W., The spontaneous appearance of a singularity in the shape of an evolving vortex sheet, Proc. R. Soc. Lond. A 365 (1979), 105.
Moore, D. W., Numerical and analytical aspects of Helmholtz instability, in Theoretical and Applied Mechanics, Proc. XVI IUTAM, eds. Niordson and Olhoff (1985), 263.
Pullin, D. I. & Phillips, W. R. C., On a generalization of Kaden's problem, J. Fluid Mech. 104 (1981), 45.
Rosenhead, L., The formation of vortices from a surface of discontinuity, Proc. R. Soc. Lond. A 134 (1931), 170.
Shelley, M. J., A study of singularity formation in vortex sheet motion by a spectrally accurate vortex method, to appear in J. Fluid Mech. (1989).
Sidi, A. & Israeli, M., Quadrature methods for periodic singular and weakly singular Fredholm integral equations, J. Sci. Comp. 3 (1988), 201.
Sulem, C., Sulem, P. L., Bardos, C. & Frisch, U., Finite time analyticity for the two and three dimensional Kelvin-Helmholtz instability, Comm. Math. Phys. 80 (1981), 485.
Sulem, C., Sulem, P. L. & Frisch, H., Tracing complex singularities with spectral methods, J. Comp. Phys. 50 (1983), 138.
Van der Vooren, A. I., A numerical investigation of the rolling up of vortex sheets, Proc. Roy. Soc. A 373 (1980), 67.
Van Dyke, M., Perturbation Methods in Fluid Mechanics, The Parabolic Press (1975).
THE GOURSAT-RIEMANN PROBLEM FOR PLANE WAVES IN ISOTROPIC ELASTIC SOLIDS WITH VELOCITY BOUNDARY CONDITIONS*

T. C. T. TING† AND TANKIN WANG†

Abstract. The differential equations for plane waves in isotropic elastic solids are a 6 × 6 system of hyperbolic conservation laws. For the Goursat-Riemann problem in which the initial conditions are constant and the constant boundary conditions are prescribed in terms of stress, the wave curves in the stress space are uncoupled from the wave curves in the velocity space, and the equations are equivalent to a 3 × 3 system. This is not possible when the boundary conditions are prescribed in terms of velocity. An additional complication is that, even though the system is linearly degenerate with respect to the c₂ wave speed, the c₂ wave curves cannot be decoupled from the c₁ and c₃ wave curves. Nevertheless, we show that many features and much of the methodology of obtaining the solution remain essentially the same for the velocity boundary conditions. The c₁ and c₃ wave curves are again plane polarized in the velocity space, although the plane may not contain a coordinate axis of the velocity space. Likewise, the c₂ wave curves are circularly polarized, but the center of the circle may not lie on a coordinate axis of the velocity space. Finally, we show that the c₂ wave curves can be treated separately from the c₁ and c₃ wave curves in constructing the solution to the Goursat-Riemann problem when the boundary conditions are prescribed in terms of velocity.

Key words. Goursat-Riemann problem, wave curves, elastic waves

AMS (MOS) subject classifications. 35L65, 73D99
1. Introduction. In a fixed rectangular coordinate system x₁, x₂, x₃, consider a plane wave propagating in the x₁-direction. Let σ, τ₂, τ₃ be, respectively, the normal stress and the two shear stresses on the x₁ = constant plane. Also, let u, v₂, v₃ be the particle velocities in the x₁, x₂, x₃ directions, respectively. The equations of motion and the continuity of displacement can be written as a 6 × 6 system of hyperbolic conservation laws [1,2,3]

(1.1) \( U_t + F(U)_x = 0, \quad U = (\rho u, \rho v_2, \rho v_3, \varepsilon, \gamma_2, \gamma_3)^T, \quad F(U) = -(\sigma, \tau_2, \tau_3, u, v_2, v_3)^T. \)

In the above, x = x₁, t is the time, ρ is the mass density in the undeformed state, and ε, γ₂, γ₃ are, respectively, the longitudinal strain and the two shear strains. For isotropic elastic solids, the stress-strain laws have the form [1]

(1.2) \( \sigma = f(\varepsilon, \gamma^2), \quad \tau_2 = \gamma_2\, g(\varepsilon, \gamma^2), \quad \tau_3 = \gamma_3\, g(\varepsilon, \gamma^2), \quad \gamma^2 = \gamma_2^2 + \gamma_3^2, \)

where f and g are functions of ε and γ². We see that γ is the total shear strain. If τ is the total shear stress on the x = constant plane, we obtain from (1.2)₂,₃,

(1.3) \( \tau = \gamma\, g(\varepsilon, \gamma^2), \quad \tau^2 = \tau_2^2 + \tau_3^2. \)

In the region of ε and γ² where equations (1.2)₁ and (1.3)₁ have an inversion, we have

(1.4) \( \varepsilon = h(\sigma, \tau^2), \quad \gamma = \tau\, q(\sigma, \tau^2), \)

where h and q are functions of σ and τ². Equations (1.2)₂,₃ can then be written as

(1.5) \( \gamma_2 = \tau_2\, q(\sigma, \tau^2), \quad \gamma_3 = \tau_3\, q(\sigma, \tau^2). \)
We study the Goursat-Riemann problem of (1.1) in which the strains ε, γ₂, γ₃ are assumed to be known functions of σ, τ₂, τ₃ as given in (1.4)₁ and (1.5). In Section 2 the characteristic wave speeds and the right eigenvectors of (1.1) are presented. The simple wave curves and shock wave curves in the stress space are presented in Section 3, and those in the velocity space are examined in Section 4. Up to this point, the material is a general isotropic Cauchy elastic solid, i.e., the existence of a strain energy function is not assumed. In Section 5 we consider hyperelastic solids. In particular, the simple wave curves for second order hyperelastic materials are presented; they are used in Section 6 as an illustration to solve the Goursat-Riemann problem with stress boundary conditions. In Section 7 the Goursat-Riemann problem is solved in which the boundary conditions are prescribed in terms of velocity.

2. Characteristic wave speeds and right eigenvectors. For the Riemann problem and the Goursat-Riemann problem, the solution U depends on the one parameter x/t only [4-9]. If U is continuous in x/t, we have a simple wave (or rarefaction wave) solution in which x/t = c, the characteristic wave speed. In this case, (1.1)₁ is reduced to
(2.1) \( \{[\nabla F(U)]^T - cI\}\,U' = 0, \)

where I is a unit matrix, ∇ is the gradient with respect to the components of U, the superscript T stands for the transpose, and the prime denotes differentiation with respect to c. If we introduce the notation

(2.2) \( s = (\sigma, \tau_2, \tau_3)^T, \quad u = (u, v_2, v_3)^T, \quad e = (\varepsilon, \gamma_2, \gamma_3)^T, \)

equation (2.1) can be written as two equations

(2.3) \( s' + \rho c\,u' = 0, \quad u' + cG s' = 0, \)

where the components of the 3 × 3 matrix G are

(2.4) \( G_{ij} = \partial e_i / \partial s_j. \)

Elimination of u′ in (2.3) yields

(2.5) \( (G - \eta I)\,s' = 0, \)

(2.6) \( \eta = (\rho c^2)^{-1}. \)

Thus η and s′ are, respectively, the eigenvalue and eigenvector of G. Assuming that ηᵢ, i = 1, 2, 3, are positive, the wave speeds cᵢ come in three pairs of positive and negative values. We let

(2.7) \( c_1^2 \ge c_2^2 \ge c_3^2 > 0. \)

Hence,

(2.8) \( 0 < \eta_1 \le \eta_2 \le \eta_3. \)

From equations (1.4)₂, (1.5), (2.2) and (2.4), it is readily shown that the second and third columns of the matrix (G − ηI) are linearly dependent when η = q. Hence η = q is an eigenvalue of G. The other two eigenvalues can be shown to satisfy the quadratic equation

(2.9) \( (\varepsilon_\sigma - \eta)(\gamma_\tau - \eta) = \varepsilon_\tau\,\gamma_\sigma, \)

in which the subscripts σ and τ denote differentiation with respect to these variables. We therefore have, using (2.8),

(2.10) \( \eta_1 = \tfrac{1}{2}\{(\varepsilon_\sigma + \gamma_\tau) - Y\}, \quad \eta_2 = q(\sigma, \tau^2) = \gamma/\tau, \quad \eta_3 = \tfrac{1}{2}\{(\varepsilon_\sigma + \gamma_\tau) + Y\}, \)

\( Y = \{(\varepsilon_\sigma - \gamma_\tau)^2 + 4\,\gamma_\sigma\,\varepsilon_\tau\}^{1/2}. \)

The second equality for η₂ follows from (1.4)₂. By substituting

(2.11) \( s' = (0, \tau_3, -\tau_2)^T \ \text{for}\ \eta = \eta_2, \qquad s' = (\tau(\eta - \gamma_\tau), \tau_2\gamma_\sigma, \tau_3\gamma_\sigma)^T \ \text{for}\ \eta = \eta_1, \eta_3, \)

in (2.5) and making use of (2.10)₂ for η = η₂ and (2.9) for η = η₁, η₃, it can be verified that equation (2.5) is satisfied. Equations (2.11) therefore provide the eigenvectors s′.
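To make the linear-dependence claim concrete, here is the computation spelled out (a verification added for illustration; it rests on reading (2.4) as G_ij = ∂e_i/∂s_j, which is how that equation was reconstructed above). Writing h′ and q′ for the derivatives of h and q with respect to τ², the chain rule applied to (1.4)₁ and (1.5) gives the last two columns of G as

\[ \begin{pmatrix} 2\tau_2 h' \\ q + 2\tau_2^2 q' \\ 2\tau_2\tau_3 q' \end{pmatrix}, \qquad \begin{pmatrix} 2\tau_3 h' \\ 2\tau_2\tau_3 q' \\ q + 2\tau_3^2 q' \end{pmatrix}. \]

The corresponding columns of G − qI are therefore \( 2\tau_2\,(h', \tau_2 q', \tau_3 q')^T \) and \( 2\tau_3\,(h', \tau_2 q', \tau_3 q')^T \). These are proportional, so det(G − qI) = 0, and \( (G - qI)(0, \tau_3, -\tau_2)^T = \tau_3\,\mathrm{col}_2 - \tau_2\,\mathrm{col}_3 = 0 \), recovering the eigenvector (2.11)₁.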
Fig. 1. The c₂ simple wave (or V₂ shock wave) curve, for which σ, τ and u are constants and c₂ = V₂: (a) the stress space; (b) the velocity space (c₂ < 0); (c) the velocity space (c₂ > 0).
Fig. 2. The c₁ (or c₃) simple wave curves on a θ = constant plane: (a) the stress space; (b) the velocity space (c < 0); (c) the velocity space (c > 0).
3. Simple wave curves and shock wave curves in the stress space. The differential equation for simple wave curves associated with c₂ is given in (2.11)₁, which can be written as

\( \frac{d\sigma}{0} = \frac{d\tau_2}{\tau_3} = \frac{d\tau_3}{-\tau_2}. \)

Hence,

(3.1) \( \sigma = \text{constant}, \quad \tau^2 = \tau_2^2 + \tau_3^2 = \text{constant}. \)

In the stress space (σ, τ₂, τ₃), (3.1) represents a circle with its center on the σ-axis. The c₂ simple wave curve is therefore "circularly polarized", Fig. 1(a). Moreover, from (3.1) and (2.10)₂, η₂ and hence c₂ is a constant along the c₂ simple wave curve. Thus the system is linearly degenerate with respect to c₂ [8] and the c₂ simple wave curve is in fact a shock wave curve. The simple wave curve for c₁ or c₃ is given in (2.11)₂, which is rewritten as

(3.2) \( \frac{d\sigma}{\tau(\eta - \gamma_\tau)} = \frac{d\tau_2}{\tau_2\,\gamma_\sigma} = \frac{d\tau_3}{\tau_3\,\gamma_\sigma}. \)

The last equality yields

(3.3) \( \tau_3/\tau_2 = \text{constant}. \)

If we let

(3.4) \( \tau_2 = \tau\cos\theta, \quad \tau_3 = \tau\sin\theta, \)

we have

(3.5) \( \theta = \text{constant}. \)

Equations (3.2) now reduce to

\( \frac{d\sigma}{\tau(\eta - \gamma_\tau)} = \frac{d\tau}{\tau\,\gamma_\sigma}, \)

or

(3.6) \( \frac{d\tau}{d\sigma} = \frac{\gamma_\sigma}{\eta - \gamma_\tau} = \frac{\eta - \varepsilon_\sigma}{\varepsilon_\tau}; \)

the second equality follows from (2.9). Equation (3.6), when integrated, provides simple wave curves for c₁ and c₃ on the (σ, τ) plane. If (σ, τ₂, τ₃) is regarded as a rectangular coordinate system, (σ, τ, θ) is a cylindrical coordinate system. Since simple wave curves for c₁ and c₃ are on a θ = constant plane, they are "plane polarized", Fig. 2(a).
If U as a function of x/t is discontinuous at x/t = V, we have a shock wave with shock velocity V. The Rankine-Hugoniot jump conditions for (1.1)₁ are

(3.7) \( V[U] = [F(U)], \)

where

\( [U] = U^- - U^+ \)

denotes the difference between the value U⁺ in front of and U⁻ behind the shock wave. Using the notation of (2.2), (3.7) is equivalent to

(3.8) \( [s] + \rho V[u] = 0, \quad [u] + V[e] = 0. \)

Equations (3.8) are the Rankine-Hugoniot jump conditions for (2.3). Elimination of [u] leads to

(3.9) \( [s] = \rho V^2 [e]. \)

From (1.4)₁, (1.5) and (2.2), we may write (3.9) in full as

(3.10) \( [\sigma] = \rho V^2 [h(\sigma, \tau^2)], \quad [\tau_2] = \rho V^2 [\tau_2\, q(\sigma, \tau^2)], \quad [\tau_3] = \rho V^2 [\tau_3\, q(\sigma, \tau^2)]. \)

If we eliminate ρV² between (3.10)₂,₃, it can be shown that [1]

(3.11) \( (\tau_2^-\tau_3^+ - \tau_2^+\tau_3^-)\,[q] = 0. \)

There are two possibilities for this equation to hold. We discuss them separately below. One possibility is [q] = 0. If the shock wave speed for this case is V₂, we see that (3.10) are satisfied if

(3.12) \( (\rho V_2^2)^{-1} = q^+ = q^-, \quad [\sigma] = 0 = [\tau]. \)

This is identical to the circularly polarized simple wave curve discussed earlier. Hence V₂ = c₂ and the V₂ shock wave curve is identical to the c₂ simple wave curve, Fig. 1(a). The other possibility for (3.11) to hold is

(3.13) \( \tau_3^-/\tau_2^- = \tau_3^+/\tau_2^+. \)

This is identical to (3.3). It follows from (3.4) that

\( \theta^+ = \theta^-, \)

and (3.10)₂,₃ are reduced to the same equation

\( [\tau] = \rho V^2 [\tau\, q(\sigma, \tau^2)]. \)

This and (3.10)₁ can be written as

(3.14) \( \rho V^2 = \frac{[\sigma]}{[h(\sigma,\tau^2)]} = \frac{[\tau]}{[\tau\, q(\sigma,\tau^2)]}, \qquad V = V_1 \ \text{or}\ V_3. \)

For a fixed (σ⁺, τ⁺), the second equality provides a shock wave curve for (σ⁻, τ⁻) on the (σ, τ) plane, which is a θ = constant plane. The shock wave curves are therefore plane polarized, as shown by the double solid lines in Fig. 3(a). Since there are two shock wave curves emanating from the point (σ⁺, τ⁺) (only one is shown in Fig. 3), the associated shock wave speeds are denoted by V₁ and V₃ as indicated in (3.14).
Fig. 3. The V₁ (or V₃) shock wave curve on a θ = constant plane: (a) the stress space, with A = (σ⁻, τ⁻) and B = (σ⁺, τ⁺); (b) the velocity space (V₁, V₃ < 0), with A = (u⁻, v⁻) and B = (u⁺, v⁺); (c) the velocity space (V₁, V₃ > 0).
4. Simple wave curves and shock wave curves in the velocity space. The differential equation (2.5) contains the stress s only, which enables us to determine simple wave curves in the three-dimensional stress space without considering the full six-dimensional stress-velocity space. This is not possible for simple wave curves in the velocity space. We write (2.3)₁ in full as

(4.1) \( du = -\frac{1}{\rho c}\,d\sigma, \quad dv_2 = -\frac{1}{\rho c}\,d\tau_2, \quad dv_3 = -\frac{1}{\rho c}\,d\tau_3. \)

For the velocity space we have to distinguish a positive wave speed from a negative wave speed cᵢ.

When c = ±c₂, equations (3.1) apply and hence σ, τ and u are all constant. Using (3.4), (4.1) lead to

(4.2) \( u = \text{constant}, \quad v_2 - v_2^0 = -\frac{\tau}{\rho c_2}\cos\theta, \quad v_3 - v_3^0 = -\frac{\tau}{\rho c_2}\sin\theta, \)

where v₂⁰ and v₃⁰ are the integration constants. The determination of the integration constants will be illustrated in Section 7. We see that the c₂ simple wave curves in the velocity space are also circularly polarized, Fig. 1(b). The radius of the circle is τ/(ρ|c₂|). However, unlike in the stress space, the center of the circle is not necessarily on the u-axis. Moreover, the point on the simple wave curve assumes a different position depending on whether c₂ is a negative (Fig. 1(b)) or a positive wave speed (Fig. 1(c)).

When c = ±c₁ or ±c₃, substituting (3.4) into (4.1)₂,₃ and noticing that θ is a constant in this case, we obtain

\( dv_2 = -\frac{\cos\theta}{\rho c}\,d\tau, \quad dv_3 = -\frac{\sin\theta}{\rho c}\,d\tau, \)

or

\( d(v_2\cos\theta + v_3\sin\theta) = -\frac{1}{\rho c}\,d\tau, \quad d(-v_2\sin\theta + v_3\cos\theta) = 0. \)

If we let

(4.3) \( v = v^0 + v_2\cos\theta + v_3\sin\theta, \quad w = w^0 - v_2\sin\theta + v_3\cos\theta, \)

where v⁰ and w⁰ are the integration constants, (4.1) are equivalent to

(4.4) \( du = -\frac{1}{\rho c}\,d\sigma, \quad dv = -\frac{1}{\rho c}\,d\tau, \quad w = 0. \)

The determination of the integration constants v⁰ and w⁰ will also be illustrated in Section 7. From (4.4), the c₁ and c₃ simple wave curves in the velocity space are also plane polarized, on the w = 0 plane (Fig. 2(b), 2(c)). Unlike in the stress space, the plane may not contain the u-axis. Equations (4.4)₁,₂ can be combined to give

\( \frac{dv}{du} = \frac{d\tau}{d\sigma}, \)

and hence the slopes of the simple wave curve in the velocity space and in the stress space are identical. This does not mean that the simple wave curves in the two spaces are identical. From (4.4)₁,₂, the infinitesimal arclength of the simple wave curve in the velocity space is equal to the corresponding arclength in the stress space divided by the factor ρ|c|. For c < 0, therefore, the curves in the velocity space can be obtained from those in the stress space by dividing every infinitesimal line segment of the curve by the factor ρ|c| without changing the orientation of the line segment, Fig. 2(b). For c > 0, the same procedure applies except that the direction of the wave curve is reversed, Fig. 2(c). We next present the shock wave curves in the velocity space. Using (3.4), (3.8)₁ written in full is

(4.5) \( [\sigma] + \rho V[u] = 0, \quad [\tau\cos\theta] + \rho V[v_2] = 0, \quad [\tau\sin\theta] + \rho V[v_3] = 0. \)

For V = V₂ = c₂, σ, τ and u are constant. Hence

(4.6) \( [u] = 0, \quad [v_2] = -\frac{\tau}{\rho V_2}[\cos\theta], \quad [v_3] = -\frac{\tau}{\rho V_2}[\sin\theta]. \)

Equations (4.2), which represent the c₂ simple wave curves in the velocity space, satisfy (4.6). Therefore, the V₂ shock wave curves and c₂ simple wave curves are also identical in the velocity space. For V = V₁ or V₃, θ is a constant and (4.5)₂,₃ can be written as

\( [\tau]\cos\theta + \rho V[v_2] = 0, \quad [\tau]\sin\theta + \rho V[v_3] = 0. \)

By linearly combining the two equations and using (4.3), we obtain

(4.7) \( [\tau] + \rho V[v] = 0, \quad w = 0. \)

Therefore, the shock wave curves in the velocity space for V = V₁ or V₃ are also plane polarized, on the w = 0 plane, Fig. 3(b,c). From (4.5)₁ and (4.7)₁, we have

\( \frac{[v]}{[u]} = \frac{[\tau]}{[\sigma]} = \frac{[\tau\, q(\sigma,\tau^2)]}{[h(\sigma,\tau^2)]}; \)

the last equality follows from (3.14). The first equality implies that the slope of the line connecting (σ⁺, τ⁺) to (σ⁻, τ⁻) on the shock wave curve in the stress space is identical to the slope of the line connecting (u⁺, v⁺) to (u⁻, v⁻) on the shock wave curve in the velocity space, Fig. 3(b,c).
5. Hyperelastic solids. For hyperelastic solids, there exists a complementary strain energy [10] W(σ, τ²) whose gradients with respect to σ and τ provide the strains ε and γ, i.e.,

(5.1) \( \varepsilon = W_\sigma, \quad \gamma = W_\tau. \)

The characteristic wave speeds are, from (2.6) and (2.10),

(5.2) \( (\rho c_1^2)^{-1} = \tfrac{1}{2}\{(W_{\sigma\sigma} + W_{\tau\tau}) - Y\}, \quad (\rho c_2^2)^{-1} = W_\tau/\tau, \quad (\rho c_3^2)^{-1} = \tfrac{1}{2}\{(W_{\sigma\sigma} + W_{\tau\tau}) + Y\}, \)

\( Y = \{(W_{\sigma\sigma} - W_{\tau\tau})^2 + 4\,W_{\sigma\tau}^2\}^{1/2}. \)

The differential equation (3.6) for the simple wave curves in the stress space is

(5.3) \( \frac{d\tau}{d\sigma} = \frac{2\,W_{\sigma\tau}}{(W_{\sigma\sigma} - W_{\tau\tau}) \mp Y} = \frac{-(W_{\sigma\sigma} - W_{\tau\tau}) \mp Y}{2\,W_{\sigma\tau}}, \)

where the upper (or lower) sign is for c₁ (or c₃) simple wave curves. The simple wave curves for c₁ and c₃ are now orthogonal to each other [1,2]. The simplest nonlinear hyperelastic solids are the second order materials, for which ε and γ are functions of σ and τ of order up to two. This means that W must be a function of σ and τ of order up to three. Noticing that W is a function of σ and τ², and that the constant terms produce no strains while the linear terms would have yielded non-zero strains when the stresses vanish, we write

(5.4) \( W = \tfrac{a}{2}\sigma^2 + \tfrac{d}{2}\tau^2 + \tfrac{b}{3}\sigma^3 + \tfrac{e}{2}\sigma\tau^2, \)

where a, b, d and e are constants; a and d are positive [1,2].
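For the energy (5.4) the derivatives entering (5.2) are elementary; the following worked lines are added as an illustration (they rely on the reconstructed forms of (5.2)-(5.4) above, so treat the algebra as a consistency check rather than as part of the printed text):

\[ W_{\sigma\sigma} = a + 2b\sigma, \qquad W_{\tau\tau} = d + e\sigma, \qquad W_{\sigma\tau} = e\tau, \]

so that

\[ (\rho c_2^2)^{-1} = \frac{W_\tau}{\tau} = d + e\sigma, \qquad (\rho c_{1,3}^2)^{-1} = \tfrac{1}{2}\Bigl\{ a + d + (2b + e)\sigma \mp \sqrt{\bigl(a - d + (2b - e)\sigma\bigr)^2 + 4e^2\tau^2} \Bigr\}. \]

In particular, c₂ depends on σ alone, so it is constant along a c₂ wave curve (on which σ is constant), consistent with the linear degeneracy noted in Section 3. One can also check from (5.3) that the product of the two slopes is \( 4W_{\sigma\tau}^2 / \{(W_{\sigma\sigma} - W_{\tau\tau})^2 - Y^2\} = -1 \), which is the orthogonality property quoted above.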
We first study the special case v₃ᴬ = 0. With the initial conditions given in (6.1), it is not difficult to see that v₃, τ₃ and θ vanish for x < 0 and t > 0. From (4.3)₁, v = v₂ for x < 0 and t > 0. Using (4.4), the simple wave curves in the velocity space can be determined from the simple wave curves in the stress space. The simple wave curves in the velocity space associated with the simple wave curves BS and BP in Fig. 5(a) are shown in Fig. 7(a). Likewise, the shock wave curves associated with BR and BH in Fig. 5(a) can be obtained from (4.5)₁ and (4.7)₁ and are shown in Fig. 7(a). We have again divided the velocity plane (u, v) into four regions. The method of finding the solution is identical to that for the stress boundary conditions with θᴬ = 0. Thus, depending on whether the boundary conditions (7.1) with v₃ᴬ = 0 and vᴬ = v₂ᴬ are represented by a point in region 1, 2, 3 or 4, we have the wave pattern 1, 2, 3 or 4 shown in Fig. 6, in which the V₂ shock wave is absent.

When v₃ᴬ ≠ 0, the solution cannot be obtained by a simple superposition of a V₂ shock wave. As shown in [17], the V₂ shock wave does not commute with the c₁ and c₃ simple waves when the wave curves in the velocity space are considered. Nevertheless, we will show that one can make use of Fig. 7(a) to construct the solution when v₃ᴬ ≠ 0. First of all, we show that the wave curve BMA in Fig. 7(a) corresponds to a family of solutions with v₃ᴬ ≠ 0. When v₃ᴬ = 0, the c₁ simple wave curve BM and the c₃ simple wave curve MA are, in the (v₂, v₃) plane, the segments BM⁺ and M⁺A in Fig. 7(b), which are on the v₂-axis. When v₃ᴬ ≠ 0, we may introduce a V₂ shock wave curve M⁺M⁻, which is a circle with its center at T and radius τ_M/(ρ|c₂M|). This is the circularly polarized shock wave given in (4.2)₂,₃, in which v₂⁰ and v₃⁰ are determined by the location of T. If we draw a circle AA′ concentric with M⁺M⁻, any point A′ on the circle can be the location of the new boundary conditions (v₂ᴬ, v₃ᴬ). The wave curves for this case will be the c₁ simple wave curve BM⁺, the V₂ shock wave curve M⁺M⁻ and the c₃ simple wave curve M⁻A′. The angle the line TA′ makes with the v₂-axis is θᴬ. The new coordinates v, w defined in (4.3) are obtained by rotating the v₂, v₃ axes about T by the angle θᴬ. The location B′ of the origin of the (v, w) coordinates determines the constants (v⁰, w⁰) in (4.3). In the (u, v) plane, Fig. 7(a), the wave curve is still BMA. The corresponding wave pattern is wave pattern 1 in Fig. 6. Thus the wave curve BMA in Fig. 7(a) corresponds to a family of solutions for which the boundary conditions (v₂ᴬ, v₃ᴬ) are on the circle AA′ shown by the dotted line in Fig. 7(b).

With the V₂ shock wave considered separately, one can determine the admissible wave curve for the velocity boundary conditions when v₃ᴬ ≠ 0 by an iteration scheme. However, one should be able to determine whether the wave pattern belongs to wave pattern 1, 2, 3 or 4 before employing the iteration scheme. This is presented next. When (uᴬ, v₂ᴬ, v₃ᴬ) are given, we draw the vertical line KL in the (u, v) plane, Fig. 7(a), whose abscissa is uᴬ. This line intersects the c₁ simple wave curve BP at L and the c₃ simple wave curve BS at K. From (4.3),

\( v^A = v^0 + v_2^A\cos\theta^A + v_3^A\sin\theta^A. \)

Since v⁰ and θᴬ are unknowns, vᴬ can be anywhere on the line KL. If A is located above K, between K and L, or below L, we have wave pattern 2, 1 or 4, respectively. The wave curve BK in Fig. 7(a) corresponds to the wave curve BK in Fig. 7(b) when v₃ᴬ = 0. Following the procedure explained earlier, we can obtain a circle through K, shown by the solid line in Fig. 7(b), such that the wave curve BK in Fig. 7(a) corresponds to a family of solutions with (v₂ᴬ, v₃ᴬ) on this circle. Likewise, one can obtain a circle through L, shown by another solid line in Fig. 7(b), such that the wave curve BL in Fig. 7(a) corresponds to a family of solutions with (v₂ᴬ, v₃ᴬ) on this circle. We then have the result that if (v₂ᴬ, v₃ᴬ) is located within the two circles, the solution belongs to wave pattern 1. If (v₂ᴬ, v₃ᴬ) is located outside (or inside) the circle passing through K (or L), we have wave pattern 2 (or wave pattern 4). It should be pointed out that the two circles passing through K and L in Fig. 7(b) are for the fixed uᴬ > 0 shown in Fig. 7(a). For a different value of uᴬ, the circles would be different. For uᴬ < 0, a similar procedure can be employed to determine whether the solution for given (v₂ᴬ, v₃ᴬ) belongs to wave pattern 2, 3 or 4.

REFERENCES
[1] YONGCHI LI AND T. C. T. TING, Plane waves in simple elastic solids and discontinuous dependence of solution on boundary conditions, Int. J. Solids Structures, 19 (1983), pp. 989-1008.
[2] ZHIJING TANG AND T. C. T. TING, Wave curves for the Riemann problem of plane waves in isotropic elastic solids, Int. J. Eng. Sci., 25 (1987), pp. 1343-1381.
[3] T. C. T. TING, The Riemann problem with umbilic lines for wave propagation in isotropic elastic solids, in Notes in Numerical Fluid Mechanics, Nonlinear Hyperbolic Equations: Theory, Numerical Methods and Applications, ed. by Josef Ballmann and Rolf Jeltsch, 24, Vieweg, 1988, pp. 617-629.
[4] P. D. LAX, Hyperbolic systems of conservation laws. II, Comm. Pure Appl. Math., 10 (1957), pp. 537-566.
[5] A. JEFFREY, Quasilinear Hyperbolic Systems and Waves, Pitman, 1976.
[6] J. A. SMOLLER, On the solution of the Riemann problem with general step data for an extended class of hyperbolic systems, Mich. Math. J., 16 (1969), pp. 201-210.
[7] T.-P. LIU, The Riemann problem for general systems of conservation laws, J. Diff. Eqs., 18 (1975), pp. 218-234.
[8] C. M. DAFERMOS, Hyperbolic systems of conservation laws, Brown University Report, LCDS 83-5 (1983).
[9] D. G. SCHAEFFER AND M. SHEARER, Riemann problem for nonstrictly hyperbolic 2 × 2 systems of conservation laws, Trans. Amer. Math. Soc., 304 (1987), pp. 267-306.
[10] C. TRUESDELL AND W. NOLL, The Nonlinear Field Theories of Mechanics, Handbuch der Physik, III/3, Springer, Berlin, 1965.
[11] D. G. SCHAEFFER AND M. SHEARER, The classification of 2 × 2 systems of nonstrictly hyperbolic conservation laws, with application to oil recovery, Appendix with D. Marchesin and P. J. Paes-Leme, Comm. Pure Appl. Math., 40 (1987), pp. 141-178.
[12] B. L. KEYFITZ AND H. C. KRANZER, The Riemann problem for a class of conservation laws exhibiting a parabolic degeneracy, J. Diff. Eqs., 47 (1983), pp. 35-65.
[13] E. ISAACSON AND J. B. TEMPLE, Examples and classification of nonstrictly hyperbolic systems of conservation laws, Abstracts of Papers Presented to AMS, 6 (1985), p. 60.
[14] M. SHEARER, D. G. SCHAEFFER, D. MARCHESIN AND P. J. PAES-LEME, Solution of the Riemann problem for a prototype 2 × 2 system of nonstrictly hyperbolic conservation laws, Arch. Rat. Mech. Anal., 97 (1987), pp. 299-320.
[15] GUANGSHAN ZHU AND T. C. T. TING, Classification of 2 × 2 non-strictly hyperbolic systems for plane waves in isotropic elastic solids, Int. J. Eng. Science, 27 (1989), pp. 1621-1638.
[16] T. C. T. TING, On wave propagation problems in which c_f = c_s = c₂ occurs, Q. Appl. Math., 31 (1973), pp. 275-286.
[17] XABIER GARAIZAR, Solution of a Riemann problem for elasticity, Courant Institute of Mathematical Sciences Report (1989).
E-Book Information
• Series: The IMA Volumes in Mathematics and Its Applications 29
• Year: 1991
• Edition: 1
• Pages: 386
• Pages In File: 398
• Language: English
• Identifier: 978-1-4613-9123-4,978-1-4613-9121-0
• Doi: 10.1007/978-1-4613-9121-0
• Cleaned: 1
• Orientation: 1
• Paginated: 1
• Org File Size: 12,178,203
• Extension: pdf
• Tags: Analysis
• Toc: Front Matter....Pages i-xiv
Macroscopic Limits of Kinetic Equations....Pages 1-12
The Essence of Particle Simulation of the Boltzmann Equation....Pages 13-22
The Approximation of Weak Solutions to the 2-D Euler Equations by Vortex Elements....Pages 23-37
Limit Behavior of Approximate Solutions to Conservation Laws....Pages 38-57
Modeling Two-Phase Flow of Reactive Granular Materials....Pages 58-67
Shocks Associated with Rotational Modes....Pages 68-69
Self-Similar Shock Reflection in Two Space Dimensions....Pages 70-88
Nonlinear Waves: Overview and Problems....Pages 89-106
The Growth and Interaction of Bubbles in Rayleigh-Taylor Unstable Interfaces....Pages 107-122
Front Tracking, Oil Reservoirs, Engineering Scale Problems and Mass Conservation....Pages 123-139
Collisionless Solutions to the Four Velocity Broadwell Equations....Pages 140-155
Anomalous Reflection of a Shock Wave at a Fluid Interface....Pages 156-168
An Application of Connection Matrix to Magnetohydrodynamic Shock Profiles....Pages 169-172
Convection of Discontinuities in Solutions of the Navier-Stokes Equations for Compressible Flow....Pages 173-178
Nonlinear Geometrical Optics....Pages 179-197
Geometric Theory of Shock Waves....Pages 198-202
An Introduction to Front Tracking....Pages 203-216
One Perspective on Open Problems in Multi-Dimensional Conservation Laws....Pages 217-238
Stability of Multi-Dimensional Weak Shocks....Pages 239-250
Nonlinear Stability in Non-Newtonian Flows....Pages 251-260
A Numerical Study of Shock Wave Refraction at a CO₂/CH₄ Interface....Pages 261-280
An Introduction to Weakly Nonlinear Geometrical Optics....Pages 281-310
Numerical Study of Initiation and Propagation of One-Dimensional Detonations....Pages 311-314
Richness and the Classification of Quasilinear Hyperbolic Systems....Pages 315-333
A Case of Singularity Formation in Vortex Sheet Motion Studied by a Spectrally Accurate Method....Pages 334-366
The Goursat-Riemann Problem for Plane Waves in Isotropic Elastic Solids with Velocity Boundary Conditions....Pages 367-386 | {"url":"https://vdoc.pub/documents/multidimensional-hyperbolic-problems-and-computations-41hp82dueld0","timestamp":"2024-11-12T06:18:41Z","content_type":"text/html","content_length":"509586","record_id":"<urn:uuid:18c751ba-daaa-4ab5-89af-b17bf1d6a612>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.58/warc/CC-MAIN-20241112045844-20241112075844-00705.warc.gz"} |
Quantum Zeno effect
Quantum Zeno Effect: An Introduction
The Quantum Zeno Effect refers to the phenomenon where continuous observation of a quantum system prevents its evolution. It is named after the Greek philosopher Zeno of Elea, who formulated a series
of paradoxes related to motion. The Quantum Zeno Effect was first described by George Sudarshan and Baidyanath Misra in 1977.
The theory behind the Quantum Zeno Effect is based on the fact that quantum measurements are disruptive. Each measurement alters the state of the observed system, which can lead to interference and
decoherence. According to the Quantum Zeno Effect, if measurements are made frequently enough, the system will remain in its initial state and not evolve over time.
The Quantum Zeno Effect has since become an active area of research in quantum mechanics and has led to the development of new technologies such as quantum computing and cryptography.
How it Works: Understanding the Theory
The Quantum Zeno Effect can be explained using the Schrödinger equation, which describes the behavior of quantum systems. The equation predicts that a quantum system will evolve over time unless it
is measured. When a measurement is made, the system’s wave function collapses into a single state, and its subsequent evolution is altered.
The Quantum Zeno Effect occurs when measurements are made frequently enough to prevent the system from evolving. This happens because each measurement projects the system onto its initial state, and
the probability of the system evolving away from that state becomes smaller and smaller. As a result, the system is effectively frozen in its initial state.
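A minimal numerical sketch of that argument (the parameters below are assumptions added for illustration, not values from the article): for a two-level system whose survival probability after undisturbed evolution for time t is cos²(ωt), splitting the interval into N measured segments multiplies N copies of cos²(ωT/N), which tends to 1 as N grows.

```python
import numpy as np

# Two-level system: survival probability in the initial state after
# time t of undisturbed evolution is cos^2(omega * t).
# With N equally spaced projective measurements, each segment contributes
# cos^2(omega * T / N), so P_survive = cos^(2N)(omega * T / N) -> 1.

omega = np.pi / 2   # assumed rate: without measurement, P_survive(T) = 0
T = 1.0             # total evolution time (arbitrary units)

for n in (1, 2, 5, 10, 100, 1000):
    p = np.cos(omega * T / n) ** (2 * n)
    print(f"{n:5d} measurements -> survival probability {p:.4f}")
```

The survival probability climbs from 0 (a single end-point measurement) toward 1 (about 0.998 at N = 1000), which is exactly the "freezing" described above.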
The Quantum Zeno Effect has been observed experimentally in various systems, including atoms, photons, and superconducting qubits. It has also been used to study the behavior of quantum systems in
the presence of noise and decoherence.
Applications in Physics and Computing
The Quantum Zeno Effect has several applications in physics and quantum computing. One of its most significant applications is in quantum error correction, where frequent measurements are used to
detect and correct errors that occur in quantum computations. The Quantum Zeno Effect can also be used to protect quantum states from decoherence, which is a major challenge in developing practical
quantum computers.
Another application of the Quantum Zeno Effect is in quantum cryptography. The effect can be used to detect eavesdropping on quantum communication channels by measuring the state of the transmitted
photons frequently. If an eavesdropper attempts to intercept a photon, the measurement will collapse the photon’s state, and the eavesdropper will be detected.
The Quantum Zeno Effect has also been used to study the behavior of quantum systems in extreme conditions, such as in black holes and the early universe.
Example of Quantum Zeno Effect in Action
One example of the Quantum Zeno Effect in action is in the field of quantum optics. In a 2016 experiment, researchers in Israel used the effect to slow down the decay of a photon in a system of
trapped ions. By measuring the state of the photon frequently, they prevented it from decaying and kept it in its initial state for longer than expected.
Another example of the Quantum Zeno Effect is in quantum computing. In a 2018 study, researchers used frequent measurements to detect and correct errors in a five-qubit quantum computer. The
measurements effectively froze the system in its initial state, allowing them to correct errors before they could cause the system to evolve away from its initial state.
Overall, the Quantum Zeno Effect is a fascinating phenomenon that has profound implications for our understanding of quantum mechanics and its practical applications. It has led to the development of
new technologies and has opened up new avenues for exploring the behavior of quantum systems in extreme conditions. | {"url":"https://your-physicist.com/quantum-zeno-effect/","timestamp":"2024-11-08T22:30:53Z","content_type":"text/html","content_length":"55138","record_id":"<urn:uuid:84f8fdc2-6bc4-4d92-8d65-f08f4c41787f>","cc-path":"CC-MAIN-2024-46/segments/1730477028079.98/warc/CC-MAIN-20241108200128-20241108230128-00639.warc.gz"} |
Properties of a matrix
Use this calculator to know whether a matrix has one of the following properties :
positive definite
negative definite
Singular matrix
A matrix is singular if and only if its determinant is zero.
Another equivalent definition is that a matrix is singular when at least one column (or row) of the matrix is a linear combination of other columns (or rows) of that matrix. In this case, we say that
the matrix columns (or rows) are collinear.
A singular matrix is also called to be degenerate or non invertible.
A matrix which is not singular is called regular, i.e. a matrix whose determinant is non-zero or, equivalently, which has linearly independent columns (and rows). Example :
`M = [[1,2,5],[-8,6,0],[2,4,10]]`
We notice that the 3rd row can be obtained by multiplying the 1st row by 2. Since the row vectors of this matrix are not linearly independent, this matrix is singular. For a square matrix M, all of the following propositions are equivalent:
- Row vectors (or column vectors) are collinear
- Row vectors (or column vectors) are linearly dependent
- Row vectors (or column vectors) are not linearly independent
- The determinant is zero
- This matrix is singular
- This matrix is not regular
- This matrix is not invertible
- This matrix is degenerate
Therefore, to prove that a matrix is singular, just prove that a column (or a row) is a linear combination of other columns (or rows). If such a linear combination is not easy to find then calculate
its determinant. If it is zero then the matrix is singular.
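An illustrative NumPy sketch of that procedure (the helper below is hypothetical, not part of the calculator; a tolerance is used because floating-point determinants of singular matrices rarely come out exactly zero):

```python
import numpy as np

def is_singular(M, tol=1e-12):
    """Return True when the square matrix M is singular (rank-deficient)."""
    M = np.asarray(M, dtype=float)
    return np.linalg.matrix_rank(M, tol=tol) < M.shape[0]

# The example above: the 3rd row is twice the 1st, so the matrix is singular.
M = [[1, 2, 5], [-8, 6, 0], [2, 4, 10]]
print(is_singular(M))       # True
print(np.linalg.det(M))     # ~0, up to rounding
```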
Invertible matrix
A matrix M of size n × n is invertible if there exists a matrix denoted `M^(-1)` of size n × n such that,
`M*M^(-1) = M^(-1) * M = I_n`
I_n is the identity matrix of size n × n.
"M is invertible" is equivalent to "M is not singular".
Diagonalizable matrix
A square matrix M is diagonalizable if there exists an invertible matrix P and a diagonal matrix D such that,
`M = P D P^(-1)`
In practice, here are the steps to follow to diagonalize an n-by-n matrix.
First, we calculate its characteristic polynomial.
Then, we calculate its eigenvalues, eigenvectors and their multiplicities. As a reminder, the eigenvalues are the roots of the matrix's characteristic polynomial and the (geometric) multiplicities are the dimensions of the corresponding eigenspaces.
If the sum of the eigenspace dimensions is equal to n, then the matrix M is diagonalizable. In particular, when M has n distinct eigenvalues, all eigenspaces are of dimension 1 and the matrix is diagonalizable.
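As an illustrative numerical check (a sketch, not part of the calculator; the rank test on the eigenvector matrix is a floating-point stand-in for the eigenspace-dimension argument, and the tolerances are assumptions):

```python
import numpy as np

def is_diagonalizable(M, tol=1e-9):
    """M is diagonalizable when its eigenvectors span the whole space,
    i.e. the eigenvector matrix P is (numerically) invertible and
    P D P^-1 reconstructs M."""
    M = np.asarray(M, dtype=complex)
    eigvals, P = np.linalg.eig(M)
    if np.linalg.matrix_rank(P, tol=tol) < M.shape[0]:
        return False                     # eigenvectors do not span C^n
    D = np.diag(eigvals)
    return np.allclose(P @ D @ np.linalg.inv(P), M, atol=1e-8)

print(is_diagonalizable([[3, 0], [0, 4]]))   # True: distinct eigenvalues
print(is_diagonalizable([[1, 1], [0, 1]]))   # False: a Jordan block
```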
Positive definite matrix
A square matrix M with real entries is positive-definite if it satisfies all of these conditions.
- M is a symmetric matrix.
- M is invertible.
- all M eigenvalues are real and positive.
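Translated into code, the test can lean on the Cholesky factorization, which exists exactly for symmetric positive-definite matrices (an illustrative sketch, not part of the calculator):

```python
import numpy as np

def is_positive_definite(M):
    """Symmetric with all eigenvalues real and positive.
    A Cholesky factorization succeeds exactly in that case."""
    M = np.asarray(M, dtype=float)
    if not np.allclose(M, M.T):
        return False                     # must be symmetric first
    try:
        np.linalg.cholesky(M)            # raises for non-positive spectra
        return True
    except np.linalg.LinAlgError:
        return False

print(is_positive_definite([[2, -1], [-1, 2]]))   # True (eigenvalues 1, 3)
print(is_positive_definite([[1, 2], [2, 1]]))     # False (eigenvalue -1)
```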
Negative definite matrix
A square matrix with real entries is negative-definite if its additive inverse matrix (-1) * M is positive definite.
Orthogonal matrix
Let M be a square matrix of size n. Then M is called orthogonal if it satisfies one of the two following equivalent propositions :
1/ \(M^T . M = M . M^T = I_n\)
where, \(I_n\) is the identity matrix of order n and,
\(M^T\) is the transpose matrix of M.
2/ M is invertible and \(M^{-1} = M^T\)
`M = [[cos(theta),-sin(theta)],[sin(theta),cos(theta)]]`
It is easy to check that,
\( M^T . M = M .M^T = I_2\)
because of,
`cos^2(theta)+sin^2(theta)= 1`
Symmetric matrix
A square matrix M is symmetric if it is equal to its own transpose matrix i.e.,
\({}^t \! M = M\)
which is equivalent to,
if `a_(ij)` are the entries of matrix M, then,
`a_(ij) = a_(ji)` for all i and j between 1 and n.
Square matrix
A matrix is square if it has the same number of rows and columns. A square matrix M is said to be of order n if it is n-by-n.
Unitary Matrix
A square matrix M with complex coefficients is unitary if it satisfies the following equalities,
\(M . M^{*} = M^{*} . M = I_n\)
- Matrix M is n-by-n.
- \(M^{*}\) is the conjugate transpose of M
- \(I_n\) is the identity matrix of order n
This is equivalent to the fact that M is invertible and its inverse is its own conjugate transpose matrix.
For a matrix with real entries, this is equivalent to say that "M is an orthogonal matrix"
Normal matrix
A square matrix with complex coefficients is normal if it commutes with its conjugate transpose, i.e.,
\(M . M^{*} = M^{*} . M\)
Involutory Matrix
An involutory matrix is an invertible square matrix that is equal to its own inverse matrix. Therefore, if M is a square matrix of size n, then it is involutory if and only if,
\(M^{-1} = M\)
which is equivalent to, \(M^{2} = I_n\)
\(I_n\) being the identity matrix of order n.
Hermitian matrix
A matrix with complex coefficients is hermitian if it is equal to its own conjugate transpose matrix, that is,
\(M = M^{*}\)
Nilpotent matrix
A matrix M is nilpotent if some power of it is equal to the zero matrix, i.e.,
there exists a positive integer n such that,
`M^n = 0`
If n is the least positive integer that satisfies this equality, then M is nilpotent of index n.
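A short sketch of the index search (an illustrative helper, not part of the calculator; for an n × n nilpotent matrix the index never exceeds n, so the loop is bounded):

```python
import numpy as np

def nilpotent_index(M):
    """Return the least k with M^k = 0, or None if M is not nilpotent.
    For an n x n nilpotent matrix the index is at most n."""
    M = np.asarray(M, dtype=float)
    n = M.shape[0]
    P = np.eye(n)
    for k in range(1, n + 1):
        P = P @ M
        if np.allclose(P, 0):
            return k
    return None

# Strictly upper triangular matrices are the classic example.
N = np.array([[0, 1, 0], [0, 0, 1], [0, 0, 0]])
print(nilpotent_index(N))   # 3
```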
Diagonal matrix
A diagonal matrix is a matrix in which all elements outside the principal diagonal are equal to zero.
In a space of dimension n, it can be written as,
\(D_n = \begin{pmatrix} a_1 & 0 & \cdots & 0 \\ 0 & a_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & a_n \\ \end{pmatrix}\)
Example in dimension 2 :
\(D_2 = \begin{pmatrix} 3 & 0 \\ 0 & 4 \\ \end{pmatrix}\)
Example in dimension 3 :
\(D_3 = \begin{pmatrix} 3 & 0 & 0 \\ 0 & 4 & 0 \\ 0 & 0 & -1 \\ \end{pmatrix}\)
See also
Matrix Operations
Determinant of a matrix
conjugate transpose
Inverse matrix
Characteristic Polynomial
Eigenvalues and eigenvectors | {"url":"https://www.123calculus.com/en/matrix-properties-page-1-35-300.html","timestamp":"2024-11-03T09:27:30Z","content_type":"text/html","content_length":"25447","record_id":"<urn:uuid:6dd208dd-a9b9-4eda-8438-926f29e2ac4a>","cc-path":"CC-MAIN-2024-46/segments/1730477027774.6/warc/CC-MAIN-20241103083929-20241103113929-00224.warc.gz"} |
General Chemistry - Online Tutor, Practice Problems & Exam Prep
When it comes to diprotic acids, just remember that sulfuric acid, which is H2SO4, represents the only strong diprotic acid. All others will be considered weak. Here we're going to say in terms of
acidity, the first acidic proton dissociates completely and the second acidic proton only partially. Taking into account both acidic protons will help give us a complete picture of the concentration
of H+ ions.
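As a rough numerical sketch of that bookkeeping (every number here is an assumption added for illustration; the lesson itself quotes none, and Ka2 ≈ 1.2 × 10⁻² is a commonly tabulated value for HSO4⁻ at 25 °C):

```python
import math

# pH of a sulfuric acid solution: first dissociation treated as complete,
# second as a weak equilibrium HSO4- <=> H+ + SO4^2-.

C = 0.10          # formal concentration of H2SO4, mol/L (assumed example)
Ka2 = 1.2e-2      # assumed textbook value for HSO4-

# After the first (complete) step: [H+] = C and [HSO4-] = C.
# Let x = [SO4^2-] formed by the second step:
#   Ka2 = (C + x) * x / (C - x)  ->  x^2 + (C + Ka2) x - Ka2 C = 0
x = (-(C + Ka2) + math.sqrt((C + Ka2) ** 2 + 4 * Ka2 * C)) / 2

H = C + x
print(f"[H+] = {H:.4f} M, pH = {-math.log10(H):.2f}")   # ~0.1099 M, pH ~ 0.96
```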
With that information we can calculate pH. So just remember how would this look? We have sulfuric acid here reacting with water. Since it's strong, we'd have a solid arrow going forward. This would
create HSO4 minus aqueous plus H3O plus. We've lost the first acidic hydrogen, so that's K1. But now if we're trying to lose the 2nd acidic hydrogen, that's HSO4 minus aqueous plus H2O liquid.
At this point it's considered weak. So instead of having a solid arrow going forward, we have reversible arrows, meaning only a portion of the reactants break down to give us products. Here this would
deal with K2 where we're losing the 2nd acidic hydrogen. We create our sulfate ion plus additional H3O plus ion. So this is the picture you have to take. When it comes to sulfuric acid, the first
acidic proton is lost completely, but only the second one partially. | {"url":"https://www.pearson.com/channels/general-chemistry/learn/jules/17-acid-and-base-equilibrium/diprotic-acids-and-bases-calculations","timestamp":"2024-11-07T04:41:11Z","content_type":"text/html","content_length":"543919","record_id":"<urn:uuid:f5392a76-5189-4e01-ab9a-27b62cc89333>","cc-path":"CC-MAIN-2024-46/segments/1730477027951.86/warc/CC-MAIN-20241107021136-20241107051136-00784.warc.gz"} |
College GPA Calculator
Use our college GPA calculator to find your grade point average by entering the grade for each course below.
How to Calculate a College GPA
A grade point average, or GPA, is a standard way of measuring academic achievement in the U.S. A GPA is a numerical representation of a student’s average performance across all of their courses that
accounts for the weight or difficulty of each course.
For many students, keeping a close watch on their GPA is important for academic reasons, for future employment opportunities, or for potential graduate school admissions. You can calculate a college
GPA or semester GPA in a few steps.
Step One: Understand the Grade Point Values
Firstly, you need to be familiar with the grade point values assigned to each grade:
Table showing the GPA for each letter grade.
Letter Grade GPA
A 4.0
B 3.0
C 2.0
D 1.0
F 0.0
Note that some institutions may have + or – grades such as A-, B+, etc. They often carry slightly different point values.
Table showing the GPA for each letter grade using the plus/minus scale.
Letter Grade GPA
A+ 4.33
A 4.0
A- 3.67
B+ 3.33
B 3.0
B- 2.67
C+ 2.33
C 2.0
C- 1.67
D+ 1.33
D 1.0
D- 0.67
F 0.0
Step Two: Determine the Credits for Each Course
Typically, courses have varying credits depending on their workload. For instance, a lab course may be worth 4 credits, while a regular lecture course might be 3 credits.
Step Three: Weight the GPA for Each Course
For each course, you need to multiply the grade point value for the grade you received by the number of credits that course is worth.
For instance, if you received a ‘B’ grade in a course that is 3 credits, then the total points for that course would be:
3 (credits) × 3.0 (B value) = 9.0 points
Step Four: Sum All the Grade Points
Once you have calculated the points for each course, sum them all up.
For instance, if you have the following points from three different courses: 9.0, 12.0, and 8.0, the total would be:
9.0 + 12.0 + 8.0 = 29.0 points
Step Five: Sum All The Credits
Add up the total number of credits you’ve taken.
Step Six: Divide the Total Points by the Total Credits
Finally, divide the total points from all your courses by the total number of credits.
For instance, given the total number of 29.0 points from step four and a total of 9 credits from step five, you can calculate your GPA like this:
GPA = 29.0 points ÷ 9 credits = 3.22
So, your GPA would be 3.22.
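The whole six-step procedure collapses into a few lines of code. A minimal sketch (the grade table and course list are illustrative assumptions; the course mix below is one combination that reproduces the 29-point, 9-credit example above):

```python
# Grade-point values from the simple scale above; swap in the
# plus/minus table if your school uses it.
GRADE_POINTS = {"A": 4.0, "B": 3.0, "C": 2.0, "D": 1.0, "F": 0.0}

def gpa(courses):
    """courses: iterable of (letter_grade, credits) pairs."""
    total_points = sum(GRADE_POINTS[g] * cr for g, cr in courses)
    total_credits = sum(cr for _, cr in courses)
    return total_points / total_credits

# 16 + 9 + 4 = 29 points over 4 + 3 + 2 = 9 credits
print(round(gpa([("A", 4), ("B", 3), ("C", 2)]), 2))   # 3.22
```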
This process is the same as calculating a weighted grade.
Keep in mind that you should always double-check your college’s specific grading scale since some schools might have variations.
If you’re interested in your major-specific GPA (as opposed to your overall GPA), follow the same steps but only for the courses within your major.
You might also be interested in our high school GPA calculator. | {"url":"https://www.inchcalculator.com/college-gpa-calculator/","timestamp":"2024-11-07T04:22:37Z","content_type":"text/html","content_length":"172269","record_id":"<urn:uuid:ca8a7258-f116-4efa-8f80-2a3f816a7de0>","cc-path":"CC-MAIN-2024-46/segments/1730477027951.86/warc/CC-MAIN-20241107021136-20241107051136-00673.warc.gz"} |
Quantum computer kills internet banking?
“SCIENTISTS FACTOR THE NUMBER 15.”
Hardly a headline to grab the popular imagination. But when it’s done by a quantum computer – and one that’s scalable – it’s time to take notice.
A paper published today in Science describes a five-atom quantum computer that can factor numbers – that is, start with a number and find numbers that, when multiplied, equal that first number. For
instance, 15 factors into three times five.
It’s also a striking illustration of how quantum computers will smash today’s internet encryption – when they arrive, that is.
Computerised factoring is not new – quantum computers have factored numbers before (and those much bigger than 15). The key point here, though, is the new design can be upscaled to much more powerful
versions simply by adding atoms.
Many of the world’s public key security systems, which encrypt online banking transactions and the like, operate on a simple principle: that it’s easy to multiply two large prime numbers to generate
a gigantic number.
But given the gigantic number, it’s next to impossible to work out its factors, even using a computer.
In March 1991 the encryption company RSA set a challenge – they published a list of very large numbers and announced cash awards for whoever could factor them. The prizes went from $1,000 for
factoring a 100-digit number, up to $200,000 for a 617-digit number.
A quarter of a century later, most of those numbers remain uncracked.
But with a large enough quantum computer, factoring huge numbers – even those 600 digits long – would be child’s play.
In classical computing, numbers are represented by either 0s or 1s called “bits”, which the computer manipulates in a series of linear, plodding logic operations trying every possible combination
until it hits the right one.
For example, to factor a 232-digit monster (the largest RSA number broken) took two years with hundreds of classical computers running in parallel – and ended up being solved too late to claim the
$50,000 prize.
In contrast, quantum computing relies on atomic-scale units, or “qubits”, that can be 0, 1 or – weirdly – both, in a state known as a superposition. This allows quantum computers to weigh multiple
solutions at once, making some computations, such as factoring, far more efficient than on a classical computer.
The problem has been building these qubits into a large-enough assembly to make meaningful calculations. The more atoms, the more they jostle together and the harder it is to control each one.
And as superposition is a very delicate state, a small bump will cause an atom to flip to 0 or 1 easily.
The new design, devised by physicists at the Massachusetts Institute of Technology and constructed at the University of Innsbruck in Austria, uses five calcium ions (atoms stripped of an electron)
suspended in mid-air by electric and magnetic fields.
The ions are close enough to one another – about a hundredth the width of a human hair – to still interact. The researchers use laser pulses to flip them between 0, 1 and superposition to perform
faster, more efficient logic operations.
Without any prior knowledge of the answers, the system returned the correct factors (15 = 5 x 3), with a confidence of more than 99%. Previous quantum computers achieved the same result with 12 ions.
And this system is “straightforwardly scalable”, according to Isaac Chuang, a physicist at MIT whose team designed the computer.
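For readers who want to see the arithmetic, here is the number-theoretic core of Shor's algorithm emulated classically (an illustrative sketch, not the MIT/Innsbruck procedure itself; the quantum machine's job is to find the period r efficiently, after which the factoring is ordinary arithmetic):

```python
from math import gcd

N, a = 15, 7          # a must be coprime to N

# Classically find the period r: the smallest r > 0 with a^r = 1 (mod N).
r = 1
while pow(a, r, N) != 1:
    r += 1
print("period r =", r)                       # r = 4 for a = 7, N = 15

if r % 2 == 0 and pow(a, r // 2, N) != N - 1:
    p = gcd(pow(a, r // 2) - 1, N)
    q = gcd(pow(a, r // 2) + 1, N)
    print("factors:", p, q)                  # 3 and 5
```

The printed factors are 3 and 5; what would make a large quantum computer so valuable is that it finds r in polynomial time even when N runs to hundreds of digits, whereas the brute-force loop above does not.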
A truly practical quantum computer would likely require thousands of atoms manipulated by thousands of laser pulses. Meanwhile, other researchers are working on scalable computer systems using more
conventional technology such as silicon.
“It might still cost an enormous amount of money to build – you won’t be building a quantum computer and putting it on your desktop anytime soon – but now it’s much more an engineering effort, and
not a basic physics question,” says Chuang.
Whatever the cost, the abililty to crack internet security would make a large-scale quantum computer, literally, invaluable.
Read our handy primer on quantum mechanics – Quantum physics for the terminally confused | {"url":"https://cosmosmagazine.com/science/physics/will-this-quantum-computer-take-down-internet-banking/","timestamp":"2024-11-12T04:09:22Z","content_type":"text/html","content_length":"89500","record_id":"<urn:uuid:ea4e1be2-e9db-4617-b27e-2301f4003543>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.50/warc/CC-MAIN-20241112014152-20241112044152-00429.warc.gz"} |
Let98jan - Richard Harter's World
This a traditional letter column. You are encouraged to write a letter of comment on anything that you find worthy of comment. It will (may) be published in this column along with my reply. As editor
I reserve the right to delete material; however I will not alter the undeleted material. E-mail to me that solely references the contents of this site will be assumed to be publishable mail. All
other e-mail is assumed to be private. And, of course, anything marked not for publication is not for publication. Oh yes, letters of appreciation for the scholarly resources provided by this site
will be handled very discreetly. This page contains the correspondence for January 1998.
Index of contributors
Other Correspondence Pages
From: L. Hunter Cassells
Date: 12/30/97
Subj: A choice that cannot be made
“My will is that you choose.” Except God cannot divest Himself of that power; He is the only one that has it, and even putting Himself at your disposal, so to speak, is His choice, for which He alone
is accountable. Whether I obey you or disobey, I choose, and the responsibility is mine, mine alone, and never my commander’s.
Hmmm. This gets tricky. Your point as I take it is that choices and the responsibility for those choices can never, in the nature of things, be passed off to others. One way to look at it is that
this is not your choice – it is God’s choice. All you can do is offer God advice. Very well, put it this way:
God says to you “I am contemplating one of two actions – eternal salvation for you and eternal damnation for everyone else or else eternal salvation for everyone else and eternal damnation for
you. I demand of you your opinion as to which of these two actions I should take.”
The natural thing to do is to refuse to make a choice. God being God, however, can compel you to make a choice. You might refuse until the moment of compulsion comes and your answer, so to speak,
pops out by itself. But this would not be a conscious choice. Can God compel you to make a conscious choice? Not, I opine, under the ground rules of Christianity.
You might build in a “default” choice. If the choice was made by my walking through the left door or the right one, I might answer by choosing to die in between and never pass either one; but if the
choice was made by staying put or leaving by (either) door, then the choice is forced.
This is another variation of “Can God force you to make a choice?”
Return to index of contributors
From: Alfred Lehmberg
Date: 12/30/97
Subj: Great site!
At first blush our priorities seem to parallel. Enjoy, and agree with many of your positions. Looking forward to keeping up with you — expect my periodic visit to your site.
Explore the Alien View?
Thunk you, nice to hear from you and all that good stuff. I took a look at your site – some interesting stuff there. But I don’t have priorities – I have seniorities.
Return to index of contributors
From: LaVirg
Date: 12/22/97
Subj: Waiting for Godot?
I read your interpretations for Beckett’s Waiting for Godot and it seems to me as if you understand even the most absurd. So, I have a question for you……… Do you or can you find postmodern aspects of
the play? You are perhaps the only person who can help me with this, and since you know so much…………………………………………….. If you write back, I’d be much obliged.
Sincerely, Andrea ( the confused college student)
I’ll try to answer although if I were you I wouldn’t put any great stock in my answer if you’re looking for a grade.
One of the problems with saying “postmodern aspects” is that postmodern and postmodernism are labels that are used for a lot of different things.
Waiting For Godot is a quintessential example of the Theatre of The Absurd which, in different forms, was quite popular after the world wars. In its simplest form it says that life – the complex
life of the culture that we live in – doesn’t make sense. It holds a mirror up to that senselessness. As such it was a natural reaction in the aftermath of the great wars. Our cultures are
supposed to make sense to us. That is, they provide ready explanations for what we do within them and why we do them. More than that they are expected to deliver – to take care of our needfuls
including our sense of security. Theatre of the Absurd is a trick – take an incomprehensible element of life within the culture and strip everything else away.
It’s been a good while since I’ve seen a production of WFG so memory may fail me but, as I recall it, it’s a pretty simple play in structure. You two elements – the two citizens waiting for
contact with a remote authority and the master/servant who may be taken as symbols of the aristocratic upper class/lower class. Puzo represents the past; the two main characters are people in the
modern incomprehensible state. As to who they are waiting for, my little satire points out that different explanatory systems provide different explanations. Each of them provides jargon and
formulas of explanation. I suppose you could say that the satire is a structuralist analysis – it points to the common structure of how these different explanatory systems have a common
The terms used in post-modern circles for what I have called ‘explanatory systems’ are grand narratives and meta-narratives. It is a very post-modern thing to do to look at meta-narratives with a
jaundiced eye. Meta-narratives do two things that post-modernists point to. They substitute for the actual an explanation of the actual. (Hence the emphasis on looking only at the text.) They
also are politicized. That is, a meta-narrative gives privelege to certains concepts and modes of thinking; the priveleges within the meta-narrative reflect the interests and values of the
priveleged class that propounded the meta-narrative.
Perhaps a post-modern way of looking at WFG is that it makes it evident that the meta-narratives don’t explain what they purport to explain. The incomprehensible remains incomprehensible after
the explanation is over. After the babblers are through babbling they are still waiting for Godot.
Another thought along these lines is that Derrida devotes attention to the idea of the Messiah as the unplanned future that is to come.
I don’t know if this is of any help but it may give you a few thoughts to play with.
Return to index of contributors
From: Henning Strandin
Date: 01/06/97
Subj: editorial dec 29
As always, it is a pleasure to visit your site. I thought I’d contribute some semi-relevant thoughts on the web as an encyclopedia (it became quite long, sorry about that).
That’s quite all right. I enjoy long letters, particularly interesting ones such as yours. As a bonus they are material for the ever-expanding web site that I don’t have to write.
I believe you are right in comparing the web to your other reference literature. This is how I use the web much of the time. There is one fundamental difference though. On the shelves of the web can
not only be found books and writings, but also human beings, with personal experiences from professions, cultures, hobbies, you name it. This is in my opinion what really makes the web an
‘encyclopedia humana’ (please correct my latin, it is non-existent). The fact is, the recipe you were looking for was out there somewhere, and you managed to get hold of it. One might protest that
Usenet, IRC and e-mail correspondence is not part of the web as such. It isn’t. But hyper-links in ICQ, Usenet archives on the web and HTML in e-mail etc, will soon make the distinction irrelevant, I think.
Along these lines the web is part of what I will call experience of the third kind. Experience of the first kind is direct experience – we walk, we talk, we interact directly with people.
Reading text and looking at pictures is experience of the second kind. The medium is passive – we produce the sensory experience in our imagination and thought. The electronic media is a new kind
of experience, experience of the third kind, which is still developing and expanding. The media is active and not passive, yet it is not the immediate experience of the first kind.
This encyclopedia can also be used for all kinds of not-so obvious tasks. E.g., English is not my maternal language. I have an on-line dictionary among my bookmarks, but it’s far from complete. When
I need to check the spelling of a more unusual word, I use Altavista. I make a search for the word, if I get 50 hits, all in non-English speaking countries, I conclude that it’s the wrong spelling,
but a common enough mistake. If I get 3000 hits with a large part from sites that end with .edu, I can safely assume that it’s the right spelling. I can also search for a whole phrase, to see what
this phrase means, and even get an idea of what kind of people use it and get it in a cultural context. This encyclopedia is really like no other before it.
Hmmm. That’s an interesting trick that I hadn’t thought of. I have a dictionary (the print and paper kind) immediately at hand – it’s a bit handier than going to alta vista and doing a search. On
the other hand it isn’t the OED so that’s a trick worth remembering. A friend introduced me to web bingo. The game is to simply think of names that are likely to be used as site names, e.g. http://www.foobar.com (which is real) or http://www.fubar.com (which doesn’t exist). One turns up all sorts of randomness this way. There is a nifty site that provides domain lookup, which will tell
you if any domain name is available.
Finally, about the web being unreliable as a source. It’s impossible to use the web for research without practicing good critical analysis of the sources you find. You need to get multiple
confirmations and make a check-up on the authors if you can. I know it has taught me to be more critical towards traditionally published writings too. This, I think, is a Good Thing. So this is the
one point were I don’t agree with you. I don’t think the net will propagate pseudo-science, but maybe teach us to be more critical in our approach to other people’s writings. After all, getting
multiple sources is often a matter of seconds on the web, while at the library, it may take hours or days.
It’s sort of a mixed bag. One of the plusses is that a lot of reliable sources such as the leading science journals are on-line. This means that if I create a page on a subject I can link
directly to my references. On the other hand my page is not refereed. This is a problem. The vast bulk of material on the web is not validated by anybody except the web page author. A major aim
of the enterprise of science is to produce reliable knowledge of the world.
This is not a simple matter. There is an immense amount of cross checking that goes into the knowledge claims of science. It is this process of social verification of knowledge that gets diluted
in the web. On the other hand these things tend to be self correcting. Today we have search engines which provide us with connectivity. We have various sites that point out “cool” and “hot”
sites. I wouldn’t be surprised if we end up with web page review engines that rate reliability of pages.
Pseudoscience is another matter entirely. Every crackpot can put up a web page and many of them do. They can recruit and foster their sundry causes. The loonier ones also provide great
entertainment but that is another matter.
That was all, thanks again for contributing such a massive amount of original content to the web (that’s where the real work lies after all).
Kind words are always appreciated. Thanks for the thoughtful letter. Sites with a large amount of content are common enough, but sites with a large amount of original content are relatively rare.
One of the neat things about the web is that you can get a look at what other people have to say about themselves. The ordinary world of mass media (books, movies, et cetera) doesn’t do that.
From: L. Hunter Cassells
Date: 1/7/98
Subj: A choice that cannot be made
(material from first letter)
God says to you “I am contemplating one of two actions – eternal salvation for you and eternal damnation for everyone else or else eternal salvation for everyone else and eternal damnation for
you. I demand of you your opinion as to which of these two actions I should take.”
“God, my honest opinion is that you need a long hot bath and a good bottle of Scotch ale, and find yourself some new actions. You’re God; you should be able to come up with something.”
The trouble is that that would make you comfortable. God in this scenario isn’t interested in making you comfortable or making you feel good. He wants to put you to the test to see what you do
with a really hard choice.
(material from first letter)
The natural thing to do is to refuse to make a choice. God being God, however, can compel you to make a choice. You might refuse until the moment of compulsion comes and your answer, so to speak,
pops out by itself. But this would not be a conscious choice. Can God compel you to make a conscious choice? Not, I opine, under the ground rules of Christianity.
My “default choice” mechanism wouldn’t amount to a truly conscious choice, true.
But all this is weaseling, by playing games with the metaphor in which the question is framed, rather than the point of the question. How profoundly are you willing to screw other folks, for how much
gain; or, how much pain are you willing to endure for the good of others? Stealing a pen from work costs my fellow taxpayers an infinitesimally small amount, which may even be offset by their
fractional gain (my small pleasure in having a pen at hand, contributing to my overall good humor, spread over 250 million folks). Stealing $50 billion by computer fraud from a bank will specifically
hurt specific people a whole lot more than my tolerance level; I can’t think of any pleasure that $50 billion would bring me, that would outweigh my displeasure at the pain caused. OTOH, you don’t
see me sacrificing my life now trying to build a better world for others. At the level of the eternal bliss/damnation question, the choices are both so nearly equal, that I think my only real option
is to go mad. I’m reminded of another dog story: dogs asked to choose between an oval and a circle for a treat or a shock. The ovals were made successively more circular. At some point (varying, as I
recall, with the individual dogs), being unable to choose and yet having to, they started to lose it, barking and scrabbling and otherwise showing signs of deep agitation.
Your point about playing games with the metaphor is well taken. The question takes the notion of self-interest vs self-sacrifice to the limit. Here are some thoughts on that:
In evolutionary theory there are explanations for the evolution of altruism. In the ordinary course of things evolution rewards the selfish individual who looks out for number one. However there
are circumstances where there is selection for limited forms of altruism. Parental care is the obvious one. Sacrifice for relatives is another (your relatives share your genes). Reciprocal
altruism in social species is another.
The forms of altruism found in animals are all forms of indirect self interest and are distinctly limited in scope. With humans, however, altruism is culturally mediated and has a much wider
scope. The genes in the cultureless animal cannot conceive of the common good nor can they plan for the future. Humans can. It is culture that permits us to have these thoughts of extended
self-sacrifice; culture takes us into larger realms. The question tests the boundaries of those realms.
A related question is what I call the question of the price. Take an action which you would not ordinarily do for money. The action depends on your personal predilections. For the sake of
argument let the person being queried be heterosexual and the action be a homosexual act. This is something that one might do for nothing out of curiosity. However you are not being asked to do
it for nothing; you are being asked to name a price for doing it. Would you do it for $10, $1000, $1000000, or what? An interesting thing about this question is that different prices have
different meanings.
There is a limit to how much is paid for sexual favors as a straight transaction – it is on the order of a few thousand dollars for an evening. If your price is within that range you are acting as a
prostitute selling your body for money. A high class prostitute, perhaps, but you are in that ball park. Then there are a range of prices which will make a real difference in your life but will
not make a radical difference. Thus a hundred thousand dollars would let you buy many things; it would not be enough so that you could stop working. Above that is an amount of money which is
enough so that you would never have to work again. Still further up is an amount that would enable you to live in luxury for the rest of your life. An even larger amount would give you resources
for political and social power.
What I am getting at here is that different prices have different meanings. This applies also to considerations of self-interest and self-sacrifice. (Your point, of course.)
Another thought is that the question raises the possibility of self-sacrifice as an absolute. It occurs to me that this is not a Christian idea but rather it is a pagan idea. Christ may have died
for us but he was conveniently resurrected. Prometheus, on the other hand, suffers for us forever.
Return to index of contributors
From: Ruby Dunagan
Date: 1/9/98
Subj: Intrigued
I was intrigued by two things that you have: One – your intro on building web pages and who they might attract. And the other – your horseback riding! My nephew has a small spread in Oklahoma and he has hosted different people to ride and participate in the day-to-day stuff that goes on, so it was suggested to him that he start a cowboy school. That link has not got him very far, possibly because people are looking for a vacation, not a school. I wonder if you would exchange links with him when we get him under another link in the browsers! That is, he lists your place and you list his.
As you can tell my site is not exactly commercial. It is sort of dedicated to the spirit of free personal journalism or something pretentious like that. You will not find in it those annoying
advertising banners with the ticky tacky animated gif’s. Although, come to think on it I do have an EFF token on my main page. Oh well, one shouldn’t take sacred principles too seriously.
Be that as it may, sure. Little people helping little people and all that, you know. I dunno as it will help him a great deal but I’m game. Send me the URL when you get it up.
Return to index of contributors
From: Jim Belec
Date: 1/15/98
Subj: naked antelopes and watermelon seeds
Not sure how I arrived at your cluttered mind-scape, I closed my search pages soon after I began wandering about your place. I think I was searching for idiot savant references, or mental
calculations (math) suitable for my son . . . or some such.
In any case, I have made a shortcut to your pages so I can return and do the many topics you address (some) justice. Your humor suits me to a tee.
Good to hear from you; thanks for writing. Cluttered mind-scape is an excellent characterization. Re justice for my topics: I can’t afford justice; I want mercy! Have fun rambling through the site.
Return to index of contributors
From: Bob Oppenheimer
Date: 1/16/98
Subj: more humor
Hello Richard,
Here’s some new material. Well new for me at least. Your home page re-organization looks good though I do miss the (signature) “this site best viewed with a bottle of scotch”. After being barraged
with “this site best viewed with ….” it is a humorous difference.
[material snipped]
Thanks for the material. I don’t promise to use it but I might. I keep files of stuff and pick something out every so often. It’s much appreciated even if I don’t use it.
The “this site best viewed with a bottle of scotch” tag is still there. Maybe I should move it up to the top and make it bigger.
Re the reorg: I decided that the old layout was too wordy and didn’t really help people find their way around. Also I find it a lot more convenient. The layout of the site is sort of a maze of
twisty little passages but I don’t want to make it too hard. Also I figured that at the rate I’ve been adding material it makes sense to go to monthly table of contents and correspondence, just
as though it were an e-zine. Frankly I don’t know what it is.
Return to index of contributors
From: ron clarkson
Date: 1/16/98
Subj: link
please link my page to you sight www.kdi.com/~clarkson
Why? Have you looked at my site? Where do you think that a link to your site would belong? [Ron continued]
hello richard
yes i have been to your sight. as far as where the link should be, im not sure,. you can put them anywhere. i would recommend putting it somewhere toward the bottom so it will not distract in any way
form your page i would like to link your page as well. please send me the url again. if you need any help contact me.
The word is “site” rather than “sight”. I don’t think you understood what I was getting at: My web site literally has hundreds of pages covering a large number of topics. I certainly
wouldn’t put it on my main page but there are a lot of other pages where it might be appropriate. Do not worry though. A link to your page will appear in my correspondence pages. (Click here to
see his page. Why I should have a link to it is a mystery to me.)
Return to index of contributors
From: mdr
Date: 1/18/98
Subj: dhmo
thanks for dhmo page. plan to use it as an introduction to the chemistry part of a science class!
It’s an oldie, one of those things that has been kicking around for years. There are several copies floating around on the net. It’s a good object lesson, though.
Return to index of contributors
From: Miriam Coyne
Date: 1/24/98
Subj: loved the Godot in various modes…
I absolutely loved the Waiting for Godot in various modes! Your “baffle-gab” interpretations were right on the money. You need to add feminism. You could go off on the fact that women were not the
dominant force in the play and the play is obviously a conspiracy by the white male dominated society that rejects the power of women and their obvious importance in the cycle of world events.
Oh my, yes, I should have done feminism. I don’t know as it should be a conspiracy though – more on the order of “the phallocentric mindset of the patriarchy” with the servant being a symbolic
woman and Godot being the anti-goddess, the antithesis of the Goddess. I will concoct something.
Lesson 1 Rational Vs Irrational Numbers Worksheet Answers
Lesson 1 Rational Vs Irrational Numbers Worksheet Answers serve as fundamental tools in mathematics, offering a structured yet flexible platform for students to explore and understand mathematical ideas. The worksheets provide a systematic approach to understanding numbers, building a solid foundation on which mathematical proficiency can grow. From the simplest counting exercises to more advanced calculations, Lesson 1 Rational Vs Irrational Numbers Worksheet Answers suit students of diverse ages and ability levels.
Introducing the Essence of Lesson 1 Rational Vs Irrational Numbers Worksheet Answers
Rational and Irrational Numbers Worksheets: A rational number is expressed in the form \(\frac{p}{q}\), where p and q are integers and q is not equal to 0. Every integer is a rational number. A real number that is not rational is called irrational. Irrational numbers include \(\pi\), \(\phi\), square roots, etc.
At their core, Lesson 1 Rational Vs Irrational Numbers Worksheet Answers are vehicles for conceptual understanding. They encapsulate a range of mathematical principles, guiding students through the maze of numbers with a series of engaging, deliberate exercises. These worksheets go beyond rote learning, encouraging active involvement and promoting an intuitive grasp of mathematical relationships.
Supporting Number Sense and Reasoning
Example: Verify that \(1\frac{1}{2}\) is a rational number. Solution: Simplifying, \(1\frac{1}{2}\) becomes \(\frac{3}{2}\). The numerator 3 is an integer and the denominator 2 is an integer with \(2 \neq 0\); hence \(\frac{3}{2}\) is a rational number. The difference between the two is that rational numbers can be expressed as a ratio of two integers, while irrational numbers cannot be written that way.
The Rational Number System Worksheet: Classify these numbers as rational or irrational and give your reason.
1. a) 7,329 b) 4
2. a) 0.95832758941… b) 0.5287593593593…
Give an example of a number that would satisfy these rules:
3. a number that is real, rational, whole, an integer, and natural
4. a number that is real
The heart of Lesson 1 Rational Vs Irrational Numbers Worksheet Answers lies in cultivating number sense: a deep comprehension of what numbers mean and how they relate to one another. The worksheets encourage exploration, inviting students to dissect arithmetic operations, recognize patterns, and work through sequences. Through thought-provoking challenges and logical puzzles, they become gateways to sharper reasoning skills, nurturing the analytical minds of budding mathematicians.
From Theory to Real-World Application
The main difference between rational and irrational numbers is that rational numbers can be stated in the form \(\frac{p}{q}\), where p and q are integers and \(q \neq 0\), whereas irrational numbers cannot be expressed this way, though both are real numbers. When two numbers are divided, if the digits in the quotient terminate or repeat, the number is rational.
Rational vs Irrational Numbers: rational numbers can be written as a fraction of two integers, while irrational numbers cannot. Help students learn to correctly identify each with this eighth-grade number sense worksheet. In this Rational vs Irrational Numbers worksheet, students will gain practice differentiating between rational and irrational numbers.
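As a quick aside (not part of the worksheet itself), Python’s standard fractions module makes the p/q definition concrete: a rational number is exactly a ratio of two integers. A minimal sketch:

from fractions import Fraction

# The mixed number 1 1/2 from the example above, as an exact ratio of integers
print(Fraction(3, 2))               # 3/2
# Any terminating decimal is rational and converts exactly
print(Fraction("0.95832758941"))    # 95832758941/100000000000
# Irrational numbers such as sqrt(2) have no exact p/q form; a float like
# math.sqrt(2) is only a rational approximation of the true value.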
Lesson 1 Rational Vs Irrational Numbers Worksheet Answers act as bridges linking academic abstraction with the tangible realities of daily life. By building practical scenarios into mathematical exercises, students see the relevance of numbers in their surroundings. From budgeting and measurement conversions to interpreting statistical data, the worksheets equip students to apply their mathematical skills beyond the walls of the classroom.
Diverse Tools and Techniques
Versatility is inherent in Lesson 1 Rational Vs Irrational Numbers Worksheet Answers, which draw on a range of instructional tools to accommodate diverse learning styles. Visual aids such as number lines, manipulatives, and digital resources help students picture abstract concepts. This varied approach supports inclusivity, accommodating learners with different preferences, strengths, and cognitive styles.
Inclusivity and Cultural Relevance
In an increasingly diverse world, Lesson 1 Rational Vs Irrational Numbers Worksheet Answers embrace inclusivity. They cross cultural boundaries, incorporating examples and problems that resonate with learners from varied backgrounds. By including culturally relevant contexts, the worksheets foster an environment where every learner feels represented and valued, strengthening their connection with mathematical principles.
Crafting a Path to Mathematical Mastery
Lesson 1 Rational Vs Irrational Numbers Worksheet Answers chart a course toward mathematical fluency. They instill persistence, critical thinking, and problem-solving skills, qualities essential not only in mathematics but in many aspects of life. The worksheets empower students to navigate the complex terrain of numbers, nurturing a profound appreciation for the elegance and logic inherent in mathematics.
Embracing the Future of Education
In an era marked by technological innovation, Lesson 1 Rational Vs Irrational Numbers Worksheet Answers adapt readily to digital platforms. Interactive interfaces and online resources augment traditional instruction, providing immersive experiences that transcend spatial and temporal boundaries. This blend of traditional methods and new technology points to a promising era in education, cultivating a more vibrant and engaging learning environment.
Conclusion: Embracing the Magic of Numbers
Lesson 1 Rational Vs Irrational Numbers Worksheet Answers capture the magic inherent in mathematics: a captivating journey of exploration, discovery, and mastery. They go beyond standard pedagogy, serving as catalysts for curiosity and inquiry. With these worksheets, learners embark on an odyssey into the world of numbers, one problem and one solution at a time.
Pre-Algebra Unit 2 (Chambersburg Area School District):
Irrational Number: a number that CANNOT be written as a RATIO of 2 integers; its decimal form is non-repeating and non-terminating.
Square Root: a number that produces a specified quantity when multiplied by itself (7 is the square root of 49), written with the radical symbol √.
Python Bisect: Find the first occurrence of a given number in a sorted list using Binary Search - w3resource
Python Bisect: Exercise-4 with Solution
Write a Python program to find the first occurrence of a given number in a sorted list using Binary Search (bisect).
Sample Solution:
Python Code:
from bisect import bisect_left

def Binary_Search(a, x):
    # bisect_left returns the leftmost insertion point for x, which is the
    # index of the first occurrence when x is present in the sorted list.
    i = bisect_left(a, x)
    if i != len(a) and a[i] == x:
        return i
    return -1

nums = [1, 2, 3, 4, 8, 8, 10, 12]
x = 8
num_position = Binary_Search(nums, x)
if num_position == -1:
    print(x, "is not present.")
else:
    print("First occurrence of", x, "is present at index", num_position)
Sample Output:
First occurrence of 8 is present at index 4
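As a small extension of the same idea (not part of the original exercise), bisect_right pairs naturally with bisect_left: together they bound the run of equal elements, giving the last occurrence and the count.

from bisect import bisect_left, bisect_right

def count_occurrences(a, x):
    # Elements equal to x occupy the half-open range [bisect_left, bisect_right)
    return bisect_right(a, x) - bisect_left(a, x)

nums = [1, 2, 3, 4, 8, 8, 10, 12]
print(count_occurrences(nums, 8))   # 2
print(bisect_right(nums, 8) - 1)    # 5, the index of the last occurrence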
Previous: Write a Python program to insert items into a list in sorted order.
Next: Write a Python program to find the index position of the largest value smaller than a given number in a sorted list using Binary Search (bisect).
Full-Length 8th Grade MCAS Math Practice Test
Taking a Full-length 8th Grade MCAS Math practice test is the best way to help you get familiar with the test format and feel more confident. Not only will this help you measure your exam readiness
and solidify the concepts you’ve learned, but it is the best way to simulate test day.
To help you get the best out of this complete and realistic 8th Grade MCAS Math practice test and prepare your mind and body for the actual test, we recommend that you treat this practice test as a
real test. Prepare scratch papers, a pencil, a timer, and a calculator and take the test in one sitting and follow the time limits to the minute.
Take the following full-length 8th Grade MCAS Math practice test to simulate the test day experience. After you’ve finished, score your tests using the answer keys.
Good luck!
Time to refine your Math skill with a practice test
Take a REAL 8th Grade MCAS Mathematics test to simulate the test day experience. After you’ve finished, score your test using the answer key.
Before You Start
• You’ll need a pencil, a calculator, and a timer to take the test.
• It’s okay to guess. You won’t lose any points if you’re wrong. So be sure to answer every question.
• After you’ve finished the test, review the answer key to see where you went wrong.
• Calculators are permitted for the 8th Grade MCAS Math Test.
• Use the answer sheet provided to record your answers.
• The 8th Grade MCAS Mathematics test contains a formula sheet, which displays formulas relating to geometric measurement and certain algebra concepts. Formulas are provided to test-takers so that
they may focus on the application, rather than the memorization, of formulas.
• For each multiple-choice question, there are four possible answers. Choose the one that is best.
Good Luck!
8th Grade MCAS Math Practice Test
1- A pizza is cut into 8 slices. Jason and his sister Eva ordered two pizzas. Jason ate \(\frac{1}{2}\) of his pizza and Eva ate \(\frac{3}{4}\) of her pizza. What part of the two pizzas was left?
A. \(\frac{1}{2}\)
B. \(\frac{1}{3}\)
C. \(\frac{3}{8}\)
D. \(\frac{5}{8}\)
2- Robert is preparing to run a marathon. He runs \(3 \frac{1}{10}\) miles on Saturday and two times that many on Monday and Wednesday. Robert wants to run a total of 18 miles this week. How many
more miles does he need to run?
3- 20 more than twice a positive integer is 68 What is the integer?
A. 24
B. 28
C. 26
D. 30
4- \([3× (-21)+(5×2)]-(-25)+[(-3)×6]÷2=?\)
5- A girl 160 cm tall, stands 380 cm from a lamp post at night. Her shadow from the light is 100 cm long. How high is the lamp post?
6- If a tree casts a 38 ft shadow at the same time that a yardstick casts a 2 ft shadow, what is the height of the tree?
A. 24 ft.
B. 27 ft.
C. 57 ft.
D. 48 ft.
7- Mike is 7.5 miles ahead of Julia running at 5.5 miles per hour and Julia is running at the speed of 8 miles per hour. How long does it take Julia to catch Mike?
A. 2 hours
B. 5.5 hours
C. 7.5 hours
D. 3 hours
8- A company pays its employer $7,000 plus \(2\%\) of all sales profit. If \(x\) is the number of all sales profit, which of the following represents the employer’s revenue?
A. \(0.02x\)
B. \(0.98x-7,000\)
C. \(0.02x+7,000\)
D. \(0.98x+7,000\)
9- Jason needs an \(75\%\) average in his writing class to pass. On his first 4 exams, he earned scores of \(68\%\), \(72\%\), \(85\%\), and \(90\%\). What is the minimum score Jason can earn on his
fifth and final test to pass?
10- If \(25\%\) of a number is 8, what is the number?
A. 30
B. 34
C. 32
D. 36
11- An angle is equal to one-fifth of its supplement. What is the measure of that angle?
A. 20
B. 30
C. 45
D. 60
12- Which graph shows linear equation \(y=-2x+1\)?
13- What is the solution of the following system of equations?
A. \(x=48,y=22\)
B. \(x=50,y=20\)
C. \(x=20,y=50\)
D. \(x=22,y=48\)
14- Which of the following values for x and y satisfy the following system of equations?
A. \(x=3,y=2\)
B. \(x=2,y-3\)
C. \(x=-2,y=3\)
D. \(x=3,y=-2\)
15- The average of 21, 18, 16 and \(x\) is 20. What is the value of \(x\)?
A. 23
B. 25
C. 30
D. 20
16- Which of the following equations represents the compound inequality?
A. \(3≤x<5\)
B. \(2≤x<4\)
C. \(2≤x<6\)
D. \(1≤x<4\)
17- Point A \((–2,6)\) and point B \((13,-2)\) are located on a coordinate grid. Which measurement is closest to the distance between point A and point B?
A. 8 units
B. 13 units
C. 15 units
D. 17 units
18- Point A \((9,7)\) and point B \((4,-5)\) are located on a coordinate grid. Which measurement is closest to the distance between point A and point B?
A. 8 units
B. 13 units
C. 15 units
D. 17 units
19- In the \(xy\)-plane, the point \((-8,8)\) and \((4,-10)\) are on line A. Which of the following equations of lines is parallel to line A?
A. \(y=\frac{3}{2} x+4\)
B. \(y=\frac{x}{2}-3\)
C. \(y=2x+4\)
D. \(y=-\frac{3}{2} x-4\)
20- What is the \(x\)-intercept of the line with equation \(10x-4y=5\)?
A. \(-5\)
B. \(-2\)
C. \(\frac{1}{2}\)
D. \(\frac{5}{4}\)
21- Giselle works as a carpenter and as a blacksmith. She earns $20 as a carpenter and $25 as a blacksmith. Last week, Giselle worked both jobs for a total of 30 hours and earned a total of $690. How
long did Giselle work as a carpenter last week, and how long did she work as a blacksmith?
A. (12, 20)
B. (10, 18)
C. (12, 18)
D. (14, 16)
22- Which of the following values for x and y satisfy the following system of equations?
A. \(x=16,y=20\)
B. \(x=-16,y=35\)
C. \(x=12,y=40\)
D. \(x=16,y=-40\)
23- A ride in a taxicab costs $1.25 for the first mile and $1.15 for each additional mile. Which of the following could be used to calculate the total cost \(y\) of a ride that was \(x\) miles?
A. \(x=1.25(y-1)+1.15\)
B. \(x=1.15(y-1)+1.25\)
C. \(y=1.25(x-1)+1.15\)
D. \(y=1.15(x-1)+1.25\)
24- A caterer charges $120 to cater a party for 15 people and $200 for 25 people. Assume that the cost, \(y\), is a linear function of the number of \(x\) people. Write an equation in slope-intercept
form for this function. What does the slope represent? How much would a party for 40 people cost?
A. $280
B. $330
C. $300
D. $320
25- An attorney charges a fixed fee of $250 for an initial meeting and $150 per hour for all hours worked after that. Write a linear equation representation of the cost of hiring this attorney. Find
the charge for 25 hours of work.
A. $4,000.00
B. $4,200.00
C. $3,800.00
D. $4,600.00
26- The sum of two numbers is 30. One of the numbers exceeds the other by 8. Find the numbers.
A. 9, 15
B. 12, 20
C. 10, 18
D. 11, 19
27- How is this number written in scientific notation?
A. \(0.5823× 10^{-10}\)
B. \(5.823 × 10^{-6}\)
C. \(5.823 × 10^{-7}\)
D. \(58.23 × 10^{-5}\)
28- How is this number written in scientific notation?
A. \(2.8×10^9\)
B. \(2.8×10^{10}\)
C. \(28×10^{12}\)
D. \(2.8×10^8\)
29- Calculate the area shaded region.
A. \(2,950 \space mm^2\)
B. \(2,940 \space mm^2\)
C. \(3,000 \space mm^2\)
D. \(2,930 \space mm^2\)
30- A circle is graphed on a coordinate grid and then reflected across the \(y\)–axis. If the center of the circle was located at \((x,y)\), which ordered pair represents the new center after the reflection?
A. \((x,y)\)
B. \((x,-y)\)
C. \((-x,y)\)
D. \((-x,- y)\)
31- Jason built a rectangular tool shed that is 9 meters wide and has an area of 117 square meters. What is the length of Jason’s tool shed?
A. 10
B. 14
C. 13
D. 11
32- What is the estimated area of the shaded region?
A. \(11 \space cm^2\)
B. \(42 \space cm^2\)
C. \(153 \space cm^2\)
D. \(196 \space cm^2\)
33- Calculate the area of the shaded region.
A. \(6.5 \space m^2\)
B. \(6.86 \space m^2\)
C. \(7.3 \space m^2\)
D. \(6.95\space m^2\)
34- What is the median of these numbers?
A. 57
B. 58
C. 55
D. 56.5
35- What is the median of these numbers? \(1,3,9,5,11,15,26,14,18\)
A. 12
B. 11
C. 14
D. 10
36- What is the product of all possible values of \(x\) in the following equation?\(|-3x+4|=26\)
A. 12
B. 9
C. 11
D. 10
37- Out of 7 consonants and 4 vowels, how many words of 3 consonants and 2 vowels can be formed?
A. 24,400
B. 21,000
C. 21,300
D. 25,200
38- What is the area of the figure?
A. 112.5
B. 110
C. 115
D. 112
39- Find Volume of Pyramid?
A. \(2,592 \space cm^3\)
B. \(2,682 \space cm^3\)
C. \(2,590 \space cm^3\)
D. \(2,400 \space cm^3\)
40- 25 buses are running between two places P and Q. In how many ways can a person go from P to Q and return by a different bus?
A. 500
B. 620
C. 610
D. 600
How to Treat Missing Values in Your Data - DataScienceCentral.com
One of most excruciating pain points during Data Exploration and Preparation stage of an Analytics project are missing values.
How do you deal with missing values – ignore or treat them? The answer would depend on the percentage of those missing values in the dataset, the variables affected by missing values, whether those
missing values are a part of dependent or the independent variables, etc. Missing Value treatment becomes important since the data insights or the performance of your predictive model could be
impacted if the missing values are not appropriately handled.
The 2 tables above give different insights. The inference from the table on the left with the missing data indicates lower count for Android Mobile users and iOS Tablet users and higher Average
Transaction Value compared to the inference from the right table with no missing data. The inference from the data with missing values could adversely impact business decisions. The best scenario is
to get the actual value that was missing by going back to the Data Extraction & Collection stage and correcting possible errors during these stages. Generally, that won’t be the case and you will
still be left with missing values.
Let’s look at some techniques to treat the missing values:
I. Deletion
Unless the missing data are ‘missing completely at random’, deletion is in many cases the method best avoided.
a. Listwise: In this case, rows containing missing variables are deleted.
In the above case, the entire observation for User A and User C will be ignored for listwise deletion
b. Pairwise: In this case, only the missing observations are ignored and analysis is done on variables present.
In the above case, 2 separate sample data will be analyzed, one with the combination of User, Device and Transaction and the other with the combination of User, OS and Transaction. In such a case,
one won’t be deleting any observation. Each of the samples will ignore the variable which has the missing value in it.
Both the above methods suffer from loss of information. Listwise deletion suffers the maximum information loss compared to Pairwise deletion. But, the problem with pairwise deletion is that even
though it takes the available cases, one can’t compare analyses because the sample is different every time.
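In pandas, the two flavors of deletion look like this (a minimal sketch; the column names simply mirror the example above):

import numpy as np
import pandas as pd

df = pd.DataFrame({
    "Device": ["Mobile", np.nan, "Tablet"],
    "OS": ["Android", "iOS", np.nan],
    "Transaction": [1, 2, 3],
})

listwise = df.dropna()                          # drop every row with any missing value
pair1 = df[["Device", "Transaction"]].dropna()  # pairwise: a separate sample
pair2 = df[["OS", "Transaction"]].dropna()      # for each combination of variables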
II. Imputation
a. Popular Averaging Techniques
Mean, median and mode are the most popular averaging techniques, which are used to infer missing values. Approaches ranging from global average for the variable to averages based on groups are
usually considered.
For example: if you are inferring missing value for Revenue, you might assign the average defined by mean, median or mode to such missing value. You could also consider taking into account some other
variables such as Gender of the User and/or the Device OS to calculate such an average to be assigned to the missing values.
Though you can get a quick estimate of the missing values, you are artificially reducing the variation in the dataset as the missing observations could have the same value. This may impact the
statistical analysis of the dataset since depending on the percentage of missing observations imputed, metrics such as mean, median, correlation, etc may get affected.
The above table shows the difference in imputed missing values of Revenue arrived by taking its global mean and mean based on which OS platform it belongs to.
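A pandas sketch of averaging-based imputation, using both a global statistic and group means keyed on OS (column names follow the article’s example):

import numpy as np
import pandas as pd

df = pd.DataFrame({
    "OS": ["Android", "iOS", "Android", "iOS"],
    "Revenue": [100.0, np.nan, 80.0, 250.0],
    "Gender": ["Male", "Female", np.nan, "Male"],
})

# Global mean: every missing Revenue gets the same value
global_mean = df["Revenue"].fillna(df["Revenue"].mean())
# Group mean: missing Revenue is imputed from the mean of the user's OS group
group_mean = df["Revenue"].fillna(df.groupby("OS")["Revenue"].transform("mean"))
# Mode: categorical variables such as Gender take the most frequent value
df["Gender"] = df["Gender"].fillna(df["Gender"].mode()[0])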
b. Predictive Techniques
Imputing missing values with predictive techniques assumes that such observations are not missing completely at random and that the variables chosen to impute them have some relationship with the missing values; otherwise the approach can yield imprecise estimates.
In the examples discussed earlier, a predictive model could be used to impute the missing values for Device, OS, Revenues. There are various statistical methods like regression techniques, machine
learning methods like SVM and/or data mining methods to impute such missing values.
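A minimal scikit-learn sketch of model-based imputation for numeric variables (the choice of IterativeImputer is an assumption for illustration, not something prescribed by the article):

import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

# Two numeric columns; the second has a missing entry
X = np.array([[1.0, 10.0], [2.0, np.nan], [3.0, 30.0], [4.0, 41.0]])
imputer = IterativeImputer(random_state=0)  # regresses each column on the others
print(imputer.fit_transform(X))             # the NaN is replaced by a model prediction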
Let’s take a look at an example where we test all the techniques discussed above to infer or deal with such missing observations. With information on Visits, Transactions, Operating System, and Gender, we need to build a model to predict Revenue. A summary of the information is given below:
We have a total of 7200 missing data points (Transactions: 1800, Gender: 5400) out of 22,800 observations. Almost 8% and 24% data points are missing for ‘Transactions’ and ‘Gender’ respectively.
Revenue Prediction
We will be using a linear regression model to predict ‘Revenue’. A quick intuitive recap of linear regression: assume ‘y’ depends on ‘x’; linear regression fits the straight line that best describes how ‘y’ changes with ‘x’.
Missing Value Treatment
Let’s now deal with the missing data using techniques mentioned below and then predict ‘Revenue’.
A. Deletion
Steps Involved:
i) Delete
Delete or ignore the observations that are missing and build the predictive model on the remaining data. In the above example, we shall ignore the missing observations totalling 7200 data points for
the 2 variables i.e. ‘Transactions’ and ‘Gender’.
ii) Impute ‘Revenue’ by Linear Regression
Build a Linear model to predict ‘Revenue’ with 15,600 observations.
B. Impute by Average
Steps Involved:
i) Impute ‘Transactions’ by Mean
We shall impute the missing data points for ‘Transactions’ variable by looking at the group means of ‘Transactions’ by ‘OS’.
Mean of Transactions for Users on Android: 0.74
Mean of Transactions for Users on iOS: 1.54
All the missing observations for ‘Transactions’ will get 0.74 and 1.54 as its value for Users on Android and iOS respectively.
ii) Impute ‘Gender’ by Mode
Since ‘Gender’ is a categorical variable, we shall use Mode to impute the missing variables. In the given dataset, the Mode for the variable ‘Gender’ is ‘Male’ since it’s frequency is the highest.
All the missing data points for ‘Gender’ will be labeled as ‘Male’.
iii) Impute ‘Revenue’ by Linear Regression
Build a Linear model to predict ‘Revenue’ with the entire dataset totalling 22,800 observations.
C. Impute by Predictive Model
Steps Involved:
i) Impute ‘Gender’ by Decision Tree
There are several predictive techniques; statistical and machine learning to impute missing values. We will be using Decision Trees to impute the missing values of ‘Gender’. The variables used to
impute it are ‘Visits’, ‘OS’ and ‘Transactions’.
ii) Impute ‘Transactions’ by Linear Regression
Using a simple linear regression, we will impute ‘Transactions’ by including the imputed missing values for ‘Gender’ (imputed from Decision Tree). The variables used to impute it are ‘Visits’, ‘OS’
and ‘Gender’.
iii) Impute ‘Revenue’ by Linear Regression
Build a Linear model to predict ‘Revenue’ with the entire dataset totalling 22,800 observations.
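A condensed sketch of this three-step pipeline, with synthetic data and hypothetical column names standing in for the article’s dataset:

import numpy as np
import pandas as pd
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "Visits": rng.integers(1, 20, n).astype(float),
    "OS": rng.choice(["Android", "iOS"], n),
    "Gender": rng.choice(["Male", "Female", None], n, p=[0.4, 0.36, 0.24]),
    "Transactions": rng.poisson(1.2, n).astype(float),
})
df.loc[rng.choice(n, 16, replace=False), "Transactions"] = np.nan
df["Revenue"] = 10 * df["Transactions"].fillna(1.0) + rng.normal(0, 2, n)

X_base = pd.get_dummies(df[["Visits", "OS"]], drop_first=True)

# Step i: impute Gender with a decision tree trained on the complete rows
known = df["Gender"].notna()
tree = DecisionTreeClassifier(max_depth=3).fit(X_base[known], df.loc[known, "Gender"])
df.loc[~known, "Gender"] = tree.predict(X_base[~known])

# Step ii: impute Transactions by linear regression, now including Gender
X_t = pd.concat([X_base, pd.get_dummies(df["Gender"], drop_first=True)], axis=1)
has_t = df["Transactions"].notna()
reg_t = LinearRegression().fit(X_t[has_t], df.loc[has_t, "Transactions"])
df.loc[~has_t, "Transactions"] = reg_t.predict(X_t[~has_t])

# Step iii: fit the final Revenue model on the fully imputed dataset
X_r = pd.concat([X_t, df[["Transactions"]]], axis=1)
final = LinearRegression().fit(X_r, df["Revenue"])
print(round(final.score(X_r, df["Revenue"]), 3))  # in-sample R^2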
Linear Regression Model Evaluation
A common and quick way to evaluate how well a linear regression model fits the data is the coefficient of determination or R^2.
• R^2 indicates how closely the predicted response variable moves with the observed response (the dependent variable), i.e., the proportion of variation in the observed response that the model explains.
• The range of R^2 is between 0 and 1.
R^2 will remain constant or keep increasing as long as you add more independent variables to your model. This can result in overfitting. Overfitting leads to a good fit on the data used to build the model (in-sample data) but may fit out-of-sample or new data poorly. Adjusted R^2 overcomes this shortcoming of R^2 to a great extent. Adjusted R^2 is a modified version of R^2 that has been adjusted for the number of predictors in the model:
Adjusted R^2 = 1 − (1 − R^2)(n − 1)/(n − k − 1), where n is the number of observations and k is the number of independent variables.
• The Adjusted R^2 penalizes R^2 for adding independent variables (k in the equation) that do not improve the fit of the model.
• Adjusted R^2 is not guaranteed to increase or remain constant; it may decrease as you add more and more independent variables.
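A one-line helper makes the penalty explicit (n observations, k predictors):

def adjusted_r2(r2, n, k):
    # Penalizes R^2 as predictors are added without improving fit
    return 1 - (1 - r2) * (n - 1) / (n - k - 1)

print(round(adjusted_r2(0.90, 22800, 4), 4))   # barely changes for large n
print(round(adjusted_r2(0.90, 30, 10), 4))     # drops noticeably for small samples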
Model Comparison post-treatment of Missing Values
Let’s compare the linear regression output after imputing missing values with the methods discussed above. (The original post tabulates the Adjusted R^2 for each method.) In that comparison, the Adjusted R^2 is the same as R^2, since variables that do not contribute to the fit of the model were not included in the final model.
• It can be observed that ‘Deletion’ is the worst performing method and the best one is ‘Imputation by Predictive Model’ followed by ‘Imputation by Average’.
• ‘Imputation by Predictive Model’ delivers a better performance since it not only delivers a higher Adjusted R^2 but also requires one independent variable (‘Visits’) less to predict ‘Revenue’
compared to ‘Imputation by Average’.
Imputation of missing values is a tricky subject. When the missing data are not missing completely at random, imputing them with a predictive model is highly desirable, since it can lead to better insights and an overall increase in the performance of your predictive models.
Source Code and Dataset to reproduce the above illustration available here
This blog originally appeared here.
The Binomial Distribution
1 Introduction
The Binomial Distribution is used when an event has only two possible outcomes. Within the context of a series of trials these are generally called success and failure. The probabilities of each of
these outcomes occurring are denoted by p and q respectively. Since the probability of a success is p, the probability of a failure is therefore
where Bernouilli distribution.
1.1 Examples
i) What is the probability of obtaining 4 heads out of 7 tosses of an unbiased coin?
The tossing of a head is classed as a success, with \(p = q = \frac{1}{2}\) and \(n = 7\). Consequently, the probability of 4 heads is given by
\(P(X = 4) = \binom{7}{4}\left(\frac{1}{2}\right)^4\left(\frac{1}{2}\right)^3 = \frac{35}{128} \approx 0.273\)
ii) What is the probability of dealing 2 spades if 6 cards are dealt from a normal pack of playing cards?
The probability of dealing a spade is \(p = \frac{13}{52} = \frac{1}{4}\), so \(q = \frac{3}{4}\). Treating the 6 cards as independent trials (a binomial approximation, since the cards are dealt without replacement),
\(P(X = 2) = \binom{6}{2}\left(\frac{1}{4}\right)^2\left(\frac{3}{4}\right)^4 = \frac{1215}{4096} \approx 0.297\)
2 Mean, variance and standard deviation of the Binomial Distribution
The mean, \(\mu = np\)
The variance, \(\sigma^2 = npq\)
The standard deviation, \(\sigma = \sqrt{npq}\)
2.1 Examples
Find the mean, variance and standard deviation for i) and ii) in the examples given above.
i) \(n = 7\), \(p = \frac{1}{2}\): \(\mu = 3.5\), \(\sigma^2 = 1.75\), \(\sigma \approx 1.32\)
ii) \(n = 6\), \(p = \frac{1}{4}\): \(\mu = 1.5\), \(\sigma^2 = 1.125\), \(\sigma \approx 1.06\)
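For readers who want to check these results numerically, scipy.stats exposes the binomial distribution directly (this assumes SciPy is available; note that example ii) treats the card deal as binomial, i.e., as if cards were drawn with replacement):

from scipy.stats import binom

print(binom.pmf(4, 7, 0.5))    # example i):  0.2734... = 35/128
print(binom.pmf(2, 6, 0.25))   # example ii): 0.2966... = 1215/4096
print(binom.mean(7, 0.5), binom.var(7, 0.5), binom.std(7, 0.5))  # 3.5 1.75 1.3229...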
3 Moment generating function of the Binomial Distribution
The moment generating function of the binomial distribution is given by
\(M_X(t) = \left(q + pe^t\right)^n\)
Excel Formula: Sum Values in Column H on Sheet2
This page demonstrates how to write an Excel formula that sums values in column H on Sheet2 based on a condition. The condition is that the corresponding value in column G on Sheet2 matches the value in column D on Sheet1. The sum is then displayed in column H on Sheet1.
value in column D on Sheet1. The sum is then displayed in column H on Sheet1.
To achieve this, you can use the SUMIF function in Excel. The formula =SUMIF(Sheet2!G:G, D1, Sheet2!H:H) will sum the values in column H on Sheet2 where the corresponding value in column G matches
the value in cell D1 on Sheet1. The result will be shown in cell H1 on Sheet1.
Excel Formula
=SUMIF(Sheet2!G:G, D1, Sheet2!H:H)
Formula Explanation
The formula uses the SUMIF function to sum values in column H on Sheet2 based on a condition. The condition is that the corresponding value in column G on Sheet2 matches the value in column D on
Sheet1. The sum is then listed in column H on Sheet1.
Step-by-step explanation
1. Sheet2!G:G refers to the entire column G on Sheet2. This is the range where the condition will be checked.
2. D1 refers to the value in cell D1 on Sheet1. This is the value that will be used as the condition to match against column G on Sheet2.
3. Sheet2!H:H refers to the entire column H on Sheet2. This is the range from which the values will be summed.
4. The SUMIF function takes three arguments: the range to check for the condition, the condition to match against, and the range to sum.
5. The formula sums the values in column H on Sheet2 where the corresponding value in column G matches the value in cell D1 on Sheet1.
Let's consider the following example:
Sheet1:

| D | H |
|---|---|
| 1 |   |
| 2 |   |
| 3 |   |

Sheet2:

| G | H  |
|---|----|
| 1 | 10 |
| 2 | 20 |
| 1 | 30 |
| 3 | 40 |
| 2 | 50 |
The formula =SUMIF(Sheet2!G:G, D1, Sheet2!H:H) in cell H1 on Sheet1 would sum the values in column H on Sheet2 where the corresponding value in column G matches the value in cell D1 on Sheet1. With D1 equal to 1, it would return the sum of 10 and 30, which is 40. The result would be displayed in cell H1 on Sheet1.
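For comparison, a rough pandas analogue of the same conditional sum (the frame mirrors the Sheet2 example above):

import pandas as pd

sheet2 = pd.DataFrame({"G": [1, 2, 1, 3, 2], "H": [10, 20, 30, 40, 50]})
d1 = 1  # the lookup value from Sheet1!D1
total = sheet2.loc[sheet2["G"] == d1, "H"].sum()
print(total)  # 40, matching the SUMIF result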
With Category Theory, Mathematics Escapes From Equality | Quanta Magazine
The equal sign is the bedrock of mathematics. It seems to make an entirely fundamental and uncontroversial statement: These things are exactly the same.
But there is a growing community of mathematicians who regard the equal sign as math’s original error. They see it as a veneer that hides important complexities in the way quantities are related —
complexities that could unlock solutions to an enormous number of problems. They want to reformulate mathematics in the looser language of equivalence.
“We came up with this notion of equality,” said Jonathan Campbell of Duke University. “It should have been equivalence all along.”
The most prominent figure in this community is Jacob Lurie. In July, Lurie, 41, left his tenured post at Harvard University for a faculty position at the Institute for Advanced Study in Princeton,
New Jersey, home to many of the most revered mathematicians in the world.
Lurie’s ideas are sweeping on a scale rarely seen in any field. Through his books, which span thousands of dense, technical pages, he has constructed a strikingly different way to understand some of
the most essential concepts in math by moving beyond the equal sign. “I just think he felt this was the correct way to think about mathematics,” said Michael Hopkins, a mathematician at Harvard and
Lurie’s graduate school adviser.
Lurie published his first book, Higher Topos Theory, in 2009. The 944-page volume serves as a manual for how to interpret established areas of mathematics in the new language of “infinity
categories.” In the years since, Lurie’s ideas have moved into an increasingly wide range of mathematical disciplines. Many mathematicians view them as indispensable to the future of the field. “No
one goes back once they’ve learned infinity categories,” said John Francis of Northwestern University.
Yet the spread of infinity categories has also revealed the growing pains that a venerable field like mathematics undergoes whenever it tries to absorb a big new idea, especially an idea that
challenges the meaning of its most important concept. “There’s an appropriate level of conservativity in the mathematics community,” said Clark Barwick of the University of Edinburgh. “I just don’t
think you can expect any population of mathematicians to accept any tool from anywhere very quickly without giving them convincing reasons to think about it.”
Although many mathematicians have embraced infinity categories, relatively few have read Lurie’s long, highly abstract texts in their entirety. As a result, some of the work based on his ideas is
less rigorous than is typical in mathematics.
“I’ve had people say, ‘It’s in Lurie somewhere,’” said Inna Zakharevich, a mathematician at Cornell University. “And I say, ‘Really? You’re referencing 8,000 pages of text.’ That’s not a reference,
it’s an appeal to authority.”
Mathematicians are still grappling with both the magnitude of Lurie’s ideas and the unique way in which they were introduced. They’re distilling and repackaging his presentation of infinity
categories to make them accessible to more mathematicians. They are performing, in a sense, the essential work of governance that must follow any revolution, translating a transformative text into
day-to-day law. In doing so, they are building a future for mathematics founded not on equality, but on equivalence.
Infinite Towers of Equivalence
Mathematical equality might seem to be the least controversial possible idea. Two beads plus one bead equals three beads. What more is there to say about that? But the simplest ideas can be the most
Since the late 19th century, the foundation of mathematics has been built from collections of objects, which are called sets. Set theory specifies rules, or axioms, for constructing and manipulating
these sets. One of these axioms, for example, says that you can add a set with two elements to a set with one element to produce a new set with three elements: 2 + 1 = 3.
On a formal level, the way to show that two quantities are equal is to pair them off: Match one bead on the right side of the equal sign with one bead on the left side. Observe that after all the
pairing is done, there are no beads left over.
Set theory recognizes that two sets with three objects each pair exactly, but it doesn’t easily perceive all the different ways to do the pairing. You could pair the first bead on the right with the
first on the left, or the first on the right with the second on the left, and so on (there are six possible pairings in all). To say that two plus one equals three and leave it at that is to overlook
all the different ways in which they’re equal. “The problem is, there are many ways to pair up,” Campbell said. “We’ve forgotten them when we say equals.”
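For the programmatically inclined, those six pairings are just the 3! bijections between two three-element sets, easy to enumerate in a few lines of Python:

from itertools import permutations

left = ["a", "b", "c"]
right = ["x", "y", "z"]
pairings = [list(zip(left, p)) for p in permutations(right)]
print(len(pairings))   # 6
print(pairings[0])     # [('a', 'x'), ('b', 'y'), ('c', 'z')]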
This is where equivalence creeps in. While equality is a strict relationship — either two things are equal or they’re not — equivalence comes in different forms.
When you can exactly match each element of one set with an element in the other, that’s a strong form of equivalence. But in an area of mathematics called homotopy theory, for example, two shapes (or
geometric spaces) are equivalent if you can stretch or compress one into the other without cutting or tearing it.
From the perspective of homotopy theory, a flat disk and a single point in space are equivalent — you can compress the disk down to the point. Yet it’s impossible to pair points in the disk with
points in the point. After all, there’s an infinite number of points in the disk, while the point is just one point.
Since the mid-20th century mathematicians have tried to develop an alternative to set theory in which it would be more natural to do mathematics in terms of equivalence. In 1945 the mathematicians
Samuel Eilenberg and Saunders Mac Lane introduced a new fundamental object that had equivalence baked right into it. They called it a category.
Categories can be filled with anything you want. You could have a category of mammals, which would collect all the world’s hairy, warm-blooded, lactating creatures. Or you could make categories of
mathematical objects: sets, geometric spaces or number systems.
A category is a set with extra metadata: a description of all the ways that two objects are related to one another, which includes a description of all the ways two objects are equivalent. You can
also think of categories as geometric objects in which each element in the category is represented by a point.
Imagine, for example, the surface of a globe. Every point on this surface could represent a different type of triangle. Paths between those points would express equivalence relationships between the
objects. In the perspective of category theory, you forget about the explicit way in which any one object is described and focus instead on how an object is situated among all other objects of its type.
“There are lots of things we think of as things when they’re actually relationships between things,” Zakharevich said. “The phrase ‘my husband,’ we think of it as an object, but you can also think of
it as a relationship to me. There is a certain part of him that’s defined by his relationship to me.”
Eilenberg and Mac Lane’s version of a category was well suited to keeping track of strong forms of equivalence. But in the second half of the 20th century, mathematicians increasingly began to do
math in terms of weaker notions of equivalence such as homotopy. “As math gets more subtle, it’s inevitable that we have this progression towards these more subtle notions of sameness,” said Emily
Riehl, a mathematician at Johns Hopkins University. In these subtler notions of equivalence, the amount of information about how two objects are related increases dramatically. Eilenberg and Mac
Lane’s rudimentary categories were not designed to handle it.
To see how the amount of information increases, first remember our sphere that represents many triangles. Two triangles are homotopy equivalent if you can stretch or otherwise deform one into the
other. Two points on the surface are homotopy equivalent if there’s a path linking one with the other. By studying homotopy paths between points on the surface, you’re really studying different ways
in which the triangles represented by those points are related.
But it’s not enough to say that two points are linked by many equal paths. You need to think about equivalences between all those paths, too. So in addition to asking whether two points are
equivalent, you’re now asking whether two paths that start and end at the same pair of points are equivalent — whether there’s a path between those paths. This path between paths takes the shape of a
disk whose boundary is the two paths.
You can keep going from there. Two discs are equivalent if there’s a path between them — and that path will take the form of a three-dimensional object. Those three-dimensional objects may themselves
be connected by four-dimensional paths (the path between two objects always has one more dimension than the objects themselves).
Ultimately, you will build an infinite tower of equivalences between equivalences. By considering the entire edifice, you generate a full perspective on whatever objects you’ve chosen to represent as
points on that sphere.
“It’s just a sphere, but it turns out, to understand the shape of a sphere, you need to go out to infinity in a sense,” said David Ben-Zvi of the University of Texas, Austin.
In the last decades of the 20th century, many mathematicians worked on a theory of “infinity categories” — something that would keep track of the infinite tower of equivalences between equivalences.
Several made substantial progress. Only one got all the way there.
Rewriting Mathematics
Jacob Lurie’s first paper on infinity category theory was inauspicious. On June 5, 2003, the 25-year-old posted a 60-page document called “On Infinity Topoi” to the scientific preprint site
arxiv.org. There, he began to sketch rules by which mathematicians could work with infinity categories.
This first paper was not universally well received. Soon after reading it, Peter May, a mathematician at the University of Chicago, emailed Lurie’s adviser, Michael Hopkins, to say that Lurie’s paper
had some interesting ideas, but that it felt preliminary and needed more rigor.
“I explained our reservations to Mike, and Mike relayed the message to Jacob,” May said.
Whether Lurie took May’s email as a challenge or whether he had his next move in mind all along is not clear. (Lurie declined multiple requests to be interviewed for this story.) What is clear is
that after receiving the criticism, Lurie launched into a multiyear period of productivity that has become legendary.
“I’m not inside Jacob’s brain, I can’t say exactly what he was thinking at that time,” May said. “But certainly there’s a huge difference between the draft we were reacting to and the final versions,
which are altogether on a higher mathematical plane.”
In 2006 Lurie released a draft of Higher Topos Theory on arxiv.org. In this mammoth work, he created the machinery needed to replace set theory with a new mathematical foundation, one based on
infinity categories. “He created literally thousands of pages of this foundational machinery that we’re all now using,” said Charles Rezk, a mathematician at the University of Illinois,
Urbana-Champaign, who did important early work on infinity categories. “I could not imagine producing Higher Topos Theory, which he produced in two or three years, in a lifetime.”
Then in 2011, Lurie followed it up with an even longer work. In it, he reinvented algebra.
Algebra provides a beautiful set of formal rules for manipulating equations. Mathematicians use these rules all the time to prove new theorems. But algebra performs its gymnastics over the fixed bars
of the equal sign. If you remove those bars and replace them with the wispier concept of equivalence, some operations become a lot harder.
Take one of the first rules of algebra kids learn in school: the associative property, which says that the sum or product of three or more numbers doesn’t depend on how the numbers are grouped: 2 ×
(3 × 4) = (2 × 3) × 4.
Proving that the associative property holds for any list of three or more numbers is easy when you’re working with equality. It’s complicated when you’re working with even strong notions of
equivalence. When you move to subtler notions of equivalence, with their infinite towers of paths between paths, even a simple rule like the associative property turns into a thicket.
“This complicates matters enormously, in a way that makes it seem impossible to work with this new version of mathematics we’re imagining,” said David Ayala, a mathematician at Montana State
In Higher Algebra, the latest version of which runs to 1,553 pages, Lurie developed a version of the associative property for infinity categories — along with many other algebraic theorems that
collectively established a foundation for the mathematics of equivalence.
Taken together, his two works were seismic, the types of volumes that trigger scientific revolutions. “The scale was completely massive,” Riehl said. “It was an achievement on the level of
Grothendieck’s revolution of algebraic geometry.”
Yet revolutions take time, and as mathematicians found after Lurie’s books came out, the ensuing years can be chaotic.
Digesting the Cow
Mathematicians have a reputation for being clear-eyed thinkers: A proof is correct or it’s not, an idea works or it doesn’t. But mathematicians are also human beings, and they react to new ideas the
way human beings do: with subjectivity, emotion, and a sense of personal stakes.
“I think a lot of writing about mathematics is done in the tone that mathematicians are searching for these glittering crystalline truths,” Campbell said. “That’s not how it goes. They’re people with
their own tastes and own domains of comfort, and they’ll dismiss things they don’t like for aesthetic or personal reasons.”
In that respect, Lurie’s work represented a big challenge. At heart it was a provocation: Here is a better way to do math. The message was especially pointed for mathematicians who’d spent their
careers developing methods that Lurie’s work transcended.
“There’s this tension to the process where people aren’t always happy to see the next generation rewriting their work,” Francis said. “This is one feature affecting infinity category theory, that a
lot of previous work gets rewritten.”
Lurie’s work was hard to swallow in other ways. The volume of material meant that mathematicians would need to invest years reading his books. That’s an almost impossible requirement for busy
mathematicians in midcareer, and it’s a highly risky one for graduate students who have only a few years to produce results that will get them a job.
Lurie’s work was also highly abstract, even in comparison with the highly abstract nature of everything else in advanced mathematics. As a matter of taste, it just wasn’t for everyone. “Many people
did view Lurie’s work as abstract nonsense, and many people absolutely loved it and took to it,” Campbell said. “Then there were responses in between, including just full-on not understanding it at all.”
Scientific communities absorb new ideas all the time, but usually slowly, and with a sense of everyone moving forward together. When big new ideas arise, they present challenges for the intellectual
machinery of the community. “A lot of stuff got introduced at once, so it’s kind of like a boa constrictor trying to ingest a cow,” Campbell said. “There’s this huge mass that’s flowing through the community.”
If you were a mathematician who saw Lurie’s approach as a better way to do mathematics, the way forward was lonely. Few people had read Lurie’s work, and there were no textbooks distilling it and no
seminars you could take to get your bearings. “The way you had to learn about this stuff really precisely was to just sit down and do it yourself,” said Peter Haine, a graduate student at the
Massachusetts Institute of Technology who spent a year reading Lurie’s work. “I think that’s the hard part. It’s not just sit down and do it yourself — it’s sit down and do it yourself by reading 800
pages of Higher Topos Theory.”
Like many new inventions, Higher Topos Theory requires mathematicians to interact a lot with the machinery that makes the theory work. It’s like making every 16-year-old hoping for a driver’s license
first learn how to rebuild an engine. “If there was a more driver-friendly version, it would become instantly more accessible to a wider mathematical audience,” said Dennis Gaitsgory, a mathematician
at Harvard who has collaborated with Lurie.
As people started reading Lurie’s work and using infinity categories in their own research, other problems emerged. Mathematicians would write papers using infinity categories. Reviewers at journals
would receive them and say: What is this?
“You have this situation where [papers] either come back from journals with absurd referee reports that reflect deep misunderstandings, or they just take several years to publish,” Barwick said. “It
can make people’s lives uncomfortable because an unpublished paper sitting on your website for years and years starts to look a little funny.”
Yet the biggest problem was not papers that went unpublished, but papers that used infinity categories and did get published — with errors.
Lurie’s books are the single, authoritative text on infinity categories. They are completely rigorous, but hard to completely grasp. They’re especially poorly suited to serving as reference manuals —
it’s difficult to look up specific theorems, or to check that a specific application of infinity categories that one might encounter in someone else’s paper really works out.
“Most people working in this field have not read Lurie systematically,” said André Joyal, a mathematician at the University of Quebec in Montreal whose earlier work was a key ingredient in Lurie’s
books. “It would take a lot of time and energy, so we sort of assume what’s in his book is correct because almost every time we check on something it is correct. Actually, all the time.”
The inaccessibility of Lurie’s books has led to an imprecision in some of the subsequent research based on them. Lurie’s books are hard to read, they’re hard to cite, and they’re hard to use to check
other people’s work.
“There is a feeling of sloppiness around the general infinity categorical literature,” Zakharevich said.
Despite all its formalism, math is not meant to have sacred texts that only the priests can read. The field needs pamphlets as well as tomes, it needs interpretive writing in addition to original
revelation. And right now, infinity category theory still exists largely as a few large books on the shelf.
“You can take the attitude that ‘Jacob tells you what to do, it’s fine,’” Rezk said. “Or you can take the attitude that ‘We don’t know how to present our subject well enough that people can pick it
up and run with it.’”
Yet a few mathematicians have taken up the challenge of making infinity categories a technique that more people in their field can run with.
A User-Friendly Theory
In order to translate infinity categories into objects that could do real mathematical work, Lurie had to prove theorems about them. And to do that, he had to choose a landscape in which to create
those proofs, just as someone doing geometry has to choose a coordinate system in which to work. Mathematicians refer to this as choosing a model.
Lurie developed infinity categories in the model of quasi-categories. Other mathematicians had previously developed infinity categories in different models. While those efforts were far less
comprehensive than Lurie’s, they’re easier to work with in some situations. “Jacob picked a model and checked that everything worked in that model, but often that’s not the easiest model to work in,”
Zakharevich said.
In geometry, mathematicians understand exactly how to move between coordinate systems. They’ve also proved that theorems proved in one setting work in the others.
With infinity categories, there are no such guarantees. Yet when mathematicians write papers using infinity categories, they often move breezily between models, assuming (but not proving) that their
results carry over. “People don’t specify what they’re doing, and they switch between all these different models and say, ‘Oh, it’s all the same,’” Haine said. “But that’s not a proof.”
For the past six years, a pair of mathematicians have been trying to make those guarantees. Riehl and Dominic Verity, of Macquarie University in Australia, have been developing a way of describing
infinity categories that moves beyond the difficulties created in previous model-specific frameworks. Their work, which builds on previous work by Barwick and others, has proved that many of the
theorems in Higher Topos Theory hold regardless of which model you apply them in. They prove this compatibility in a fitting way: “We’re studying infinity categories whose objects are themselves
these infinity categories,” Riehl said. “Category theory is kind of eating itself here.”
Riehl and Verity hope to move infinity category theory forward in another way as well. They’re specifying aspects of infinity category theory that work regardless of the model you’re in. This
“model-independent” presentation has a plug-and-play quality that they hope will invite mathematicians into the field who might have been staying away while Higher Topos Theory was the only way in.
“There’s a moat you have to get across to get into this world,” Hopkins said, “and they are lowering the drawbridge.”
Riehl and Verity expect to finish their work next year. Meanwhile, Lurie has recently started a project called Kerodon that he intends as a Wikipedia-style textbook for higher category theory.
Thirteen years after Higher Topos Theory formalized the mathematics of equivalence, these new initiatives are an attempt to refine and promote the ideas — to make the mathematics of equivalence more
universally accessible.
“Genius has an important role in developing mathematics, but actually the knowledge itself is the result of the activity of a community,” Joyal said. “It’s the real goal of knowledge to become the
knowledge of the community, not the knowledge of one or two persons.”
This article was reprinted on Wired.com.
Orthogonal/Unitary Diagonalization of Matrices
Main Concept
Introduction: Special Types of Matrices
The Adjoint of a Matrix
To find the adjoint of a matrix M, apply the following transformation: take the transpose of the matrix and then take the complex conjugate of all elements of the matrix. The resulting matrix is called the adjoint of M and is denoted by $M^{*}$.
Note that if all entries of M are real numbers, then $M^{t} = M^{*}$, because each real entry is its own complex conjugate.
A matrix M is said to be orthogonal if all of its entries are real numbers and $M^{-1} = M^{*}$, where $M^{*}$ denotes the adjoint of M. If the entries of the matrix are complex numbers, M is said to be unitary. An interesting fact is that the eigenvalues of an orthogonal or unitary matrix all have absolute value 1, so any real eigenvalue is either 1 or -1.
A matrix N is said to be normal if $N \cdot N^{*} = N^{*} \cdot N$.
An $n \times n$ matrix M is said to be orthogonally/unitarily diagonalizable if there exist an orthogonal or unitary $n \times n$ matrix U and a diagonal $n \times n$ matrix D such that
$U^{*} \cdot M \cdot U = D$
This is equivalent to saying: M is similar to a diagonal matrix, using an orthogonal or unitary matrix as the transition matrix.
Is there an easier way to check if a matrix is orthogonally/unitarily diagonalizable?
A matrix A is normal if and only if A is orthogonally/unitarily diagonalizable.
So to check whether we can diagonalize the matrix in this way, we must first check whether it is normal. This is quite simple from the definition of a normal matrix, because it only requires us to calculate the matrix's adjoint and multiply to verify the condition.
There is, in fact, a procedure by which we can find the diagonal and transition matrices once we have determined that the matrix is normal. The procedure is explained step by step below.
Procedure: Step by Step
Step 1: Check if the Matrix is Normal
Compute both $M \cdot M^{*}$ and $M^{*} \cdot M$ and check whether they are equal. The matrix is normal exactly when the two products agree.
Step 2: Eigenvalues and Multiplicities
Calculate the eigenvalues of the matrix from its characteristic polynomial, that is, solve $\det(M - \lambda \cdot I) = 0$. Each root of the characteristic polynomial is an eigenvalue, and the number of times a root occurs is its multiplicity.
Step 3: Finding Eigenvectors
The next step is to find the eigenvectors of the matrix M. This can be done manually by finding the solutions for v in the equation $(M - \lambda \cdot I) \cdot v = 0$ for each of the eigenvalues $\lambda$ of M. Solved by hand, this equation gives a system of equations with as many variables as the dimension of the matrix.
An aid for solving $(M - \lambda \cdot I) \cdot v = 0$ is to augment $M - \lambda \cdot I$ with an extra column of zeros at the end, then apply Gaussian elimination to bring the augmented matrix into reduced row echelon form, which simplifies the calculations. Then transform the matrix back into a system of linear equations in the variables $\{x_1, x_2, \dots, x_n\}$ and solve it. Applying Gram-Schmidt to the solution vectors yields an orthonormal basis for each eigenspace $E_{\lambda}$; combining the results for all eigenvalues gives an orthonormal basis for $\mathbb{R}^{n}$.
Step 4: Generate Matrices
The diagonal matrix D to which M is similar is the diagonal matrix that has M's eigenvalues as entries. Note that the eigenvalues are repeated according to their multiplicities, and equal eigenvalues need to be adjacent to each other.
The transition matrix Q is the concatenation of the eigenvectors placed beside each other, in the order matching the eigenvalues in D.
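To see the four steps end to end, here is a small sketch in R (the 2 x 2 symmetric matrix is a made-up example; for real symmetric input, R's eigen() returns an orthonormal eigenbasis, so it covers Steps 2-4 at once):

M <- matrix(c(2, 1, 1, 2), nrow = 2)  # real symmetric, hence normal
Mstar <- Conj(t(M))                   # adjoint: conjugate transpose
all(M %*% Mstar == Mstar %*% M)       # Step 1: TRUE, so M is normal
e <- eigen(M)                         # Steps 2-3: eigenvalues 3 and 1, orthonormal eigenvectors
D <- diag(e$values)                   # Step 4: diagonal matrix of eigenvalues
Q <- e$vectors                        # transition matrix (orthogonal here)
max(abs(Q %*% D %*% t(Q) - M))        # ~ 0, confirming M = Q D Q^t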
An Analysis of Problem Solving Activity in a Mathematics II Class
In the project Collaborative Lesson Research Development (CLRD), four Mathematics II teachers together with a UPNISMED facilitator collaboratively developed, critiqued, and revised a lesson on
Solving Quadratic Equation Using Quadratic Formula. The lesson used the teaching-through-problem-solving approach, wherein a problem is given to the students at the start and used as context to teach a topic, to develop skills, and to apply those skills to unfamiliar situations. The approach is characterized by students' deep construction and understanding of mathematical ideas and concepts. The problem in the lesson used a real-life context and could be solved in different ways. The nature of the problem gave the students an opportunity to apply their previous knowledge and skills and to exercise thinking skills such as representing, looking for patterns, and generalizing.
The study focused on the content of the problem solving activity of the lesson. Particular attention was given to how the students progressed through the problem solving process and how the teacher provided scaffolding so that the students could complete the task. Polya's four steps of problem solving were used as a guide for following the students' progress and the scaffolding the teacher provided during the activity. At first, the students experienced difficulty in solving the problem; the difficulty was addressed when the teacher provided the necessary scaffolding.
The result of the problem solving activity was an "eye opener" for the four Mathematics II teachers. They realized that the students had difficulty in solving the problem because they had not been exposing the students to problems involving multiple solutions, or to problems involving the skills of looking for a pattern, generalizing, and "modelling". The problems they usually give involve only one solution and a purely numerical answer.
The full text of the study is one of the chapters of the book titled “BOOK 1. LESSON STUDY: PLANNING TOGETHER, LEARNING TOGETHER” which will be published in print form by UP NISMED this first quarter
of 2013.
By Lydia Landrito
Course 2019-2020 a.y. - Universita' Bocconi
30063 - MATEMATICA - MODULO 2 (APPLICATA) / MATHEMATICS - MODULE 2 (APPLIED)
Department of Decision Sciences
For the instruction language of the course see class group/s below
Class group/s taught in English
Suggested background knowledge
A refresher of differential calculus is suggested.
Mission & Content Summary
An increasing number of economic activities entail financial and probabilistic features that can no longer be neglected. Several car manufacturers directly supply leases. The leasing cost is summarized in an internal interest rate, which represents a sizeable source of the company's revenues. Nowadays almost all investment opportunities are accompanied by information on the probability distribution of their yields to maturity. Recent EU legislation states that some accounting items should be determined on the basis of financial and probabilistic principles too. Knowing what a probability and a financial law are is by now an essential component of the background of every student in Economics. The course objective is to provide students with the basic notions of Probability and Financial Calculus that are required in many Economic, Financial and Management fields. The course consists of three parts: (i) integral calculus – instrumental in facing the second part; (ii) probability calculus – basic notions and their proper use; (iii) financial calculus – basic notions and their applications.
• Integral calculus: antiderivative; indefinite integral; integration methods; definite integral; integral function; generalized integrals and convergence criteria.
• Probability Calculus: classical, empirical and subjective approaches. Axiomatic approach: sample space, events algebra, probability measure. Conditional probability.
• Random numbers and vectors: distribution function, probability and probability density functions. Expected value and variance of a random number. Joint and marginal probability function of a
random vector; stochastic independence and linear correlation; covariance; expected value and variance of a linear combination of random numbers.
• Financial calculus: present and final value: financial laws of one and two variables. Decomposability. Annuities and loan amortization. Consumer credit.
• Fixed income bonds. Interest Rate Term Structure. Duration: financial immunization and volatility of the bond price.
• Financial choices: DCF, NPV and IRR. Generalizations: GNPV, APV and GAPV. Financial leverage. Decomposition of global indices. (A small computational sketch follows this list.)
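To make the DCF toolkit concrete, here is a minimal sketch in R; the cash-flow vector is invented, npv is a hypothetical helper rather than part of the course materials, and the IRR is located numerically with base R's uniroot:

cf <- c(-100, 30, 40, 50, 20)                          # outlay today, then four yearly inflows
npv <- function(rate, cf) sum(cf / (1 + rate)^(seq_along(cf) - 1))
npv(0.05, cf)                                          # NPV at a 5% discount rate
irr <- uniroot(npv, interval = c(0, 1), cf = cf)$root  # IRR: the rate at which NPV = 0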
Intended Learning Outcomes (ILO)
At the end of the course student will be able to...
• Recognize the proper meaning of standard indices of cost/profitability for a financial operation such as NPV, IRR, etc..
• Identify the proper meaning probabilistic statement and terms concerning random quantities such as uncorrelated random yields, default risk and so on.
• Reproduce the correct procedures for computing integrals, probabilities and financial quantities.
At the end of the course student will be able to...
• Apply the learned calculus methods to compute and/or assess the correctness of quantities which are relevant both in theory and in practice, such as: the no-arbitrage price of a bullet bond, the internal effective rate of a loan, the expected return rate of a portfolio (illustrated in the sketch after this list), etc.
• Evaluate the profitability of a financial operation by choosing the proper method/model to adopt.
• Compute a probability measure that is coherent with the available information on the stochastic event/number.
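As a sketch of the portfolio computation mentioned above (all numbers invented), the rules for linear combinations give E(aX + bY) = aE(X) + bE(Y) and Var(aX + bY) = a^2 Var(X) + b^2 Var(Y) + 2ab Cov(X, Y); in R:

a <- 0.5; b <- 0.5                                 # equal-weight portfolio of assets X and Y
m <- a * 0.04 + b * 0.08                           # expected return: 0.06
v <- a^2 * 0.01 + b^2 * 0.04 + 2 * a * b * 0.005   # variance: 0.015
c(mean = m, variance = v)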
Teaching methods
• Face-to-face lectures
• Exercises (exercises, database, software etc.)
Teaching and learning activities for this course are divided into (1) face-to-face lectures, (2) in-class exercises, and (3) self-assessment online materials.
1. During the lectures, convenient examples and applications allow students to identify the quantitative patterns and their main logical-mathematical properties.
2. The in-class exercises allow students to apply the analytical tools illustrated during the course.
3. Besides the exercises proposed in class, further exercises, such as "mock exams" and "past written exams", are uploaded online. The online exercises allow students to individually practice and self-assess their own skills.
Assessment methods
Assessment: a written individual exam (traditional/online), counting toward both the partial exams and the general exam; there is no continuous assessment component.
The exam modality is written: the final grade depends exclusively on the student's performance in the written exam. The exam contains both closed-ended and open-ended questions, whose structure is designed to assess:
• The ability to identify the proper tool to be used in the described framework.
• The ability to correctly apply the chosen tool to compute and/or choose the required result.
• The ability to describe the notions and the methods used.
• The ability to justify in a proper manner the achieved conclusions.
Teaching materials
• L. PECCATI, S. SALSA, A. SQUELLATI, Integral Calculus, Extract from Mathematics for Economics and Business, Milano, EGEA, 2008 (Chapter 7).
• E. CASTAGNOLI, M. CIGOLA, L. PECCATI, Probability. A Brief Introduction, Milano, EGEA, 2009, second edition.
• E. CASTAGNOLI, M. CIGOLA, L. PECCATI, Financial Calculus with Applications, Milano, EGEA, 2013.
Last change 27/05/2019 08:55
aregImpute {Hmisc} R Documentation
Multiple Imputation using Additive Regression, Bootstrapping, and Predictive Mean Matching
The transcan function creates flexible additive imputation models but provides only an approximation to true multiple imputation as the imputation models are fixed before all multiple imputations are
drawn. This ignores variability caused by having to fit the imputation models. aregImpute takes all aspects of uncertainty in the imputations into account by using the bootstrap to approximate the
process of drawing predicted values from a full Bayesian predictive distribution. Different bootstrap resamples are used for each of the multiple imputations, i.e., for the ith imputation of a
sometimes missing variable, i=1,2,... n.impute, a flexible additive model is fitted on a sample with replacement from the original data and this model is used to predict all of the original missing
and non-missing values for the target variable.
areg is used to fit the imputation models. By default, linearity is assumed for target variables (variables being imputed) and nk=3 knots are assumed for continuous predictors transformed using
restricted cubic splines. If nk is three or greater and tlinear is set to FALSE, areg simultaneously finds transformations of the target variable and of all of the predictors, to get a good fit
assuming additivity, maximizing R^2, using the same canonical correlation method as transcan. Flexible transformations may be overridden for specific variables by specifying the identity
transformation for them. When a categorical variable is being predicted, the flexible transformation is Fisher's optimum scoring method. Nonlinear transformations for continuous variables may be
nonmonotonic. If nk is a vector, areg's bootstrap and crossval=10 options will be used to help find the optimum validating value of nk over values of that vector, at the last imputation iteration.
For the imputations, the minimum value of nk is used.
Instead of defaulting to taking random draws from fitted imputation models using random residuals as is done by transcan, aregImpute by default uses predictive mean matching with optional weighted
probability sampling of donors rather than using only the closest match. Predictive mean matching works for binary, categorical, and continuous variables without the need for iterative maximum
likelihood fitting for binary and categorical variables, and without the need for computing residuals or for curtailing imputed values to be in the range of actual data. Predictive mean matching is
especially attractive when the variable being imputed is also being transformed automatically. Constraints may be placed on variables being imputed with predictive mean matching, e.g., a missing
hospital discharge date may be required to be imputed from a donor observation whose discharge date is before the recipient subject's first post-discharge visit date. See Details below for more
information about the algorithm. A "regression" method is also available that is similar to that used in transcan. This option should be used when mechanistic missingness requires the use of
extrapolation during imputation.
A print method summarizes the results, and a plot method plots distributions of imputed values. Typically, fit.mult.impute will be called after aregImpute.
If a target variable is transformed nonlinearly (i.e., if nk is greater than zero and tlinear is set to FALSE) and the estimated target variable transformation is non-monotonic, imputed values are
not unique. When type='regression', a random choice of possible inverse values is made.
The reformM function provides two ways of recreating a formula to give to aregImpute by reordering the variables in the formula. This is a modified version of a function written by Yong Hao Pua. One
can specify nperm to obtain a list of nperm randomly permuted variables. The list is converted to a single ordinary formula if nperm=1. If nperm is omitted, variables are sorted in descending order
of the number of NAs. reformM also prints a recommended number of multiple imputations to use, which is a minimum of 5 and the percent of incomplete observations.
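A hedged usage sketch of reformM (the data frame d and the variables are hypothetical; the full signature appears in the Usage section just below):

# sort sometimes-missing variables by descending number of NAs, then impute
f <- reformM(~ y + x1 + x2 + x3, data = d)
a <- aregImpute(f, data = d, n.impute = 10)
# nperm = 5 instead returns a list of 5 randomly permuted formulas
fs <- reformM(~ y + x1 + x2 + x3, data = d, nperm = 5)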
aregImpute(formula, data, subset, n.impute=5, group=NULL,
nk=3, tlinear=TRUE, type=c('pmm','regression','normpmm'),
pmmtype=1, match=c('weighted','closest','kclosest'),
kclosest=3, fweighted=0.2,
curtail=TRUE, constraint=NULL,
boot.method=c('simple', 'approximate bayesian'),
burnin=3, x=FALSE, pr=TRUE, plotTrans=FALSE, tolerance=NULL, B=75)
## S3 method for class 'aregImpute'
print(x, digits=3, ...)
## S3 method for class 'aregImpute'
plot(x, nclass=NULL, type=c('ecdf','hist'),
datadensity=c("hist", "none", "rug", "density"),
diagnostics=FALSE, maxn=10, ...)
reformM(formula, data, nperm)
formula: an S model formula. You can specify restrictions for transformations of variables. The function automatically determines which variables are categorical (i.e., factor, category, or character vectors). Binary variables are automatically restricted to be linear. Force linear transformations of continuous variables by enclosing variables in the identity function I(). It is recommended that factor() or as.factor() not appear in the formula; instead, convert variables to factors as needed and store them in the data frame. That way imputations for factor variables (done using impute.transcan for example) will be correct. Currently reformM does not handle variables that are enclosed in functions such as I().
x: an object created by aregImpute. For aregImpute, set x to TRUE to save the data matrix containing the final (number n.impute) imputations in the result. This is needed if you want to later do out-of-sample imputation. Categorical variables are coded as integers in this matrix.
data: input raw data
subset: may also be specified. You may not specify na.action, as na.retain is always used.
n.impute: number of multiple imputations. n.impute=5 is frequently recommended, but 10 or more doesn't hurt.
group: a character or factor variable the same length as the number of observations in data and containing no NAs. When group is present, it causes a bootstrap sample of the observations corresponding to non-NAs of a target variable to have the same frequency distribution of group as that in the non-NAs of the original sample. This can handle k-sample problems as well as lower the chance that a bootstrap sample will have a missing cell when the original cell frequency was low.
nk: number of knots to use for continuous variables. When optimum transformations are estimated for both the target variable and the predictors, there is more instability than with normal regression, so the complexity of the model should decrease more sharply as the sample size decreases. Hence set nk to 0 (to force linearity for non-categorical variables) or 3 (the minimum number of knots possible with a linear tail-restricted cubic spline) for small sample sizes. Simulated problems as in the examples section can assist in choosing nk. Set nk to a vector to get bootstrap-validated and 10-fold cross-validated R^2 and mean and median absolute prediction errors for imputing each sometimes-missing variable, with nk ranging over the given vector. The errors are on the original untransformed scale. The mean absolute error is the recommended basis for choosing the number of knots (or linearity).
tlinear: set to FALSE to allow a target variable (variable being imputed) to have a nonlinear left-hand-side transformation when nk is 3 or greater.
type: the default is "pmm" for predictive mean matching, which is a more nonparametric approach that will work for categorical as well as continuous predictors. Alternatively, use "regression" when all variables that are sometimes missing are continuous and the missingness mechanism is such that entire intervals of population values are unobserved. See the Details section for more information. Another method, type="normpmm", only works when variables containing NAs are continuous and tlinear is TRUE (the default), meaning that the variable being imputed is not transformed when it is on the left-hand model side. normpmm assumes that the imputation regression parameter estimates are multivariately normally distributed and that the residual variance has a scaled chi-squared distribution. For each imputation a random draw of the estimates is taken and a random draw from sigma is combined with those to get a random draw from the posterior predicted value distribution. Predictive mean matching is then done matching these predicted values from incomplete observations with predicted values from complete potential donor observations, where the latter predictions are based on the imputation model least squares parameter estimates and not on random draws from the posterior. For the plot method, specify type="hist" to draw histograms of imputed values with rug plots at the top, or type="ecdf" (the default) to draw empirical CDFs with spike histograms at the bottom.
pmmtype: type of matching to be used for predictive mean matching when type="pmm". pmmtype=2 means that predicted values for both target incomplete and complete observations come from a fit from the same bootstrap sample. pmmtype=1, the default, means that predicted values for complete observations are based on additive regression fits on the original complete observations (using last imputations for non-target variables as with the other methods), while fits on a bootstrap sample are used to get predicted values for missing target variables. See van Buuren (2012) section 3.4.2, where pmmtype=1 is said to work much better when the number of variables is small. pmmtype=3 means that complete-observation predicted values come from a bootstrap-sample fit, whereas target incomplete-observation predicted values come from a sample with replacement from the bootstrap fit (approximate Bayesian bootstrap).
match: defaults to match="weighted" to do weighted multinomial probability sampling using the tricube function (similar to lowess) as the weights. The argument of the tricube function is the absolute difference between the transformed predicted values of the donors and the target predicted value, divided by a scaling factor. The scaling factor in the tricube function is fweighted times the mean absolute difference between the target predicted value and all the possible donor predicted values. Set match="closest" to find as the donor the observation having the closest predicted transformed value, even if that same donor is found repeatedly. Set match="kclosest" to use a slower implementation that finds, after jittering the complete-case predicted values, the kclosest complete cases on the target variable being imputed, then takes a random sample of one of these kclosest cases.
kclosest: see match.
fweighted: smoothing parameter (multiple of mean absolute difference) used when match="weighted", with a default value of 0.2. Set fweighted to a number between 0.02 and 0.2 to force the donor to have a predicted value closer to the target, and set fweighted to larger values (but seldom larger than 1.0) to allow donor values to be less tightly matched. See the examples below to learn how to study the relationship between fweighted and the standard deviation of multiple imputations within individuals.
curtail: applies if type='regression', causing imputed values to be curtailed at the observed range of the target variable. Set to FALSE to allow extrapolation outside the data range.
constraint: for predictive mean matching, constraint is a named list specifying R expression()s encoding constraints on which donor observations are allowed to be used, based on variables that are not missing, i.e., based on donor observations and/or recipient observations, as long as the target variable being imputed is not used for the recipients. The expressions must evaluate to a logical vector with no NAs and whose length is the number of rows in the donor observations. The expressions refer to donor observations by prefixing variable names with d$, and to a single recipient observation by prefixing variable names with r$. (An illustrative sketch follows this argument list.)
boot.method: by default, simple bootstrapping is used, in which the target variable is predicted using a sample with replacement from the observations with non-missing target variable. Specify boot.method='approximate bayesian' to build the imputation models from a sample with replacement from a sample with replacement of the observations with non-missing targets. Preliminary simulations have shown this results in good confidence coverage of the final model parameters when type='regression' is used. Not implemented when group is used.
burnin: aregImpute does burnin + n.impute iterations of the entire modeling process. The first burnin imputations are discarded. More burn-in iterations may be required when multiple variables are missing on the same observations. When only one variable is missing, no burn-ins are needed and burnin is set to zero if unspecified.
pr: set to FALSE to suppress printing of iteration messages.
plotTrans: set to TRUE to plot ace or avas transformations for each variable for each of the multiple imputations. This is useful for determining whether transformations are reasonable. If transformations are too noisy or have long flat sections (resulting in "lumps" in the distribution of imputed values), it may be advisable to place restrictions on the transformations (monotonicity or linearity).
tolerance: singularity criterion; list the source code of the lm.fit.qr.bare function for details.
B: number of bootstrap resamples to use if nk is a vector.
digits: number of digits for printing.
nclass: number of bins to use in drawing histogram.
datadensity: see Ecdf.
diagnostics: specify diagnostics=TRUE to draw plots of imputed values against sequential imputation numbers, separately for each missing observation and variable.
maxn: maximum number of observations shown for diagnostics. Default is maxn=10, which limits the number of observations plotted to at most the first 10.
nperm: number of random formula permutations for reformM; omit to sort variables by descending missing count.
...: other arguments that are ignored.
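An illustrative sketch of constraint, mirroring the discharge-date scenario from the description above (the variable names dischargeDate, firstVisit, and age are invented; only the constraint syntax is the point):

# only allow donors whose discharge date precedes the recipient's
# first post-discharge visit date
a <- aregImpute(~ dischargeDate + firstVisit + age, data = d, n.impute = 5,
                constraint = list(dischargeDate =
                  expression(d$dischargeDate < r$firstVisit)))

Inside the expression, d$ refers to the donor rows and r$ to the single recipient row, so each candidate donor's discharge date must precede the recipient's first visit.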
The sequence of steps used by the aregImpute algorithm is the following.
(1) For each variable containing m NAs where m > 0, initialize the NAs to values from a random sample (without replacement if a sufficient number of non-missing values exist) of size m from the
non-missing values.
(2) For burnin+n.impute iterations do the following steps. The first burnin iterations provide a burn-in, and imputations are saved only from the last n.impute iterations.
(3) For each variable containing any NAs, draw a sample with replacement from the observations in the entire dataset in which the current variable being imputed is non-missing. Fit a flexible
additive model to predict this target variable while finding the optimum transformation of it (unless the identity transformation is forced). Use this fitted flexible model to predict the target
variable in all of the original observations. Impute each missing value of the target variable with the observed value whose predicted transformed value is closest to the predicted transformed value
of the missing value (if match="closest" and type="pmm"), or use a draw from a multinomial distribution with probabilities derived from distance weights, if match="weighted" (the default). (This weighted draw is sketched in code after step (4) below.)
(4) After these imputations are computed, use these random draw imputations the next time the current target variable is used as a predictor of other sometimes-missing variables.
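The weighted draw in step (3) can be sketched in a few lines of R. This is an illustration of the idea only, not Hmisc's internal code; pmm.draw is an invented name and the handling of ties is a simplification:

# yhat.donors: predicted transformed values for the complete (donor) cases
# yhat.target: predicted transformed value for one missing (recipient) case
pmm.draw <- function(yhat.donors, yhat.target, fweighted = 0.2) {
  d <- abs(yhat.donors - yhat.target)          # distances to the target
  s <- fweighted * mean(d)                     # scaling factor described above
  if (s == 0) return(sample(seq_along(d), 1))  # degenerate case: all donors tie
  w <- (1 - pmin(d / s, 1)^3)^3                # tricube weights; zero beyond the scale
  if (all(w == 0)) w[which.min(d)] <- 1        # no donor within scale: take the closest
  sample(seq_along(yhat.donors), 1, prob = w)  # index of the sampled donor
}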
When match="closest", predictive mean matching does not work well when fewer than 3 variables are used to predict the target variable, because many of the multiple imputations for an observation will
be identical. In the extreme case of one right-hand-side variable and assuming that only monotonic transformations of left and right-side variables are allowed, every bootstrap resample will give
predicted values of the target variable that are monotonically related to predicted values from every other bootstrap resample. The same is true for Bayesian predicted values. This causes predictive
mean matching to always match on the same donor observation.
When the missingness mechanism for a variable is so systematic that the distribution of observed values is truncated, predictive mean matching does not work. It will only yield imputed values that
are near observed values, so intervals in which no values are observed will not be populated by imputed values. For this case, the only hope is to make regression assumptions and use extrapolation.
With type="regression", aregImpute will use linear extrapolation to obtain a (hopefully) reasonable distribution of imputed values. The "regression" option causes aregImpute to impute missing values
by adding a random sample of residuals (with replacement if there are more NAs than measured values) on the transformed scale of the target variable. After random residuals are added, predicted
random draws are obtained on the original untransformed scale using reverse linear interpolation on the table of original and transformed target values (linear extrapolation when a random residual is
large enough to put the random draw prediction outside the range of observed values). The bootstrap is used as with type="pmm" to factor in the uncertainty of the imputation model.
As model uncertainty is high when the transformation of a target variable is unknown, tlinear defaults to TRUE to limit the variance in predicted values when nk is positive.
a list of class "aregImpute" containing the following elements:
call the function call expression
formula the formula specified to aregImpute
match the match argument
fweighted the fweighted argument
n total number of observations in input dataset
p number of variables
na list of subscripts of observations for which values were originally missing
nna named vector containing the numbers of missing values in the data
type vector of types of transformations used for each variable ("s","l","c" for smooth spline, linear, or categorical with dummy variables)
tlinear value of tlinear parameter
nk number of knots used for smooth transformations
cat.levels list containing character vectors specifying the levels of categorical variables
df degrees of freedom (number of parameters estimated) for each variable
n.impute number of multiple imputations per missing value
imputed a list containing matrices of imputed values in the same format as those created by transcan. Categorical variables are coded using their integer codes. Variables having no missing values will have NULL matrices in the list.
x if x is TRUE, the original data matrix with integer codes for categorical variables
rsq for the last round of imputations, a vector containing the R-squares with which each sometimes-missing variable could be predicted from the others by ace or avas.
Frank Harrell
Department of Biostatistics
Vanderbilt University
van Buuren, Stef. Flexible Imputation of Missing Data. Chapman & Hall/CRC, Boca Raton FL, 2012.
Little R, An H. Robust likelihood-based analysis of multivariate data with missing values. Statistica Sinica 14:949-968, 2004.
van Buuren S, Brand JPL, Groothuis-Oudshoorn CGM, Rubin DB. Fully conditional specifications in multivariate imputation. J Stat Comp Sim 72:1049-1064, 2006.
de Groot JAH, Janssen KJM, Zwinderman AH, Moons KGM, Reitsma JB. Multiple imputation to correct for partial verification bias revisited. Stat Med 27:5880-5889, 2008.
Siddique J. Multiple imputation using an iterative hot-deck with distance-based donor selection. Stat Med 27:83-102, 2008.
White IR, Royston P, Wood AM. Multiple imputation using chained equations: Issues and guidance for practice. Stat Med 30:377-399, 2011.
Curnow E, Carpenter JR, Heron JE, et al: Multiple imputation of missing data under missing at random: compatible imputation models are not sufficient to avoid bias if they are mis-specified. J Clin
Epi June 9, 2023. DOI:10.1016/j.jclinepi.2023.06.011.
See Also
fit.mult.impute, transcan, areg, naclus, naplot, mice, dotchart3, Ecdf, completer
Examples
# Check that aregImpute can almost exactly estimate missing values when
# there is a perfect nonlinear relationship between two variables
# Fit restricted cubic splines with 4 knots for x1 and x2, linear for x3
x1 <- rnorm(200)
x2 <- x1^2
x3 <- runif(200)
m <- 30
x2[1:m] <- NA
a <- aregImpute(~x1+x2+I(x3), n.impute=5, nk=4, match='closest')
matplot(x1[1:m]^2, a$imputed$x2)
abline(a=0, b=1, lty=2)
# Multiple imputation and estimation of variances and covariances of
# regression coefficient estimates accounting for imputation
# Example 1: large sample size, much missing data, no overlap in
# NAs across variables
x1 <- factor(sample(c('a','b','c'),1000,TRUE))
x2 <- (x1=='b') + 3*(x1=='c') + rnorm(1000,0,2)
x3 <- rnorm(1000)
y <- x2 + 1*(x1=='c') + .2*x3 + rnorm(1000,0,2)
orig.x1 <- x1[1:250]
orig.x2 <- x2[251:350]
x1[1:250] <- NA
x2[251:350] <- NA
d <- data.frame(x1,x2,x3,y, stringsAsFactors=TRUE)
# Find value of nk that yields best validating imputation models
# tlinear=FALSE means to not force the target variable to be linear
f <- aregImpute(~y + x1 + x2 + x3, nk=c(0,3:5), tlinear=FALSE,
data=d, B=10) # normally B=75
# Try forcing target variable (x1, then x2) to be linear while allowing
# predictors to be nonlinear (could also say tlinear=TRUE)
f <- aregImpute(~y + x1 + x2 + x3, nk=c(0,3:5), data=d, B=10)
## Not run:
# Use 100 imputations to better check against individual true values
f <- aregImpute(~y + x1 + x2 + x3, n.impute=100, data=d)
modecat <- function(u) {
 tab <- table(u)
 as.numeric(names(tab)[tab == max(tab)][1])  # most frequent category
}
table(orig.x1, apply(f$imputed$x1, 1, modecat))
plot(orig.x2, apply(f$imputed$x2, 1, mean))
fmi <- fit.mult.impute(y ~ x1 + x2 + x3, lm, f, data=d)
fcc <- lm(y ~ x1 + x2 + x3)
summary(fcc) # SEs are larger than from mult. imputation
## End(Not run)
## Not run:
# Example 2: Very discriminating imputation models,
# x1 and x2 have some NAs on the same rows, smaller n
x1 <- factor(sample(c('a','b','c'),100,TRUE))
x2 <- (x1=='b') + 3*(x1=='c') + rnorm(100,0,.4)
x3 <- rnorm(100)
y <- x2 + 1*(x1=='c') + .2*x3 + rnorm(100,0,.4)
orig.x1 <- x1[1:20]
orig.x2 <- x2[18:23]
x1[1:20] <- NA
x2[18:23] <- NA
#x2[21:25] <- NA
d <- data.frame(x1,x2,x3,y, stringsAsFactors=TRUE)
n <- naclus(d)
plot(n); naplot(n) # Show patterns of NAs
# 100 imputations to study them; normally use 5 or 10
f <- aregImpute(~y + x1 + x2 + x3, n.impute=100, nk=0, data=d)
plot(f, diagnostics=TRUE, maxn=2)
# Note: diagnostics=TRUE makes graphs similar to those made by:
# r <- range(f$imputed$x2, orig.x2)
# for(i in 1:6) { # use 1:2 to mimic maxn=2
# plot(1:100, f$imputed$x2[i,], ylim=r,
# ylab=paste("Imputations for Obs.",i))
# abline(h=orig.x2[i],lty=2)
# }
table(orig.x1,apply(f$imputed$x1, 1, modecat))
plot(orig.x2, apply(f$imputed$x2, 1, mean))
fmi <- fit.mult.impute(y ~ x1 + x2, lm, f, data=d)
fcc <- lm(y ~ x1 + x2)
summary(fcc) # SEs are larger than from mult. imputation
## End(Not run)
## Not run:
# Study relationship between smoothing parameter for weighting function
# (multiplier of mean absolute distance of transformed predicted
# values, used in tricube weighting function) and standard deviation
# of multiple imputations. SDs are computed from average variances
# across subjects. match="closest" same as match="weighted" with
# small value of fweighted.
# This example also shows problems with predicted mean
# matching almost always giving the same imputed values when there is
# only one predictor (regression coefficients change over multiple
# imputations but predicted values are virtually 1-1 functions of each
# other)
x <- runif(200)
y <- x + runif(200, -.05, .05)
r <- resid(lsfit(x,y))
rmse <- sqrt(sum(r^2)/(200-2)) # sqrt of residual MSE
y[1:20] <- NA
d <- data.frame(x,y)
f <- aregImpute(~ x + y, n.impute=10, match='closest', data=d)
# As an aside here is how to create a completed dataset for imputation
# number 3 as fit.mult.impute would do automatically. In this degenerate
# case changing 3 to 1-2,4-10 will not alter the results.
imputed <- impute.transcan(f, imputation=3, data=d, list.out=TRUE,
pr=FALSE, check=FALSE)
sd <- sqrt(mean(apply(f$imputed$y, 1, var)))
ss <- c(0, .01, .02, seq(.05, 1, length=20))
sds <- ss; sds[1] <- sd
for(i in 2:length(ss)) {
  f <- aregImpute(~ x + y, n.impute=10, fweighted=ss[i])
  sds[i] <- sqrt(mean(apply(f$imputed$y, 1, var)))
}
plot(ss, sds, xlab='Smoothing Parameter', ylab='SD of Imputed Values', type='b')
abline(v=.2, lty=2) # default value of fweighted
abline(h=rmse, lty=2) # root MSE of residuals from linear regression
## End(Not run)
## Not run:
# Do a similar experiment for the Titanic dataset
h <- lm(age ~ sex + pclass + survived, data=titanic3)
rmse <- summary(h)$sigma
f <- aregImpute(~ age + sex + pclass + survived, n.impute=10,
data=titanic3, match='closest')
sd <- sqrt(mean(apply(f$imputed$age, 1, var)))
ss <- c(0, .01, .02, seq(.05, 1, length=20))
sds <- ss; sds[1] <- sd
for(i in 2:length(ss)) {
  f <- aregImpute(~ age + sex + pclass + survived, data=titanic3,
                  n.impute=10, fweighted=ss[i])
  sds[i] <- sqrt(mean(apply(f$imputed$age, 1, var)))
}
plot(ss, sds, xlab='Smoothing Parameter', ylab='SD of Imputed Values', type='b')
abline(v=.2, lty=2) # default value of fweighted
abline(h=rmse, lty=2) # root MSE of residuals from linear regression
## End(Not run)
d <- data.frame(x1=runif(50), x2=c(rep(NA, 10), runif(40)),
x3=c(runif(4), rep(NA, 11), runif(35)))
reformM(~ x1 + x2 + x3, data=d)
reformM(~ x1 + x2 + x3, data=d, nperm=2)
# Give result or one of the results as the first argument to aregImpute
# Constrain imputed values for two variables
# Require imputed values for x2 to be above 0.2
# Assume x1 is never missing and require imputed values for
# x3 to be less than the recipient's value of x1
a <- aregImpute(~ x1 + x2 + x3, data=d,
constraint=list(x2 = expression(d$x2 > 0.2),
x3 = expression(d$x3 < r$x1)))
version 5.1-3 | {"url":"https://search.r-project.org/CRAN/refmans/Hmisc/html/aregImpute.html","timestamp":"2024-11-04T06:00:30Z","content_type":"text/html","content_length":"34108","record_id":"<urn:uuid:7db57ad3-f4f4-4d53-8ccf-93d234268bf1>","cc-path":"CC-MAIN-2024-46/segments/1730477027812.67/warc/CC-MAIN-20241104034319-20241104064319-00228.warc.gz"} |
partial pivoting
lu selects a pivoting strategy based first on the number of output arguments and second on the properties of the matrix being factorized. In all cases, setting the threshold value(s) to 1.0 results in partial pivoting, while setting them to 0 causes the pivots to …
Exercise: Solve the following system of equations using LU decomposition with partial pivoting: 2x1 - 6x2 - x3 = -38; -3x1 - x2 + 7x3 = -34; -8x1 + x2 - 2x3 = -20.
In general, for an n × n matrix A, the LU factorization provided by Gaussian elimination with partial pivoting can be written in the form
$$(L'_{n-1} \cdots L'_2 L'_1)(P_{n-1} \cdots P_2 P_1)A = U,$$
where $L'_i = P_{n-1} \cdots P_{i+1} L_i P_{i+1}^{-1} \cdots P_{n-1}^{-1}$. If $L = (L'_{n-1} \cdots L'_2 L'_1)^{-1}$ and $P = P_{n-1} \cdots P_2 P_1$, then $PA = LU$. Note that the built-in function lu does partial pivoting and not complete pivoting, so an implementation of Gaussian elimination with complete pivoting (GECP) is, to my knowledge, still wanted in many university courses on numerical linear algebra.
Solving a linear system with MATLAB's backslash operator \ likewise uses Gaussian elimination. MATLAB Central contributions by Dirk-Jan Kroon include example code for LU decomposition with partial pivoting, together with forward substitution and matrix inversion.
The function lu in MATLAB and Octave determines the LU factorization of a matrix A with pivoting. When applied to the matrix (2), it produces L = [0 1; 1 0], U = [-1 1; 0 1]. Thus, L is not lower triangular. The matrix L can be thought of as a lower triangular matrix with the rows interchanged. More details on the function lu are provided in Exercise 4.1.
function [L,A]=LU_factor(A,n)
% LU factorization of an n by n matrix A
% using Gauss elimination without pivoting
% A is factored as A = L*U
% Output:
% L is lower triangular with the main diagonal part = 1s
I am trying to implement my own LU decomposition with partial pivoting. For pivoting strategies, I will denote a permutation matrix that swaps rows by P_k and a permutation matrix that swaps columns by Q_k. When computing the LU factorizations of matrices, we will routinely pack the permutation matrices together into a single permutation matrix. The original problem is a quite big, nearly symmetric, complex sparse matrix, which I would like to decompose; with partial pivoting I always run out of memory.
Gaussian elimination with no pivoting genp.m; LU factorization with no pivoting lunp.m; Gaussian elimination with partial pivoting gepp.m. Partial column pivoting and complete (row and column) pivoting are also possible, but not very popular.
The LU decomposition algorithm then includes permutation matrices. Partial pivoting (the P matrix) was added to the LU decomposition function. In addition, the LU function accepts an additional argument which allows the user more control over row exchange. MATLAB's lu() function does a row exchange once it encounters a pivot larger than the current pivot.
..., so that the above equation is fulfilled. You should then test it on the following two examples and include your output. Example 1: A = [1 3 5; 2 4 7; 1 1 0], L = [1.00000 0.00000 0.00000; 0.50000 1.00000 0.00000; 0.50000 -1.00000 1.00000], U = [2.00000 4.00000 7.00000; 0.00000 1.00000 1.50000; 0.00000 0.00000 -2.00000], P = [0 1 0; 1 0 0; 0 0 1].
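For reference, here is a minimal NumPy sketch of LU factorization with partial pivoting (my own illustration, not code from any of the pages quoted above; the function name lu_partial_pivot is made up). It reproduces the L, U and P of Example 1:

import numpy as np

def lu_partial_pivot(A):
    # Returns P, L, U with P @ A == L @ U, mirroring MATLAB's [L,U,P] = lu(A).
    U = A.astype(float).copy()
    n = U.shape[0]
    P = np.eye(n)
    L = np.eye(n)
    for k in range(n - 1):
        # Partial pivoting: bring the largest-magnitude entry of column k up to row k.
        p = k + int(np.argmax(np.abs(U[k:, k])))
        if p != k:
            U[[k, p], :] = U[[p, k], :]
            P[[k, p], :] = P[[p, k], :]
            L[[k, p], :k] = L[[p, k], :k]
        # Eliminate the entries below the pivot.
        for i in range(k + 1, n):
            L[i, k] = U[i, k] / U[k, k]
            U[i, k:] -= L[i, k] * U[k, k:]
    return P, L, U

A = np.array([[1., 3., 5.], [2., 4., 7.], [1., 1., 0.]])
P, L, U = lu_partial_pivot(A)
print(np.allclose(P @ A, L @ U))  # True; L, U and P match Example 1 above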
To compute the LU factorization under default settings: [L U p q] = lucp(A). This produces a factorization such that L*U = A(p,q); vectors p and q permute the rows and columns, respectively. The pivot tolerance can be controlled: [L U p q] = lucp(A,tol). The algorithm will terminate if the absolute value of the pivot is less than tol. A MATLAB program for LU factorization can also be written using Gaussian elimination without pivoting.
| {"url":"https://investerarpengarzglcse.netlify.app/5549/12249","timestamp":"2024-11-13T21:36:38Z","content_type":"text/html","content_length":"11812","record_id":"<urn:uuid:6123466c-7f43-4c66-a909-16b7730ffd6c>","cc-path":"CC-MAIN-2024-46/segments/1730477028402.57/warc/CC-MAIN-20241113203454-20241113233454-00793.warc.gz"} |
Flatland - A Romance of Many Dimensions
I AM about to appear very inconsistent. In previous sections I have said that all figures in Flatland present the appearance of a straight line; and it was added or implied, that it is consequently impossible to distinguish by the visual organ between individuals of different classes: yet now I am about to explain to my Spaceland critics how we are able to recognize one another by the sense of sight.
If however the Reader will take the trouble to refer to the passage in which Recognition by Feeling is stated to be universal, he will find this qualification - "among the lower classes." It is only
among the higher classes and in our temperate climates that Sight Recognition is practised.
That this power exists in any regions and for any classes is the result of Fog; which prevails during the greater part of the year in all parts save the torrid zones. That which is with you in
Spaceland an unmixed evil, blotting out the landscape, depressing the spirits, and enfeebling the health, is by us recognized as a blessing scarcely inferior to air itself, and as the Nurse of arts
and Parent of sciences. But let me explain my meaning, without further eulogies on this beneficent Element.
If Fog were non-existent, all lines would appear equally and indistinguishably clear; and this is actually the case in those unhappy countries in which the atmosphere is perfectly dry and transparent. But wherever there is a rich supply of Fog objects that are at a distance, say of three feet, are appreciably dimmer than those at a distance of two feet eleven inches; and the result is
that by careful and constant experimental observation of comparative dimness and clearness, we are enabled to infer with great exactness the configuration of the object observed.
An instance will do more than a volume of generalities to make my meaning clear.
Suppose I see two individuals approaching whose rank I wish to ascertain. They are, we will suppose, a Merchant and a Physician, or in other words, an Equilateral Triangle and a Pentagon: how am I to
distinguish them?
By: Edwin A. Abbott - Excerpt from, "Flatland - A Romance of Many Dimensions" | {"url":"https://www.studentsofbingham.com/algebrafall2024-73.html","timestamp":"2024-11-09T00:15:20Z","content_type":"text/html","content_length":"25776","record_id":"<urn:uuid:2d68867c-9d64-4cf3-ad57-87a7bedcaa2e>","cc-path":"CC-MAIN-2024-46/segments/1730477028106.80/warc/CC-MAIN-20241108231327-20241109021327-00285.warc.gz"} |
What is a Contour Interval? Its Calculation and Uses in Maps
Last Updated on October 11, 2022 by admin_hunter
Contour lines, also known as isarithm, isoline, or isopleth, are lines drawn on topographic maps to represent the actual ground elevation or depression. These lines join areas that lie within the
same altitude above or below sea level.
In other words, if you were to travel along a specific contour line, the elevation would remain the same despite differences in the scenery. The difference in elevation or altitude between adjacent
contour lines is what’s known as contour interval. It is abbreviated as CI.
The Importance of Elevation Data
Elevation values are always given relative to the sea level. So, the values can be positive or negative, depending on how high or low the represented area is relative to sea level (taken as 0 ft, or zero elevation).
In the US, elevation is given in feet (ft). Other regions in the world usually use meters (m). The unit of conversion used is 1 m = 3.281 ft. For example, Leadville is the highest incorporated city in the US, with an elevation of 10,152 ft. The lowest elevation in the world is -1,391 ft (1,391 feet below sea level), at a point located in the Dead Sea area.
If planning for a hike, elevation data can be helpful. The ability to interpret contour lines and calculate contour intervals can reveal crucial topography details. For instance, you can easily
identify overly steep slopes, depressions, and other features using your hiking map. Consequently, avoiding rough terrain becomes easy.
Moreover, elevation affects the climate and oxygen levels of a given area. Therefore, elevation data can be helpful when making decisions about where you’d like to live. Further, people who
experience altitude sickness can use the information to make informed decisions before traveling, hiking, or moving to a new area.
Elevation data is also crucial in city modeling. It helps to make decisions on the best approaches when developing urban infrastructure to protect the environment. So, it promotes orderliness in
developing rural areas as well as suburbs.
How to Calculate a Contour Interval (CI)
Studying the legend of a map is the easiest way to establish a contour interval. Unfortunately, sometimes the whole map isn’t available. So, the other option is to manually calculate the contour
level to find the vertical elevation of a specific area.
It’s worth noting that not all contour lines are labeled with their respective altitudes. Otherwise, the map would be difficult to read since sometimes elevation lines can be pretty close to each
other. On most maps, it’s the fifth contour line that features an elevation value. This fifth line is often darker and thicker than the rest. It’s known as an index contour.
Given that index contours are marked with their elevation, they are vital when calculating a contour interval. Here is how to go about it:
1. The first step is to identify the marked elevation lines (index contour), then calculate the difference between the two index contours.
• Index line A has an elevation of 2000 feet above the sea level
• Index line B is marked 2200 feet above the sea level
• Calculate the difference between the two index contours, which will be 2200-2000 = 200
2. Next, count the number of contour lines between your two index contour lines, and add one to the figure obtained. For example, if there are 9 contour lines between the line indexed 2200 and the one marked 2000, adding 1 makes it 10.
3. Using the above illustration, divide the elevation difference (200 feet) by 10 (the number of intervals between the contour lines) to find the contour interval, i.e., 200/10 = 20.
4. The contour interval is 20, i.e., 20 feet of elevation between adjacent contour lines.
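For illustration only, here is a small Python helper implementing the steps above (the function name and arguments are mine, not from any mapping standard):

def contour_interval(index_lo, index_hi, lines_between):
    # Two labeled index contours plus the count of unlabeled lines between them;
    # the unlabeled lines divide the span into (lines_between + 1) equal steps.
    return (index_hi - index_lo) / (lines_between + 1)

print(contour_interval(2000, 2200, 9))  # 20.0 feet, matching the example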
Factors Affecting Choice of Contour Intervals in Topographic Maps
Surveyors first consider the most appropriate contour intervals before starting to work on any project. Here are the factors that affect the choice of contour intervals:
• Nature of the ground: If the variation in levels (hills, ponds, etc.) is high, then a higher CI (from 1 m and above) is preferred. A lower CI (from 0.5 m and below) works best with reasonably level ground.
• The extent of the project: During the initial assessment, the survey leader may opt for a higher CI (from 1 m and above) to come up with a rough topography map. However, a lower CI (from 0.5 m and below) is required for the actual execution of the project.
• Availability of resources: If time and other resources are in short supply, then a high CI is the best option. Conversely, a low CI, like 0.5 m, 0.1 m, or less, is ideal when resources are plentiful.
What Are the Uses of Contour Intervals in Surveying?
Contour intervals are essential when there is a need to represent a large area on a small piece of paper. A higher contour level is used for the large areas and smaller contour levels for small
areas. In addition, contour intervals make it easy to locate structures like bridges, dams, and roads on a topographic map.
Differences between a Contour Interval and a Horizontal Equivalent
As illustrated above, a contour interval helps find the vertical elevation of an area represented in a topographic map. In the same way, they can be used to calculate horizontal distance. For that
reason, they’re also known as horizontal equivalent.
Below are the primary differences between the two:
• While contour intervals are used to find the vertical elevation of an area, the horizontal equivalent represents the horizontal distance.
• To calculate contour intervals on maps, it’s essential to have at least two indexed elevation lines. Therefore, there is no need for a scale or any measurements. Conversely, you can’t calculate
the horizontal equivalent without a scale. The distance on the map is often calculated then the scale is used to find the actual figures on the ground.
• Contour intervals are usually a constant figure. On the flip side, the horizontal equivalent is affected by the slope—the wider the horizontal distance, the gentler the gradient.
If planning for a hike or a hunting escapade, it is vital to be able to interpret contour lines and calculate contour intervals. It will help you plan the route to take and anticipate how elevation changes will affect your body. | {"url":"https://safetyhunters.com/contour-interval/","timestamp":"2024-11-09T06:14:59Z","content_type":"text/html","content_length":"163050","record_id":"<urn:uuid:5ebc7b48-d886-4721-a1b2-3ea766e58eb6>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.30/warc/CC-MAIN-20241109053958-20241109083958-00192.warc.gz"} |
Dmitry Miserev
Dimensional reduction of the interacting D-dimensional electron gas: multidimensional bosonization and beyond
We performed the dimensional reduction of the interacting spin-degenerate D-dimensional electron gas with arbitrary interaction within the semiclassical limit $k_F r \gg 1$, $E_F \tau \gg 1$, where
$k_F$ and $E_F$ are the Fermi momentum and the Fermi energy, respectively; $r$ and $\tau$ are spatial and temporal arguments of correlation functions. The dimensional reduction is performed exactly
in all orders of perturbation theory. Our results agree with the multidimensional bosonization under the assumption that the fermion loop cancellation theorem (FLCT) is valid. The FLCT states that all symmetrized fermion loops with more than two interaction vertices vanish. The FLCT is based on the linearity of the electron spectrum near the Fermi surface as well as on the assumption that the backscattering leading to the Friedel oscillations of the dressed interaction is negligible. However, we find that the diagrams containing a large number of fermion loops acquire large infrared-divergent factors in any D > 1. We estimate that this infrared divergence makes the spectral curvature near the Fermi surface a relevant perturbation in D > 2, while the backscattering interaction is relevant in D > 1. As both the spectral curvature and the backscattering interaction explicitly violate the FLCT, we believe that the diagrams containing a large number of fermion loops may be important for the infrared physics near the Fermi surface. The dimensional reduction allows us to classify the most infrared-divergent diagrams. In particular, we suggest new self-consistent approaches based on the resummation of the most infrared-divergent diagrams. The preprint is available on arXiv:2303.16732 | {"url":"https://qp2dm.dipc.org/node/141","timestamp":"2024-11-12T10:05:31Z","content_type":"text/html","content_length":"50169","record_id":"<urn:uuid:a9539a9d-e904-4d5a-8097-2f4882665900>","cc-path":"CC-MAIN-2024-46/segments/1730477028249.89/warc/CC-MAIN-20241112081532-20241112111532-00456.warc.gz"} |
Solutions of rods
Tutorial: Solutions of rods with SEB
Contributors: Carsten Svaneborg (FKF SDU).
Illustration of an isotropic solution of rod-like molecules.
Before you start
• Complete the polymer tutorial to understand how SEB works and how scattering from a Gaussian polymer model is derived.
Learning outcomes
In this tutorial you will learn about the scattering from dilute isotropic solutions of rods, in particular
• how to calculate the form factor of a rod,
• and how the scattering from a stiff rod differs from a random walk polymer.
A rod is a simple geometric model of stiff molecules such as actin and microtubuli. Later we will also be using it as a building block in SEB to make more complicated structures.
Derivation of the scattering from a rod
We think of a single straight thin rod of length $b$ in a solution, where each point on the rod is a point scatterer, then the form factor can be stated as
$$F(q) = \left<\frac{\sin(q r)}{qr} \right>_P, $$
which is known as the Debye Formula. Since the rod can be oriented in any direction, the form factor only depends on the magnitude of the momentum transfer $q$, and the scattering pattern will be axis-symmetric around the direct beam.
Sketch of rod showing the meaning of the symbols.
The rod is straight, hence the probability $P$ denotes the probability of two random scatterers on a straight line being a distance $r$ apart. We randomly pick one scatterer from an uniform
distribution in the interval $[0,b]$. Thus $P(x_1)=1/b$ for $x_1\in [0,b]$ and zero elsewhere. This corresponds to the integral $ \int_0^b \frac{dx_1}{b} \cdots$. We pick the second scatterer $x_2$
from the same distribution, and with these two scatterers the distance between them is $r=|x_1-x_2|$, because the rod is straight. Thus the average corresponds to performing the following integrals:
$$ F(q) = \int_0^b \frac{dx_1}{b} \int_0^b \frac{dx_2}{b} \frac{\sin(q |x_1-x_2|)}{q|x_1-x_2|},$$
unfortunately the result of these integrals cannot be expressed as a combination of the usual functions we know. The result is $$F(q)=\frac{2(\cos(x)-1+x\,\mathrm{Si}(x))}{x^2}$$ where $x=qb$ and $\mathrm{Si}(x)=\int_0^x \frac{\sin(t)}{t}\,dt$ is the sine integral function. This function has to be evaluated numerically and SEB can do this.
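As a quick numerical check of this closed form, here is a short Python sketch (my own illustration, independent of SEB; it assumes NumPy and SciPy are available):

import numpy as np
from scipy.special import sici  # sici(x) returns (Si(x), Ci(x))

def rod_form_factor(q, b=1.0):
    x = q * b
    Si, _ = sici(x)
    return 2.0 * (np.cos(x) - 1.0 + x * Si) / x**2

q = np.logspace(-2, 1, 5)
print(rod_form_factor(q))  # tends to 1 as q -> 0 and decays ~ 1/q at large q,
                           # consistent with the fractal dimension 1 of a rod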
Scattering Equation Builder (SEB)
Scattering Equation Builder (SEB) is a C++ library for analytical derivation of form factors of complex structures. The structures are build out of basic building blocks called sub-units. Polymers
and rods are two of the sub-units supported by SEB.
Before you can use SEB you need to install a working C++ compiler, the GiNaC, GSL, and CLN libraries, and the SEB source code itself. See GitHub for the details of how to install SEB on various
operating system. Important you need to remember the folder, where you put the SEB source code. It has a subfolder "work" where you can save and compile your own programs.
Rods with SEB
To calculate the form factor of a rod, cut'n'paste the following C++ program into an text editor (e.g. notepad). Save it as "Rod.cpp" in the work folder under the SEB installation.
// Include SEB functionality
#include "SEB.hpp"
int main()
// Create world of sub-units
World w("World");
// Add a single rod-subunit named "A"
GraphID r = w.Add(new ThinRod(), "A"); // NOTE: sub-unit class name lost in extraction; ThinRod assumed
// Wrap unit in a structure named Structure (this will make sense later)
w.Add(r, "Structure");
// Print out equation for the form factor
ex F=w.FormFactor("Structure");
cout << "Form Factor= " << F << "\n";
// To evaluate the equation, we need to define value of paramters
ParameterList params;
w.setParameter(params, "L_A", XXX); // Length of the "A" rod (parameter name assumed; set XXX per Exercise 1a)
w.setParameter(params,"beta_A",1); // Scattering length
// Choose q values
DoubleVector qvec=w.logspace(0.01, 10.0, 1000 );
// Use Evaluate to save form factor data to a file
w.Evaluate( F, params, qvec, "formfactor_rod.q", "Form factor of a rod with beta=1 and L=1.");
Exercise 1: compare the form factor of a rod and a polymer
In exercise 2 of the Gaussian Polymer you generated a file "formfactor_polymer.q" with the form factor of a polymer. For large $q$-values we observed $F(q)\sim q^{-2}$ where $2$ happens to be the fractal dimension of a random walk.
□ 1a: The code above calculates the form factor of a rod. Bold face indicates the important changes made wrt. the polymer code. The radius of gyration of a rod is $R_g^2=b^2/12$, where $b$ is the length of the rod. Change the length of the rod indicated by $XXX$, such that the radius of gyration of the rod matches that of the polymer. Run the code as you did in the Polymer example.
□ 1b:
Sketch what you expect a log-log plot of the form factors of a rod and a polymer looks like for small and large $q$ values. (Hint: A rod has fractal dimension $1$.) Then make the plot of the files produced by SEB to see whether they agree with your sketch. | {"url":"https://sastutorials.org/SEB/SEB_Rods/SEB_rod.html","timestamp":"2024-11-08T17:31:41Z","content_type":"text/html","content_length":"8529","record_id":"<urn:uuid:9b6c5934-d0de-4789-8a8e-fa154566284f>","cc-path":"CC-MAIN-2024-46/segments/1730477028070.17/warc/CC-MAIN-20241108164844-20241108194844-00045.warc.gz"} |
Ms Tayke
Nineteen again?
Happy Birthday Ms Tayke!
Ms Tayke does not want anyone to know how old she is. We think she is 388 but she often subtracts 19 from her age to make her feel younger.
How many times can you subtract 19 from 388?
Time left four minutes.
No calculators allowed.
| {"url":"https://transum.org/Software/SW/Starter_of_the_day/starter_May18.ASP","timestamp":"2024-11-13T18:31:24Z","content_type":"text/html","content_length":"24854","record_id":"<urn:uuid:e37fcce9-e436-4533-87e0-2c6062637605>","cc-path":"CC-MAIN-2024-46/segments/1730477028387.69/warc/CC-MAIN-20241113171551-20241113201551-00363.warc.gz"} |
Mathematics and informatics in education
It is well known that mathematics – the queen of sciences – pervades all segments of the life of modern man. Mathematical literacy starts primarily with the teaching of mathematics in primary school and then continues in secondary school.
Within this section, professors and teachers of mathematics and informatics will have an opportunity to present the learning and teaching processes of today, to share their impressions and exchange experiences with other colleagues, to talk about their methods, to draw attention to current problems and suggest ideas for solving them, and to give initiative for thinking about newly organized activities that will raise the popularity of mathematics to a higher level.
How to promote mathematics, how to motivate students, how to make mathematics more accessible and interesting in connection with everyday life are some of the topics that will appear in this section. | {"url":"http://alas.matf.bg.ac.rs/~konferencija/2010/section-3.php","timestamp":"2024-11-10T02:53:15Z","content_type":"text/html","content_length":"4195","record_id":"<urn:uuid:bef1a156-4e5e-4ef4-a3f2-d61a75f60073>","cc-path":"CC-MAIN-2024-46/segments/1730477028164.3/warc/CC-MAIN-20241110005602-20241110035602-00252.warc.gz"} |
The Center for Mathematical Physics Hamburg is a joint venture of DESY, the physics department and the mathematics department of Hamburg University. It was founded in December 2004.
Its aim is to foster the activities in mathematical physics in Hamburg. The main focus of its activity are mathematical aspects of string theory and quantum field theory.
The ZMP is involved with the following activities:
The ZMP was involved with the following activities: | {"url":"https://www.zmp.uni-hamburg.de/mission.html","timestamp":"2024-11-06T18:54:11Z","content_type":"text/html","content_length":"348550","record_id":"<urn:uuid:3c858b0a-3d4a-46ea-931d-1d9798470b81>","cc-path":"CC-MAIN-2024-46/segments/1730477027933.5/warc/CC-MAIN-20241106163535-20241106193535-00464.warc.gz"} |
GATE (TF) Textile 2009 Question Paper Solution | GATE/2009/TF/17 - Textile Triangle
GATE (TF) Textile 2009 Question Paper Solution | GATE/2009/TF/17
Question 17 (Textile Engineering & Fibre Science)
The value of breaking length in km (RKM) of a yarn is numerically equal to
(A) Tenacity in N/tex
(B) Breaking load in N
(C) Tenacity in gf/tex
(D) Breaking load in gf
[Show Answer]
Option C is correct
Frequently Asked Questions | FAQs
What is the RKM of a yarn equal to?
The RKM is calculated by dividing the breaking load of the yarn (in grams-force) by its linear density (in tex); the result is the breaking length expressed in kilometres, numerically equal to the tenacity in gf/tex.
Mathematically, the RKM can be calculated as follows:
RKM (km) = Breaking load (gf) / Linear density (tex)
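As a sketch of the arithmetic (a Python illustration; the function name and sample values are mine): a yarn of 20 tex breaking at 400 gf has a breaking length of 20 km, i.e. a tenacity of 20 gf/tex.

def rkm(breaking_load_gf, linear_density_tex):
    # Breaking length in km; numerically equal to tenacity in gf/tex.
    return breaking_load_gf / linear_density_tex

print(rkm(400, 20))  # 20.0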
The RKM is an important parameter in assessing the strength and quality of yarn. Higher RKM values indicate stronger yarn that can withstand higher tensile forces before breaking. Yarns with higher
RKM values are generally considered to be more durable and suitable for applications that require strength, such as in technical textiles or industrial fabrics.
GATE Textile Engineering and Fibre Science (TF) Question Papers | GATE Textile Question Answer | GATE Textile Solved Question Papers | GATE Textile Papers | GATE Textile Answer Key | {"url":"https://www.textiletriangle.com/gate-textile/question-paper/2009-2/17-2/","timestamp":"2024-11-02T02:21:03Z","content_type":"text/html","content_length":"139743","record_id":"<urn:uuid:10d971cd-063f-4f46-89d8-fd6bee5dc61a>","cc-path":"CC-MAIN-2024-46/segments/1730477027632.4/warc/CC-MAIN-20241102010035-20241102040035-00581.warc.gz"} |
CS Fall 2022 Junior
Computer Science - Junior
COURSE #: COMP 3042
Course Description
This course introduces mathematical modeling of computational problems. It covers the common algorithms, algorithmic paradigms, design of algorithms used to solve these problems. The course
emphasizes the relationship between algorithms and programming and introduces basic performance measures and analysis techniques for these problems. It also covers the time complexity and space
complexity of different algorithms, in order to identify the algorithm with the lowest time and space complexity for a given problem.
Course Learning Outcomes
By the completion of the course, the students should be able to
• Identify the key characteristics of a problem
• Analyze the suitability of a specific algorithm design technique for a problem.
• Apply different design techniques to design an algorithm
• Explain different time analysis techniques and notations of algorithms
• Analyze the time and space complexity of different algorithms
• Compare different algorithms to select a best solution for a given problem.
Course Assessments and Grading
Item Weight
Attendance & Activities 10%
Assignment (5 assignments) 15%
Quizzes (10 quizzes) 25%
Midterm exam (1 midterm exam) 20%
Final exam (1 final exam) 30%
COURSE #: COMP 3041
Course Description
This course teaches the general theory, concepts, and techniques related to the theory of automata. Practical examples related to programming languages are emphasized. Students will have the
opportunity to utilize theoretical aspects of automata theory by performing a medium-scale design project. Topics include Finite Automata, Transition Graphs, Nondeterminism, Finite Automata with
Output, Context-Free Grammars, Regular Grammars, Chomsky Normal Form, Pushdown Automata, Context-Free Languages, Non-Context-Free Languages, Parsing, and Turing Machines.
Course Learning Outcomes
By the completion of the course, the students should be able to:
• Use regular expressions, recursive definitions, finite automata, and transition graphs to understand the concept of formal languages.
• Apply different mechanisms to convert regular expressions to finite automata
• Use Different rules to construct context-free grammar for regular and non-regular languages.
• Apply the Chomsky normal form technique to remove ambiguity from a context-free grammar
• Construct a pushdown automaton and a Turing machine for a computer language.
Course Assessments and Grading
Item Weight
Attendance & Activities 10%
Assignment (5 assignments) 15%
Quizzes (10 quizzes) 25%
Midterm exam (1 midterm exam) 20%
Final exam (1 final exam) 30%
COURSE #: COMP 3021
Course Description
This course focuses on the basic architecture of computer systems including fundamental concepts such as components of the processor, interfacing with memory and I/O devices, organization of
peripherals, and machine-level operations. The course presents a detailed discussion of various system design considerations, along with the challenges associated with techniques commonly employed in computer architecture such as pipelining, branch prediction, and caching. This course provides the students with an understanding of the various levels of abstraction in computer architecture, with emphasis on the instruction set level and the register transfer level, through practical examples.
Course Learning Outcomes
Upon the successful completion of this course, students will be able to:
• Describe the key components of the computer system along with their functionalities and limitations
• Explain the internal working of processor underneath the software layer and how decisions made in hardware affect the software/programmer
• Examine Instruction Set Architecture (ISA) designs and associated trade-offs
• Analyze factors affecting CPU performance, e.g., pipelining and instruction-level parallelism
• Explain the I/O subsystems and memory modules of the computer
• Evaluate design and optimization decisions across the boundaries of different layers and system components
Course Assessments and Grading
Item Weight
Class participation and attendance 10%
Quiz activities 15%
Assignments 15%
Mid exam 30%
Final exam 30%
COURSE #: COMP 3071
Course Description
Artificial intelligence (AI) is a research field that studies how to realize the intelligent human behaviors on a computer. The ultimate goal of AI is to make a computer that can learn, plan, and
solve problems autonomously. In this course students will learn the basic methodologies for the design of artificial agents in complex environments. This course aims to expose students to the
fundamental concepts and techniques that enable them to build smart applications including search strategies, agents, machine learning, planning, knowledge representation, reasoning, information
retrieval and natural language processing.
Course Learning Outcomes
By completion of the course the students should be able to:
• Build an appropriate agent architecture, for a given problem to be solved by an intelligent agent.
• Understand an uninformed/informed search algorithm to solve a given search/optimization problem.
• Apply forward/backward planning algorithms to solve the planning problem.
• Apply resolution/inference to a set of logic statements available in a knowledge base to answer a query.
• Apply simple machine learning algorithms for classifying a set of data.
Course Assessment and Grading
Item Weight (%)
Mid Term exam 20
Final exam 30
Quizzes 15
Homework Assignments 20
Group Project 15
COURSE #: DMNS 3031
Course Description
This course is an introduction to statistics and probability. It is designed to equip students with understanding of foundations of statistics and probability and focuses on using modern statistical
packages in examining relevant applications. The course is a prerequisite for advanced statistics.
Learning Outcomes
At the end of this course, students should be able to:
• Define data for different types and scales of measurements.
• Identify descriptive statistics from inferential statistics
• Define the role of descriptive statistics and inferential statistics in quantitative analyses.
• Compute descriptive statistics for a dataset
• Create appropriate visualizations for different types of data using a statistical package such as R, Excel etc.
• Describe types of random variables, probability distributions and their properties.
• Identify and apply appropriate statistical tests to make valid generalizations about a population based on sample data.
• Interpret the results of statistical tests and outputs from a statistical programming package (R/Excel) to draw valid conclusions and communicate them orally and verbally.
Course Assessments and Grading
Item Weight
And 20%
Project 15%
Class Participation 5%
Midterm Exam 30%
Final exam 30% | {"url":"https://ucentralasia.org/schools/school-of-arts-and-sciences/course-catalogues/computer-science/cs-fall-2022-catalogues/cs-fall-2022-junior","timestamp":"2024-11-05T16:45:43Z","content_type":"text/html","content_length":"42497","record_id":"<urn:uuid:b3452452-1284-476b-abbd-9f4c4ea0c461>","cc-path":"CC-MAIN-2024-46/segments/1730477027884.62/warc/CC-MAIN-20241105145721-20241105175721-00459.warc.gz"} |
Experiment #4: Energy Loss in Pipes
1. Introduction
The total energy loss in a pipe system is the sum of the major and minor losses. Major losses are associated with frictional energy loss that is caused by the viscous effects of the fluid and
roughness of the pipe wall. Major losses create a pressure drop along the pipe since the pressure must work to overcome the frictional resistance. The Darcy-Weisbach equation is the most widely
accepted formula for determining the energy loss in pipe flow. In this equation, the friction factor (f ), a dimensionless quantity, is used to describe the friction loss in a pipe. In laminar flows,
f is only a function of the Reynolds number and is independent of the surface roughness of the pipe. In fully turbulent flows, f depends on both the Reynolds number and relative roughness of the
pipe wall. In engineering problems, f is determined by using the Moody diagram.
2. Practical Application
In engineering applications, it is important to increase pipe productivity, i.e. maximizing the flow rate capacity and minimizing head loss per unit length. According to the Darcy-Weisbach equation,
for a given flow rate, the head loss decreases with the inverse fifth power of the pipe diameter. Doubling the diameter of a pipe results in the head loss decreasing by a factor of 32 (≈ 97%
reduction), while the amount of material required per unit length of the pipe and its installation cost nearly doubles. This means that energy consumption, to overcome the frictional resistance in a
pipe conveying a certain flow rate, can be significantly reduced at a relatively small capital cost.
3. Objective
The objective of this experiment is to investigate head loss due to friction in a pipe, and to determine the associated friction factor under a range of flow rates and flow regimes, i.e., laminar,
transitional, and turbulent.
4. Method
The friction factor is determined by measuring the pressure head difference between two fixed points in a straight pipe with a circular cross section for steady flows.
5. Equipment
The following equipment is required to perform the energy loss in pipes experiment:
• F1-10 hydraulics bench,
• F1-18 pipe friction apparatus,
• Stopwatch for timing the flow measurement,
• Measuring cylinder for measuring very low flow rates,
• Spirit level, and
• Thermometer.
6. Equipment Description
The pipe friction apparatus consists of a test pipe (mounted vertically on the rig), a constant head tank, a flow control valve, an air-bleed valve, and two sets of manometers to measure the head
losses in the pipe (Figure 4.1). A set of two water-over-mercury manometers is used to measure large pressure differentials, and two water manometers are used to measure small pressure differentials.
When not in use, the manometers may be isolated, using Hoffman clamps.
Since mercury is considered a hazardous substance, it cannot be used in undergraduate fluid mechanics labs. Therefore, for this experiment, the water-over-mercury manometers are replaced with a
differential pressure gauge to directly measure large pressure differentials.
This experiment is performed under two flow conditions: high flow rates and low flow rates. For high flow rate experiments, the inlet pipe is connected directly to the bench water supply. For low
flow rate experiments, the inlet to the constant head tank is connected to the bench supply, and the outlet at the base of the head tank is connected to the top of the test pipe [4].
The apparatus’ flow control valve is used to regulate flow through the test pipe. This valve should face the volumetric tank, and a short length of flexible tube should be attached to it, to prevent splashing.
Figure 4.1: F1-18 Pipe Friction Test Apparatus
7. Theory
The energy loss in a pipe can be determined by applying the energy equation to a section of a straight pipe with a uniform cross section:
$$\frac{P_{in}}{\rho g}+\frac{v_{in}^2}{2g}+z_{in}=\frac{P_{out}}{\rho g}+\frac{v_{out}^2}{2g}+z_{out}+h_L \qquad (1)$$
If the pipe is horizontal ($z_{in}=z_{out}$):
$$\frac{P_{in}}{\rho g}+\frac{v_{in}^2}{2g}=\frac{P_{out}}{\rho g}+\frac{v_{out}^2}{2g}+h_L$$
Since $v_{in}=v_{out}$:
$$h_L=\frac{P_{in}-P_{out}}{\rho g} \qquad (2)$$
The pressure difference ($P_{in}-P_{out}$) between two points in the pipe is due to the frictional resistance, and the head loss $h_L$ is directly proportional to the pressure difference.
The head loss due to friction can be calculated from the Darcy-Weisbach equation:
$$h_L=f\,\frac{L}{D}\,\frac{v^2}{2g} \qquad (3)$$
where:
f: Darcy-Weisbach coefficient
L: pipe length
D: pipe diameter
v: average velocity
g: gravitational acceleration.
For laminar flow, the Darcy-Weisbach coefficient (or friction factor f) is only a function of the Reynolds number (Re) and is independent of the surface roughness of the pipe, i.e.:
$$f=\frac{64}{Re} \qquad (4)$$
For turbulent flow, f is a function of both the Reynolds number and the pipe roughness height, ε; however, these effects are not well understood and may be negligible in many cases. Therefore, f must be determined experimentally. The Moody diagram relates f to the pipe wall relative roughness (ε/D) and the Reynolds number (Figure 4.2).
Instead of using the Moody diagram, f can be determined by utilizing empirical formulas. These formulas are used in engineering applications when computer programs or spreadsheet calculation methods are employed. For turbulent flow in a smooth pipe, a well-known curve fit to the Moody diagram (the Blasius equation) is given by:
$$f=0.316\,Re^{-0.25} \qquad (5)$$
The Reynolds number is given by:
$$Re=\frac{vD}{\nu} \qquad (6)$$
where v is the average velocity, D is the pipe diameter, and ν is the kinematic viscosity of the fluid (Figure 4.3).
In this experiment, h[L] is measured directly by the water manometers and the differential pressure gauge that are connected by pressure tappings to the test pipe. The average velocity, v, is calculated from the volumetric flow rate (Q) as:
$$v=\frac{Q}{A}=\frac{4Q}{\pi D^2} \qquad (7)$$
The following dimensions from the test pipe may be used in the appropriate calculations [4]:
Length of test pipe = 0.50 m,
Diameter of test pipe = 0.003 m.
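To make the data reduction concrete, here is a short Python sketch of the theory above (the flow rate below is a made-up placeholder, not measured data):

import math

L, D = 0.50, 0.003        # test pipe length and diameter (m)
nu = 1.0e-6               # kinematic viscosity of water near 20 C (m^2/s)
g = 9.81                  # gravitational acceleration (m/s^2)

Q = 2.0e-6                                        # assumed flow rate (m^3/s)
v = Q / (math.pi * D**2 / 4)                      # average velocity, v = 4Q/(pi D^2)
Re = v * D / nu                                   # Reynolds number
f = 64 / Re if Re < 2000 else 0.316 * Re**-0.25   # laminar law vs. Blasius fit
hL = f * (L / D) * v**2 / (2 * g)                 # Darcy-Weisbach head loss (m)
print(Re, f, hL)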
Figure 4.2: Moody Diagram
Figure 4.3: Kinematic Viscosity of Water (v) at Atmospheric Pressure
8. Experimental Procedure
The experiment will be performed in two parts: high flow rates and low flow rates. Set up the equipment as follows:
• Mount the test rig on the hydraulics bench, and adjust the feet with a spirit level to ensure that the baseplate is horizontal and the manometers are vertical.
• Attach Hoffman clamps to the water manometers and pressure gauge connecting tubes, and close them off.
High Flow Rate Experiment
The high flow rate will be supplied to the test section by connecting the equipment inlet pipe to the hydraulics bench, with the pump turned off. The following steps should be followed.
• Close the bench valve, open the apparatus flow control valve fully, and start the pump. Open the bench valve progressively, and run the flow until all air is purged.
• Remove the clamps from the differential pressure gauge connection tubes, and purge any air from the air-bleed valve located on the side of the pressure gauge.
• Close off the air-bleed valve once no air bubbles observed in the connection tubes.
• Close the apparatus flow control valve and take a zero-flow reading from the pressure gauge.
• With the flow control valve fully open, measure the head loss shown by the pressure gauge.
• Determine the flow rate by timed collection.
• Adjust the flow control valve in a step-wise fashion to observe the pressure differences at 0.05 bar increments. Obtain data for ten flow rates. For each step, determine the flow rate by timed
• Close the flow control valve, and turn off the pump.
The pressure difference measured by the differential pressure gauge can be converted to an equivalent head loss (h[L]) by using the conversion ratio:
1 bar = 10.2 m water
Low Flow Rate Experiment
The low flow rate will be supplied to the test section by connecting the hydraulics bench outlet pipe to the head tank with the pump turned off. Take the following steps.
• Attach a clamp to each of the differential pressure gauge connectors and close them off.
• Disconnect the test pipe’s supply tube and hold it high to keep it filled with water.
• Connect the bench supply tube to the head tank inflow, run the pump, and open the bench valve to allow flow. When outflow occurs from the head tank snap connector, attach the test section supply
tube to it, ensuring that no air is entrapped.
• When outflow occurs from the head tank overflow, fully open the control valve.
• Remove the clamps from the water manometers’ tubes and close the control valve.
• Connect a length of small bore tubing from the air valve to the volumetric tank, open the air bleed screw, and allow flow through the manometers to purge all of the air from them. Then tighten
the air bleed screw.
• Fully open the control valve and slowly open the air bleed valve, allowing air to enter until the manometer levels reach a convenient height (in the middle of the manometers), then close the air
vent. If required, further control of the levels can be achieved by using a hand pump to raise the manometer air pressure.
• With the flow control valve fully open, measure the head loss shown by the manometers.
• Determine the flow rate by timed collection.
• Obtain data for at least eight flow rates, the lowest to give h[L]= 30 mm.
• Measure the water temperature, using a thermometer.
9. Results and Calculations
Please use this link for accessing excel workbook for this experiment.
9.1. Results
Record all of the manometer and pressure gauge readings, water temperature, and volumetric measurements, in the Raw Data Tables.
Raw Data Tables: High Flow Rate Experiment
Test No. Head Loss (bar) Volume (Liters) Time (s)
Raw Data Tables: Low Flow Rate Experiment
Test No. h[1] (m) h[2] (m) Head loss h[L] (m) Volume (liters) Time (s)
Water Temperature:
9.2. Calculations
Calculate the values of the discharge; average flow velocity; and experimental friction factor, f using Equation 3, and the Reynolds number for each experiment. Also, calculate the theoretical
friction factor, f, using Equation 4 for laminar flow and Equation 5 for turbulent flow for a range of Reynolds numbers. Record your calculations in the following sample Result Tables.
Result Table- Experimental Values
Test No. Head loss h[L] (m) Volume (liters) Time (s) Discharge (m^3/s) Velocity (m/s) Friction Factor, f Reynolds Number
Result Table- Theoretical Values
No. Flow Regime Reynolds Number Friction Factor, f
3 Laminar (Equation 4) 400
10 Turbulent (Equation 5) 10000
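One possible sketch of the experimental friction factor calculation, obtained by rearranging Equation 3 as f = 2gDh_L/(Lv^2) (the measured values below are placeholders, not data from this experiment):

import math

L, D, g = 0.50, 0.003, 9.81
volume, time, hL = 0.4e-3, 60.0, 0.5   # collected volume (m^3), time (s), head loss (m)
Q = volume / time                      # discharge (m^3/s)
v = Q / (math.pi * D**2 / 4)           # average velocity (m/s)
f_exp = 2 * g * D * hL / (L * v**2)    # experimental friction factor
print(Q, v, f_exp)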
10. Report
Use the template provided to prepare your lab report for this experiment. Your report should include the following:
• Table(s) of raw data
• Table(s) of results
• Graph(s)
□ On one graph, plot the experimental and theoretical values of the friction factor, f (y-axis) against the Reynolds number, Re (x-axis) on a log-log scale. The experimental results should be
divided into three groups (laminar, transitional, and turbulent) and plotted separately. The theoretical values should be divided into two groups (laminar and turbulent) and also plotted
□ On one graph, plot h[L] (y-axis) vs. average flow velocity, v (x-axis) on a log-log scale.
• Discuss the following:
□ Identify laminar and turbulent flow regimes in your experiment. What is the critical Reynolds number in this experiment (i.e., the transitional Reynolds number from laminar flow to turbulent
□ Assuming a relationship of the form h[L] = k·v^n, determine the values of k and n from your log-log plot of h[L] versus v.
□ What is the dependence of head loss upon velocity (or flow rate) in the laminar and turbulent regions of flow?
□ What is the significance of changes in temperature to the head loss?
□ Compare your results for f with the Moody diagram (Figure 4.2). Note that the pipe utilized in this experiment is a smooth pipe. Indicate any reason for lack of agreement.
□ What natural processes would affect pipe roughness? | {"url":"https://uta.pressbooks.pub/appliedfluidmechanics/chapter/experiment-4/","timestamp":"2024-11-07T04:24:43Z","content_type":"text/html","content_length":"108169","record_id":"<urn:uuid:372186d3-77d2-40c0-b026-2fa9aac8955d>","cc-path":"CC-MAIN-2024-46/segments/1730477027951.86/warc/CC-MAIN-20241107021136-20241107051136-00771.warc.gz"} |
If the coefficients of three consecutive terms in the expansion of (1+x)^n are in the ratio 1:2:3, find n. | Filo
Question asked by Filo student
If the coefficients of three consecutive terms in the expansion of $(1+x)^n$ are in the ratio $1:2:3$, find $n$. 6. If the coefficients of four consecutive terms in the expansion of $(1+x)^n$ be $a_1, a_2, a_3, a_4$ respectively, show that $\frac{a_1}{a_1+a_2}+\frac{a_3}{a_3+a_4}=\frac{2a_2}{a_2+a_3}$. [Council Sample Question '13]
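A worked sketch for the first question (a standard argument, not transcribed from the video solution): let the three consecutive coefficients be $\binom{n}{r-1}$, $\binom{n}{r}$, $\binom{n}{r+1}$. Then
$$\frac{\binom{n}{r}}{\binom{n}{r-1}}=\frac{n-r+1}{r}=2 \;\Rightarrow\; n+1=3r, \qquad \frac{\binom{n}{r+1}}{\binom{n}{r}}=\frac{n-r}{r+1}=\frac{3}{2} \;\Rightarrow\; 2n=5r+3.$$
Substituting $n=3r-1$ into $2n=5r+3$ gives $6r-2=5r+3$, so $r=5$ and $n=14$.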
Video solutions (1)
Learn from their 1-to-1 discussion with Filo tutors.
14 mins
Uploaded on: 3/20/2023
Question Text If the coefficients of three consecutive terms in the expansion of $(1+x)^n$ are in the ratio $1:2:3$, find $n$. 6. If the coefficients of four consecutive terms in the expansion of $(1+x)^n$ be $a_1, a_2, a_3, a_4$ respectively, show that $\frac{a_1}{a_1+a_2}+\frac{a_3}{a_3+a_4}=\frac{2a_2}{a_2+a_3}$. [Council Sample Question '13]
Updated On Mar 20, 2023
Topic Complex Number and Binomial Theorem
Subject Mathematics
Class Class 12
Answer Type Video solution: 1
Upvotes 126
Avg. Video Duration 14 min | {"url":"https://askfilo.com/user-question-answers-mathematics/are-in-the-ratio-find-6-if-the-coefficients-of-the-34363635313931","timestamp":"2024-11-13T02:09:16Z","content_type":"text/html","content_length":"422738","record_id":"<urn:uuid:ea3a8717-1671-4502-9156-3cda02e76dcd>","cc-path":"CC-MAIN-2024-46/segments/1730477028303.91/warc/CC-MAIN-20241113004258-20241113034258-00385.warc.gz"} |
How many months are needed for an investment of 869,000 to generate 85,000 in interest, if the agreed rate is 12.5% per year?
98 Answers
9.4 months
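A quick simple-interest check of this answer (a Python illustration; the function name is mine):

def months_to_earn(principal, target_interest, annual_rate):
    years = target_interest / (principal * annual_rate)  # I = P*r*t  =>  t = I/(P*r)
    return 12 * years

print(round(months_to_earn(869_000, 85_000, 0.125), 1))  # 9.4 months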
| {"url":"https://math-master.org/general/how-many-months-are-needed-for-an-investment-of-869-000-to-generate-85-000-in-interest-if-the-agreed-rate-is-12-5-per-year","timestamp":"2024-11-07T17:18:06Z","content_type":"text/html","content_length":"238342","record_id":"<urn:uuid:485c1523-3c96-4948-9dd6-bc08252455ac>","cc-path":"CC-MAIN-2024-46/segments/1730477028000.52/warc/CC-MAIN-20241107150153-20241107180153-00230.warc.gz"} |
Zero Killer Sudoku - Djape .Net
Zero Killer Sudoku
A few visitors to my forum have been creating these puzzles for quite some time. Well, here is one from me.
Everything is exactly the same as in your ordinary Killer Sudoku puzzles, except that there are even fewer clues given to you to start with. Some cells are not joined in any cages, so you don’t have
any starting information about them.
I will start posting these Zero Killer Sudoku puzzles every Sunday on the Daily Killer page. They will usually be fairly difficult.
As of tomorrow, Monday, August 27, there will be a change on the Daily Sudoku variants page. After a few months of (non)consecutive puzzles, I will now start posting Jigsaw Sudoku (irregular nonets) and Hyper Sudoku (Windoku).
Zero Killer Sudoku for Sunday, August 26, 2007. – Difficulty:
(click to download or right-click to save the image!)
To see the solution to this puzzle click here
8 Comments
1. Unless I am picking up the wrong text file, I believe this to be invalid. R89C1 is 3 which works out to be 12. R9C89 is 4 which is 13 making R9C1 2 and R8C1 1. R56C3 is 3 (1,2) disabling 1 in the
rest of N4. The only place in R4 for 1 is R4C4 and making R9C9 3 and R9C8 1 (diagonal). Following this the only places for 1 and 2 in N1 are R12C2 with 2 in R2C2 (diagonal) and 1 in R1C2. This
has the effect of making the combination for R1C67 (5) 2 and 3 and thus the combination for R3C56 (5)1 and 4. Looking at the opposite diagonal R1C9 to R9C1, the only place for 1 is in R3C7, which
places that in direct opposition to R3C56.
Unless, of course, I have picked up the wrong text file, or we are allowed duplicate numbers, or I am missing something somewhere else that I am unfamiliar with.
I also placed the text file in JSudoku which also finds an invalid grid.
2. harvick29, your reasoning is Ok up to this point:
“…and making R9C9 3 and R9C8 1 (diagonal)”.
Why do you think this puzzle is diagonal (X)? It's not indicated in either the puzzle image or the text file. In the text file there is a code “16” (which I use for Zero Killers) in the second position, and if this were a diagonal there would be a “d”. Perhaps JSudoku is misinterpreting this “16”. If you open up the text file in PS v0.4 you won't see any diagonals.
Let me know if you can solve it as non-diagonal puzzle!
3. When I plugged it into JSudoku it showed up as a diagonal.
I will try it again later today, it didn’t actually look that hard, but looks can be deceiving.
4. Flowed very smoothly without needing the diagonal. Only took a few minutes. Looking forward to next Sunday; hope they get more difficult.
5. Great to see this sort of killer – enjoyed the break from designing killers! Looking forward to more.
6. That was a great puzzle, thanks!
7. Tricky. I loved it!
8. Quite simple to solve, but a very nice concept. I look forward to next Sunday’s!
One Trackback
• […] to be the day for Killer Sudoku 10×10 and tomorrow is Sunday, which used to be the day for Zero Killer Sudoku puzzles, quite appropriately, I would say, today’s puzzle is a mix of the two
(and the first […]
This entry was posted in Free sample puzzles, Jigsaw Sudoku, Killer Sudoku, Sudoku Variants. | {"url":"https://djapedjape.com/weekend-special-zero-killer-sudoku/","timestamp":"2024-11-02T05:01:10Z","content_type":"application/xhtml+xml","content_length":"55137","record_id":"<urn:uuid:503599f1-913e-4ac4-92ae-4d5e27fc8725>","cc-path":"CC-MAIN-2024-46/segments/1730477027677.11/warc/CC-MAIN-20241102040949-20241102070949-00237.warc.gz"} |
Divide Decimals By Whole Numbers Worksheet 6th
Divide Decimals By Whole Numbers Worksheet 6th materials serve as foundational tools in mathematics, providing a structured yet versatile platform for learners to explore and grasp mathematical concepts. These worksheets offer a structured approach to understanding numbers, nurturing a solid foundation upon which mathematical proficiency can grow. From the most basic counting exercises to the complexities of advanced computations, Divide Decimals By Whole Numbers Worksheet 6th caters to students of diverse ages and skill levels.
Revealing the Essence of Divide Decimals By Whole Numbers Worksheet 6th
Divide Decimals By Whole Numbers Worksheet 6th
Guide grade 5 and grade 6 students to grasp the concept of dividing decimals by tenths without adding zeros. Our prepping worksheets enhance their skills to answer questions like 4.83 ÷ 2.1 and 2.4 ÷ 0.3.
Download the set
Utilize this set of pdf worksheets and practice dividing whole numbers by decimals in hundredths Follow the steps of long division method and you will fly through the MCQs using mental division
skills too
At their core, Divide Decimals By Whole Numbers Worksheet 6th materials are vehicles for conceptual understanding. They encapsulate a myriad of mathematical concepts, leading learners through the labyrinth of numbers with a collection of engaging and deliberate exercises. These worksheets go beyond standard rote learning, encouraging active engagement and fostering an intuitive understanding of mathematical relationships.
Nurturing Number Sense and Reasoning
Dividing Decimals Worksheet Pdf
The worksheets provide calculation practice for decimal division topics — mental math divisions and long division, including dividing decimals by decimals. They are meant for 5th–6th grades. Free online reading and math for K–5, www.k5learning.com. Dividing Decimals by Whole Numbers, Grade 6 Decimals Worksheet. Find the quotient, e.g., 0.61 ÷ 8 = 0.07625 and 0.35 ÷ 9 ≈ 0.038889.
The heart of Divide Decimals By Whole Numbers Worksheet 6th lies in cultivating number sense — a deep understanding of what numbers mean and how they relate. They encourage exploration, inviting students to dissect arithmetic operations, recognize patterns, and unlock the logic of sequences. Through thought-provoking challenges and logical puzzles, these worksheets become gateways to developing reasoning skills, nurturing the analytical minds of budding mathematicians.
From Theory to Real-World Application
Dividing Decimals Worksheets 6th Grade Worksheets For Kindergarten
Using these decimal division worksheets will help your child to understand how division and multiplication are related and apply their division facts up to 10×10 to answer related questions involving decimals to 1 dp. These sheets are aimed at
Dividing Decimals by Whole Numbers, Video 93 on www.corbettmaths.com. Question 1: Four friends share 6.52 equally. How much do they each receive? Question 2: James cuts 3.65 m of rope into 5 pieces of equal length. How long is each piece of rope? Question 3: The perimeter of a square is 53.3 cm. Work out the length of each side.
Divide Decimals By Whole Numbers Worksheet 6th materials act as bridges linking theoretical abstractions with the tangible realities of daily life. By weaving practical scenarios into mathematical exercises, learners see the relevance of numbers in their surroundings. From budgeting and measurement conversions to reading statistical data, these worksheets equip pupils to apply their mathematical expertise beyond the boundaries of the classroom.
Diverse Tools and Techniques
Adaptability is inherent in Divide Decimals By Whole Numbers Worksheet 6th, which draws on a range of instructional tools to accommodate diverse learning styles. Visual aids such as number lines, manipulatives, and digital resources serve as companions in visualizing abstract ideas. This varied approach ensures inclusivity, accommodating learners with different preferences, strengths, and cognitive styles.
Inclusivity and Cultural Relevance
In an increasingly diverse world, Divide Decimals By Whole Numbers Worksheet 6th embraces inclusivity. The worksheets cross cultural boundaries, integrating examples and problems that resonate with learners from varied backgrounds. By including culturally relevant contexts, they cultivate an environment where every student feels represented and valued, strengthening their connection with mathematical ideas.
Crafting a Path to Mathematical Mastery
Divide Decimals By Whole Numbers Worksheet 6th charts a course toward mathematical fluency. The worksheets instill perseverance, critical thinking, and problem-solving skills — essential qualities not only in mathematics but in many facets of life. They encourage learners to navigate the intricate terrain of numbers, nurturing a deep appreciation for the elegance and logic inherent in mathematics.
Embracing the Future of Education
In an era marked by technological advancement, Divide Decimals By Whole Numbers Worksheet 6th adapts readily to digital platforms. Interactive interfaces and digital resources extend traditional learning, offering immersive experiences that transcend spatial and temporal limits. This blend of established methods with technological developments heralds a promising era in education, fostering a more dynamic and engaging learning environment.
Final thought: Embracing the Magic of Numbers
Divide Decimals By Whole Numbers Worksheet 6th illustrates the magic inherent in mathematics — a journey of exploration, discovery, and mastery. The worksheets transcend conventional pedagogy, serving as catalysts for igniting curiosity and inquiry. Through them, students embark on an odyssey, unlocking the world of numbers — one problem, one solution, at a time.
Dividing Decimals Worksheets Math Monks
Dividing Decimals By Whole Numbers Worksheets 99Worksheets
Check more of Divide Decimals By Whole Numbers Worksheet 6th below
Dividing Decimals Practice Worksheet
Dividing Decimals Worksheets 6th Grade Worksheets For Kindergarten
Grade 6 Math Worksheet Decimals Long Division Of Decimals By Whole Grade 6 Math Worksheet
Divide Whole Numbers By Decimals Worksheets For Kids Online SplashLearn
Divide Decimals By Whole Numbers In 6th Grade
Division Of Decimals By Whole Numbers Worksheets Worksheets Master
Dividing Decimals And Whole Numbers Worksheets Math Worksheets 4 Kids
Division Of Decimal Numbers Worksheets Math Salamanders
On this page we have some worked examples and also some worksheets for dividing decimals by whole numbers The long division worksheets involve solving problems involving decimals to 1dp 2dp and 3dp
Dividing Decimals By Whole Numbers Worksheet Dividing Whole Numbers By Decimals Worksheet By
Dividing Decimals By Whole Numbers Worksheets Teaching Resources
Dividing Decimals By Whole Numbers Worksheet | {"url":"https://szukarka.net/divide-decimals-by-whole-numbers-worksheet-6th","timestamp":"2024-11-03T15:53:01Z","content_type":"text/html","content_length":"26388","record_id":"<urn:uuid:fc26df97-4489-49ac-bec5-52afd9c8f0bf>","cc-path":"CC-MAIN-2024-46/segments/1730477027779.22/warc/CC-MAIN-20241103145859-20241103175859-00655.warc.gz"} |
Mastering Physics Mechanics 2 - Pacificil
Figure 10.27 Calculating the moment of inertia for a thin disk about an axis through its center. Zorch, an archenemy of Superman, decides to slow Earth's rotation to once per 28.0 h by exerting an opposing force at and parallel to the equator. Superman is not immediately concerned, because he knows Zorch can only exert a force of 4.00 × 10^7 N (a little greater than a Saturn V rocket's thrust). How long must Zorch push with this force to accomplish his goal? (This period gives Superman time to devote to other villains.) Explicitly show how you follow the steps in the Problem-Solving Strategy for Rotational Dynamics section.
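For orientation, a worked sketch of this problem using standard Earth data (our numbers, not the textbook's solution): the torque is τ = F·R = (4.00 × 10^7 N)(6.37 × 10^6 m) ≈ 2.55 × 10^14 N·m; treating Earth as a uniform sphere, I = (2/5)MR^2 ≈ (2/5)(5.97 × 10^24 kg)(6.37 × 10^6 m)^2 ≈ 9.69 × 10^37 kg·m^2; slowing from a 24.0 h day to a 28.0 h day requires Δω ≈ 1.04 × 10^-5 rad/s, so t = IΔω/τ ≈ 4.0 × 10^18 s — on the order of 10^11 years, which is why Superman is not concerned.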
A continuous mass distribution contains infinitely many point-mass particles. A center of mass can be defined for such a system of particles with the help of integration: first, break the system into infinitesimal point masses, then integrate to get the location of the center of mass. Derive this result by starting with the result for a solid sphere. Imagine the spherical shell to be created by subtracting from the solid sphere of radius R a solid sphere with a slightly smaller radius.
The larger the inertia, the greater the force required to bring about a change in its velocity in a given amount of time. The scalar moments of inertia appear as elements in a matrix when a system of particles is assembled into a rigid body that moves in three-dimensional space. This inertia matrix appears in the calculation of the angular momentum, kinetic energy, and resultant torque of the rigid system of particles. Here, k is known as the radius of gyration of the body about the given axis.
Mention the factors on which the moment of inertia depends. Ben Tooclose is being chased through the woods by a bull moose that he was attempting to photograph. The enormous mass of the bull moose is extremely intimidating.
The moment of inertia about an axis perpendicular to the motion of the rigid system and through the center of mass is known as the polar moment of inertia. Specifically, it is the second moment of mass with respect to the orthogonal distance from an axis. This defines the relative position vector and the velocity vector for the rigid system of particles moving in a plane.
This would be the brick which offers the most resistance. This very method of detecting the mass of an object can be used on Earth as well as in places where gravitational forces are negligible for bricks. An object in motion will maintain its state of motion. The presence of an unbalanced force changes the velocity of the object. | {"url":"https://pacificil.com/mastering-physics-mechanics-2/","timestamp":"2024-11-11T01:50:10Z","content_type":"text/html","content_length":"41912","record_id":"<urn:uuid:82755fce-7d38-4faf-9a42-6c33f87ecd78>","cc-path":"CC-MAIN-2024-46/segments/1730477028202.29/warc/CC-MAIN-20241110233206-20241111023206-00142.warc.gz"} |
Jyoti Prakash Saha
Study materials for Math Olympiad
• Available for download.
• Students enrolled in 8th, 9th, 10th, 11th, 12th standard, or anyone interested may use the notes posted at the above links.
International Olympiads
Participation of India in IMO, APMO, EGMO
For ease of reference, here is a brief overview of participation of India in IMO, APMO, EGMO in recent times, which relies on the information available at the following webpages, which are
significantly more detailed.
There are several Mathematical Olympiads of International repute, where students participated from India. They include
Hello! I’m Jyoti Prakash Saha.
• 🔭 I’m an Assistant Professor at the Department of Mathematics, IISER Bhopal.
• I am the Regional Coordinator of the Mathematics Olympiad program for the Madhya Pradesh region.
• For regional coordination, I work together with Dr. Kartick Adhikari, who is the Joint Regional Coordinator of the Mathematics Olympiad program for the Madhya Pradesh region.
• 📫 For any queries related to the Mathematics Olympiad program, please feel free to write to jpsaha@iiserb.ac.in.
Mathematical Olympiad program in India
The Homi Bhabha Centre for Science Education (HBCSE) organizes the Math Olympiad program in India. The Math Olympiad program organized by HBCSE, is the only one leading to participation in the
following International Mathematical Olympiads — IMO, APMO, EGMO. No other contests are recognized.
□ The students enrolled in the 8th, 9th, 10th, 11th or 12th standard may participate in IOQM, provided certain additional conditions are met. The precise details are available at the webpage of
the Homi Bhabha Centre for Science Education (HBCSE). Please visit this webpage for the updates and further details.
Some stages of the Math Olympiad program 2024 — 2025 are
• IOQM (Indian Olympiad Qualifier in Mathematics)
□ Scheduled on 8th September, 2024, during 10:00hrs — 13:00hrs.
□ The paper consists of 30 questions worth 100 marks in total.
□ There are 10 questions worth 2 marks, 10 questions worth 3 marks, 10 questions worth 5 marks.
□ Websites: MTAI, HBCSE.
□ A few problems from IOQM 2023 have been discussed here.
• RMO (Regional Mathematical Olympiad)
• INMO (Indian National Mathematical Olympiad)
• IMOTC (International Mathematical Olympiad Training Camp)
□ A month-long training camp, held sometime from April to May.
□ Through the TSTs (Team Selection Tests), it leads to the selection of six students to represent India at IMO.
□ Websites: HBCSE.
• PDC (Pre-Departure Camp)
□ Held before leaving for IMO.
□ Websites: HBCSE.
• Past Question papers
The past papers are available at the webpage of HBCSE. The links to the past Question papers along with AoPS links are available at this page.
□ For EGMO, the stages are IOQM, RMO, INMO, EGMOTC, EGMOPDC, EGMO (along with certain criteria at the stages).
□ For APMO, the stages are IOQM, RMO, INMO, APMO (along with certain criteria at the stages).
□ INMOTC is a camp, organized before INMO.
is a British mathematician. He has been the leader of the UK IMO team during 2002–2010, 2013–2018, 2022. He has been awarded the IMO Golden Microphone thrice (during 2006, 2009, 2014).
Click here to know more
The nations which do consistently well at this competition (IMO) must have at least one (and probably at least two) of the following attributes:
□ A large population.
□ A significant proportion of its population in receipt of a good education.
□ A well-organized training infrastructure to support mathematics competitions.
□ A culture which values intellectual achievement.
Alternatively, you need a cloning facility and a relaxed regulatory framework.
From time to time I am approached by students interested in advice about becoming more effective contestants in mathematics olympiads. Here it is.
Do lots and lots, and then more, past papers. Begin with national mathematical olympiads, starting with the less difficult papers. Now, I am not going to risk insulting any countries by saying
that their national maths olympiads are easy. Work it out for yourself. Countries which have small populations, and no great tradition of success in maths competitions, will generally have easier
questions. When you become very good at those, then move on to hard national maths olympiad problems and the less demanding international competitions.
I am often approached by students from developing countries. Sometimes students complain that there is no satisfactory educational or training regime in my country. Please check that this is
true! The IMO contact person in your country may tell you otherwise. In the worst case, where there is no competent organization providing free (or nearly free) assistance to young
mathematicians, then you will have to help yourself. Try to locate other young people in your country who are interested in mathematics, and work together. Fortunately there is a vast collection
of free resources on the internet: over 25 thousand past problems from maths competitions are available at the extensive Art of Problem Solving site, and if you explore, you will find discussions
of solutions. Don’t look up the solutions too quickly (be prepared to spend many hours thinking about each problem). If you want to start on some problems which are less demanding than a full
national maths olympiad, here are plenty of British Maths Olympiad round 1 problems. The round 2 problems are more challenging.
Goal of this website (aka Why another website? What is its use?!)
Click here to know
• To provide a brief introduction to Mathematical Olympiad.
• To serve as a website for the MOPSS program at IISER Bhopal, to be held in person, from August 2024 to November 2024.
□ We have plans to post notes containing the details of those sessions.
• To provide handouts on the topics of Algebra, Combinatorics, Geometry, and Number Theory, and to keep it posted in an organized manner across different sub-topics.
□ These notes may be useful to the students who would like to have a look at some of the past RMO problems before getting started, or just curious about it.
□ These notes may also serve as a reference for anyone who would like to provide guidance to students, but may not have enough time to organize the relevant questions across the topics and sub-topics.
• To provide assistance to anyone on Mathematics Olympiad.
I am enthusiastic about math/math olympiads and/or teaching math to high schoolers. How may I contribute?
Click here to know
• One may reach to schools, to high schoolers.
• One may explain about Olympiads, and spread awareness about it.
• One may encourage people (for instance, students, teachers or anyone enthusiastic/curious about math olympiad) to go through this website (and suggest a careful reading of the homepage!).
• Next, a student interested in math olympiad, may browse through the handouts posted here (this will grow with time).
• A person with passion in teaching high school students could use the handouts as a problem bag, or in other way.
• What else? For instance, if one has interest in a science subject(s) other than (or parallel to) mathematics, then one may refer to the webpage of HBCSE, which has information about olympiads (
past papers) on the following subjects, and may repeat the same process as above adapted to those subjects!
□ Astronomy
□ Biology
□ Chemistry
□ Junior Science
□ Physics
I do not have much time for the above, but I find it interesting. Is there something that I can do?
Click here to know
• Yes! You could spread the message, only if you find it worth doing and willing to do so, by
Jul 03, 2024: The deadline for submitting applications for MOPSS has been extended to 21st July, 2024. The students enrolled in the 11th or 12th standard may write to Jyoti Prakash Saha (jpsaha@iiserb.ac.in) if they would like to participate in MOPSS.
Jun 01, 2024: Mathematics Olympiad Problem Solving Sessions (MOPSS) will be organized at IISER Bhopal. Please visit this page for the details.
Jul 19, 2024 Past Papers
Jul 17, 2024 Rational and irrational numbers
Jul 05, 2024 Arithmetic progressions
Jun 26, 2024 IOQM
Jun 23, 2024 More on Polynomials
Jun 22, 2024 Binomial theorem
Jun 21, 2024 Functional equations
Jun 20, 2024 Reduction of the degree by taking a difference
Jun 19, 2024 Growth of Polynomials
Jun 18, 2024 Quartics
Jun 17, 2024 Cubic polynomials
Jun 16, 2024 Quadratic polynomials
Jun 15, 2024 Polynomials
Jun 14, 2024 Invariance principle
Jun 13, 2024 $$ a^3+b^3+c^3 - 3abc $$
Jun 12, 2024 Warm Up
Jun 03, 2024 Problem set for MOPSS | {"url":"https://jpsaha.github.io/MOTP/","timestamp":"2024-11-06T07:17:16Z","content_type":"text/html","content_length":"52234","record_id":"<urn:uuid:a5ec9941-69c8-42c7-9a98-f36d8a7f62ab>","cc-path":"CC-MAIN-2024-46/segments/1730477027910.12/warc/CC-MAIN-20241106065928-20241106095928-00462.warc.gz"} |
ThmDex – An index of mathematical definitions, results, and conjectures.
▼ Set of symbols
▼ Alphabet
▼ Deduction system
▼ Theory
▼ Zermelo-Fraenkel set theory
▼ Set
▼ Binary cartesian set product
▼ Binary relation
▼ Map
▼ Operation
▼ N-operation
▼ Binary operation
▼ Enclosed binary operation
▼ Groupoid
▼ Semigroup
▼ Standard N-operation
▼ Mean
▼ Complex mean
▼ Real mean
▶ R3568: Real AM-GM inequality
▶ R4666: Real GM-HM inequality
▶ R4118: Real arithmetic expression for unsigned real geometric mean
▶ R5185: Tight lower bound to a finite product of positive real numbers
▶ R5182: Tight upper bound to a finite product of unsigned real numbers
▶ R5211: Tight upper bound to a product of three unsigned real numbers
▶ R5210: Tight upper bound to a product of two unsigned real numbers
▶ R1557: Weighted real AM-GM inequality | {"url":"https://thmdex.org/d/2455","timestamp":"2024-11-15T04:36:20Z","content_type":"text/html","content_length":"11465","record_id":"<urn:uuid:ab8748cd-f197-448f-bc02-af69a4f400e7>","cc-path":"CC-MAIN-2024-46/segments/1730477400050.97/warc/CC-MAIN-20241115021900-20241115051900-00788.warc.gz"} |
A certain gas has a density of 1.30 g/L at STP. What will its density, in g/L, be at 546 K and 380 torr pressure? | Socratic
1 Answer
The gas will have a density of 0.325 g/L at 546 K and 380 torr — one quarter of its density at STP.
There are two ways to prove this - a longer one and a shorter one. Here's the longer one.
Standard Temperature and Pressure - STP - is defined as having a temperature of $\text{273.15 K}$ and a pressure of $\text{1.00 atm}$. You can use the ideal gas law equation for the two states at
which the gas is found - for STP (1) and for the second state (2) (not-STP). So,
${P}_{1} {V}_{1} = n R {T}_{1}$ - the number of moles of gas does not change;
The number of moles is defined as the ratio between the gas' mass and its molar mass; plug this equation into the above one
$n = \frac{m}{M_m} \implies P_1 V_1 = \frac{m}{M_m} \cdot R T_1 = m \cdot T_1 \cdot \frac{R}{M_m}$
Solve for $\frac{R}{M_m}$, which is a constant (a ratio of two constants)
$\frac{R}{M_m} = \frac{P_1 V_1}{m \cdot T_1}$
Density is defined as mass per unit of volume, $\rho = \frac{m}{V}$; if you look closely, you'll notice that the above equation has $V_1/m$, which is equal to $\frac{1}{\rho_1}$. Plug this into the equation to get
$\frac{R}{M_m} = \frac{P_1}{T_1} \cdot \frac{1}{\rho_1}$
If you write the equation for the non-STP state, you'll get the exact same thing, except you'll now have $\rho_2$
$\frac{R}{M_m} = \frac{P_2}{T_2} \cdot \frac{1}{\rho_2}$
This means that
$\frac{P_1}{T_1} \cdot \frac{1}{\rho_1} = \frac{P_2}{T_2} \cdot \frac{1}{\rho_2}$
In your case, 380 torr is $\frac{380}{760} = 0.500$ atm, so
$\frac{1.00\ \text{atm}}{273\ \text{K}} \cdot \frac{1}{1.30\ \text{g/L}} = \frac{0.500\ \text{atm}}{546\ \text{K}} \cdot \frac{1}{\rho_2} \implies \rho_2 = 0.325\ \text{g/L}$
Now for the shorter answer. Notice that the pressure drops by half and the temperature doubles. Since density is proportional to $P/T$, halving the pressure halves the density and doubling the temperature halves it again, so $\rho_2 = \frac{1.30}{4} = 0.325$ g/L.
Equivalently: with the mass fixed and $V \propto T/P$, the volume quadruples, so the density falls to one quarter of its STP value.
6965 views around the world | {"url":"https://socratic.org/questions/a-certain-gas-has-a-density-of-1-30-g-l-at-stp-what-will-its-density-in-g-l-be-a#131064","timestamp":"2024-11-11T07:44:33Z","content_type":"text/html","content_length":"37554","record_id":"<urn:uuid:a01e9d67-f484-4af1-b756-6ba66c310834>","cc-path":"CC-MAIN-2024-46/segments/1730477028220.42/warc/CC-MAIN-20241111060327-20241111090327-00525.warc.gz"} |
Talk:Zeckendorf number representation - Rosetta Code
Consensus on the sequence
Googling around, there seems to be a lack of consensus on the sequence of Fibonacci numbers used and how the sequence is indexed. If you include both F(1) = 1 and F(2) = 1, then the representation 1
is not unique. Similarly, if you include F(0) = 0, then the representation of 0 is not unique.
Mathworld's page on Zeckendorf's theorem mentions that it applies to {F-1}, that is, the Fibonacci sequence with one of the 1s removed, and presumably without the 0. —Sonia 22:39, 10 October 2012
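For concreteness, here is a quick greedy sketch over that sequence (1, 2, 3, 5, 8, … — i.e. {F}-1 with the 0 and the duplicate 1 dropped); just illustrative Python for this talk page, not a task submission:

```python
def zeckendorf(n):
    # Fibonacci-like sequence 1, 2, 3, 5, 8, ... (no 0, only one 1)
    fibs = [1, 2]
    while fibs[-1] < n:
        fibs.append(fibs[-1] + fibs[-2])
    digits = ""
    for f in reversed(fibs):
        if f <= n:
            digits += "1"
            n -= f
        elif digits:
            digits += "0"
    return digits or "0"

print([zeckendorf(i) for i in range(1, 8)])
# ['1', '10', '100', '101', '1000', '1001', '1010']
```

Because the greedy step always removes the largest member ≤ n, no two consecutive members are ever used, and with the duplicate 1 and the 0 excluded, each n gets exactly one bit string.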
This is me asking for help. Help!
Anyone care to suggest a rework of the task that would be better. Or would a re-wording allowing different starting conditions so long as they were stated suffice? Technically the task is still draft
but it would be good if any change meant minor or no change to the existing solutions of others. P.S. Thanks Sonia, Ledrug, TimToady for the comments so far. --Paddy3118 20:02, 11 October 2012 (UTC)
Perl 6, wrong fib sequence
Could there also be a submission for Perl6 that used the sequence starting 1, 1, 2, ... The present example could be put second as an alternative, with the present description of how it differs from
the task description if you like. --Paddy3118 12:14, 11 October 2012 (UTC) | {"url":"https://rosettacode.org/wiki/User:Rabuf?oldid=220420","timestamp":"2024-11-11T13:34:28Z","content_type":"text/html","content_length":"51103","record_id":"<urn:uuid:201a5d0c-ce2e-4799-b7b3-73f83a498339>","cc-path":"CC-MAIN-2024-46/segments/1730477028230.68/warc/CC-MAIN-20241111123424-20241111153424-00018.warc.gz"} |
Intermediate Algebra for College Students (7th Edition) Chapter 4 - Section 4.3 - Equations and Inequalities Involving Absolute Value - Exercise Set - Page 283 50
$(-\infty,-3)\cup (7,\infty)$
Work Step by Step
We are asked to solve the absolute value inequality: $|x-2|\gt 5$ We remove the absolute value sign by splitting the inequality into two as follows: $x-2 \lt -5$ or $x-2 \gt 5$ Now, we solve for
(isolate) $x$: $x-2+2 \lt -5+2$ or $x-2+2 \gt 5+2$ $x \lt -3$ or $x \gt 7$ In interval notation, this can be written as: $(-\infty,-3)\cup (7,\infty)$ See the number line graph below. | {"url":"https://www.gradesaver.com/textbooks/math/algebra/intermediate-algebra-for-college-students-7th-edition/chapter-4-section-4-3-equations-and-inequalities-involving-absolute-value-exercise-set-page-283/50","timestamp":"2024-11-10T11:17:46Z","content_type":"text/html","content_length":"86792","record_id":"<urn:uuid:b16b41d7-0093-45e1-a838-c68f7372dbc4>","cc-path":"CC-MAIN-2024-46/segments/1730477028186.38/warc/CC-MAIN-20241110103354-20241110133354-00736.warc.gz"} |
How Many 20 Oz In A Gallon?
In our increasingly fossil fuel dependent society, figuring out how much gas you have to spend can be tricky! Most people these days have GPS systems in their cars that tell them where they are at
any given time, but what about when they’re going somewhere or coming back home?
Some car brands will release information about your gas mileage for different vehicles, but this only tells you average numbers across all conditions. It would also need to know how fast you were
driving before it could calculate how far you traveled!
There is a way to get more precise information about how many miles per gallon (mpg) you are getting every time you fill up though. This info comes from something called gasoline octane rating.
Octane ratings describe how resistant the fuel is to knocking — uncontrolled combustion in the cylinder — not how much energy it contains. The higher the number, the more compression the fuel can tolerate before igniting on its own, which is why high-compression performance engines call for it.
The national standard for regular unleaded gasoline is 87 octane. Anything above 90 or so is considered premium grade or high-octane gas, and a lot of areas around the country actually sell 93 as their premium instead of 91!
This article will go into detail about why this matters and how to read yours correctly.
Conversions of volume to weight
When you are talking about fluid consumption, it is important to know what liquid we are talking about!
If you are drinking water then your normalizer should be half a gallon per day which is one bottle every two hours. If you drink milk then try to aim for three cups per cup of coffee or tea, or any
other beverage!
For soda, plain water is better than diet drinks as this can sometimes have added sugar content that could contribute to obesity. One popular brand has 4 ounces (or 8 tablespoons) of sugar in each 16
ounce can!
To calculate how many bottles of water you use daily, just subtract your current level from a full bottle and divide by 2. This will give you a rough sense of how close you are to your goal!
As you can see, being aware of your fluid intake is well worth the effort.
Converting volume to area
Now that we have done some calculations using diameter, we can move on to another way to work out how many bottles of liquid you have: by looking at how much space they take up.
How much space a bottle occupies depends on two things: its dimensions and how full it is. The amount of liquid inside is called the total liquid content, while the most a container can hold is known as its vessel or container capacity.
To calculate the space a bottle takes up, you need to know its height and its diameter. For a roughly cylindrical bottle, the rule is simple: volume = π × (diameter ÷ 2)² × height. Once you know the volume of a single bottle, divide the space you have available by that figure and you know how many bottles will fit.
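As a quick illustration with made-up but realistic dimensions: a 20 oz bottle holds about 591 mL, and a cylinder 7 cm across and 15.4 cm tall has volume π × 3.5² × 15.4 ≈ 592 cm³ — essentially the same thing. So a shelf 31 cm deep fits two such bottles front to back.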
Exact number of 20 oz in a gallon
The amount of liquid you have in your car depends on how much gas you have, what type of vehicle you have, and whether or not you are fully loaded with drinks and snacks.
The easiest way to determine this is by looking at the fuel gauge. It will tell you exactly how much gas there is in the tank!
But what about when you run out of gas? Or if you need more than one liter of gasoline?
Fortunately, it’s easy to find the answers to these questions in math. And while some people may think that figuring out fluid levels is too complicated, we will break down all the steps here for you
so that you can easily understand them.
So let us begin!
How many liters in 1 US gallon?
A standard U.S. gallon contains 3.785 L of liquid, so we can use that as our base unit for measuring liquids. This means that one liter equals 1 ÷ 3.785 ≈ 0.264 of a U.S. gallon.
Converting between the two is just multiplication: gallons × 3.785 gives litres, and litres × 0.264 gives gallons. So 2 US gallons → 2 × 3.785 = 7.57 L.
The same conversions apply to other fluids, such as fuel, whose level is sometimes quoted in ounces or litres instead of gallons.
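And to answer the title question directly: a US gallon is 128 fluid ounces, so 128 ÷ 20 = 6.4 — six full 20 oz bottles per gallon, with 8 oz left over.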
Approximate number of 20 oz in a gallon
The average person consumes around 2 gallons of water per day, which is about 53 litres — or roughly 117 pounds of water — every week!
The vast majority of this (around 4-5 gallons) is consumed at work, for taking showers, washing hands, etc. Another 1 to 3 gallons are typically spent during the night when we wake up and need more
The last 0.5–1 gallon is usually spent while sleeping, and some people drink an additional quart of water before bed. This means that most of us spend less than 10% of our daily intake drinking
With all these numbers factoring in, the average person uses about 5 ounces of pure liquid alcohol (i.e. vodka, gin, whiskey, etc.
Tips for measuring liquid properly
The second most important factor when it comes to knowing how many ounces are in a gallon is learning how to measure liquids correctly. You should know that there are two ways to determine this!
The first way is by using a darby gun. A Darby gun contains rods that are sized appropriately depending on what you want to test. For example, if you wanted to find the amount of water in milk then
you would use distilled water which has no dissolved particles. By moving the rod through the liquid, you can determine how much water there is.
For another example, if you wanted to check the density of olive oil then you could use sunflower oil as a carrier fluid. Again, we need to make sure our darby gun isn’t mixed up with water or oil so
those must be dried and pre-weighed. After the oil is poured into the tube, pull out the rod and see how much oil there is!
This method only works for denser liquids than water though so it cannot be used to figure out how much air there is in a container. This article will not go into more detail about measurement types
but I do recommend looking into them as they are very helpful!
The second way to do this is via a gascope. A gascope is similar to a glass meter burette except it does not require any power source.
Know the difference between a gallon and a quart
A quart is simply a quarter of a gallon: a US gallon holds 128 fluid ounces, so a quart holds 32. Beyond the names, there are two main ways to judge how much fluid you have in a given volume. The first is by looking at the liquid's density — its weight compared to water. If the ratio is less than one-to-one (less dense), a given weight of the liquid takes up more space than the same weight of water would!
The second way is to use the height of the liquid relative to the container it is in. For example, if the top of the liquid reaches as high as the rim of the bottle, the bottle is full — and "half full" simply means the level sits at half the bottle's height.
Pour liquid into the nearest ounce amounts
There is no exact way to determine how many ounces of a specific product you have, but there are some rules of thumb that work well. First, remember that one pound equals 454 grams!
Converting from pounds to ounces is easy when using dry ingredients such as flour or powdery substances like sugar: one pound is 16 ounces, so simply multiply the number of pounds by 16. For example, two pounds of dried rice — which we will assume works out to about four cups — is 2 × 16 = 32 ounces, or roughly 8 ounces per cup.
For liquids, there is not an easy rule of thumb for determining how much water you have.
Check the expiration date
Recent changes to the way fuel is measured makes comparing gas prices very difficult. When gasoline was first introduced, it was sold in what we call octane levels or “octanes” for your car.
The octane number was an indication of how resistant the fuel is to knocking and which cars can run it with no problems. A higher number does not mean more power by itself; it means the fuel can tolerate more compression before igniting, so engines designed around it can be tuned harder!
As time went on people noticed that although the price per liter remained the same, the amount you get in your tank changed.
This is due to the fact that there is not just one grade of high performance fuel, there are two! The lower numbers refer to 100% regular unleaded gasoline while the higher ones indicate 95+ percent
So if you pay the exact same amount per gallon at both stations, you will get slightly different amounts depending on which fuel station you go to. This is why it is important to know how many
gallons you get from a full tank!
A lot of sites and sources use the standard definition of 1 US gallon equals 3.785 L. That can still cause confusion, though, as some countries use a different gallon altogether! For example, the UK imperial gallon is about 4.55 L — noticeably larger than the US gallon. | {"url":"https://cumbernauld-media.com/how-many-20-oz-in-a-gallon/","timestamp":"2024-11-09T10:45:57Z","content_type":"text/html","content_length":"162936","record_id":"<urn:uuid:a88f9339-65d0-43c9-ae67-6d3eabddb8c2>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.75/warc/CC-MAIN-20241109085148-20241109115148-00187.warc.gz"} |
Development of Various Types of Independent Phase Based Pulsewidth Modulation Techniques for Three-Phase Voltage Source Inverters
School of Electrical and Electronics Engineering, Chung-Ang University, Seoul 06974, Republic of Korea
Department of Electrical and Computer Engineering, Mississippi State University, Starkville, MS 39762, USA
Author to whom correspondence should be addressed.
Submission received: 24 October 2023 / Revised: 13 November 2023 / Accepted: 21 November 2023 / Published: 27 November 2023
Discontinuous pulse-width-modulation (DPWM) methods have been extensively used in the industrial area to reduce overall losses, which decreases the corresponding thermal stress on the power switches
of converters. However, local thermal overload can arise due to different aging conditions of semiconductor devices or failure in the cooling system. This leads to reduced reliability of the
converter system due to the low expected lifespan of the most aged switches or phase legs. In this paper, the modified DPWM strategies for independent control of per-phase switching loss are
introduced to deal with this matter. The proposed per-phase DPWM techniques are generated by modifying the conventional three-phase DPWM methods for reducing the switching loss in a specific leg,
whereas the output performance is not degraded. This paper reports an analysis of output performance — including output-current total harmonic distortion (THD) and power loss of the switching devices — for the various modified DPWM strategies for independent control of per-phase switching loss, which are applicable in 2-level 3-phase voltage source inverters (2L3P VSIs). The results are compared to the corresponding continuous PWM technique to verify and analyze the effectiveness and accuracy of the modified strategies.
1. Introduction
The increasing number of power electronics applications in industry, consumer electronics, and transportation emphasizes the rising significance of power converters in the power system. Their importance becomes evident in their role in offering grid services. However, there is potential for improvement in converter reliability, as costly failures and system downtime are associated with them []. Due to the discovery that power semiconductor devices within power modules experience frequent failures [], significant efforts have been dedicated to enhancing the expected lifespan of these components. Semiconductor manufacturers optimize the hardware to increase reliability, but this also sharply raises the cost of devices. The reliability of power modules is closely linked to thermal stress, as evidenced by the correlation between thermal cycles and lifespan demonstrated in [], which has been corroborated by several follow-up studies []. Therefore, various control strategies which aim at increasing the reliability of power converters by controlling thermal stress have been extensively developed []. Among the developed solutions, decreasing the switching loss is widely used because the switching loss of a power switch is predominantly determined by the switching frequency. The research presented in [] proposed regulating the switching frequency as a means to relieve thermal stress. However, this approach leads to an undesired fluctuation in the output current spectrum, making it unsuitable for many applications. Typically, the DPWM is employed to minimize switching losses by clamping the output voltage at either the positive or negative dc-link voltage []. This leads to a significant decrease in losses when compared to continuous PWM, since the power semiconductors remain unswitched during the clamping intervals. However, the clamping behavior leads to an increase in the THD of the output currents, necessitating a larger output filter or a higher switching frequency.
Under ideal operating conditions, the three-phase converters can achieve a similar aging condition in each phase leg. However, in practical operating conditions, individual phase legs within the three-phase converter may exhibit diverse aging conditions or expected lifespans arising from uneven stress during operation or prior replacement of power switches. Furthermore, manufacturing techniques have been recognized as a potential origin of power semiconductor device failures []. However, the active thermal control methods mentioned earlier do not take into consideration the case that the converter legs could undergo differing aging conditions despite efforts to enhance the converter's overall reliability. Another approach to managing switching losses at the device level involves gate control. The relationship between the gate voltage, resistor, and the lifespan of power devices has been established through a precise field mission profile []. The gate control does not impact the converter's output current ripple, but it necessitates an additional circuit, which increases the system's complexity. The gate drive voltage has an impact on saturation voltage and switching loss, allowing for the regulation of power device losses. When the operating temperature of power transistors increases, their on-state resistance and power losses increase as well. To mitigate this, it is recommended to increase the amplitude of the applied driving level []. Nevertheless, the precise control of voltage levels required for this technique makes it impractical. In addition, the author in [] proposes a voltage and current control scheme for a 2-level 4-wire converter that aims at controlling phase output power depending on its aging condition. Nevertheless, this approach has its constraints because of the converter's design. The author in [] proposes a hybrid offset approach to control the phase switching frequency following the aging degree. Since the output current is directly used to identify the clamping areas, this method is only effective when the power factor is close to unity, where the output voltage and current are in phase. An approach based on modifying the DPWM for individual loss reduction for each phase of the voltage source rectifier is proposed in []. However, this study only considers a single case of the DPWM technique for per-phase control, which cannot give a comprehensive study about per-phase DPWM techniques. Hence, it is essential to implement various per-phase control strategies that focus on minimizing switching losses to prolong the lifespan of a specific phase within the inverter.
In this paper, the modified DPWM strategies for independent control of per-phase switching loss, aimed at delaying the failure of the most aged phase leg to extend the lifespan of the entire converter, are proposed. Different from conventional three-phase DPWM techniques, the per-phase DPWM strategies generate clamping regions for only the most aged phase leg, whereas the two remaining phase legs are operated as in continuous PWM. This increases the lifetime of the most aged leg while avoiding excessive degradation of the converter's output performance. The simulation and experimental results are explored by implementing various modified per-phase DPWM strategies in a 2L3P VSI. In Section 2 of this paper, an initial overview is provided concerning the 2L3P VSI and the principle of previous three-phase DPWM strategies. In Section 3, the modified DPWM strategies for independent control of per-phase switching loss are presented. The subsequent Section, Section 4, outlines an evaluation conducted through simulation, accompanied by experimental findings. The final Section offers a summary of the comparative assessment of the various modified per-phase DPWM techniques proposed in this study.
2. Three-Phase DPWM Techniques
The typical configuration of a 2L3P VSI is shown in Figure 1, where $V_{dc}$ stands for the dc-link voltage and a virtual neutral point is defined at the midpoint of the dc-link. $S_{x1}$ and $S_{x2}$ $(x = a, b, c)$ indicate the switching patterns for the upper and lower switches in each phase, respectively, and $i_{ox}$ $(x = a, b, c)$ stands for the phase output currents. The three pole-voltage references, or modulation voltages, of the three-phase VSI, $v_{mod,x}$ $(x = a, b, c)$, can be defined as:
$v_{mod,a} = v_{ref,a} + v_{ZSV}$, $v_{mod,b} = v_{ref,b} + v_{ZSV}$, $v_{mod,c} = v_{ref,c} + v_{ZSV}$ (1)
where $v_{ref,x}$ $(x = a, b, c)$ is the phase reference voltage and $v_{ZSV}$ is the zero-sequence voltage signal. The zero-sequence voltage $v_{ZSV}$ corresponds to the voltage difference between the neutral point of the load and the midpoint of the dc-link []. When $v_{ZSV} = 0$, it yields sinusoidal PWM.
The modulation voltages $v_{mod,x}$ $(x = a, b, c)$ are physically limited by the dc-link voltage $V_{dc}$:
$-0.5 V_{dc} \le v_{mod,x} \le 0.5 V_{dc}$ (2)
In the linear modulation range, the modulation voltages are physically limited as shown in Equation (2). Hence, from Equations (1) and (2), the condition on the zero-sequence voltage can be obtained as follows []:
$-0.5 V_{dc} - V_{min} \le v_{ZSV} \le 0.5 V_{dc} - V_{max}$ (3)
where $V_{min} = \min(v_{ref,a}, v_{ref,b}, v_{ref,c})$ and $V_{max} = \max(v_{ref,a}, v_{ref,b}, v_{ref,c})$; $\min(\cdot)$ stands for a function which selects the minimum value among $v_{ref,a}, v_{ref,b}, v_{ref,c}$, and $\max(\cdot)$ for the maximum value among them. Various PWM strategies can be established through an appropriate $v_{ZSV}$ which satisfies the condition in Equation (3). The use of an injected zero-sequence voltage signal for a three-phase inverter initiated the research on non-sinusoidal carrier-based PWM. Different zero-sequence signals lead to different non-sinusoidal PWM modulators and different advantages, such as lower harmonic currents, a higher available modulation index, or switching-loss reduction compared to the sinusoidal PWM.
Here, the various conventional three-phase DPWM approaches will be introduced based on a PWM scheme. Following previous knowledge [], it is common that introducing an identical zero-sequence voltage to the voltage references of the inverter does not alter the fundamental voltages delivered to the load. Thanks to this characteristic, various DPWM strategies have been previously proposed with the goal of diminishing switching losses.
Figure 2 depicts the general control diagram of a VSI using a carrier-based PWM (CBPWM) scheme, where the three modulation voltages $v_{mod,x}$ $(x = a, b, c)$ are compared to one triangular carrier to generate the switching patterns $S_x$ $(x = a, b, c)$.
Based on the zero-sequence voltage $v_{ZSV}$ and the modulation voltages $v_{mod,x}$ $(x = a, b, c)$, PWM strategies are divided into two categories: the continuous PWM (CPWM) and the DPWM. The CPWM maintains modulation voltages without clamping throughout a cycle of the fundamental voltage, except for instances of overmodulation. Two widely recognized CPWM techniques are the sinusoidal PWM (SPWM) and space vector PWM (SVPWM) []. On the other hand, in DPWM, the modulation voltages contain a clamping interval that equals one-third of the fundamental period. The switching state is kept unchanged during the clamping interval. Consequently, the switching frequency of the VSI is lowered, leading to a reduction in the corresponding switching losses. Various three-phase DPWM techniques are investigated in this Section based on the positioning of the clamping interval.
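As an aside, the carrier-comparison stage just described is easy to prototype. The following is a minimal illustrative sketch — not code from the paper — in which the function name and the triangular-carrier construction are our own assumptions:

```python
import numpy as np

def switching_patterns(v_mod, v_dc, t, f_sw):
    # Triangular carrier spanning [-V_dc/2, +V_dc/2] at switching frequency f_sw
    carrier = v_dc * (np.abs(2.0 * ((t * f_sw) % 1.0) - 1.0) - 0.5)
    # Upper switch S_x1 conducts while the modulation voltage exceeds the carrier;
    # the lower switch S_x2 is its complement
    s_x1 = (v_mod >= carrier).astype(int)
    s_x2 = 1 - s_x1
    return s_x1, s_x2

# A modulation voltage clamped at +V_dc/2 never drops below the carrier,
# so s_x1 stays at 1 and the leg produces no switching events.
t = np.linspace(0.0, 1e-3, 1000)
s1, s2 = switching_patterns(100.0, 200.0, t, 10e3)   # clamped leg: s1 is all ones
```

This directly illustrates why clamping removes switching loss: while the modulation voltage sits at a dc-link rail, the comparison never changes state.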
Figure 3 illustrates the reference voltage signal, modulation voltage, and zero-sequence voltage signal waveforms in phase $a$ for six common DPWM strategies. DPWMx (x = 0, 1, 2, 3) consists of clamping the upper switch and the lower switch alternately, each for $60^{\circ}$ within a fundamental period; the clamping intervals are distributed symmetrically. Among the DPWMx (x = 0, 1, 2, 3) strategies, the DPWM1 scheme generates clamping intervals at the peaks of the reference voltages []. Meanwhile, the DPWM0 and DPWM2 approaches have the clamping phase-shifted by $+30^{\circ}$ and $-30^{\circ}$ with respect to DPWM1, respectively []. It will be observed that it is possible to provide other intermediate placements of the saturations to favor certain operating points. The DPWM3 divides the clamping interval of $120^{\circ}$ into four intervals, each spanning $30^{\circ}$ []. In the DPWMMAX, only the upper switch in a phase leg of the inverter remains in the high state for a clamping interval of $120^{\circ}$ in the fundamental period [], while only the lower switches in the phase legs maintain the high state for the same duration in the DPWMMIN []. Furthermore, the DPWM1 positions the center of each clamping interval in alignment with the peak of the reference voltage, rendering it appropriate for applications requiring a unity power factor. The DPWM0 and DPWM2 prove effective for power factors with a $30^{\circ}$ leading and lagging phase, respectively. In contrast, the DPWM3 can find utility in reactive-power compensation applications.
A generalized relation that enables constructing the zero-sequence voltage signal as a function of $V_{max}$ and $V_{min}$ is expressed as follows:
$v_{ZSV} = \frac{V_{dc}}{2}(1 - 2\alpha) - \alpha V_{min} + (\alpha - 1) V_{max}$ (4)
In Equation (4), $\alpha$ can take any form (constant or time-varying) ranging between zero and unity []. The DPWMx is obtained with $\alpha = 1 - 0.5\{1 + \operatorname{sign}[\cos 3(\omega t + \delta)]\}$ by varying the modulation angle $\delta$. When $\alpha = 0.5$, the zero-sequence voltage $v_{ZSV} = -0.5(V_{min} + V_{max})$ results in the continuous modulation named SVPWM. The zero-sequence voltage of DPWMMAX is determined by the maximum value among the three reference voltages, resulting in $v_{ZSV} = \frac{V_{dc}}{2} - V_{max}$, which is equivalent to $\alpha = 0$. If $\alpha = 1$, the DPWMMIN is obtained; its zero-sequence voltage is determined by the minimum value among the three reference voltages, resulting in $v_{ZSV} = -\frac{V_{dc}}{2} - V_{min}$. It is important to recognize that employing the DPWMMIN or DPWMMAX leads to an imbalanced allocation of switching loss and thermal stress between the upper and lower switching devices. Certainly, under the DPWMMAX approach, the upper switch has a lower switching loss than the lower one, whereas the opposite holds for the DPWMMIN technique.
Table 1 summarizes the different types of DPWM according to the values of $\alpha$. In Table 1, $\varphi$ denotes the load angle, which is the phase angle between voltage and current caused by the load condition.
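To make Equation (4) and the $\alpha$ schedule above concrete, here is a minimal numerical sketch (our own illustrative Python, not from the paper; the function names, the operating point, and the choice $\delta = 0$ to mimic a DPWM1-style placement are all assumptions):

```python
import numpy as np

def zero_sequence_voltage(v_ref, alpha, v_dc):
    # Equation (4): v_ZSV built from V_max, V_min and the factor alpha
    v_min, v_max = np.min(v_ref), np.max(v_ref)
    return 0.5 * v_dc * (1.0 - 2.0 * alpha) - alpha * v_min + (alpha - 1.0) * v_max

def alpha_dpwmx(theta, delta):
    # alpha = 1 - 0.5*{1 + sign[cos 3(wt + delta)]}, the DPWMx family
    return 1.0 - 0.5 * (1.0 + np.sign(np.cos(3.0 * (theta + delta))))

# One sampling instant: V_dc = 200 V, modulation index 0.8
v_dc, m, theta = 200.0, 0.8, 0.1
v_ref = 0.5 * v_dc * m * np.cos(theta - np.array([0.0, 2.0, 4.0]) * np.pi / 3.0)
v_zsv = zero_sequence_voltage(v_ref, alpha_dpwmx(theta, 0.0), v_dc)
v_mod = v_ref + v_zsv      # Equation (1); here phase a clamps to +V_dc/2
```

Note that with $\alpha = 0.5$ the same function returns $-0.5(V_{min} + V_{max})$, i.e., the SVPWM zero-sequence voltage, so one routine covers every entry of Table 1.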
In the previous conventional DPWM, the current is disregarded, resulting in a fixed modulation pattern that is chosen based on the application. To increase the efficiency of the DPWM methods, a generalized DPWM (GDPWM) is proposed in []. The GDPWM detects the magnitude of the phase output current relative to the inverter reference voltage and determines the optimum clamping interval instantaneously. Variable clamping intervals can be achieved based on the magnitude of the phase current.
Figure 4 depicts the zero-sequence voltage signal, modulation voltage signals, and phase output current waveforms obtained by the GDPWM with load angles $\varphi = 20^{\circ}$ and $\varphi = 70^{\circ}$. The clamping intervals follow the peaks of the corresponding phase currents and differ between the two load-angle cases.
3. Per-Phase DPWM Technique for Independent Control of Switching Loss
As previously discussed, the aging state of the converter's phase legs might differ, attributable to reasons such as the manufacturing procedure, uneven distribution of thermal stress, and prior replacements. Consequently, the remaining useful lifespans of the phase legs are not similar to each other. Since the converter would cease operation in the event of a malfunction in any one leg, it becomes essential to extend the lifespan of the most aged leg. A fundamental concept for extending the lifespan of the most aged leg involves lowering its switching loss. However, a blanket reduction in switching frequency will lower the output performance of the VSI, and improperly decreasing the switching frequency of a specific phase within the VSI will degrade the output currents. Therefore, lowering the switching frequency of specific legs using a modified modulation strategy is the most suitable solution, as it requires neither degrading the output performance nor any additional circuits or equipment. In this study, the DPWM strategies are therefore modified to precisely manage the switching frequency of specific phase legs in the 2L3P VSI. The proposed per-phase DPWM strategies require determining the most aged phase leg. Diagnosis methods rely on an aging indicator to identify the current aging status of the power semiconductor devices; aging-indicator identification has been carried out by performing accelerated aging experiments with monitoring of selected parameters. Accelerated aging tests allow the effects of failure mechanisms to be analyzed and aging indicators to be identified. Various electrical aging indicators for the IGBT have been investigated and proposed in the literature, such as changes in temperature, component transconductance, collector-emitter on-state voltage $V_{ce,on}$, and threshold voltage $V_{th}$. However, since the diagnosis method is out of the scope of this study, a detailed description is not included. To control the switching frequency of a specific leg, phase $a$ is considered as an example; the possible clamping intervals can be determined as depicted in Figure 5. The reference voltage of phase $a$ comprises two parts, the possible clamping interval and the impossible clamping interval, corresponding to the cases $V_{max}$ (or $V_{min}$) $= v_{ref,a}$ and $V_{max}$ (or $V_{min}$) $\ne v_{ref,a}$, respectively.
As indicated earlier, introducing an identical zero-sequence voltage into the voltage references of the inverter does not alter the fundamental voltages delivered to the load. Hence, to reduce the switching frequency of a specific phase without compromising the output current quality, the calculation of the zero-sequence voltage is adjusted based on the possible and impossible clamping intervals as follows:
$v_{ZSV\_1} = \frac{V_{dc}}{2}(1 - 2\alpha) - \alpha V_{min} + (\alpha - 1)V_{max}$ at the possible clamping interval of $v_{ref,x}$
$v_{ZSV\_2} = -0.5(V_{min} + V_{max})$ at the impossible clamping interval of $v_{ref,x}$
$v_{ZSV\_1}$ indicates the zero-sequence voltage at the possible clamping interval, whereas $v_{ZSV\_2}$ indicates the zero-sequence voltage at the impossible clamping interval. To guarantee the output performance and avoid deteriorating the output current waveforms, $v_{ZSV\_1}$ is injected into all three phase voltage references at the possible clamping interval. Meanwhile, at the impossible clamping intervals, $v_{ZSV\_2}$, calculated as in the SVPWM method, is injected. The flowchart of zero-sequence voltage generation in the modified DPWM for independent control of per-phase switching loss is presented in Figure 6.
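As a rough sketch of that selection logic (ours, not the authors' code; it assumes the index of the most aged phase is already known from a separate diagnosis):
def per_phase_zsv(v_refs, aged_idx, alpha, v_dc):
    """Use v_ZSV_1 while the aged phase's reference is the instantaneous
    max or min (possible clamping interval); otherwise fall back to the
    SVPWM offset v_ZSV_2."""
    v_min, v_max = min(v_refs), max(v_refs)
    if v_refs[aged_idx] in (v_min, v_max):   # possible clamping interval
        return v_dc / 2 * (1 - 2 * alpha) - alpha * v_min + (alpha - 1) * v_max
    return -0.5 * (v_min + v_max)            # impossible clamping interval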
Figure 7 shows the zero-sequence voltage signal, modulation voltages, and corresponding switching patterns obtained by various per-phase DPWM methods ($V_{dc} = 200\,\mathrm{V}$, load angle $\varphi = 20^\circ$), assuming that phase $a$ is the most aged leg. In Figure 7, the injected zero-sequence voltage signal generates clamping intervals at $\pm V_{dc}/2$ in the phase $a$ modulation voltage, so the switching pattern $S_{a1}$ holds its current state during the clamping period. Meanwhile, the phase $b$ and $c$ modulation voltages do not clamp, so the switching patterns $S_{b1}$ and $S_{c1}$ continuously change their status. Additionally, the zero-sequence voltage signal differs from that of the standard DPWM strategies. As observed in Figure 7, the clamping interval of the phase $a$ modulation voltage is equivalent to the standard three-phase DPWM strategies shown in Figure 3, whereas the non-clamping intervals of the phase $b$ and $c$ modulation voltages are equivalent to the standard SVPWM strategy.
In addition to the modified DPWM strategies, the GDPWM can also be modified for independent control of per-phase switching loss, applying the same principle as in the modified DPWM strategies. The flowchart of zero-sequence voltage generation in the modified GDPWM for independent control of per-phase switching loss is depicted in Figure 8a. The zero-sequence voltage is determined by evaluating the absolute values of $i_{max}$ and $i_{min}$, where $i_{min} = \min(i_{oa}, i_{ob}, i_{oc})$ and $i_{max} = \max(i_{oa}, i_{ob}, i_{oc})$, at the clamping interval. Figure 8b shows the output currents, zero-sequence voltage signal, modulation voltages, and corresponding switching patterns ($V_{dc} = 200\,\mathrm{V}$, load angle $\varphi = 20^\circ$). Due to the load angle $\varphi = 20^\circ$, the per-phase GDPWM is quite similar to the per-phase DPWM2 strategy. The generated modulation voltage of phase $a$ has a clamping interval corresponding to the interval in which the phase $a$ output current has the highest absolute magnitude. During the clamping interval, the switching pattern $S_{a1}$ does not change its status. As in the per-phase DPWM strategies, the modulation voltages of phases $b$ and $c$ do not have clamping intervals, so the switching patterns $S_{b1}$ and $S_{c1}$ continuously change their status.
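For orientation, here is one possible reading of the current-based branch in Figure 8a, sketched in Python (ours and heavily simplified; the exact branch conditions live in the flowchart itself):
def per_phase_gdpwm_zsv(v_refs, i_out, aged_idx, v_dc):
    """Clamp the aged phase while its current has the largest magnitude
    of the three; otherwise inject the SVPWM offset."""
    v_min, v_max = min(v_refs), max(v_refs)
    v_aged, i_aged = v_refs[aged_idx], i_out[aged_idx]
    if abs(i_aged) == max(abs(i) for i in i_out):
        if v_aged == v_max:
            return v_dc / 2 - v_aged    # clamp the aged phase to +Vdc/2
        if v_aged == v_min:
            return -v_dc / 2 - v_aged   # clamp the aged phase to -Vdc/2
    return -0.5 * (v_min + v_max)       # SVPWM offset elsewhere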
4. Verification and Evaluation Results
The performance of the different DPWM strategies for independent control of per-phase switching loss is validated using simulation and experimental findings. The performance comparison among the per-phase DPWM control schemes in terms of loss calculation is obtained using a thermal module in the PSIM software. To ensure that the power loss calculation is correct, the power switch model is selected from the device library in PSIM, and the device information from the manufacturer's datasheet is added to the corresponding device. The parameters of the 2L3P VSI are listed in Table 2.
Figure 9 depicts the block diagram of a closed-loop current control scheme based on a proportional-integral (PI) controller for the 2L3P VSI. The PI controller is favored for its simplicity and ease of implementation compared with more advanced controllers, and its inherent stability makes it less susceptible to oscillations. Moreover, tuning the gains of a PI controller is generally straightforward, whereas more complex controllers may involve adjusting additional parameters that require careful attention. The PI gain values are listed in Table 2. The phase voltage reference signals are generated by the PI controller, whereas the different zero-sequence voltage signals corresponding to the different per-phase DPWM strategies are generated using the schemes shown in Figure 6 and Figure 8.
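For orientation, a minimal discrete PI update of the kind such a loop would execute each sampling period (a sketch, not the authors' code; anti-windup and reference-frame transformations are omitted, and the Table 2 gains are assumed to apply at the 10 kHz sampling rate):
class PI:
    """Discrete-time PI controller with forward-Euler integration."""
    def __init__(self, kp=5.0, ki=100.0, ts=1e-4):
        self.kp, self.ki, self.ts = kp, ki, ts
        self.integral = 0.0

    def step(self, ref, meas):
        error = ref - meas
        self.integral += self.ki * error * self.ts  # integrate the error
        return self.kp * error + self.integral      # phase voltage reference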
The simulation of the 2L3P VSI is implemented in the PSIM software environment. The simulation results of the output currents, modulation voltage, zero-sequence voltage, and switching patterns obtained by the various per-phase DPWM strategies for independent control of per-phase switching loss are shown in Figure 10. The output currents obtained by the various per-phase DPWM strategies have a sinusoidal waveform with accurate magnitude and phase. The switching pattern of phase $a$, $S_{1a}$, generated by the various per-phase DPWM strategies has a clamping interval, which corresponds to the clamping interval of the phase $a$ modulation voltage. Meanwhile, the switching patterns of phases $b$ and $c$, $S_{1b}$ and $S_{1c}$, do not include clamping intervals because the corresponding modulation voltages do not clamp at $\pm V_{dc}/2$. As shown in Figure 10g, the per-phase GDPWM strategy generates the clamping interval corresponding to the interval in which the phase $a$ output current has the highest absolute value, as expected. Due to the load angle $\varphi = 20^\circ$, the per-phase GDPWM is quite similar to the per-phase DPWM2 strategy.
The output current THD percentages obtained by the various per-phase DPWM strategies are shown in Figure 11a. The output performance of continuous modulation, i.e., the SVPWM method, is taken as the reference for comparison. As observed in Figure 11a, the SVPWM method has the lowest phase $a$ and average THD percentages, at 0.73%. The clamping interval, or unmodulated period, in the per-phase DPWM strategies results in poorer output current quality, i.e., a higher THD percentage. The phase $a$ output current THD obtained by the per-phase DPWMx (x = 0, 1, 2, 3) and per-phase GDPWM strategies is about 42% higher than that of the SVPWM scheme, while the average output current THD acquired by these per-phase DPWM strategies is about 26% higher. It is noticeable from Figure 11b that the per-phase DPWM strategies decrease the switching frequency in phase $a$ by approximately 33% compared to the SVPWM method. The power loss of phase $a$, including switching and conduction losses, and the total loss of the VSI are presented in Figure 11c,d, respectively. In terms of power loss in phase $a$, the conduction loss slightly increases in the per-phase DPWM strategies compared to SVPWM due to the clamping interval. In Figure 11c, the switching loss reduction differs among the per-phase DPWM strategies due to the different positions of the clamping interval. Looking at Figure 11c in more detail, the per-phase DPWM2 and per-phase GDPWM strategies show the same switching loss reduction of about 47% compared to the SVPWM, whereas the switching losses of the per-phase DPWM1 and per-phase DPWM3 strategies are reduced less, by about 32%. The chart in Figure 11d shows the total loss of the VSI resulting from the different per-phase DPWM strategies. As observed in Figure 11d, the reduction of total loss obtained by the various per-phase DPWM strategies is smaller than that of the phase $a$ switching loss in Figure 11c. This is because the two remaining phase legs, $b$ and $c$, keep operating continuously at the predefined switching frequency as in the SVPWM method, resulting in similar switching and conduction losses in phases $b$ and $c$. The total loss reduction obtained by the per-phase DPWM strategies is slight, ranging from 6.7% to 9.2% compared to the SVPWM method. As for efficiency, the difference among the approaches is marginal.
The performance of the 2L3P VSI at another load angle ($\varphi = 75^\circ$) is shown in Figure 12. Because the phase difference between the phase output current and phase output voltage increases with the load angle, the performance of the various per-phase DPWM strategies changes remarkably. As observed in Figure 12a, the output current THD percentages of the SVPWM in phase $a$ and on average are the lowest, at 0.39%. The phase $a$ output current THD percentage of the per-phase DPWM0 is the highest at 0.86%, an increase of 120% compared to the SVPWM. Regarding the average output current THD, the per-phase DPWM2 strategy has the highest value at 0.74%, an increase of about 90% compared to the SVPWM. In terms of switching frequency, because the extent of the clamping interval is the same, the reduction acquired by the per-phase DPWM strategies is similar to that at the previous load angle $\varphi = 20^\circ$. It can be noticed from Figure 12c that the switching loss reduction acquired by the various per-phase DPWM strategies is lower than at load angle $\varphi = 20^\circ$. The phase $a$ switching loss of the per-phase GDPWM scheme decreases by about 37.5% compared to the SVPWM method, while the decrease in switching loss acquired by the remaining per-phase DPWM strategies ranges from 18% to 37%. Regarding the total loss of the VSI, the per-phase DPWM2 and per-phase GDPWM schemes have the lowest loss, about 7.5% lower than that of the SVPWM method. Hence, it can be concluded that the increase in load angle weakens the effect of the per-phase DPWM strategies in terms of switching loss reduction. As for efficiency, the difference among the approaches is again marginal.
Figure 13 presents a performance comparison among the per-phase DPWM strategies under variation of the carrier frequency. The SVPWM has the highest switching frequency, while the remaining per-phase DPWM strategies exhibit the same, lower switching frequency. Because of its continuous modulation, the SVPWM exhibits the lowest average output current THD across the carrier frequency range. In terms of power loss, the conduction losses of phase $a$ obtained by the per-phase DPWM strategies are similar across the carrier frequency range. Meanwhile, the per-phase GDPWM has the lowest switching and total losses across the range because its clamping interval coincides with the interval in which the magnitude of the conducted current is largest.
Figure 14 presents a performance comparison among the per-phase DPWM strategies under variation of the modulation index. As presented, the switching frequency of the different control schemes does not change with the modulation index. Due to its continuous modulation, the SVPWM has the lowest average output current THD across the modulation index range. In terms of power loss, both conduction and switching losses increase with the modulation index. The conduction losses of phase $a$ acquired by the different approaches are similar. Meanwhile, the per-phase GDPWM has the lowest switching and total losses across the range because its clamping interval coincides with the interval in which the magnitude of the conducted current is largest.
Figure 15 presents a performance comparison among the per-phase DPWM strategies under variation of the load angle. As presented, the switching frequency of the different control schemes does not change with the load angle. Due to its continuous modulation, the SVPWM has the lowest average output current THD across the load angle range. As can be seen, the output current THD decreases as the load angle increases. In terms of power loss, the conduction loss in phase $a$ decreases as the load angle rises, whereas the switching loss in phase $a$ increases. This validates that the increase of load angle reduces the effect of the per-phase DPWM strategies. Meanwhile, the per-phase GDPWM has the lowest switching and total losses across the load angle range because its clamping interval coincides with the interval in which the magnitude of the conducted current is largest.
The per-phase DPWM strategies are also verified on a laboratory experimental setup of the 2L3P VSI connected to a three-phase $R$-$L$ load, as displayed in Figure 16. The experiment is conducted using the identical parameters outlined in Table 2. The control schemes are implemented on a Texas Instruments TMS320F28335 digital signal processor (DSP).
Figure 17 depicts the experimental results of the output current waveforms, phase $a$ modulation voltage, and zero-sequence voltage signal obtained by the different per-phase DPWM strategies. It can be seen that all per-phase DPWM strategies generate sinusoidal output currents. The waveforms of the phase $a$ modulation voltage and zero-sequence voltage signal acquired by the different per-phase DPWM strategies match the simulation waveforms.
Figure 18 presents the experimental waveforms of the output currents, modulation voltage, and switching pattern of phase $a$ obtained by the different per-phase DPWM strategies. As observed in Figure 18, the phase $a$ switching state is kept at a high or low state, correctly corresponding to the clamping interval of the modulation voltage. The experimental results present waveforms identical to those in the simulation section, which verifies the correctness and effectiveness of the per-phase DPWM strategies.
In the experiment, the THD of the output currents is measured using the waveform inspector function of a Tektronix MSO3054 oscilloscope.
Figure 19 presents the measured THD of each per-phase DPWM strategy against the THD given by the SVPWM. As can be seen in Figure 19, the SVPWM, being a continuous PWM method, has the lowest THD value, whereas the per-phase DPWM3 has the highest.
For further verification, the proposed per-phase DPWM strategies are employed under an unbalanced load condition. It should be noted that the 2L3P VSI does not provide a flow path for the zero-sequence current of the unbalanced load, which results in unbalanced output currents when implementing the proposed per-phase DPWM approaches. However, because these approaches do not require load information, the generation of the clamping interval in each per-phase DPWM is guaranteed. As can be seen in Figure 20, the output currents obtained by the per-phase DPWM strategies are sinusoidal and correct in magnitude given the differences in load resistance. Meanwhile, the switching pattern of phase $a$, $S_{1a}$, generated by the various per-phase DPWM strategies has a clamping interval, which corresponds to the clamping interval of the phase $a$ modulation voltage, whereas the switching patterns of phases $b$ and $c$, $S_{1b}$ and $S_{1c}$, do not include clamping intervals because the corresponding modulation voltages do not clamp at $\pm V_{dc}/2$. This verifies that the proposed per-phase DPWM strategies operate correctly under unbalanced load conditions.
5. Conclusions
This paper explores the output performance, including the output current THD and the power loss of the switching devices, of various modified DPWM strategies for independent control of per-phase switching loss in 2L3P VSIs. From the simulation and experimental results, it can be concluded that the per-phase DPWM strategies can precisely manage the switching frequency and switching loss of specific legs in the 2L3P VSI. Thanks to a clamping interval that is always located at the peak absolute value of the output current, the per-phase GDPWM is the most effective way to decrease the switching loss of specific phase legs, though it slightly increases the output current THD percentage; however, it requires knowledge of the instantaneous magnitude of the phase output currents. The remaining per-phase DPWM strategies, including the per-phase DPWMx (x = 0, 1, 2, 3), per-phase DPWMMIN, and per-phase DPWMMAX strategies, also decrease the switching loss appreciably but less consistently: their performance depends on the load power factor and the application. Additionally, the increase in load angle weakens the effect of the per-phase DPWM strategies in terms of switching loss reduction. The trade-off between loss reduction and deterioration of the output current harmonics may reduce efficiency in electrical loads, particularly inductive loads such as electric motors. This can be resolved by using a passive filter to attenuate specific harmonics or by designing an active power filter to cancel out the undesired harmonics; this will be considered in future research.
Author Contributions
Conceptualization, S.K.; methodology, S.K. and S.C.; software, M.H.N.; validation, M.H.N.; formal analysis, M.H.N.; investigation, M.H.N.; resources, S.K.; data curation, M.H.N.; writing—original
draft preparation, M.H.N.; writing—review and editing, S.K. and S.C.; visualization, M.H.N.; supervision, S.K.; project administration, S.K.; funding acquisition, S.K. All authors have read and
agreed to the published version of the manuscript.
Funding
This work was supported by the National Research Foundation of Korea (NRF) Grant funded by the Korea Government, Ministry of Science and ICT (MSIT) under Grant 2020R1A2C1013413; and by the Technology
Development Program to Solve Climate Changes through the NRF funded by the MSIT under Grant 2021M1A2A2060313.
Data Availability Statement
The data is unavailable due to privacy.
Conflicts of Interest
The authors declare no conflict of interest.
Abbreviations
2L3P 2-level 3-phase
VSI Voltage source inverter
THD Total harmonic distortion
CPWM Continuous pulse-width-modulation
DPWM Discontinuous pulse-width-modulation
SPWM Sinusoidal pulse-width-modulation
SVPWM Space vector pulse-width-modulation
CBPWM Carrier-based pulse-width modulation
GDPWM Generalized discontinuous pulse-width-modulation
1. Falck, J.; Felgemacher, C.; Rojko, A.; Liserre, M.; Zacharias, P. Reliability of Power Electronic Systems: An Industry Perspective. IEEE Ind. Electron. Mag. 2018, 12, 24–35.
2. Wang, H.; Liserre, M.; Blaabjerg, F. Toward Reliable Power Electronics: Challenges, Design Tools, and Opportunities. IEEE Ind. Electron. Mag. 2013, 7, 17–26.
3. Wang, H.; Liserre, M.; Blaabjerg, F.; Rimmen, P.d.P.; Jacobsen, J.B.; Kvisgaard, T.; Landkildehus, J. Transitioning to Physics-of-Failure as a Reliability Driver in Power Electronics. IEEE J. Emerg. Sel. Top. Power Electron. 2014, 2, 97–114.
4. Yang, S.; Bryant, A.; Mawby, P.; Xiang, D.; Ran, L.; Tavner, P. An industry-based survey of reliability in power electronic converters. In Proceedings of the 2009 IEEE Energy Conversion Congress and Exposition, San Jose, CA, USA, 20–24 September 2009; pp. 3151–3157.
5. Baker, N.; Liserre, M.; Dupont, L.; Avenas, Y. Improved Reliability of Power Modules: A Review of Online Junction Temperature Measurement Methods. IEEE Ind. Electron. Mag. 2014, 8, 17–27.
6. Held, M.; Jacob, P.; Nicoletti, G.; Scacco, P.; Poech, M.H. Fast power cycling test of IGBT modules in traction application. In Proceedings of the Second International Conference on Power Electronics and Drive Systems, Singapore, 26–29 May 1997; Volume 421, pp. 425–430.
7. Sathik, M.; Jet, T.K.; Gajanayake, C.J.; Simanjorang, R.; Gupta, A.K. Comparison of power cycling and thermal cycling effects on the thermal impedance degradation in IGBT modules. In Proceedings of the IECON 2015–41st Annual Conference of the IEEE Industrial Electronics Society, Yokohama, Japan, 9–12 November 2015; pp. 1170–1175.
8. Moeini, R.; Tricoli, P.; Hemida, H.; Baniotopoulos, C. Increasing the reliability of wind turbines using condition monitoring of semiconductor devices: A review. In Proceedings of the 5th IET International Conference on Renewable Power Generation (RPG) 2016, London, UK, 21–23 September 2016; pp. 1–6.
9. Andresen, M.; Ma, K.; Buticchi, G.; Falck, J.; Blaabjerg, F.; Liserre, M. Junction Temperature Control for More Reliable Power Electronics. IEEE Trans. Power Electron. 2018, 33, 765–776.
10. Chen, G.; Cai, X. Adaptive Control Strategy for Improving the Efficiency and Reliability of Parallel Wind Power Converters by Optimizing Power Allocation. IEEE Access 2018, 6, 6138–6148.
11. Peyghami, S.; Davari, P.; Blaabjerg, F. System-Level Reliability-Oriented Power Sharing Strategy for DC Power Systems. IEEE Trans. Ind. Appl. 2019, 55, 4865–4875.
12. Song, Y.; Wang, B. Evaluation Methodology and Control Strategies for Improving Reliability of HEV Power Electronic System. IEEE Trans. Veh. Technol. 2014, 63, 3661–3676.
13. Andresen, M.; Buticchi, G.; Falck, J.; Liserre, M.; Muehlfeld, O. Active thermal management for a single-phase H-Bridge inverter employing switching frequency control. In Proceedings of the PCIM Europe 2015 International Exhibition and Conference for Power Electronics, Intelligent Motion, Renewable Energy and Energy Management, Nuremberg, Germany, 19–20 May 2015; pp. 1–8.
14. Wei, L.; McGuire, J.; Lukaszewski, R.A. Analysis of PWM Frequency Control to Improve the Lifetime of PWM Inverter. IEEE Trans. Ind. Appl. 2011, 47, 922–929.
15. Ko, Y.; Andresen, M.; Buticchi, G.; Liserre, M. Discontinuous-Modulation-Based Active Thermal Control of Power Electronic Modules in Wind Farms. IEEE Trans. Power Electron. 2019, 34, 301–310.
16. Kaczorowski, D.; Mittelstedt, M.; Mertens, A. Investigation of discontinuous PWM as additional optimization parameter in an active thermal control. In Proceedings of the 2016 18th European Conference on Power Electronics and Applications (EPE'16 ECCE Europe), Karlsruhe, Germany, 5–9 September 2016; pp. 1–10.
17. Valentine, N.; Das, D.; Sood, B.; Pecht, M. Failure Analyses of Modern Power Semiconductor Switching Devices. Int. Symp. Microelectron. 2015, 2015, 000690–000695.
18. Sintamarean, C.; Wang, H.; Blaabjerg, F.; Iannuzzo, F. The impact of gate-driver parameters variation and device degradation in the PV-inverter lifetime. In Proceedings of the 2014 IEEE Energy Conversion Congress and Exposition (ECCE), Pittsburgh, PA, USA, 14–18 September 2014; pp. 2257–2264.
19. Liang, W.; Castellazzi, A. Temperature adaptive driving of power semiconductor devices. In Proceedings of the 2010 IEEE International Symposium on Industrial Electronics, Bari, Italy, 4–7 July 2010; pp. 1110–1114.
20. Minh Hoang, N.; Sangshin, K. Active Thermal Control Algorithm with Independent Power Control Based on Three-phase Four-wire Converter. Trans. Korean Inst. Electr. Eng. 2022, 71, 967–978.
21. Kim, J.; Nguyen, M.-H.; Kwak, S.; Choi, S. Lifetime Extension Method for Three-Phase Voltage Source Converters Using Discontinuous PWM Scheme with Hybrid Offset Voltage. Machines 2023, 11, 612.
22. Nguyen, M.-H.; Kwak, S.; Choi, S. Individual Loss Reduction Technique for Each Phase in Three-Phase Voltage Source Rectifier Based on Carrier-Based Pulse-Width Modulation. J. Electr. Eng. Technol. 2023, 1–10.
23. Hava, A.M.; Kerkman, R.J.; Lipo, T.A. Simple analytical and graphical methods for carrier-based PWM-VSI drives. IEEE Trans. Power Electron. 1999, 14, 49–61.
24. Keliang, Z.; Danwei, W. Relationship between space-vector modulation and three-phase carrier-based PWM: A comprehensive analysis [three-phase inverters]. IEEE Trans. Ind. Electron. 2002, 49, 186–196.
25. Ojo, O. The generalized discontinuous PWM scheme for three-phase voltage source inverters. IEEE Trans. Ind. Electron. 2004, 51, 1280–1289.
26. Hava, A.M.; Kerkman, R.J.; Lipo, T.A. A high-performance generalized discontinuous PWM algorithm. IEEE Trans. Ind. Appl. 1998, 34, 1059–1071.
27. Kolar, J.W.; Ertl, H.; Zach, F.C. Influence of the modulation method on the conduction and switching losses of a PWM converter system. In Proceedings of the Conference Record of the 1990 IEEE Industry Applications Society Annual Meeting, Seattle, WA, USA, 7–12 October 1990; Volume 501, pp. 502–512.
28. Ogasawara, S.; Akagi, H.; Nabae, A. A novel PWM scheme of voltage source inverters based on space vector theory. Arch. für Elektrotechnik 1990, 74, 33–41.
29. Taniguchi, K.; Ogino, Y.; Irie, H. PWM technique for power MOSFET inverter. IEEE Trans. Power Electron. 1988, 3, 328–334.
30. Asiminoaei, L.; Rodriguez, P.; Blaabjerg, F.; Malinowski, M. Reduction of Switching Losses in Active Power Filters with a New Generalized Discontinuous-PWM Strategy. IEEE Trans. Ind. Electron. 2008, 55, 467–471.
Figure 3. Modulation voltages and zero-sequence voltage signal waveforms obtained by different DPWM strategies (a) DPWM0, (b) DPWM1, (c) DPWM2, (d) DPWM3, (e) DPWMMIN, (f) DPWMMAX.
Figure 4. Modulation voltages, zero-sequence voltage signal waveforms, and phase output current waveforms obtained by GDPWM with (a) load angle $\varphi = 20^\circ$, (b) load angle $\varphi = 70^\circ$.
Figure 6. Flowchart of zero-sequence voltage generation in modified DPWM for independent control of per-phase switching loss.
Figure 7. Zero-sequence voltage, modulation voltages, and switching pattern waveforms obtained by various DPWM for independent control of per-phase switching loss (a) Per-phase DPWM0, (b) Per-phase DPWM1, (c) Per-phase DPWM2, (d) Per-phase DPWM3, (e) Per-phase DPWMMIN, (f) Per-phase DPWMMAX ($V_{dc} = 200\,\mathrm{V}$, load angle $\varphi = 20^\circ$).
Figure 8. (a) Flowchart of zero-sequence voltage generation in modified GDPWM for independent control of per-phase switching loss, (b) Output currents, zero-sequence voltage signal, modulation voltages, and corresponding switching pattern waveforms obtained by per-phase GDPWM ($V_{dc} = 200\,\mathrm{V}$, load angle $\varphi = 20^\circ$).
Figure 10. The simulation waveforms of output currents, modulation voltage, zero-sequence voltage, and switching patterns obtained by different Per-phase DPWM strategies (a) Per-phase DPWM0, (b)
Per-phase DPWM1, (c) Per-phase DPWM2, (d) Per-phase DPWM3, (e) Per-phase DPWMMIN, (f) Per-phase DPWMMAX, (g) Per-phase GDPWM.
Figure 11. Comparison results of conventional SVPWM and various per-phase DPWM strategies (a) Average output current THD, (b) Switching frequency of phase $a$, (c) Power loss in phase $a$, (d) Total loss, (e) Efficiency ($V_{dc} = 200\,\mathrm{V}$, modulation index = 0.87, load angle $\varphi = 20^\circ$).
Figure 12. Comparison results of conventional SVPWM and various per-phase DPWM strategies (a) Average output current THD, (b) Switching frequency of phase $a$, (c) Power loss in phase $a$, (d) Total loss, (e) Efficiency ($V_{dc} = 200\,\mathrm{V}$, modulation index = 0.42, load angle $\varphi = 75^\circ$).
Figure 13. Comparison results of the conventional SVPWM and various per-phase DPWM strategies under variation of carrier frequency (a) Switching frequency of phase $a$, (b) Average switching frequency, (c) Average output current THD, (d) Conduction loss in phase $a$, (e) Switching loss in phase $a$, (f) Total loss ($V_{dc} = 200\,\mathrm{V}$, carrier frequency $f_{cr} = 10\,\mathrm{kHz}$, load angle $\varphi = 20^\circ$).
Figure 14. Comparison results of the conventional SVPWM and various per-phase DPWM strategies under variation of modulation index (a) Switching frequency of phase $a$, (b) Average switching frequency, (c) Average output current THD, (d) Conduction loss in phase $a$, (e) Switching loss in phase $a$, (f) Total loss ($V_{dc} = 200\,\mathrm{V}$, carrier frequency $f_{cr} = 10\,\mathrm{kHz}$, load angle $\varphi = 20^\circ$).
Figure 15. Comparison results of the conventional SVPWM and various per-phase DPWM strategies under variation of load angle (a) Switching frequency of phase $a$, (b) Average switching frequency, (c) Average output current THD, (d) Conduction loss in phase $a$, (e) Switching loss in phase $a$, (f) Total loss ($V_{dc} = 200\,\mathrm{V}$, carrier frequency $f_{cr} = 10\,\mathrm{kHz}$, load angle $\varphi = 20^\circ$).
Figure 17. The experimental waveforms of output currents, phase $a$ modulation voltage, and zero-sequence voltage signal obtained by different per-phase DPWM strategies (a) Per-phase DPWM0, (b)
Per-phase DPWM1, (c) Per-phase DPWM2, (d) Per-phase DPWM3, (e) Per-phase DPWMMIN, (f) Per-phase DPWMMAX, (g) Per-phase GDPWM.
Figure 18. The experimental waveforms of output currents, modulation voltage, and switching pattern of phase $a$ obtained by different Per-phase DPWM strategies (a) Per-phase DPWM0, (b) Per-phase
DPWM1, (c) Per-phase DPWM2, (d) Per-phase DPWM3, (e) Per-phase DPWMMIN, (f) Per-phase DPWMMAX, (g) Per-phase GDPWM.
Figure 19. Comparison results of conventional SVPWM and various per-phase DPWM strategies in terms of output current THD obtained from experimental results.
Figure 20. The simulation waveforms of output currents, modulation voltage, zero-sequence voltage, and switching patterns under unbalanced load condition ($R_a = R$, $R_b = 2R$, $R_c = 0.5R$) obtained by different per-phase DPWM strategies (a) Per-phase DPWM0, (b) Per-phase DPWM1, (c) Per-phase DPWM2, (d) Per-phase DPWM3, (e) Per-phase DPWMMIN, (f) Per-phase DPWMMAX, (g) Per-phase GDPWM.
Table 1. DPWM methods according to the values of $\alpha$ and $\delta$ (x: not applicable).
Method | $\alpha$ | $\delta$
SVPWM | 0.5 | x
DPWM0 | $1 - 0.5\left[1 + \operatorname{sign}\left(\cos 3(\omega t + \delta)\right)\right]$ | $\varphi + 30^\circ$
DPWM1 | $1 - 0.5\left[1 + \operatorname{sign}\left(\cos 3(\omega t + \delta)\right)\right]$ | $\varphi$
DPWM2 | $1 - 0.5\left[1 + \operatorname{sign}\left(\cos 3(\omega t + \delta)\right)\right]$ | $\varphi - 30^\circ$
DPWM3 | $1 - 0.5\left[1 + \operatorname{sign}\left(\cos 3(\omega t + \delta)\right)\right]$ | $\varphi - 60^\circ$
DPWMMIN | 1 | x
DPWMMAX | 0 | x
Table 2. Parameters of the 2L3P VSI system.
Parameter | Value
dc-link voltage $V_{dc}$ (V) | 200
dc-link capacitance (μF) | 680
Load resistance $R$ (Ω) | 10
Load inductance $L_f$ (mH) | 10
Carrier frequency (kHz) | 10
Fundamental frequency (Hz) | 60
P gain | 5
I gain | 100
Sampling frequency (kHz) | 10
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license.
Nguyen, M.H.; Kwak, S.; Choi, S. Development of Various Types of Independent Phase Based Pulsewidth Modulation Techniques for Three-Phase Voltage Source Inverters. Machines 2023, 11, 1054. https://doi.org/10.3390/machines11121054 | {"url":"https://www.mdpi.com/2075-1702/11/12/1054","timestamp":"2024-11-12T20:41:12Z","content_type":"text/html","content_length":"568350","record_id":"<urn:uuid:d9827441-f000-44f2-be46-d038715ff108>","cc-path":"CC-MAIN-2024-46/segments/1730477028279.73/warc/CC-MAIN-20241112180608-20241112210608-00396.warc.gz"}
Properties of Exponential Functions | Mr. Orr is a Geek.com
Properties of Exponential Functions
Goal:
By the end of this homework activity you –
• should be able to compare graphs of different exponential functions.
• should be able to recognize basic properties of exponential functions (y-intercept, increasing, decreasing) by looking at the equation.
• In-class work will be Pg. 185 #1–7, 10, 14
Your job is to understand how the values of a and b affect the graph of y = ab^x
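(Optional, not part of the assignment: if you want to check your graph observations numerically, a few lines of Python with made-up values of a and b will do.)
def f(x, a=2, b=1.5):
    """Exponential function y = a * b**x."""
    return a * b**x

print(f(0))                              # 2 -> the y-intercept is a
print([f(x) for x in range(4)])          # increasing when b > 1
print([f(x, b=0.5) for x in range(4)])   # decreasing when 0 < b < 1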
1. Graph #1 – Use the Desmos graph. Adjust the slider. Make note of how the different values of b change the graph.
2. Graph #2 – Use the Desmos graph. Adjust the sliders to see how the different values affect the graph of an exponential function.
3. Double click on the board and leave notes under each topic heading about the properties of ab^x that you have discovered.
4. Complete the following quiz | {"url":"https://mrorr-isageek.com/3u/u3-exponential-functions/properties-of-exponential-functions/","timestamp":"2024-11-06T17:42:40Z","content_type":"text/html","content_length":"101957","record_id":"<urn:uuid:450360ca-bcb2-45d3-86b5-33af2937fac7>","cc-path":"CC-MAIN-2024-46/segments/1730477027933.5/warc/CC-MAIN-20241106163535-20241106193535-00546.warc.gz"} |
Number Line and Comparing Numbers - Math Angel
🎬 Video Tutorial
• (0:01) Number Line: Shows positive numbers to the right of 0 and negative numbers to the left.
• (0:33) Equal Intervals on a Number Line: Numbers are placed at equal intervals on a number line.
• (0:57) Inequality Symbols for Comparison: Inequality symbols ($>$, $<$, $\geq$, $\leq$) are used to compare numbers.
• (1:35) “Greater or Equal To” and “Less or Equal To” symbols: Combining equal and inequality signs expresses conditions like “at least” ($\geq$) or “at most” ($\leq$).
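For example, $x \geq 5$ means x can be 5 or anything larger ("at least 5"), whereas $x < 5$ excludes 5 itself.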
{"url":"https://math-angel.io/lessons/number-line-inequality-symbols/","timestamp":"2024-11-13T02:08:00Z","content_type":"text/html","content_length":"275821","record_id":"<urn:uuid:d95d67a1-01ed-40f7-bc11-d648d8b4e499>","cc-path":"CC-MAIN-2024-46/segments/1730477028303.91/warc/CC-MAIN-20241113004258-20241113034258-00342.warc.gz"}
[Solved] Determining Interest Cost Using the Simpl | SolutionInn
Determining Interest Cost Using the Simple Interest Formula. What are the interest cost and the total amount due on a six-month loan of
$1,500 at 13.2 percent simple annual interest?
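For reference, assuming the six-month term means t = 0.5 years, the standard simple-interest formula I = P × r × t gives I = $1,500 × 0.132 × 0.5 = $99, so the total amount due is $1,500 + $99 = $1,599.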
{"url":"https://www.solutioninn.com/study-help/personal-finance/determining-interest-cost-using-the-simple-interest-formula-what-are-1678381","timestamp":"2024-11-02T14:27:14Z","content_type":"text/html","content_length":"74326","record_id":"<urn:uuid:caa5e07f-a2ac-47de-8d38-cfa5362ae664>","cc-path":"CC-MAIN-2024-46/segments/1730477027714.37/warc/CC-MAIN-20241102133748-20241102163748-00017.warc.gz"}
confidence interval
Confidence interval interpretation calculator
The confidence interval can be used only if the numbers of successes and failures (np and nq) are both sufficiently large. Use the TI-83+ or 84+ calculator command invNorm(,0,1) to find Z. Remember that the area to the right of Z is α/2 and the area to the left of Z is 1 − α/2. Interpretation: A Confidence Interval for a Population Proportion. During an election year, we see articles in the newspaper that state confidence intervals in terms of proportions or percentages. For example, a poll for a particular candidate running for president might show that the candidate has 40% of the vote within three percentage points (if the sample is large enough). Example 1 – Confidence Interval for Variance Calculator: The mean replacement time for a random sample of 12 microwaves is years with a standard deviation of years. Construct a 95% confidence interval for the population standard deviation.
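For what it is worth, the standard chi-square interval such a calculator computes can be sketched in a few lines of Python (the sample statistics above are missing from this page, so the numbers below are made up):
from scipy.stats import chi2

n, s = 12, 1.3  # hypothetical sample size and sample standard deviation (years)
alpha = 0.05
lower_var = (n - 1) * s**2 / chi2.ppf(1 - alpha / 2, df=n - 1)
upper_var = (n - 1) * s**2 / chi2.ppf(alpha / 2, df=n - 1)
print(f"95% CI for sigma: ({lower_var ** 0.5:.2f}, {upper_var ** 0.5:.2f})")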
If the ratio equals 1, the two groups are equal. R and GraphPad use a sample size-dependent adjustment when calculating confidence intervals, which makes a notable difference for small sample sizes. Confidence Interval for a Proportion: Motivation. The reason to create a confidence interval for a proportion is to capture the uncertainty involved in estimating a population proportion.
The Z-value is a test statistic for Z-tests that measures the difference between an observed statistic and its hypothesized population parameter in units of the standard deviation.
In the earliest modern controlled clinical trial of a medical treatment for acute stroke, published by Dyken and White, the investigators were unable to reject the null hypothesis of no effect of cortisol on stroke. What does the T critical value mean? Chapter Review: Some statistical measures, like many survey questions, measure qualitative rather than quantitative data.
Assume that the children in the class are a random sample of the population.
Posted on April 21, December 2, by Zach. Monday night beginning ice-skating class.
This is what is computed by this risk ratio calculator. List two difficulties the company might have in obtaining random results, if this survey were done by email. Main article: Confidence band.
Negative events in exposed group.
The purpose: Confidence interval interpretation calculator
Lisinopril used for migraines 321
Confidence interval interpretation calculator Does nifedipine cause tachycardia
CAN YOU TAKE 2.5 Confidence interval interpretation calculator OF CRESTOR 160
DOES ASHWAGANDHA INTERFERE WITH SLEEP Does doxazosin cause ed
The Overflow Blog.
Suppose randomly selected people are surveyed to determine if they own a tablet. How do you find the critical value? Determine confidence interval interpretation calculator level of confidence used
to construct the interval of the population proportion confidence interval interpretation calculator dogs that compete in professional events. Series A, Mathematical and Physical Sciences,pp. Jensen,
Tom. The shaded area under the Student's t distribution curve is equal to the level of significance.
Bearing in mind, what is the margin of error in a confidence interval?
Cross-sectional study Cohort study Natural experiment Quasi-experiment. Skip to content Menu. Add a comment. | {"url":"https://digitales.com.au/blog/wp-content/review/mens-health/confidence-interval-interpretation-calculator.php","timestamp":"2024-11-03T17:05:56Z","content_type":"text/html","content_length":"35454","record_id":"<urn:uuid:0b429090-ef51-4e59-802a-2ad71acbd63f>","cc-path":"CC-MAIN-2024-46/segments/1730477027779.22/warc/CC-MAIN-20241103145859-20241103175859-00845.warc.gz"} |
Solve LeetCode's Balanced Tree: A Full Guide
Mastering LeetCode's Height-Balanced Binary Tree: A Comprehensive Guide
Elevate your coding interview skills by mastering the Height-Balanced Binary Tree problem on LeetCode with solutions in Python, TypeScript, and Java.
In the realm of software engineering interviews, the ability to tackle algorithm and data structure problems is a cornerstone of success. Today, I delve into the Height-Balanced Binary Tree problem
(LeetCode 110 Balanced Binary Tree), a question that tests your understanding of binary trees and recursion—a common theme in coding interviews.
Let's embark on a journey to unravel this challenge, offering solutions and insights that cater to both experienced engineers and newcomers to the interview scene.
Introduction to the Problem
The Height-Balanced Binary Tree problem is a fundamental question that asks us to determine if a given binary tree is height-balanced. A binary tree is considered height-balanced if, for every node,
the depth of the two subtrees never differs by more than one. For instance, consider the following examples:
• Example 1: Input: root = [3,9,20,null,null,15,7] Output: true. This tree is balanced as the depths of the left and right subtrees of all nodes differ by no more than one.
• Example 2: Input: root = [1,2,2,3,3,null,null,4,4] Output: false. This tree is not balanced because the depth difference between the left and right subtrees of the node with value 1 is more than
• Example 3: Input: root = [] Output: true. An empty tree is trivially balanced.
Solution Strategy
The essence of solving this problem lies in calculating the height of the subtrees for every node and ensuring the height difference does not exceed one. This can be achieved through a recursive
depth-first search (DFS) strategy, which efficiently traverses the tree. The Big O notation for this algorithm is O(n), where n is the number of nodes in the tree. This is because each node is
visited exactly once.
Python Solution Using Recursion
Here's a recursive solution in Python, which elegantly captures the essence of our strategy:
class TreeNode:
def __init__(self, val=0, left=None, right=None):
self.val = val
self.left = left
self.right = right
def isBalanced(root: TreeNode) -> bool:
def checkHeight(node):
# Base case: An empty tree is height-balanced
if not node:
return 0
left = checkHeight(node.left)
right = checkHeight(node.right)
# If left or right is unbalanced, or the height difference is > 1
if left == -1 or right == -1 or abs(left - right) > 1:
return -1 # Mark as unbalanced
return 1 + max(left, right) # Return the height of the tree rooted at `node`
return checkHeight(root) != -1
This solution employs a helper function checkHeight that returns -1 if the subtree is unbalanced and otherwise returns its height. This dual-purpose return value lets a single traversal both compute heights and propagate the balance check, avoiding a second pass over the tree.
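As a quick sanity check (a hypothetical snippet, not part of the original post), we can build the trees from Examples 1 and 2 and call the function:
# Example 1: [3,9,20,null,null,15,7] -> balanced
balanced = TreeNode(3, TreeNode(9), TreeNode(20, TreeNode(15), TreeNode(7)))
# Example 2: [1,2,2,3,3,null,null,4,4] -> not balanced
lopsided = TreeNode(1,
                    TreeNode(2, TreeNode(3, TreeNode(4), TreeNode(4)), TreeNode(3)),
                    TreeNode(2))
print(isBalanced(balanced))  # True
print(isBalanced(lopsided))  # False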
Understanding Postorder Traversal
Before delving into the Python solution using postorder traversal, let's briefly understand what it is. Postorder traversal is a way of traversing a binary tree where we first visit the left subtree,
then the right subtree, and finally the node itself. This traversal method is particularly useful for problems where you need to visit children nodes before the parent, as it is in checking for a
balanced binary tree.
Python Solution Using Post Order Traversal
This approach utilizes a non-recursive technique, leveraging a stack for traversal:
from typing import Optional

def isBalanced(root: Optional[TreeNode]) -> bool:
    stack = []
    node = root
    last = None
    depths = {}
    while stack or node:
        if node:
            stack.append(node)        # descend left, remembering the path
            node = node.left
        else:
            node = stack[-1]
            if not node.right or last == node.right:
                node = stack.pop()    # both subtrees visited: process node
                left = depths.get(node.left, 0)
                right = depths.get(node.right, 0)
                if abs(left - right) > 1:
                    return False
                depths[node] = 1 + max(left, right)
                last = node
                node = None
            else:
                node = node.right     # visit the right subtree next
    return True
TypeScript Solution
The TypeScript solution mirrors the recursive Python solution with slight syntactical adjustments:
interface TreeNode {
val: number;
left: TreeNode | null;
right: TreeNode | null;
function isBalanced(root: TreeNode | null): boolean {
const checkHeight = (node: TreeNode | null): number => {
if (node === null) return 0;
const left = checkHeight(node.left);
const right = checkHeight(node.right);
if (left === -1 || right === -1 || Math.abs(left - right) > 1) return -1;
return 1 + Math.max(left, right);
return checkHeight(root) !== -1;
Java Solution
Lastly, this Java solution also reflects the recursive approach:
public class TreeNode {
int val;
TreeNode left;
TreeNode right;
TreeNode(int x) { val = x; }
public class Solution {
private int checkHeight(TreeNode root) {
if (root == null) return 0;
int left = checkHeight(root.left);
int right = checkHeight(root.right);
if (left == -1 || right == -1 || Math.abs(left - right) > 1) return -1;
return 1 + Math.max(left, right);
public boolean isBalanced(TreeNode root) {
return checkHeight(root) != -1;
Mastering the Height-Balanced Binary Tree problem is a significant step forward in your coding interview preparation journey. By understanding the recursive and iterative approaches to this problem,
you're not only ready to tackle similar questions but also equipped with strategies that apply to a broader range of algorithm challenges. Whether you're an experienced engineer or new to coding
interviews, these insights will help you approach binary tree problems with confidence.
Remember, the key to excelling in coding interviews is practice and understanding the underlying principles of data structures and algorithms. Happy coding!
CPF Life: Is it a worthwhile investment? - The Life Finance Blog
CPF Life: Is it a worthwhile investment?
In a previous post, CPF Life: A Primer, we described the features of the CPF Life scheme. Singapore has phased out pension schemes for most citizens and residents, so CPF Life will play an important role in the post-retirement finances of CPF members: it is a mandatory annuity that serves as a cross between the defunct defined-benefit pension schemes and the defined-contribution scheme that the CPF has long been associated with. But it is common to view CPF Life as an investment and to ask whether it is a worthwhile one.
evaluated, not as an investment (i.e. what is the return on investment, when is the breakeven longevity), but rather as a hedge (i.e. does it protect against longevity risk – yes, is it actuarially
fair, or value for money). In the following, we address the second metric for evaluating CPF Life – what is the value of CPF Life, and is it a worthwhile investment?
CPF Life hedges against longevity risk –
the risk of outliving financial resources
* Our update on CPF Life for 2020 is here!
Calculating the Value of CPF Life
The assessment of the value of CPF Life is a straightforward present valuation of the cashflows:
1. At the age of 55, the CPF member “invests” a sum of money into a deferred annuity. We assume this is the Full Retirement Sum (FRS) of members turning 55 in 2019, $176,000
2. At the age of 65 onwards, the CPF member receives a monthly stream of payments. This is projected by the CPF Board based on various assumptions regarding the returns it achieves on the funds
3. Should the CPF member pass on before starting to receive the monthly payments at the age of 65, the CPF Board refunds the entire FRS plus the accumulated interest to deceased member’s nominees
4. Should the CPF member pass on after starting to receive the monthly payments at the age of 65, the CPF Board pays the projected bequest to the deceased member’s nominees. The amount of the
bequest depending on the age of the member.
We need two further parameters to evaluate the value of CPF Life:
1. The discount rate. This is derived by computing the discount rate on a stream of payments over time from the age of 65 onwards until the exhaustion of the funds in a member’s Retirement Account
(RA) such that the present value is equal to the minimum sum of $176,000. This works out to be 4.27%, which is a blend of the RA interest rate, plus the extra 1% paid to balances up to $60,000
and the additional 1% paid on balances up to $30,000.
2. The likelihood that the member will pass on at a specific age. This is derived from the preliminary Life tables for Singapore 2017
For males:
Age x (Years) | Prob. of dying between age x and x+1, q(x) | Survivors at age x, l(x) | Deaths between age x and x+1, d(x) | Life expectancy at age x, e(x)
55 | 0.00424 | 95,636 | 406 | 27.6
60 | 0.00687 | 93,186 | 640 | 23.2
65 | 0.01101 | 89,353 | 984 | 19.1
70 | 0.01859 | 83,445 | 1,551 | 15.3
75 | 0.03108 | 74,243 | 2,307 | 11.8
80 | 0.05308 | 60,956 | 3,236 | 8.8
85 | 0.08923 | 43,390 | 3,872 | 6.4
90 | 0.14730 | 24,137 | 3,555 | 4.5
95 | 0.23240 | 8,988 | 2,089 | 3.1
100+ | 1.00000 | 1,775 | 1,775 | 2.1
For females:
Age x (Years) | Prob. of dying between age x and x+1, q(x) | Survivors at age x, l(x) | Deaths between age x and x+1, d(x) | Life expectancy at age x, e(x)
55 | 0.00252 | 97,281 | 245 | 31.5
60 | 0.00380 | 95,831 | 364 | 27.0
65 | 0.00591 | 93,681 | 553 | 22.5
70 | 0.00958 | 90,351 | 865 | 18.3
75 | 0.01796 | 84,936 | 1,526 | 14.2
80 | 0.03505 | 75,301 | 2,639 | 10.7
85 | 0.06289 | 59,872 | 3,765 | 7.8
90 | 0.10926 | 39,500 | 4,316 | 5.5
95 | 0.18031 | 19,054 | 3,436 | 3.8
100+ | 1.00000 | 5,548 | 5,548 | 2.6
From these tables, we can compute the likelihood of a person surviving to a particular age by dividing the number of survivors at exact age x by the same number at an earlier age. For example, the chance that a 55-year-old male will survive to the age of 90 is:
l(90)/l(55) = 24,137/95,636 = 25.24%
We can now value the CPF Life Scheme for a 55-year-old setting aside the Full Retirement Sum.
To find the present value of a future payment at the age of 90 say, P[90], we discount this payment for the time until the receipt of the payment. For a 55 year old person, the value of this payment
at the age of 90 (i.e. in 35 years time) is:
Present Value = P[90] / (1 + Discount Rate)^t
where t = 35 years, and Discount Rate = 4.27%. For an annuity, the payment at age 90 is only received if the person survives until age 90. Hence, we need to further adjust this by the likelihood of
this payment happening, which is 25.24% for a male, as computed earlier. In the case of CPF Life, there is also a bequest payable if he/she passes away in the same year. The likelihood of this
bequest being received (by the nominees, of course!) is the chance that the person survives until age 90 (which is 25.24%) and the chance that he/she then passes on in the same year q[90], which is
14.73% from the life table above.
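A rough Python sketch of that calculation (ours, illustrative only; the payout, bequest, and survival inputs are placeholders for the full schedules in the tables below):
def cpf_life_value(payouts, bequests, survival, death_prob, rate=0.0427):
    """Expected present value at age 55 of CPF Life cash flows.
    payouts[t], bequests[t]: annual payout and bequest t years from age 55;
    survival[t]: chance of being alive at age 55 + t, i.e. l(55+t)/l(55);
    death_prob[t]: chance of dying at age 55 + t, i.e. q(55+t)."""
    value = 0.0
    for t in range(len(payouts)):
        expected = (payouts[t] * survival[t]                      # annuity leg
                    + bequests[t] * survival[t] * death_prob[t])  # bequest leg
        value += expected / (1 + rate) ** t                       # discount
    return value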
Putting things together
Let's compute the present values of the future payments at different ages for a male aged 55 years today. We assume the CPF Life payment is at the higher level projected by the CPF Board. We also assume all of CPF Life's payments for a particular year are paid at the start of the year, and that if someone passes away, it happens at the end of the year (so he receives both the payment and the bequest for that year).
Age x (Years) | CPF Life Payout P(x) | CPF Life Bequest B(x) | Chance of living to age x, l(x)/l(55) | Chance of dying at age x, q(x)
65 | $1,549 | $274,204 | 93.43% | 1.101%
70 | $1,549 | $181,211 | 87.25% | 1.859%
75 | $1,549 | $93,736 | 77.63% | 3.108%
80 | $1,549 | $9,671 | 63.74% | 5.308%
85 | $1,549 | $0 | 45.37% | 8.923%
90 | $1,549 | $0 | 25.24% | 14.730%
95 | $1,549 | $0 | 11.94% | 23.240%
100 | $1,549 | $0 | 1.86% | 35.630%
Age x (Years) | Expected CPF Life Payout, P(x) × l(x)/l(55) | Expected CPF Life Bequest, B(x) × l(x)/l(55) × q(x) | Present Value
65 | $11,432 | $1,857 | $13,289
70 | $8,662 | $1,570 | $10,232
75 | $6,253 | $980 | $7,233
80 | $4,165 | $115 | $4,280
85 | $2,406 | — | $2,406
90 | $1,086 | — | $1,086
95 | $328 | — | $328
100 | $53 | — | $53
If we do this for every single age starting from 55 onwards, and sum up all the present values, we get the value of the CPF Life for a person aged 55. What do these values look like? These are below:
For Males:
Full Retirement Sum Plan | Present Value at Age 55 | % Value from Monthly Payments | % Value from Bequest
Standard | $163,244 - $178,416 | 82.4% | 17.6%
Basic | $173,267 - $188,753 | 70.7% | 29.3%
Escalating | $148,482 - $174,448 | 80.2% | 19.8%
For Females:
Full Retirement Sum Plan | Present Value at Age 55 | % Value from Monthly Payments | % Value from Bequest
Standard | $160,954 - $177,469 | 89.1% | 10.9%
Basic | $170,598 - $187,226 | 79.6% | 20.4%
Escalating | $155,082 - $172,400 | 87.2% | 12.8%
Now, these values are the average values to the entire cohort of males and females under the CPF Life scheme. They can be very different for each individual, depending on when the person passed away.
A longer-lived person would get more benefit and value than a shorter-lived one. This is the nature of annuities and longevity risk pooling. The tables following provide another view of the present
value of CPF Life, depending on how long the person survived:
A longer-lived person gets more benefit than a shorter-lived one, as is the nature of annuities & longevity risk pooling
For Males:
Age of Death | Full Retirement Sum Standard Plan | Full Retirement Sum Basic Plan | Full Retirement Sum Escalating Plan
75 | $131,822 - $149,148 | $171,180 - $187,431 | $125,176 - $143,985
80 | $157,937 - $174,621 | $169,644 - $186,698 | $147,584 - $164,781
85 | $179,125 - $198,047 | $168,348 - $186,553 | $173,762 - $194,008
90 | $196,315 - $217,054 | $178,140 - $197,608 | $197,213 - $220,189
For Females:
Age of Death | Full Retirement Sum Standard Plan | Full Retirement Sum Basic Plan | Full Retirement Sum Escalating Plan
75 | $127,492 - $145,612 | $168,559 - $184,534 | $122,296 - $139,769
80 | $146,776 - $162,897 | $166,497 - $183,247 | $134,144 - $150,357
85 | $166,467 - $184,750 | $164,702 - $182,710 | $157,936 - $177,024
90 | $182,443 - $202,481 | $172,214 - $191,411 | $179,249 - $200,911
What can we conclude?
What do these values tell us? Is CPF Life a worthwhile investment? There are a number of conclusions:
1. We need to see if the inaugural cohort in the CPF Life scheme gets the higher payout projected. Only at this higher level of payouts is the CPF Life Scheme worth the Full Retirement Sum. In other
words, if the payouts under CPF Life are less, the CPF Board has replaced the Retirement Sum Scheme with something that is of lesser value in practice.
2. A significant amount of the value of CPF Life is for bequests, which diminish over time. In the worst case, the bequest runs out just as a CPF member passes away, leaving little or nothing to his/her loved ones. So even though the Basic Plan has a higher present value than the Standard or Escalating Plans, the additional value comes from the bequests, which may not be paid out at all.
3. In an annuity scheme, those who live the longest gain the most. In the case of CPF Life, it is the lucky 50% who live past their life expectancy who will gain. Therefore, CPF members in poor health before the age of 65 (which could reduce life expectancy by 3 years or more) should choose a Plan which maximizes the value to them and their loved ones given their expected lifespans.
These conclusions should not detract from the fact that the CPF Life scheme is a valuable addition to the retirement planning tools and products available to CPF members, which have previously been lacking in many respects. However, there is still room to improve the product design, communication, and understanding of the CPF Life Scheme for the benefit of CPF members. It is in this spirit that we present these conclusions.
A common question for those born before 1958, who upon reaching the age of 65 have a choice between enrolling in CPF Life or staying on the Retirement Sum Scheme: how should they choose? As we have shown above, both options are actuarially fair. The value of what you expect to get out of them is exactly the value of what you put into them. However, both in terms of hedging against outliving your resources and providing a bequest for beneficiaries, CPF Life does better, thanks to the addition of the returns to mortality risk!
Other resources
Other perspectives on the returns on CPF Life:
Now that it is 2021, the Full Retirement Sum for CPF members turning 55 this year has gone up to $186,000 from $181,000. Find out our thoughts on this in our update here!
22 thoughts on “CPF Life: Is it a worthwhile investment?”
8. Hi, I really appreciate your insightful analysis on whether CPF Life is a worthwhile investment. I have learnt a lot from it. For people born before 1958 who are on the old Retirement Sum Scheme
but have the option of converting to the CPF Life Scheme before the age of 80, would it be beneficial for them to defer the drawdown age to 70 and also, should they convert to CPF Life Scheme at
all? If so, should they do so as early as possible or as late as possible? I would be very grateful if you could do an analytical article explaining these issues. Thank you very much in advance.
1. Hi Victor!
Thanks for your interest in my research and writing!
With regards to your questions:
1) Should you convert to the CPF Life scheme? My take is yes, since a person in his early 60's actually has a 26% chance of living past the age of 90, and CPF Life is the best hedge against running out of money at that age. Moreover, as we grow old, our mental capacity and ability to manage our money diminish, so having something guaranteed to pay out month after month is an important safeguard.
2) Should you defer the CPF Life payouts to age 70? My answer (covered in the blog as well; see CPF Life: Should I defer the payouts to age 70?) is no. The additional 7% increase in the payout amount is not enough to compensate for the fewer years we have to get paid. For example, in the US (where the life expectancy of people in their 60's is similar to Singapore's), Social Security payouts increase by 8% for each year deferred, which is more than what CPF Life gives.
Hope this is helpful!
9. Thank you for your very prompt reply. Yes, I have read that equally good article on why we should not defer our drawdown age to 70 IF we are on CPF Life. However, I am on the old RSS as I was born in 1956. Our RSS is designed to last till the age of 90. RSS pays accrued interest into our own RA whereas CPF Life pays accrued interest into the CPF Life common annuity pool to benefit those who live longer. Since I have the option to convert to CPF Life before the age of 80, I figure that if I am still alive and feel relatively healthy at close to 80 years of age, then it would still not be too late to convert to CPF Life to guard against longevity risk, although I am likely to have a lower monthly payout. Doing so would let me have the best of both worlds – receiving accrued interest up to the age of 80 as well as guarding against longevity risk after 80. It would be good to have your professional view on whether my thinking is correct.
1. Hi again Victor!
You raise a very interesting question, whether people on RSS should switch to CPF Life and when they should do it. To be honest, I have not thought about this before, and it might be worth
doing a blog post on this question next month.
But from a very rough point of view, I think you have gotten it correct: the best time to switch is sometime between the ages of 75 and 80. I believe this is so for 2 reasons:
a) As the CPF Life Standard Plan pays out a bequest up to the age of 80, what this means is that the RSS and CPF Life are very similar up to that age. Basically, you are paying for the
monthly payouts out of your own CPF savings. After the age of 80, CPF Life becomes a true annuity, which can run forever, while the RSS remains a savings/withdrawal scheme, paying you out of
your own savings. In terms of monthly payouts, an annuity will always be better than savings (although it will leave you with nothing for your heirs if you pass away)
b) If you look at the returns to an annuity, you will notice that the returns to longevity start rising very quickly after the age of 75-80. This is the way an annuity works – the money left over by those who pass away is distributed to those who survive, and during this period the longevity returns will far outstrip the 4%-6% the CPF pays out to the people on the RSS.
Hence, it looks like your plan to switch over before the age of 80 is a good one! | {"url":"https://lifefinance.com.sg/cpf-life-is-it-a-worthwhile-investment/","timestamp":"2024-11-11T22:52:10Z","content_type":"text/html","content_length":"171112","record_id":"<urn:uuid:67d60564-1069-4e36-b23c-99a79ecddb10>","cc-path":"CC-MAIN-2024-46/segments/1730477028240.82/warc/CC-MAIN-20241111222353-20241112012353-00464.warc.gz"} |
Special Relativity/Mathematical approach2
A four vector represents a displacement in spacetime.
If the spacetime has a metric given by:
${\displaystyle ds^{2}=dt^{2}-dx^{2}-dy^{2}-dz^{2}}$
and if the space-time is flat the magnitude of the four vector is calculated from:
${\displaystyle \mathbf {S} ^{2}=A^{2}-B^{2}-C^{2}-D^{2}}$
Where A, B, C, D are the projections of the vector on the corresponding coordinate axes (t,x,y,z). The magnitude of a four vector is given by:
${\displaystyle S=|\mathbf {S} ^{2}|^{1/2}={\sqrt {A^{2}-B^{2}-C^{2}-D^{2}}}}$
The scalar product of a four vector (also known as the "dot product" or "inner product") can be derived in the same way as the scalar product for an ordinary vector (a three vector, see scalar
product). So the scalar product of two four vectors, ${\displaystyle \mathbf {A.B} }$ is:
${\displaystyle \mathbf {A.B} =A_{t}B_{t}-A_{x}B_{x}-A_{y}B_{y}-A_{z}B_{z}}$
This can also be derived easily from the metric tensor of Minkowski spacetime. The scalar product of two four-vectors x and y is defined (using indicial notation) as:
${\displaystyle x\cdot y=x^{a}\eta _{ab}y^{b}=\left({\begin{matrix}x^{0}&x^{1}&x^{2}&x^{3}\end{matrix}}\right)\left({\begin{matrix}-1&0&0&0\\0&1&0&0\\0&0&1&0\\0&0&0&1\end{matrix}}\right)\left({\begin{matrix}y^{0}\\y^{1}\\y^{2}\\y^{3}\end{matrix}}\right)}$
The scalar product of four vectors is independent of the coordinate system. To see this, consider a frame moving at speed v along the x axis, in which the components transform as:
${\displaystyle {\begin{matrix}A'_{x}&=&\gamma \left(A_{x}-{\frac {v}{c}}A_{t}\right)\\A'_{y}&=&A_{y}\\A'_{z}&=&A_{z}\\A'_{t}&=&\gamma \left(-{\frac {v}{c}}A_{x}+A_{t}\right)\end{matrix}}}$
The scalar product in this frame is:
${\displaystyle {\underline {A'}}\cdot {\underline {B'}}=A'_{x}B'_{x}+A'_{y}B'_{y}+A'_{z}B'_{z}-A'_{t}B'_{t}}$
Simplifying, we get:
${\displaystyle {\begin{matrix}{\underline {A'}}\cdot {\underline {B'}}&=&\gamma ^{2}\left(A_{x}B_{x}-{\frac {v}{c}}(A_{x}B_{t}+A_{t}B_{x})+{\frac {v^{2}}{c^{2}}}A_{t}B_{t}\right)+A_{y}B_{y}+A_{z}B_{z}-\gamma ^{2}\left({\frac {v^{2}}{c^{2}}}A_{x}B_{x}-{\frac {v}{c}}(A_{x}B_{t}+A_{t}B_{x})+A_{t}B_{t}\right)\\&=&\gamma ^{2}\left(1-{\frac {v^{2}}{c^{2}}}\right)\left(A_{x}B_{x}-A_{t}B_{t}\right)+A_{y}B_{y}+A_{z}B_{z}\\&=&A_{x}B_{x}+A_{y}B_{y}+A_{z}B_{z}-A_{t}B_{t}\end{matrix}}}$
The cross terms in v/c cancel between the two γ² groups, and γ² = (1 − v²/c²)⁻¹ removes the remaining prefactor.
Which is the same as the scalar product in the original frame of reference.
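As a quick numerical sanity check of this invariance (our addition, not part of the original Wikibook text), the following sketch boosts two arbitrary four-vectors with the transformation above and compares dot products:

```python
# Numerical check that A.B = AxBx + AyBy + AzBz - AtBt is boost-invariant.
import numpy as np

def boost(A, beta):
    """Apply the Lorentz transformation used above; A = (Ax, Ay, Az, At)."""
    g = 1.0 / np.sqrt(1.0 - beta**2)             # gamma, with beta = v/c
    Ax, Ay, Az, At = A
    return np.array([g * (Ax - beta * At), Ay, Az, g * (-beta * Ax + At)])

def dot(A, B):
    return A[0]*B[0] + A[1]*B[1] + A[2]*B[2] - A[3]*B[3]

A = np.array([1.0, 2.0, 3.0, 4.0])
B = np.array([0.5, -1.0, 2.5, 2.0])
beta = 0.6
print(dot(A, B))                                 # -2.0
print(dot(boost(A, beta), boost(B, beta)))       # -2.0 (up to rounding)
```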
1. Distributivity over vector addition ${\displaystyle \mathbf {A} .(\mathbf {B} +\mathbf {C} )=\mathbf {A.B} +\mathbf {A.C} }$
2. Symmetry ${\displaystyle \mathbf {A.B} =\mathbf {B.A} }$
3. Leibniz rule of differentiation applies ie: ${\displaystyle d(\mathbf {A.B} )=d\mathbf {A} .\mathbf {B} +\mathbf {A} .d\mathbf {B} }$
4. Orthogonality ${\displaystyle \mathbf {A.B} =0}$ if ${\displaystyle \mathbf {A} }$ is perpendicular to ${\displaystyle \mathbf {B} }$
5. ${\displaystyle \mathbf {A.A} =\mathbf {A} ^{2}}$
We now know that the dot product of two four-vectors is a scalar result, i.e., its value is independent of coordinate system. This can be used to advantage on occasion.
In the odd geometry of spacetime it is not obvious what perpendicular means. We therefore define two four-vectors ${\displaystyle {\underline {A}}}$ and ${\displaystyle {\underline {B}}}$ to be
perpendicular if their dot product is zero, in the same way as with three-vectors.
${\displaystyle {\underline {A}}\cdot {\underline {B}}=0}$
Because the dot product is a scalar, if vectors are perpendicular in one frame, they will be perpendicular in all frames.
We can also consider the dot product of a four-vector ${\displaystyle {\underline {A}}}$ which resolves into ${\displaystyle (A_{x},A_{t})}$ in the unprimed frame. Let us further suppose that the spacelike component is zero in some primed frame, so that the components in this frame are ${\displaystyle (0,A'_{t})}$. The fact that the dot product is independent of coordinate system means that
${\displaystyle {\underline {A}}\cdot {\underline {A}}=A_{x}^{2}-A_{t}^{2}=-A_{t}'^{2}}$
This constitutes an extension of the spacetime Pythagorean theorem to four-vectors other than the position four-vector. Thus, for instance, the wavenumber for some wave may be zero in the primed
frame, which means that the wavenumber and frequency in the unprimed frame are related to the frequency in the primed frame by ${\displaystyle k^{2}-\omega ^{2}/c^{2}=-\omega '^{2}/c^{2}}$ .
We indicate a four-vector by underlining and write the components in the following way: ${\displaystyle {\underline {k}}=(k,\omega /c)}$ , where ${\displaystyle {\underline {k}}}$ is the wave four-vector, ${\displaystyle k}$ is its spacelike component, and ${\displaystyle \omega /c}$ is its timelike component. For three space dimensions, where we have a wave vector rather than just a wavenumber, we write ${\displaystyle {\underline {k}}=(\mathbf {k} ,\omega /c)}$ .
Another example of a four-vector is simply the position vector in spacetime, ${\displaystyle {\underline {x}}=(x,ct)}$ , or ${\displaystyle {\underline {x}}=(\mathbf {x} ,ct)}$ in three space
dimensions. The ${\displaystyle c}$ multiplies the timelike component in this case, because that is what is needed to give it the same dimensions as the spacelike component.
Classically, the temporal derivative d/dt acts like a scalar, so we can apply it to a vector and get another vector.
In relativity, t is itself a component of a four-vector, so d/dt depends on the frame of reference; we can't simply differentiate vectors with respect to t and expect to get four-vectors.
For example, the position of a stationary particle is (0, ct).
Viewed from a frame moving at v to the right, its position becomes (-vτ, cτ), where τ=γt is the time as measured in the moving frame.
If we differentiate with respect to τ the velocity would be (-v, c)
If we differentiate with respect to t, we get (0, c) in the stationary frame, which would be (using the Lorentz transform) (-γv, γc) in the moving frame, if this were a four vector.
These two expressions differ by a factor of γ, when measured in the same frame, so this can not be a four vector.
However, if the moving observer divides by γ, which is the time dilation, they will get the same vector as the stationary observer.
Doing this is equivalent to differentiating by the time in the particle's own rest frame. Since this works for the position vector, we can expect it to work for all vectors.
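A short worked check (our addition), using the stationary particle above, whose proper time is t: differentiating the moving-frame position by proper time gives

${\displaystyle {\underline {U}}'={\frac {d}{dt}}(-v\tau ,c\tau )={\frac {d}{dt}}(-v\gamma t,c\gamma t)=(-\gamma v,\gamma c)}$

which is exactly the Lorentz transform of the stationary-frame result ${\displaystyle (0,c)}$, so differentiation by proper time does map four-vectors to four-vectors.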
The time measured in a particle's rest frame is called its proper time.
Differentiating a vector with respect to proper time gives another vector, which is the relativistic equivalent of the temporal derivative. | {"url":"https://en.m.wikibooks.org/wiki/Special_Relativity/Mathematical_approach2","timestamp":"2024-11-12T17:20:52Z","content_type":"text/html","content_length":"108959","record_id":"<urn:uuid:6b1c392f-aac5-41f5-808d-5ba0ed6dcf7b>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.63/warc/CC-MAIN-20241112145015-20241112175015-00433.warc.gz"} |
a. Determine all critical points of the given system of equations.
b. Find the corresponding linear system near each critical point.
c. Find the eigenvalues of each linear system. What conclusions can you then draw about the nonlinear system?
d. Draw a phase portrait of the nonlinear system to confirm your conclusions, or to extend them in those cases where the linear system does not provide definite information about the nonlinear system.
\begin{align}
\frac{dx}{dt} &= -2x - y - x(x^2 + y^2)\\
\frac{dy}{dt} &= x - y + y(x^2 + y^2)
\end{align} | {"url":"https://forum.math.toronto.edu/index.php?PHPSESSID=kff07m30tah4cqa6tu9n4q8e82&topic=1152.0;prev_next=next","timestamp":"2024-11-06T18:40:30Z","content_type":"application/xhtml+xml","content_length":"28841","record_id":"<urn:uuid:5521b971-4a46-423d-93ba-07967ab6d218>","cc-path":"CC-MAIN-2024-46/segments/1730477027933.5/warc/CC-MAIN-20241106163535-20241106193535-00869.warc.gz"}
Forecasting Concepts Part 1: PLEASE Don’t Use Ordinary Regression
If you have been to one of my courses where I touch on forecasting, you have heard my rant on this before. Please! Do not use ordinary least squares (OLS) regression to forecast to the future! This
article will explain why not.
Why Can’t I Use Ordinary Regression?
Sadly, in my checkered past, I have seen innocent, well-meaning folks use time as the independent variable in ordinary least squares regression to accomplish forecasting. Do not do this! One good
reason is assumption violations.
Ordinary regression models have a number of assumptions.
• Normality
• Independence
• Homoscedasticity (constant variance)
• Linearity
Data with a time component violate these assumptions causing the following issues:
• Lack of normality affects standard errors and may in some cases also affect the parameter estimates.
• OLS regression assumes that the error terms are independent and identically distributed (IID). The independence assumption is violated in time series data because of autocorrelation (also called serial correlation), as the sketch below illustrates. This does not affect the parameter estimates in the limit, but the standard errors are compromised, and estimates might be affected for small sample sizes; t-statistics and associated p-values will be wrong.
• Heteroscedasticity (variance that is not constant) does not affect the parameter estimates but does compromise the standard errors.
• Lack of linearity means you have the wrong kind of model, and your results can be meaningless.
(Information above largely from Forecasting Using SAS Software: A Programming Approach by Dickey & Woodfield)
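To see the independence violation concretely, here is a small illustrative sketch, written in Python purely for brevity (SAS offers the same diagnostic, for example the Durbin-Watson statistic in PROC REG). The simulated series is a made-up example, not real data.

```python
# Fit OLS with time as the predictor, then check the residuals for
# autocorrelation -- the independence violation described above.
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.stattools import durbin_watson

rng = np.random.default_rng(0)
n = 120                                           # 120 monthly periods
t = np.arange(n, dtype=float)
noise = np.zeros(n)
for i in range(1, n):                             # AR(1) noise, a common
    noise[i] = 0.8 * noise[i - 1] + rng.normal()  # time-series pattern
y = 50 + 0.3 * t + noise                          # trend plus serial noise

ols = sm.OLS(y, sm.add_constant(t)).fit()
print(durbin_watson(ols.resid))  # well below 2 => positive autocorrelation
```

A Durbin-Watson statistic near 2 indicates independent residuals; values far below 2, as here, flag the positive serial correlation that invalidates OLS standard errors.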
Recall from my earlier article of goal-seeking and scenario analysis that homoscedasticity (also called homogeneity) of variance means that variances (error) are equal/constant. Heteroscedasticity is
the opposite and means that the variance (error) changes.
Traditional regression methods are performed on data at a single point in time, or where time is not even considered. Cross-sectional data would be appropriate for this.
Cross-sectional data – a collection of observations for multiple individuals at a single point in time. The following table shows an example.
Aside: For perspective, what is 7,500 calories? Well, it could be eight 12-ounce steaks.
Or…four Valentine’s boxes of chocolates.
Let’s just say that it would be much easier to eat four boxes of chocolates in a day. Don’t ask me how I know this.
Recall that time series analysis requires an historic set of data that includes repeated measures.
Longitudinal and Panel data – individual (or other entity) observations measured repeatedly over time. If there is more than one individual measured over time these are called panel data and may be
called cross-sectional time series data. This data may be transactional, that is measured at various times by individuals. The following table provides an example.
Longitudinal (panel) data are useful for distinguishing cohort effects from aging effects. Let’s say an effective reading program was begun three years ago in a new public kindergarten. In a cross-sectional study comparing reading level by age, we may find 9-year-olds to be poorer readers than 8-year-olds. This can be a cohort effect (the 9-year-old cohort did not get the early reading training that the 8-year-old cohort got) as opposed to an aging effect. If the reading level of students is measured over time for the SAME individuals, as in a longitudinal study, the cohort effect is removed and the aging effect can be accurately measured. (Adapted from Diggle et al. 1994.)
Time Series – an indexed set of data over equally spaced time periods. Note that this is an example for illustration purposes only; you would absolutely never conduct a forecast from only three data points.
Many time series analyses require that you create a time series from the transactional data. This is commonly done by taking the average, minimum, or maximum for given time periods. In the fictional
example above, I have taken the average for each month.
Using our weight illustrates the point that subsequent measures are not independent. How much I weigh this month is not independent from how much I weighed last month.
Doing it the Right Way
If OLS regression is the wrong way, then what is the right way? You must:
1. Ensure that your data are a proper time series, or turn them into one (remember…equally spaced periods)
2. Evaluate the time series, for example for trends and cycles.
3. Use methods such as exponential smoothing and ARIMA (a minimal sketch follows below).
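As a minimal illustration of step 3 (our sketch, in Python for brevity; SAS users would reach for forecasting tools such as PROC ESM), using made-up monthly data:

```python
# Holt's exponential smoothing on a short, equally spaced monthly series.
import pandas as pd
from statsmodels.tsa.holtwinters import ExponentialSmoothing

y = pd.Series(
    [201, 205, 210, 208, 215, 221, 219, 226, 230, 228, 235, 241],
    index=pd.date_range("2020-01-31", periods=12, freq="M"),
)
model = ExponentialSmoothing(y, trend="add", seasonal=None).fit()
print(model.forecast(3))   # forecast the next three months
```

Unlike OLS on time, the smoothing model weights recent observations more heavily, and its machinery is built for serially dependent data.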
I will discuss this further in future posts.
Proper forecasting methods are available in SAS forecasting tools, making it easy for you to use them. For more specifics on forecasting with SAS tools, visit the forecasting courses listed at the
end of this article and my summary of this in Forecasting Concepts 4.
Sources and Additional Information
Unsolicited Advice for Valentine’s Day:
DON’T: Eat half of the chocolates in a heart-shaped box and then give your true love a half-empty box.
DON’T: Re-gift a box of chocolates that your ex gave you last year to your new love.
DON’T: Decide at 6 pm that you will go out to your favorite restaurant on Valentine’s Day.
DO: Make a reservation or plan to go out on a different night.
DON’T: Take your true love to McDonald’s on Valentine’s Day if you are over the age of 11.
DO: Write a love poem.
DON’T: Print it out in
DO: Print it out in
And finally, what you’ve always wondered…the correct answer to “Does this outfit make me look fat?” is “Of course not; you look amazing, honey!” followed by, “Try this chocolate truffle.”
You’re welcome.
07-23-2021 11:18 AM
Thanks! Great article. | {"url":"https://communities.sas.com/t5/SAS-Communities-Library/Forecasting-Concepts-Part-1-PLEASE-Don-t-Use-Ordinary-Regression/ta-p/554310","timestamp":"2024-11-12T04:02:43Z","content_type":"text/html","content_length":"144899","record_id":"<urn:uuid:d94a8515-cf3c-435c-89dc-436a43280a05>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.50/warc/CC-MAIN-20241112014152-20241112044152-00450.warc.gz"} |
This Idea Must Die Series Archives - Environmental Biophysics
Jul 18
In a continuation of our series, based on this book, which discusses scientific ideas that need to be reexamined, Drs. Doug Cobos and Colin Campbell make a case for standard operative temperature to replace wind chill factor:
What are we looking for when we look at a weather forecast? We want to know how we’re going to feel and what we need to wear when we go outside. Currently, the forecast is based on air temperature
and wind chill, which are a major part of the picture, but not all of it. What the forecast leaves out is the effect of radiation. If you go out on a cold, sunny day, you’re going to be warmer than
you would be at that same temperature and wind speed on a cloudy day. It’s not going to feel the same. So why not replace wind chill with the more accurate measurement of standard operative temperature?
Where wind chill came from:
In 1969, a scientist named Landsberg created a chart showing how people feel at a certain air temperature and wind speed. His chart was based on work by Paul Siple and Charles Passel. But Siple and Passel’s work was done in Antarctica using a covered bottle of water, under the assumption that you were wearing the thickest coat ever made. The table was updated in 2001 to improve its accuracy, but since the coat-thickness assumption remained unchanged, it underestimates the chill that you feel. It also explicitly leaves out radiation, assuming the worst case scenario of a clear night sky.
The controversy is detailed in this NY Times article from several years ago.
During the winter, forecasters use air temperature and wind chill with no radiation component. In the summertime, they use an index that takes into account the temperature and the humidity called
the heat index. But again, there is no accounting for radiation. Our families deal with this all the time when we take the kids out fishing in early spring. Before we leave, we’ll check the weather
report for temperature and wind chill. But is it going to be sunny or cloudy? That’s key information. You can see the radiation effect in action when a cloud drifts in front of the sun. All the
kids scramble for their jackets because the perceived temperature has changed. This is something that none of the indices actually capture.
Understanding the concept:
Standard operative temperature combines the effects of radiation and wind speed to give a more complete understanding of how you will feel outside. It is a simple energy balance: the amount of
energy coming in from the sun and metabolism minus the amount of energy going out through heat and vapor loss. Using this relationship and adding in the heat and vapor conductances, the temperature
that we might “feel” can be graphed against the solar zenith angle at a fixed air temperature. For reference, the sun is directly overhead when the zenith angle is 0 degrees and at the horizon at 90 degrees.
What’s interesting is that on a clear day when the sun is around 45 degrees (typical for around noon in the winter) and the temperature is -5 degrees C, if the wind is blowing at 1 m/s, you would
feel a temperature of 6 degrees C (relatively warm). The wind chill predicts the feel at -6 degrees C, a huge difference in comfort. This difference decreases with increasing wind speed as you’d
expect, but even for the same conditions and wind at 10 m/s, the 45-degree sun angle creates a temperature feel 7 degrees C higher than the wind chill. Although not huge, this makes a considerable
difference in perceived comfort.
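The following is a rough sketch of that energy balance (our illustration; the functional form follows a common simplification from environmental biophysics texts, and the conductance and radiation values are assumed placeholders, not an official forecast formula):

```python
# Rough sketch of an operative ("feels like") temperature from an energy
# balance: absorbed radiation in, emitted radiation and convective loss out.
import math

SIGMA = 5.67e-8      # Stefan-Boltzmann constant, W m-2 K-4
CP = 29.3            # molar specific heat of air, J mol-1 K-1

def operative_temperature(t_air_c, r_abs, wind_speed, d=0.17):
    """t_air_c in deg C, r_abs = absorbed radiation in W m-2,
    wind_speed in m s-1, d = characteristic body dimension in m (assumed)."""
    t_k = t_air_c + 273.15
    g_r = 4.0 * SIGMA * t_k**3 / CP                 # radiative conductance
    g_ha = 1.4 * 0.135 * math.sqrt(wind_speed / d)  # boundary-layer conductance
    r_emit = 0.97 * SIGMA * t_k**4                  # emitted thermal radiation
    return t_air_c + (r_abs - r_emit) / (CP * (g_r + g_ha))

# -5 C air and a 1 m/s breeze, sunny vs. cloudy (illustrative inputs only):
print(operative_temperature(-5, 480, 1.0))   # sunny:  roughly  6 C
print(operative_temperature(-5, 290, 1.0))   # cloudy: roughly -5 C
```

With these illustrative inputs, the sunny case lands near the 6 degrees C quoted above while the cloudy case stays near air temperature, which is exactly the sun-versus-cloud gap that wind chill alone cannot express.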
What do we do now?
The interesting thing is that all the tools to measure radiation are there. Most weather stations have a pyranometer that measures solar radiation, and some of them even measure longwave radiation,
which can also be estimated within reasonable bounds. This means forecasters have all the tools to report the standard operative temperature, which is the actual temperature that you feel. Why not
incorporate standard operative temperature into each forecast? Using standard operative temperature we could have the right number, so we’d know exactly what to wear at any given time. It’s an easy
equation, and forecast websites could use it to report a “comfort index” or comfort operative temperature that will tell us exactly how we’ll feel when we go outside.
Which scientific ideas do you think need to be reexamined?
May 30
This Idea Must Die: Using Filter Paper as a Primary Method for Water Potential
In a continuation of our popular series inspired by the book, This Idea Must Die: Scientific Problems that are Blocking Progress, Dr. Gaylon S. Campbell relates a story to illustrate why using filter paper as a primary method for water potential is, in his view, an idea that impedes progress:
I remember listening to a story about a jeweler who displayed a big clock in the front window of his store. He noticed that every day a man would stop in front of the store window, pull out a pocket
watch, set the watch to the time that was on the large clock, and then continue on. One day, the jeweler decided to meet the man in order to see why he did that. He went out to the front of the
store, intercepted the man, and said, “I noticed you stop here every day to set your watch.”
The man replied, “Yes, I’m in charge of blowing the whistle at the factory, and I want to make sure that I get the time exactly right. I check my watch every day so I know I’m blowing the whistle
precisely at noon.”
Taken aback, the jeweler replied, “Oh, that’s interesting. I set my clock by the factory whistle.”
The Wrong Idea:
In science, we like to have independent verification for the measurements we make in order to have confidence that they are made correctly, but there are times when our independent verification turns
out to be like the clock and the whistle, and we end up inadvertently chasing our tail. I’ve seen this happen to people measuring water potential (soil suction). They measure using a fundamental
method like dew point or thermocouple psychrometry, but then they verify the method using filter paper. Filter paper is a secondary method—it was originally calibrated against the psychometric
method. It’s ridiculous to use a secondary method to verify an instrument based on fundamental thermodynamics.
Where the Filter Paper Method Came From:
Before the development of modern vapor pressure measurements, field scientists needed an inexpensive, easy method to measure water potential. I.S. McQueen in the U.S. Geological Survey and some
others worked out relationships between the water content of filter paper and water potential by equilibrating them over salt solutions. Later, other scientists standardized this method using
thermocouple psychrometers so that there was a calibration. Filter paper was acceptable as a kind of poor man’s method for measuring water potential because it was inexpensive, assuming you already
had a drying oven and a balance. The thermocouple psychrometer and later the dew point sensor quickly supplanted filter paper in the field of soil physics. However, somewhere along the line, the
filter paper technique was written into standards in the geotechnical area and the change to vapor methods never occurred. Consequently, a new generation of geotechnical engineers came to rely on the
filter paper method. Humorously, when vapor pressure methods finally took hold, filter paper users became focused on verifying these new fundamental methods with the filter paper technique to see
whether they were accurate enough to be used for water potential measurement of samples.
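For readers unfamiliar with the technique, here is a hedged sketch of a filter paper calibration of the kind described above. The coefficients follow the widely cited ASTM D5298 curve for Whatman No. 42 paper; treat them as illustrative and check the standard before relying on them.

```python
# Convert filter paper gravimetric water content (%) to total suction (kPa)
# using a two-branch log-linear calibration of the ASTM D5298 form.
def suction_kpa(w_percent):
    if w_percent < 45.3:                      # dry branch of the curve
        log_h = 5.327 - 0.0779 * w_percent
    else:                                     # wet branch
        log_h = 2.412 - 0.0135 * w_percent
    return 10.0 ** log_h

print(round(suction_kpa(30.0)))   # ~980 kPa for paper at 30% water content
```

The steep log-linear shape also means that small errors in weighing the paper translate into large errors in inferred suction, one practical reason the technique can misbehave.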
What Do We Do Now?
Certainly, there’s no need to get rid of the filter paper method. If I didn’t have anything else, I would use it. It will give you a rough idea of what the water potential or soil suction is. But the
idea that I think has to die is that you would ever check your fundamental methods (dewpoint or psychrometer) against the filter paper method to see if they were accurate. Of course they’re accurate.
They are based on first principles. The dew point or psychrometer methods are a check to see if your filter paper technique is working, which it quite often isn’t (watch this video to learn why).
Which scientific ideas do you think need to be revised?
Apr 16
Do the Standards for Field Capacity and Permanent Wilting Point Need to Be Reexamined?
We were inspired by this Freakonomics podcast, which highlights the book, This Idea Must Die: Scientific Problems that are Blocking Progress, to come up with our own answers to the question: Which
scientific ideas are ready for retirement? We asked METER scientist, Dr. Gaylon S. Campbell, which scientific idea he thinks impedes progress. Here’s what he had to say about the standards for
field capacity and permanent wilting point:
The phrase, “this idea must die,” is probably too strong a phrase, but certainly some scientific ideas need to be reexamined, for instance the standard of -⅓ bar (-33 kPa) water potential for field
capacity and -15 bars (-1500 kPa or -1.5 MPa) for permanent wilting point.
Where it came from:
In the early days of soil physics, a lot of work was done in order to establish the upper and lower limit for plant available water. The earliest publication on the lower limit experiments was by
Briggs and Shantz in 1913. They planted sunflowers in small pots under greenhouse conditions, letting the plants use the water until they couldn’t recover overnight, after which they carefully
measured the water content (WC). The ability to measure water potential came along quite a bit later in the 1930s using pressure plates. As those measurements started to become available, a
correlation was found between the 15 bar pressure plate WCs and the WCs that were determined by Briggs and Shantz’s earlier work. Thus -15 bars (-1.5 MPa) was established as the lower limit of plant
available water. The source of the field capacity WC data that established a fixed water potential for the upper limit is less clear, but the process, apparently, was similar to that for the lower
limit, and -⅓ bar was established as the drained upper limit water potential in soil.
Damage it does:
In practice, using -15 bars to calculate permanent wilting point probably isn’t a bad starting point, but in principle, it’s horrible. Over the years we have set up experiments like Briggs and Shantz
did and measured water potential. We have also measured field soils after plants have extracted all the water they can. Permanent wilting point never once came out at -15 bars or -1.5 MPa. For
things like potatoes, it was approximately -10 bars (-1 MPa), and for wheat it was approximately -30 bars (-3 MPa). We found that the permanent wilting point varies with the species and even with
soil texture to some extent.
Of course, in the end it doesn’t matter much as the moisture release curve is pretty steep on the dry end, and whether you predict it to be 10 or 12% WC, it doesn’t make a huge difference in the size
of the soil water reservoir that you compute.
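To put numbers on this (our illustration, assuming a 50 cm root zone and a field capacity of 30% volumetric water content): plant-available water = (field capacity WC - permanent wilting point WC) x root-zone depth. With a wilting point of 10% WC that is (0.30 - 0.10) x 500 mm = 100 mm of available water, while at 12% WC it is (0.30 - 0.12) x 500 mm = 90 mm, only about a 10% change in the computed reservoir.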
However, on the field capacity end of the scale, it matters a lot. If you went out and made measurements of the water potentials in soils a few days after a rain, it would be an absolute accident if
any of them were ever -⅓ bar (-33 kPa). I’ve never seen it. A layered soil, a soil that has a fine-textured horizon on top of a coarse-textured soil, will hold twice as much water as you’ll predict
from the -⅓ bar value. On the other hand, if you’re getting pretty frequent rains or irrigation, that field capacity number becomes irrelevant. Thus, -⅓ bar may be a useful starting point for
determining field capacity, but it’s only a starting point.
Why it’s wrong:
Field capacity and permanent wilting point are dynamic properties. They depend on the rate at which the water is being extracted or the rate at which it’s being applied. They also depend on the
time you wait to sample after irrigation. Think of the soil as a leaky bucket. If you were trying to carry water in a leaky bucket and you walked slowly, the bucket would be empty by the time you
get the water where you want it. However, if you run fast, there will still be some water left in the bucket. Similarly, if a plant can use water up rapidly, most of it will be intercepted, but if a
plant is using water slowly, the water will move down past the root zone and out the bottom of the soil profile before the plant can use it. These are dynamic phenomena that you are trying to
describe with static variables. And that’s where part of the problem comes. We need a number to do our calculations with, but it’s important to understand the factors that affect that number.
What do we do now:
What I hope we can do is better educate people. We should teach that we need a value we call field capacity or permanent wilting point, but it’s going to be a dynamic property. We can start out by
saying: our best guess is that it will be -⅓ bar for finer-textured soils and -1/10 bar (-10 kPa) for coarser-textured soils. But when we dig a hole and find out there is layering in the profile or
textural discontinuities, we’d better adjust our number. If we’re dealing with irrigated farmland, the adjustment will always be up, and if we’re dealing with dryland or rain-fed agriculture where
the time between water additions is longer, we’ll use a lower number.
Some Ideas Never Die:
One of the contributors to the book, This Idea Must Die, Dr. Steve Levitt, had this to say about outdated scientific ideas, and we agree: “I love the idea of killing off bad ideas because if there’s
one thing that I know in my own life, it’s that ideas that I’ve been told a long time ago stick with me, and you often forget whether they have good sources or whether they’re real. You just live by
them. They make sense. The worst kind of old ideas are the ones that are intuitive. The ones that fit with your worldview, and so, unless you have something really strong to challenge them, you hang
on to them forever.”
{"url":"https://environmentalbiophysics.org/tag/this-idea-must-die-series/","timestamp":"2024-11-05T12:16:14Z","content_type":"text/html","content_length":"140961","record_id":"<urn:uuid:ef85cb7c-08b5-4766-9491-22b1e62c4022>","cc-path":"CC-MAIN-2024-46/segments/1730477027881.88/warc/CC-MAIN-20241105114407-20241105144407-00431.warc.gz"}
Preprint D103/2013
A Subgradient-like Algorithm for Solving Vector Convex Inequalities
L.R. Lucambio Pérez | Bello Cruz, J.Y.
Keywords: Projection methods · Strong convergence · Subgradient algorithm · Vector convex functions
In this paper, we propose a strongly convergent variant of Robinson’s subgradient algorithm for solving a system of vector convex inequalities in Hilbert spaces. The advantage of the proposed method
is that it converges strongly, when the problem has solutions, under mild assumptions. The proposed algorithm also has the following desirable property: the sequence converges to the solution of the
problem, which lies closest to the starting point, and remains entirely in the intersection of three balls with radius less than the initial distance to the solution set. | {"url":"https://preprint.impa.br/visualizar?id=1327","timestamp":"2024-11-09T18:47:31Z","content_type":"text/html","content_length":"6338","record_id":"<urn:uuid:ff3173c1-02d5-4380-bb11-ac7161fdfb3a>","cc-path":"CC-MAIN-2024-46/segments/1730477028142.18/warc/CC-MAIN-20241109182954-20241109212954-00686.warc.gz"} |
What Is the Average Federal Individual Income Tax Rate on the Wealthiest Americans? | CEA | The White House
By Greg Leiserson, Senior Economist (CEA); and Danny Yagan, Chief Economist (OMB)
Abstract: We estimate the average Federal individual income tax rate paid by America’s 400 wealthiest families, using a relatively comprehensive measure of their income that includes income from
unsold stock. We do so using publicly available statistics from the IRS Statistics of Income Division, the Survey of Consumer Finances, and Forbes magazine. In our primary analysis, we estimate an
average Federal individual income tax rate of 8.2 percent for the period 2010-2018. We also present sensitivity analyses that yield estimates in the 6-12 percent range. The President’s proposals
mitigate two key contributors to the low estimated rate: preferential tax rates on capital gains and dividend income, and wealthy families’ ability to avoid paying income tax on capital gains through
a provision known as stepped-up basis.
When an American earns a dollar of wages, that dollar is taxed immediately at ordinary income tax rates.[1] But when they gain a dollar because their stocks increase in value, that dollar is taxed at
a low preferred rate, or never at all.[2] Investment gains are a primary source of income for the wealthy, making this preferential treatment of investment gains a valuable benefit for the wealthiest
Americans. Yet the most common estimates of tax rates do not fully capture the value of this tax benefit because they use an incomplete measure of income. This analysis asks: what was the average
Federal individual income tax rate paid by the 400 wealthiest American families’ in recent years, determined using a more comprehensive measure of income?
How the wealthy enjoy low income tax: preferred rates on an incomplete measure of income
The wealthy pay low income tax rates, year after year, for two primary reasons. First, much of their income is taxed at preferred rates. In particular, income from dividends and from stock sales is
taxed at a maximum of 20 percent (23.8 percent including the net investment income tax), which is much lower than the maximum 37 percent (40.8 percent) ordinary rate that applies to other income.
Second, the wealthy can choose when their capital gains income appears on their income tax returns and even prevent it from ever appearing. If a wealthy investor never sells stock that has increased
in value, those investment gains are wiped out for income tax purposes when those assets are passed on to their heirs under a provision known as stepped-up basis.
Analyzing a more comprehensive measure of income
Preferred tax rates on income from stock sales (“realized capital gains income”) and from dividends feature prominently in commonly cited tax rates as well as in our analysis.
An important feature of our analysis that is less common in existing estimates of tax rates is that we include untaxed (“unrealized”) capital gains income in our more comprehensive income measure as it accrues.[3]
Measuring income in this more comprehensive manner matters relatively little for estimating most families’ tax rates, as most families have few investment assets.[4] However, it matters greatly for
the wealthiest families for whom such unrealized and thus untaxed gains are a large share of their income. Like all other forms of income, unrealized capital gains income can be tapped to finance
consumption and can improve financial wellbeing.
A common reference point for defining income in economics is known as Haig-Simons income.[5] Pre-tax Haig-Simons income equals families’ change in wealth, plus taxes and consumption. We define our
income measure as families’ change in wealth plus easily estimable taxes. Our definition of income is more limited than the Haig-Simons definition because it excludes consumption and other taxes, but
it is a simpler way to include a substantial share of capital gains in income and can be implemented with publicly available data.[6]
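In symbols (our restatement of the definitions above):

pre-tax Haig-Simons income = change in wealth + taxes + consumption
income measure used here = change in wealth + Federal individual income taxes + deducted State and local taxes

Since the second line omits consumption and the remaining taxes, it is systematically smaller, which makes the estimated tax rate an upper bound relative to a full Haig-Simons denominator.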
Primary estimate and sensitivity
In our primary analysis, we estimate that the 400 wealthiest families paid an average Federal individual income tax rate of 8.2 percent on $1.8 trillion of income over the period 2010–2018, the years
from the last decade for which the necessary data are available. Two factors that contribute to this low estimated tax rate include low tax rates on the capital gains and dividends that are taxed,
and wealthy families’ ability to permanently avoid paying tax on investment gains that are excluded from taxable income. The 2022 President’s Budget proposes to raise the capital gains and dividend
tax rate and to virtually end stepped-up basis for the highest-income Americans, thereby ensuring these investment gains are subject to income tax.
The true tax rate of interest may differ from our primary estimate. First, excluding consumption and some taxes from our measure of income suggests that the 8.2 percent estimate is actually higher
than the tax rate measured relative to a truly comprehensive measure of income. Second, the data and methods on which we rely are imperfect. Different estimation choices illustrate the estimate’s
sensitivity. For example, varying the analytic assumptions underlying one of our key methodological choices (discussed in greater detail in the technical appendix below) causes the estimate to vary
from 6 to 12 percent.
The tax rate we estimate is, of course, sensitive to changes in policy. The top capital gains tax rate was 15 percent between May 2003 and 2012 and has been 20 percent (23.8 percent including the net
investment income tax) since 2013. In addition, the Federal individual income tax is only one tax. Focusing on the individual income tax sheds light on the structural limitations of that tax and the
scope for reforms, such as curtailing the ability of the wealthy to avoid paying tax on their investment gains through stepped-up basis. However, alternative tax rates could also be estimated that
account for other taxes, such as the payroll tax, estate and gift tax, corporate income tax, and taxes paid to foreign governments. Moreover, one could use alternative definitions of income or adopt
various approaches to the treatment of certain subsidized activities such as charitable giving. Finally, we focus on an extended time period (2010–2018), which helps to ensure that our analysis
reflects the long-run reality of positive asset incomes despite short-run fluctuations. However, we also present alternative estimates using different start and end years, as well as an estimate that
replaces the Forbes 400 wealth in 2009 with the average for the period 2008–2010.
Our primary estimate of 8.2 percent is much lower than commonly cited estimates of top Federal individual income tax rates. For example, the Joint Committee on Taxation (2021) estimates that the 2021
Federal individual income tax rate on the top 0.4 percent of families ranked by income (i.e., the 715,000 families with income over $1 million) will be 26 percent. Our analysis differs by (a)
analyzing a smaller group of families (the top 0.0002 percent) ranked by wealth, and (b) including unrealized capital gains income in the income measure. See the end of the technical appendix for
additional discussion of how our analysis compares to commonly cited estimates.
We emphasize that any estimate of tax rates on the wealthiest is uncertain and open to refinement, due to current data limitations.
We detail our method in the technical appendix below. In a nutshell, our method is as follows: we take the IRS Statistics of Income (SOI) Division’s Federal individual income taxes paid by the
top-400-by-income families and multiply it by an adjustment factor constructed using the Survey of Consumer Finances (0.63) to convert it to an estimate of taxes paid by the top-400-by-wealth. We
then estimate a more comprehensive measure of income as the change in Forbes 400 wealth, plus our estimates of the top-400-by-wealth’s Federal individual income taxes paid and State-and-local tax
deductions (estimated similarly based on Statistics of Income data). The ratio of the two yields our estimated tax rate.
In this analysis, we used publicly available data to estimate the average Federal individual income tax rate paid by America’s wealthiest 400 families, using a relatively comprehensive measure of
their income that includes income from unsold stock. In our primary analysis, we estimated an average tax rate of 8.2 percent for the period 2010–2018. We also present sensitivity analyses that yield
estimates in the 6–12 percent range.
Preferential capital gains rates and stepped-up basis—a provision of tax law that allows wealthy taxpayers to wipe out unrealized capital gains for income tax purposes when they pass assets to their
heirs—contribute to this low tax rate. The 2022 President’s Budget would increase capital gains rates and virtually end stepped-up basis for the highest-income Americans, thereby ensuring their
investment gains are subject to income tax.
Technical Appendix
This technical appendix documents how we combine publicly available data from the IRS Statistics of Income (SOI) Division, the Federal Reserve Board’s Survey of Consumer Finances (SCF), and Forbes
magazine’s estimates of the wealthiest 400 Americans to estimate the Federal individual income tax rate paid by the wealthiest 400 families. For reference, Forbes estimates that the 400 wealthiest
Americans in 2018 had wealth ranging from $2.1 billion to $160 billion. Our tax rate estimate is dollar-weighted: it is an estimate of total Federal individual income taxes paid by the wealthiest
families, divided by an estimate of those families’ income. We focus on the period 2010–2018 and report estimates for alternative periods as well.
We first describe the basic idea of the estimation procedure and then go through the details. We divide an estimate of the Federal individual income taxes paid by the 400 wealthiest families by a
relatively comprehensive estimate of their income. For the numerator, we start by estimating the taxes paid by the families with the highest reported income on tax returns. Then we estimate how the
income of the highest-wealth families compares to the income of the highest-reported-income families and use that as an adjustment factor to estimate the taxes paid by the highest-wealth families.
For the denominator, we use changes in the reported wealth of the Forbes 400 to estimate the income of the 400 wealthiest families.
Numerator: estimated Federal individual income taxes paid by the wealthiest 400 families
The numerator of our tax rate estimate equals estimated 2010–2018 taxes paid by the wealthiest 400 families. We construct the numerator by estimating the 2010–2018 taxes paid by the
highest-reported-income families, then multiply by an adjustment factor based on the Survey of Consumer Finances (SCF) to account for the fact that highest-reported-income families are not the same
as the highest-wealth families.
SOI published estimates of the taxes paid by the 400 highest-reported-income families annually from 1992 through 2014. In a first and straightforward step, we extend this series through 2018. To do
this, we rely on estimates of the total Federal individual income tax paid by the top 0.001 percent, available from SOI annually from 2001 through 2018. For the years 2001 through 2014, when both
estimates are available, the ratio of taxes paid by the top 400 to taxes paid by the top 0.001 percent varies in only a small window around 0.59. We therefore estimate the taxes paid by the 400
highest-income families for the period 2015 through 2018 by assuming that it is 0.59 times the taxes paid by the top 0.001 percent for this period. Our SOI-based estimate of 2010–2018 taxes paid by
the 400 highest-reported-income families equals actual SOI top-400 taxes for years 2010-2014, plus our estimates for years 2015–2018.
Our SOI-based estimate of 2010–2018 taxes paid by the 400 highest-reported-income families surely exceeds the 2010–2018 taxes paid by the 400 highest-wealth families: some of the wealthiest families
have lower reported income and pay less tax. For example, Warren Buffett was a member of the 2015 Forbes 400, but his voluntarily-released 2015 tax return information indicates 2015 adjusted gross
income of $11.6 million (Cohen 2016). The thresholds for top percentile groups in 2015 in the SOI estimates show that $11.9 million was required to be in the top 0.01 percent (about 14,000 families).
Thus, Buffett was not even in the top 14,000 tax units ranked by reported income, let alone the top 400 ranked by reported income. Moreover, he paid $1.8 million in Federal individual income tax in
2015, far less than the $36 million average for the top 0.001 percent or the $9 million average for the top 0.01 percent. As a result, the 2015 taxes paid by the top 400 families ranked by reported
income would overstate the 2015 taxes paid by the top 400 families ranked by wealth.
Hence, we must convert our SOI-based estimate of taxes paid by the highest-reported-income families into an estimate of taxes paid by the highest-wealth families. We do so by multiplying the
SOI-based estimate by an adjustment factor of 0.63, constructed as follows from the Survey of Consumer Finances which contains information both on approximate reported income and on wealth.
A formula helps to clarify the method. Our goal for the numerator of the tax rate is to estimate taxes paid by the families with wealth rank 1 through 400, which we write as: TAX^W[1,400]. The SOI
data give us an estimate of the taxes paid by the families with reported-income rank 1 through 400: TAX^I[1,400]. Ideally, we would multiply TAX^I[1,400] by an adjustment factor equal to the ratio of the tax paid by the 400 highest-wealth families to the tax paid by the 400 highest-reported-income families:

TAX^W[1,400] = TAX^I[1,400] × ( TAX^W[1,400] / TAX^I[1,400] )
However, the ideal adjustment factor cannot be directly measured in publicly available data. The best available data source—the SCF—lacks information on taxes paid and excludes the Forbes 400
wealthiest from the survey sample by construction. We make two assumptions that allow one to approximate the ideal adjustment factor using reported incomes among families ranked 401 through 1400
(i.e., the “next-1,000” groups ranked either by reported income or by wealth), which is approximately the rest of the top 0.001 percent. First, we assume that the highest-reported-income and
highest-wealth groups pay the same average tax rate.[7] Second, we assume that the ratio of the reported incomes for the next-1,000 groups is the same as the ratio of reported incomes for the top-400
groups. Under those assumptions, one can replace the ideal adjustment factor with the following alternative adjustment factor that uses only next-1000 information:

TAX^W[1,400] / TAX^I[1,400] ≈ I^W[401,1400] / I^I[401,1400]
Estimating the reported income of the next 1,000 by wealth I^W[401,1400] is relatively straightforward: the SCF excludes the top 400 by wealth, so we simply use the reported income of the wealthiest
families in the SCF.[8]
Estimating the income of the next 1000 by income I^I[401,1400] is more challenging, as it depends on how much overlap there is between the Forbes 400 and the top 400 by reported income. If there is
full overlap, then none of the top 400 by reported income should be in the SCF. We could then estimate I^I[401,1400] using the SCF observations with the highest reported incomes; doing so would yield
an adjustment factor of 0.44, similar to Saez and Zucman (2019).[9] At the other extreme, if none of the Forbes 400 is in the top 1400 by income, then the appropriate SCF observations to use would be
those with reported income ranks 401 through 1400. Doing so would exclude many high-reported-income families from the calculation and thereby yield a higher adjustment factor of 0.66.[10] A higher
adjustment factor leads to a higher resulting tax rate estimate.
We lean toward the conservative side of the spectrum: we assume an overlap of 100 and estimate I^I[401,1400] using the reported income of the SCF observations that represent families ranked 301
through 1300. Doing so, we obtain an adjustment factor of 0.63. Thus, our estimate of taxes paid by the wealthiest 400 families equals the SOI-based taxes paid by the 400 highest-reported-income
families multiplied by our 0.63 adjustment factor.
Denominator: estimated income
We divide our estimate of taxes paid by the wealthiest 400 by our more comprehensive estimate of the wealthiest 400’s income: their estimated change in wealth, plus easily estimable taxes. This
income measure excludes consumption and other taxes, which would cause us to understate Haig-Simons income and therefore overstate the Federal individual income tax rate on Haig-Simons income.
To estimate comprehensive income for the 2010–2018 period, we begin by subtracting the total wealth (net worth) of the Forbes 400 in 2009 from the total wealth of the Forbes 400 in 2018.[11] We then
add two additional components of Haig-Simons income that are available in tax data: Federal individual income taxes paid (estimated above) and State and local individual tax deductions (estimated
from the same SOI data using the same 0.63 adjustment factor).[12]
Estimate and sensitivity
Using our estimated numerator and denominator, our primary estimate for the 2010–2018 Federal individual income tax rate for the wealthiest 400 is 8.2 percent. For the numerator, we estimate that the
wealthiest 400 families paid $149 billion in Federal individual income taxes, equal to the $237 billion paid by the highest-income families in the SOI data multiplied by 0.63. For the denominator,
Forbes estimates suggest that the wealthiest 400 experienced a change in wealth for the period 2010–2018 of $1.62 trillion. Adding the $149 billion of estimated Federal individual income taxes and an
analogously estimated $46 billion in State and local taxes, we estimate the wealthiest 400’s income for the period 2010–2018 to be $1.82 trillion. Dividing $149 billion by $1.82 trillion, we obtain
8.2 percent.
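The arithmetic of the primary estimate can be reproduced directly from the figures quoted above:

```python
# Reproducing the headline 8.2 percent from the quoted figures
# (all amounts in billions of dollars).
soi_top400_tax = 237    # taxes paid by the 400 highest-reported-income families
adjustment = 0.63       # SCF-based wealth-vs-income adjustment factor
fed_tax = soi_top400_tax * adjustment       # ~149

wealth_change = 1620    # change in Forbes 400 wealth, 2010-2018
salt = 46               # estimated State and local taxes
income = wealth_change + fed_tax + salt     # ~1815

print(f"{fed_tax / income:.1%}")            # -> 8.2%
```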
Appendix Table 1 presents a sensitivity analysis for different periods. Column 1 repeats our main analysis for time periods that begin in years other than 2010 (all ending in 2018). Our analysis for
2018 alone yields 8.5 percent, for the most recent five years yields 9.8 percent, and for the most recent 20 years yields 10.2 percent. Column 2 repeats the exercise for time periods that end in
2014, which is the last year that does not rely on extrapolated top-400 Federal individual income tax data (though it also includes fewer years with the higher post-2012 capital gains tax rates). Our
analysis for years 2010–2014 yields 6.2 percent and for 2014 alone yields 6.3 percent. In addition, replacing 2009 Forbes 400 wealth with its 2008–2010 average yields an estimate of 8.6 percent.
Our 0.63 adjustment factor is estimated with error. One of the most substantial risks to the accuracy of the estimate is if the income of the top 400 by wealth relative to the top 400 by income
differs consistently from the corresponding ratio for the next 1000. Other uncertainties include sampling or non-sampling error in the SCF data on which we rely. Under the extreme assumption that the
highest income families are the highest wealth families, the adjustment factor would equal 1, and the average tax rate for 2010-2018 would be 12.3 percent. If, on the other hand, the highest wealth
families have only 43 percent of the income of the highest-income families following Saez and Zucman’s (2019) analysis, the average tax rate would be 5.8 percent. If the highest wealth families
systematically differ from the next 1000 for whom information is available in the SCF, the ratio could, in principle, be even lower.
We define our more comprehensive measure of income such that it is systematically lower than (pre-tax) Haig-Simons income, which includes all taxes and consumption. The SOI data contain information
on one additional category of expenditure that could be included: deductible contributions to nonprofit organizations. When including these deductible contributions (estimated in the same way as we
estimate State and local taxes) in comprehensive income, we obtain an estimate of 7.9 percent.
Forbes 400 wealth is surely measured with error. An active literature studies and assesses wealth measurement at the very top of the wealth distribution (e.g., Kennickell 2009; Johnson, Raub, and
Newcomb 2013; Piketty 2014; Kopczuk 2015). Saez and Zucman (2016) use capitalized income tax returns to, over some periods, estimate faster growth in top wealth than does the SCF while mostly taking
Forbes as given. In ongoing work, Smith, Zidar, and Zwick (2020) do not publish top 400 estimates, but generally estimate slower growth in top wealth, which could be consistent with Forbes being
misled, unable to value nontraded assets, or unable to observe gifts or debt. Higher growth in top wealth would lead to lower tax rates while lower growth in top wealth would lead to higher tax
rates. For example, if the Forbes 400 overstates top wealth growth by one-third, our estimate would be 11.7 percent.
Combining data across three cross-sectional data sources yields some inconsistency in time periods studied. Our target population is the wealthiest in each year of the period examined based on
end-of-year wealth. The Forbes 400 data are released each fall. The 0.63 adjustment factor is based on families ranked by income in year t, compared to families ranked by wealth when surveyed at some
point in year t+1 (though the reported wealth may or may not be current as of the time they were surveyed). Since the wealthiest families change over time, subtracting Forbes 400 totals across years
understates the income of the wealthiest at the end of each year, which leads to overestimated tax rates.[13]
Our tax rate estimates are substantially lower than commonly cited top Federal individual income tax rates produced by the Congressional Budget Office, Joint Committee on Taxation, the Department of
the Treasury, and the Tax Policy Center.[14] These estimates differ from ours in three key respects. First, and most fundamentally, the Congressional Budget Office, Joint Committee on Taxation, the
Treasury, and the Tax Policy Center estimate tax rates relative to income measures that largely exclude unrealized capital gains. The analyses thus find substantially higher tax rates than we do
because they, to varying degrees, exclude the untaxed income that motivates this analysis in favor of more accurately estimated cash income flows. Second, tax-preferred realized capital gains are a
larger share of income for the top 400 than they are for the larger top groups for which these other estimates are produced. Third, we examine income tax rates by wealth rather than by income, and
unrealized capital gains may be even more concentrated among high-wealth families than high-income families.
We conclude this technical appendix by emphasizing the fundamental uncertainty in our estimates. We hope that our analysis stimulates further estimation and direct measurement of income tax rates
inclusive of unrealized capital gains income and by wealth group.
Bricker, J., P. Hansen, and A.H. Volz. 2019. “Wealth concentration in the U.S. after augmenting the upper tail of the survey of consumer finances.” Economics Letters 184. (Link)
Cohen, Patricia. 2016. “Buffett Calls Trump’s Bluff and Releases His Tax Data.” New York Times, October 10. (Link)
Congressional Budget Office. 2021. “The Distribution of Household Income, 2018.” (Link)
Johnson, B., B. Raub, and J. Newcomb. 2013. “A New Look at the Income-Wealth Connection for America’s Wealthiest Decedents.” SOI Working Paper. Washington: Internal Revenue Service Statistics of Income. (Link)
Joint Committee on Taxation. 2012. “Overview Of The Definition Of Income Used By The Staff Of The Joint Committee on Taxation In Distribution Analyses.” JCX-15-12. (Link)
Joint Committee on Taxation. 2021. “Overview Of The Federal Tax System As In Effect For 2021.” JCX-18-21. (Link)
Kennickell, A.B. 2009. “Ponds and Streams: Wealth and Income in the U.S., 1989 to 2007.” Finance and Economics Discussion Series 2009-13. Washington: Board of Governors of the Federal Reserve System.
Kopczuk, W. 2015. “What Do We Know about the Evolution of Top Wealth Shares in the United States?” Journal of Economic Perspectives 29, no. 1: 47-66. (Link)
Piketty, T. 2014. Capital in the Twenty-First Century. Cambridge: Harvard University Press.
Saez, E. and G. Zucman. 2016. “Wealth Inequality in the United States since 1913: Evidence from Capitalized Income Tax Data.” The Quarterly Journal of Economics 131, no. 2: 519-578. (Link)
Saez, E. and G. Zucman. 2019. “Progressive Wealth Taxation.” Brookings Papers on Economic Activity Conference Draft. (Link)
Smith, M., O. Zidar, and E. Zwick. 2020. “Top Wealth in America: New Estimates and Implications for Taxing the Rich.” Working paper, Princeton Economics. (Link)
Tax Policy Center. 2021. “T21-0134 – Average Effective Federal Tax Rates – All Tax Units, By Expanded Cash Income Percentile, 2021.” (Link)
U.S. Department of the Treasury Office of Tax Analysis. 2020. “Distribution Table: 2021 001; Distribution of Families, Cash Income, and Federal Taxes under 2021 Current Law.” (Link)
[1] A wage earner may defer taxation—subject to statutory limits—by contributing to a retirement savings account. Other generally applicable tax benefits may also reduce a worker’s tax rate.
[2] The minority of capital gains, namely those realized within one year of acquiring the underlying asset, is taxed at ordinary rates.
[3] Unrealized capital gains are the increase in the value of assets even before the assets are sold. A wealthy individual who purchases corporate stock worth $100 million that subsequently increases
in value to $200 million over the next ten years has accrued $100 million of unrealized capital gains income over that period. These unrealized capital gains are a major source of income for the
wealthiest Americans.
[4] For example, the Federal Reserve’s Distributional Financial Accounts estimate that, as of the first quarter of 2021, the top 1 percent of families by wealth held 54 percent of the value of
corporate stocks and mutual funds, compared to 11 percent for the 50th–90th percentiles, and less than 1 percent for the bottom 50 percent.
[5] For example, the Joint Committee on Taxation (2012) states, “Economists generally agree that, in theory, a Haig-Simons measure of income is the best measure of economic well-being.”
[6] This approach would be less informative for middle-class families because they consume a much larger share of their income.
[7] We lack direct evidence on the average tax rate paid by the highest-wealth families. In principle, the average tax rate could differ in either direction. The highest-income families could pay a
lower average tax rate because they are high-income due to large single-year capital gains realizations that are taxed at low rates. Alternatively, the highest-wealth families could pay a lower share
of their tax-return income in taxes due to large charitable deductions.
[8] The Survey of Consumer Finances intentionally excludes from its sample anybody included in the Forbes 400 due to privacy concerns. However, some Forbes 400 wealth may be represented by families
included in the Survey of Consumer Finances sample, and some additional observations are also excluded from the SCF sample. Bricker, Hansen, and Volz (2019) propose a method for augmenting the SCF
with the Forbes 400 data without double counting. We simplify by assuming that there is a sharp cutoff between the two and do not rely on the Survey of Consumer Finances to compute any aggregates.
[9] Note, however, that the assumption of full overlap would imply that the desired ratio for the top 400 would be one. In effect, under this set of assumptions, the value for the next 1000 would be
a poor guide to the value for the top 400. In implementing the procedures described in this section, we use the 2001-2019 SCFs and average the resulting annual adjustment factors across years in
order to increase our effective sample size.
[10] Technically, because the SCF is a survey with sample weights, we mean the observations that, when weighted, represent these ranks. We split SCF observations that cross relevant rank boundaries,
allocating a proportionate share of the observation’s weight to each side of the boundary.
[11] The date for which the Forbes 400 estimates wealth has varied over time. In 2020, Forbes used market prices near the end of July. In 2019, Forbes used market prices in September. For simplicity,
we treat the Forbes 400 as end-of-year wealth estimates. The Forbes list is a mix of person-level information and immediate-family information. By treating it as family-level (more precisely,
tax-unit-level) information, our estimate of income could potentially be somewhat conservative, though we anticipate this effect is small.
[12] We impute the State and local tax deductions for the top 400 for the period 2015–2018 as the top 400’s share of the top 0.001 percent’s State and local tax deductions in 2014 multiplied by the
total deductions of the top 0.001 percent for 2015–2018. The 2017 tax law limited the State and local tax deduction. For 2018 only, we impute total State and local tax deductions of the top 0.001%
tax units by multiplying 2018 top 0.001% Federal taxes by the ratio of total 2014-2017 State and local tax deductions to total 2014-2017 top 0.001% Federal taxes.
[13] A further timing issue that could arise is if the wealthy systematically realize capital gains only when they are not in the top 400 by wealth. In this case, the tax rate of the top 400 by
wealth in each year could understate a life-cycle estimate of the tax rate of the extremely wealthy. However, it is not clear that—if this is a concern—our estimate of the tax rate of the extremely
wealthy is affected by it. If it were the case that the extremely wealthy systematically do not realize their income when they are in the top 400 by wealth, our adjustment factor may be an
overestimate of their taxable income.
[14] The Congressional Budget Office (2021) recently estimated that the average Federal individual income tax rate of the highest-income 1 percent of households was about 24 percent for the period
2014 through 2018. Other analysts focus primarily on forward-looking estimates. The Treasury Department (2020) estimated that the average Federal individual income tax rate of the highest-income 0.1
percent of families in 2021 would be 23 percent, and the Tax Policy Center (2021) estimated that it would be 25 percent. The Joint Committee on Taxation (2021) estimated that the tax rate for
families with incomes of at least $1 million would be 26 percent. The 2017 Tax Act reduced individual income tax rates in 2018. However, this effect is small relative to the difference between the
average Federal individual income tax rate on the wealthiest that we estimate and the estimates cited here. | {"url":"https://www.whitehouse.gov/cea/written-materials/2021/09/23/what-is-the-average-federal-individual-income-tax-rate-on-the-wealthiest-americans/?utm_source=sub.sharescoops.com&utm_medium=referral&utm_campaign=market-mysteries","timestamp":"2024-11-06T20:45:43Z","content_type":"text/html","content_length":"156271","record_id":"<urn:uuid:24ee3e3f-36ef-47b3-90c2-90a0a37232bf>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.47/warc/CC-MAIN-20241106194801-20241106224801-00199.warc.gz"} |
Worldline approach to vector and antisymmetric tensor fields II
We extend the worldline description of vector and antisymmetric tensor fields coupled to gravity to the massive case. In particular, we derive a worldline path integral representation for the
one-loop effective action of a massive antisymmetric tensor field of rank p (a massive p-form) whose dynamics is dictated by a standard Proca-like lagrangian coupled to a background metric. This
effective action can be computed in a proper time expansion to obtain the corresponding Seeley-DeWitt coefficients a[0], a[1], a[2]. The worldline approach immediately shows that these coefficients
are derived from the massless ones by the simple shift D→D+1, where D is the spacetime dimension. Also, the worldline representation makes it simple to derive exact duality relations. Finally, we use
such a representation to calculate the one-loop contribution to the graviton self-energy due to both massless and massive antisymmetric tensor fields of arbitrary rank, generalizing results already
known for the massless spin 1 field (the photon).
All Science Journal Classification (ASJC) codes
• Nuclear and High Energy Physics
• Duality in Gauge Field Theories
• Gauge Symmetry
• Sigma Models
| {"url":"https://collaborate.princeton.edu/en/publications/worldline-approach-to-vector-and-antisymmetric-tensor-fields-ii","timestamp":"2024-11-05T04:32:17Z","content_type":"text/html","content_length":"50289","record_id":"<urn:uuid:16aaec2a-31f4-4ab9-a425-d4133fde5976>","cc-path":"CC-MAIN-2024-46/segments/1730477027870.7/warc/CC-MAIN-20241105021014-20241105051014-00293.warc.gz"}
room H503
The Landau paradigm of phase transitions states that any continuous (second order) phase transition is a symmetry breaking transition. Originally this was formulated for symmetries that form groups,
e.g. the critical Ising model is the transition between the $\mathbb{Z}_2$ symmetric and spontaneously broken phases. In recent years a new class of symmetries, called categorical or non-invertible,
have emerged in quantum systems -- with impact ranging from high energy and condensed matter physics to mathematics, and quantum computing. I will explain how these symmetries generalize the Landau
paradigm and how new phases and phase transitions are predicted, which have potential future experimental implementations in cold atom systems. | {"url":"https://triangle.mth.kcl.ac.uk/?week=-1","timestamp":"2024-11-05T13:27:32Z","content_type":"text/html","content_length":"16248","record_id":"<urn:uuid:52e3812d-414a-4baa-9d60-dcb4ae713779>","cc-path":"CC-MAIN-2024-46/segments/1730477027881.88/warc/CC-MAIN-20241105114407-20241105144407-00172.warc.gz"} |
What our customers say...
Thousands of users are using our software to conquer their algebra homework. Here are some of their experiences:
I use to scratch my head solving tricky arithmetic problems. I can recall the horrible time I had looking at the equations and feeling as if I will never be able to solve them but once I started with
Algebrator things are totally different
Bronson Thompson, CA
I have never been so confident with algebra before this. I will surely recommend Algebrator to all my friends.
N.J., Colorado
Search phrases used on 2009-10-18:
Students struggling with all kinds of algebra problems find out that our software is a life-saver. Here are the search phrases that today's searchers used to find our site. Can you find yours among them?
• math tests and quizzes on statistics for grade 9
• algrebra online
• what are the parentheses in integers mean
• examples of solving square roots of fractions
• free algebra help
• trigonometry in daily life
• solve equations online free
• College Algebra software
• examples of math trivia in algebra
• major field test scoring calculator
• how to do fractions on a ti 83 calculator
• free substitution review worksheets
• addition worksheets Adding groups
• Conditional,Identity,Contradiction
• Fraction Formula Chart
• basic graphing x y axis algebra
• prentice hall tenth grade online book
• multiplying and dividing integers worksheet
• adding/subtracting integers work sheet
• division online caculator
• simplify exponents without calculator
• multiplying and dividing positive and negative numbers worksheets
• multiplying intergers worksheet
• free algebra like terms worksheets
• rational calculator
• Online Calculator that does fractions
• aptitude tests samples
• ti-89 log
• what if addition or subtraction of two linear equations are equal
• algerba help
• HOW CAN YOU CHEAT ON ALGEBRA I NEED ANSWERS
• TI-83 rom image
• how to do equations on casio calc
• worded problems in algebra using the five step solutions
• Square Foot to Decimal conversion
• standard form of a linear equation ppt
• ti-83 programs code formula source code
• free ti-83 plus download
• The distributive property with fractions
• How to use T--83 calculator
• free printable worksheets in algebra
• 11 years math
• printable worksheet on adding,subtracting,dividing,and multiplying integers
• math-to the power couculator
• freehighschoolalgebra
• partial fraction online calculator
• middle school math pizzazz book c answers online
• linear equations how do u solve them with age, money and solutions
• linear equation graph examples multiple constraints
• best college algebra graphing help software
• activities for teaching integers and number lines to 6th graders
• free online t1 83 calculator
• Finding the slope of a graph with decimals
• Factoring Cubed Eqautions
• algebraic expressions worksheets
• Test you basic algebra 2 iq
• irvine high 9th grade algebra
• a calculator for turning fractions in present
• implicit differentiation solver
• simple plain english basic 6th grade elementary algebra rules
• aptitude test sample paper for private banking sector
• "quadratic number patterns"
• formulate standard linear program with absolute values
• Algebra 2 Textbook free Answers
• beginning and intermediate algebra textbook online
• Math Help Estimating Mixed Fractions
• 3rd quadratic equation solver
• MATEMATICS ALGEBRA EXAM PAPERS
• free trignometry questions
• TI-84 Plus Graphing Calculator online emulator
• orleans hannah sample test
• answer sheet to glencoe biology workbook
• subtracting integers worksheet
• adding+binomials games
• polynomials+cubed
• rules for adding subtracting multiplying and dividing integers
• example of program of all divisible numbers by 10 from 1 to 100
• free algebra solver
• 9th grade honors algebra questions challenge
• softmath.com
• online worksheet on adding,subtracting,dividing,and multiplying integers | {"url":"https://www.softmath.com/algebra-help/free-beginners-algebra.html","timestamp":"2024-11-09T12:37:01Z","content_type":"text/html","content_length":"34943","record_id":"<urn:uuid:bb1da343-a3ba-4f17-b622-82cb4677338d>","cc-path":"CC-MAIN-2024-46/segments/1730477028118.93/warc/CC-MAIN-20241109120425-20241109150425-00516.warc.gz"} |
A Guide to Calculating ROI From DevOps
Are you looking to improve and update the way you calculate ROI? Do you want to learn how to calculate ROI from DevOps? (DevOps is a set of practices that combines software development (Dev) and IT operations (Ops); it aims to shorten the systems development life cycle and provide continuous delivery with high software quality, and it is complementary with Agile software development.) Well, you found the right page to help you out with calculating ROI from DevOps.
In the guide on ROI below, we’ll discuss the steps to measure ROI from DevOps. Keep reading to learn how and why you need to calculate ROI from DevOps. We’ve also included some tips on how to use an
ROI percentage calculator.
Why Should You Calculate ROI From DevOps?
Almost all businesses today run off software and digital tools. DevOps, or the combination of practices and tools, is one more thing that improves a business. Without a DevOps team or expert, you
risk inefficiency and mistakes.
Because of the opportunities for improvement DevOps offers, most companies invest in it. As of 2018, only 3% of companies lacked DevOps or had no plans to invest in it. A company can still survive
without DevOps, but it’ll take twice the company’s resources to operate well.
Now, let’s say your company already transitioned to DevOps. The first thing you want to know is how successful the new transition is. Measuring and tracking ROI from DevOps is the best way to learn
if it’s working well for you.
Knowing your DevOps ROI will also let you know if you’re using DevOps right. It can help you shift your focus on things you may need to give more attention to. In many ways, it acts as a guide for
your business and a counter-checker of your strategies.
Calculating ROI From DevOps: An Overview
Before we start, let’s talk about how you calculate your ROI or Return on Investment. Calculating for the basic ROI is simple. You only need to subtract the cost of investment from the current value
of an investment.
If you want to get the percentage of your ROI, you multiply the difference by 100. The basic formula for ROI is as follows:
(Current Value of Investment – Cost of Investment) x 100 = ROI percentage
Calculating ROI from DevOps is more complex. You need to take extra steps. The formula includes:
• Calculating software development costs
• Calculating process initiation costs
• Calculating time savings
You’ll need the values of the above-mentioned to complete the formula.
Now, let’s look at how you calculate the values we mentioned above. Read on to learn how to calculate software development, process initiation costs, time savings, and profits.
How to Calculate ROI: Computing Values
Let’s begin with the first step, which is to compute the software development costs. Knowing your software development costs will help you learn how much you’re saving. To compute this, you need to
find the cost of software development per hour.
You can use software development cost calculators. You can also use a simpler method, especially if the DevOps team gave you their hourly rates. The cost is equal to the total development time
multiplied by the hourly rate.
Some DevOps services may charge more for their quality control. You also have to take into account other expenses, especially if you have an in-house DevOps team. In-house DevOps teams cost more because you must provide the office space and DevOps tools.
Calculating Process Initiation Costs
The next step is to calculate the costs for the entire process. This includes items like data security, the CI/CD pipeline, and others. Here, you must know that you also need to improve your automation. Add these costs to the original cost of the tools. In the long run, they'll help you in ways you didn't expect.
Computing Time Savings
As we mentioned, DevOps can help you reduce the time it takes to do certain business processes. If DevOps tools got implemented well across the business, you’d have more in time savings.
Remember, your time savings computation must be precise. This way, you’ll know the actual financial benefits of the application of DevOps to the business.
Once you have these values, you can now calculate your DevOps profits.
Calculating Profits
The final step is to learn the financial benefits of a successful DevOps application. You compute this by comparing the time savings and the cost of introducing the process. This number will show you
how much you can save when you implement DevOps.
To compute the DevOps ROI percentage, first convert the time savings into dollars by multiplying the total hours saved by the cost per hour. From that product, subtract the cost of the process, then divide the result by the cost of the process.
In other words, the formula is:
[(Total Hours Saved x Cost per Hour – Cost of Process) ÷ Cost of Process] x 100 = DevOps ROI (%)
Multiplying by 100 expresses the DevOps ROI as a percentage.
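To make these steps concrete, here is a minimal Python sketch. The function name and all input values are hypothetical placeholders of ours, not figures from any real DevOps rollout.

def devops_roi_percent(hours_saved, cost_per_hour, process_cost):
    # Convert the measured time savings into dollars.
    total_savings = hours_saved * cost_per_hour
    # Net gain is what the savings return beyond the cost of introducing the process.
    net_gain = total_savings - process_cost
    # Express the gain relative to the investment, as a percentage.
    return net_gain / process_cost * 100

# Hypothetical example: 1,200 hours saved at $80/hour against a $60,000 rollout cost.
print(devops_roi_percent(1_200, 80, 60_000))  # -> 60.0, i.e. a 60% return

A negative result would mean the process has not yet paid for itself.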
Is There a Faster Way to Calculate ROI?
What if you don’t have time to sit down and calculate all these values by yourself? Like how you’ve transitioned to using modern technology or DevOps to automate, you can do the same with this.
All you need to do is to find a marketing ROI calculator, or look for an ROI percentage calculator if you want to get the DevOps ROI percentage. Don't let your DevOps ROI calculations get left behind.
This is great if you don't have extra time in the day to do the manual computations. Using this new tech is also more accurate because you remove the risk of human error. 57% of consumers agree that businesses that use modern technology are more competitive in the market.
Learn to Calculate Your DevOps ROI Today
Going digital and modern makes processes easier and paperless. DevOps offers a lot of great benefits for your business. Your business operations are faster and automated.
As we mentioned, however, not all DevOps operations go right the first time you apply them. You need to measure, countercheck, and analyze your numbers, too. Finding out your ROI can help you learn
if you need to change your DevOps approach or plans.
That ends our guide on calculating ROI from the DevOps operations in your business. If you have questions about DevOps and related matters, let us know. Visit our contact page and shoot us your
question through there.
| {"url":"https://bizdevops.com/post/a-guide-to-calculating-roi-from-devops/","timestamp":"2024-11-10T08:27:12Z","content_type":"text/html","content_length":"114066","record_id":"<urn:uuid:ea340226-b83b-48d4-b560-2852e10b1a04>","cc-path":"CC-MAIN-2024-46/segments/1730477028179.55/warc/CC-MAIN-20241110072033-20241110102033-00724.warc.gz"}
D'Auria-Fré-Regge formulation of supergravity
The D’Auria-Fré-Regge formalism (due to Ne’eman & Regge 1978; D’Auria, Fré & Regge 1979, 80a, 80b, extensively developed by Castellani, D’Auria & Fré 1991) is a natural formulation of supergravity on
“superspace” (with some hindsight: in higher super Cartan geometry) in general dimensions, including besides D=4 supergravity also type II supergravity and heterotic supergravity in dimension 10 as
well as notably 11-dimensional supergravity.
This proceeds in generalization of how Einstein gravity in first order formulation of gravity is equivalently the Cartan geometry for the inclusion of the Lorentz group inside the Poincare group: a
field configuration of the field of gravity is equivalently a Cartan connection for this subgroup inclusion.
Accordingly, low dimensional supergravity without extended supersymmetry is equivalently the super-Cartan geometry of the inclusion of the spin group into the super Poincaré group.
What D’Auria-Fré implicitly observe (not in this homotopy theoretic language though, that was developed in Sati, Schreiber & Stasheff 09; Fiorenza, Schreiber & Stasheff 10, 13) is that for higher
supergravity with extended supersymmetry such as 11-dimensional supergravity with its M-theory super Lie algebra symmetry, the description of the fields is in the higher differential geometry version
of Cartan geometry, namely higher Cartan geometry, where the super Poincare Lie algebra is replaced by one of its exceptional super Lie n-algebra extensions (those that also control the brane scan),
such as notably the supergravity Lie 3-algebra and the supergravity Lie 6-algebra. This is the refinement of super-Cartan geometry to higher Cartan geometry.
This higher super Cartan geometry-description of supergravity is what D’Auria-Fré called the geometric approach to supergravity or geometric supergravity (e.g. D’Auria 20).
| geometric context | gauge group | stabilizer subgroup | local model space | local geometry | global geometry | differential cohomology | first order formulation of gravity |
| --- | --- | --- | --- | --- | --- | --- | --- |
| differential geometry | Lie group/algebraic group $G$ | subgroup (monomorphism) $H \hookrightarrow G$ | quotient (“coset space”) $G/H$ | Klein geometry | Cartan geometry | Cartan connection | |
| examples | Euclidean group $Iso(d)$ | rotation group $O(d)$ | Cartesian space $\mathbb{R}^d$ | Euclidean geometry | Riemannian geometry | affine connection | Euclidean gravity |
| | Poincaré group $Iso(d-1,1)$ | Lorentz group $O(d-1,1)$ | Minkowski spacetime $\mathbb{R}^{d-1,1}$ | Lorentzian geometry | pseudo-Riemannian geometry | spin connection | Einstein gravity |
| | anti de Sitter group $O(d-1,2)$ | $O(d-1,1)$ | anti de Sitter spacetime $AdS^d$ | | | | AdS gravity |
| | de Sitter group $O(d,1)$ | $O(d-1,1)$ | de Sitter spacetime $dS^d$ | | | | de Sitter gravity |
| | linear algebraic group | parabolic subgroup/Borel subgroup | flag variety | parabolic geometry | | | |
| | conformal group $O(d,t+1)$ | conformal parabolic subgroup | Möbius space $S^{d,t}$ | conformal geometry | conformal geometry | conformal connection | conformal gravity |
| supergeometry | super Lie group $G$ | subgroup (monomorphism) $H \hookrightarrow G$ | quotient (“coset space”) $G/H$ | super Klein geometry | super Cartan geometry | Cartan superconnection | |
| examples | super Poincaré group | spin group | super Minkowski spacetime $\mathbb{R}^{d-1,1\vert N}$ | Lorentzian supergeometry | supergeometry | superconnection | supergravity |
| | super anti de Sitter group | | super anti de Sitter spacetime | | | | |
| higher differential geometry | smooth 2-group $G$ | 2-monomorphism $H \to G$ | homotopy quotient $G//H$ | Klein 2-geometry | Cartan 2-geometry | | |
| | cohesive ∞-group $G$ | ∞-monomorphism (i.e. any homomorphism) $H \to G$ | homotopy quotient $G//H$ of ∞-action | higher Klein geometry | higher Cartan geometry | higher Cartan connection | |
| examples | | | extended super Minkowski spacetime | extended supergeometry | | | higher supergravity: type II, heterotic, 11d |
For more background on principal ∞-connections see also at ∞-Chern-Weil theory introduction.
Around 1981 D’Auria and Fré noticed, in GeSuGra, that the intricacies of various supergravity classical field theories have a strikingly powerful reformulation in terms of super semifree differential
graded-commutative algebras.
They defined various such super dg-algebras $W(\mathfrak{g})$ and showed (paraphrasing somewhat) that
• the field content, field strengths, covariant derivatives and Bianchi identities are all neatly encoded in terms of dg-algebra homomorphism $\Omega^\bullet(X) \leftarrow W(\mathfrak{g}) : \phi$;
• the action functionals of supergravity theories on such $\phi$ may be constructed as images under $\phi$ of certain elements in $W(\mathfrak{g})$ subject to natural conditions.
Their algorithm was considerably more powerful than earlier, more pedestrian methods for constructing such action functionals. The textbook by Castellani, D’Auria & Fré on supergravity and string theory from
the perspective of this formalism gives a comprehensive description of this approach.
We observe here that the D’Auria-Fre-formalism is ∞-Chern-Simons theory for ∞-Lie algebra-valued forms with values in super ∞-Lie algebras such as the supergravity Lie 3-algebra and the supergravity
Lie 6-algebra.
The pivotal concept that allows to pass between this interpretation and the original formulation is the concept of ∞-Lie algebroid with its various incarnations:
Notably the semifree dga upon which D’Auria-Fré base their description is the Chevalley-Eilenberg algebra of the supergravity Lie 3-algebra, which is an ∞-Lie algebra that is a higher central extension
$0 \to b^2 \mathfrak{u}(1) \to \mathfrak{sugra}(10,1) \to \mathfrak{siso}(10,1) \to 0$
of a super Poincaré Lie algebra $\mathfrak{siso}(10,1)$ in the way the String Lie 2-algebra $\mathfrak{string}(n)$ is a higher central extension of the special orthogonal Lie algebra $\mathfrak{so}(n)$.
A super connection on an ∞-bundle with values in $\mathfrak{sugra}(10,1)$ on a supermanifold $X$ is locally given by ∞-Lie algebroid valued differential forms consisting of
• a $\mathbb{R}^{11}$-valued 1-form $e$ – the vielbein
• a $\mathfrak{so}(10,1)$-valued 1-form $\omega$ – the spin connection
• a spin-representation valued 1-form $\psi$ – the spinor
• a 3-form $C$.
These are identified with the fields of 11-dimensional supergravity, respectively: the graviton (encoded jointly in the vielbein $e$ and the spin connection $\omega$), the gravitino $\psi$, and the supergravity C-field $C$.
By realizing this data as components of a Lie 3-algebra valued connection (more or less explicitly), the D’Auria-Fré formalism achieves some conceptual simplification of the theory.
Higher gauge theory reinterpretation
Originally D’Auria and Fré referred to commutative semifree dgas as Cartan integrable systems. Later the term free differential algebra, abbreviated FDA was used instead and became popular. Nowadays
much of the literature that studies commutative semifree dgas in supergravity refers to them as “FDA”s. One speaks of the FDA approach to supergravity.
But strictly speaking “free differential algebra” is a misnomer: genuinely free differential algebras are pretty boring objects. Crucially it is only the underlying graded commutative algebra which
is required to be free as a graded commutative algebra in that it is a Grassmann algebra $\wedge^\bullet \mathfrak{g}^*$ on a graded vector space $\mathfrak{g}^*$. The differential on that is in
general not free, hence the more precise term semifree dga.
In fact, when $\mathfrak{g}$ is concentrated in non-positive degree (so that $\wedge^\bullet \mathfrak{g}^*$ is concentrated in non-negative degree) the differential on $\wedge^\bullet \mathfrak{g}^{*}$ encodes all the structure of an ∞-Lie algebroid on $\mathfrak{g}$. If $\mathfrak{g}$ is concentrated in negative degree the differential encodes the structure of an ∞-Lie algebra on $\mathfrak{g}$. This interpretation of semifree dgas in Lie theory is the key to our general abstract reformulation of the D’Auria-Fré formalism.
Already D’Auria and Fré themselves, and afterwards other authors, have tried to better understand the intrinsic conceptual meaning of their dg-algebra formalism that happened to be so useful in supergravity.
The idea arose and then became popular in the “FDA”-literature that the D’Auria-Fré-formalism should be about a concept called soft group manifolds. This is motivated by the observation that by means
of the dg-algebra formulation the fields in supergravity arrange themselves into systems of differential forms that satisfy equations structurally similar to the Maurer-Cartan forms of left-invariant
differential forms on a Lie group – except that where the ordinary Maurer-Cartan form has vanishing curvature (= field strength) these equations for supergravity fields have a possibly non-vanishing
field strength. It is proposed in the “FDA”-literature that these generalized Maurer-Cartan equations describe generalized or “softened” group manifolds.
However, even when the field strengths do vanish, the remaining collection of differential forms does not constrain the base manifold to be a group. Rather, if the field strengths vanish we have a
natural interpretation of the remaining differential form data as being flat ∞-Lie algebroid valued differential forms, given by a morphism
$A : T X \to \mathfrak{g}$
from the tangent Lie algebroid of the base manifold $X$ to the ∞-Lie algebra $\mathfrak{g}$ encoded by the semifree dga in question. In fact, applying the functor from ∞-Lie algebroids to dg-algebras
given by forming Chevalley-Eilenberg algebras, the above morphism turns into a dg-algebra morphism
$\Omega^\bullet(X) \leftarrow CE(\mathfrak{g}) : A$
to the deRham dg-algebra of $X$ (which we denote by the same letter, $A$, in a convenient abuse of notation).
Since $CE(\mathfrak{g})$ is semifree, this is a map of graded vector spaces
$\Omega^\bullet(X) \leftarrow \mathfrak{g}^* : A$
together with a constraint that the morphism respects the differentials on $CE(\mathfrak{g})$ and on $\Omega^\bullet(X)$. Such a morphism of graded vector spaces is canonically identified with a $\mathfrak{g}$-valued differential form (recall that $\mathfrak{g}$ is a graded vector space)
$\omega \in \Omega^\bullet(X,\mathfrak{g})$
and the aforementioned constraint is precisely the Maurer-Cartan-like equation that is known from left-invariant 1-forms on a Lie group. In fact, for $G$ a Lie group with Lie algebra $\mathfrak{g}$
there is a canonical morphism
$\Omega^\bullet(G) \leftarrow CE(\mathfrak{g})$
whose image is precisely the left-invariant 1-forms on the Lie group $G$ and whose respect for the differentials is precisely the ordinary Maurer-Cartan equation.
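As a simple illustration (a standard fact, recalled here for orientation): for $G$ a matrix Lie group with Maurer-Cartan form $\theta = g^{-1} \mathrm{d} g$, the statement that this morphism respects the differentials is exactly the Maurer-Cartan equation

$\mathrm{d}\theta + \tfrac{1}{2}[\theta \wedge \theta] = 0 \,,$

which follows by direct computation from $\mathrm{d}\big(g^{-1}\, \mathrm{d}g\big) = - g^{-1}\,\mathrm{d}g \wedge g^{-1}\, \mathrm{d}g$.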
To see the role of group manifolds for more general morphisms
$\Omega^\bullet(X) \leftarrow CE(\mathfrak{g}) : A$
one has to apply Lie integration of the ∞-Lie algebroid morphism $T X \to \mathfrak{g}$ to a morphism of ∞-Lie groupoids
$\Pi(X) \to \mathbf{B}G$
where $\Pi(X)$ is the path ∞-groupoid and where $\mathbf{B}G$ is the delooping of the Lie ∞-group $G$ that integrates the Lie n-algebra $\mathfrak{g}$. Such morphisms are the integrated version of flat
∞-Lie algebroid valued differential forms.
The ∞-Chern-Weil theory of connections on ∞-bundles is about
1. the generalization of such flat form data to ∞-Lie algebroid valued differential forms with curvature.
2. the generalization from globally defined differential form data – which are connections on trivial principal ∞-bundles – to connections on arbitrary principal ∞-bundles.
The D’Auria-Fré-formalism – after this re-interpretation – is about the first of these points. So as an immediate gain of our reformulation of D’Auria-Fré-formalism in terms of connections on
∞-bundles we obtain, using the second of these points, a natural proposal for a formulation of supergravity field configurations that are possibly globally topologically nontrivial. Physicists speak
of instanton solutions.
In fact, the ∞-Lie theory-reformulation exhibits the D’Auria-Fré-formalism as being secretly the realization of supergravity as a higher gauge theory.
It realizes supergravity as an example for a nonabelian higher gauge theory in that a supergravity field configuration is not realizable as a cocycle in ordinary differential cohomology as in
ordinary abelian higher gauge theory (see there) but as a nonabelian connection on an ∞-bundle.
The supergravity Lie $n$-algebras
We have a sequence of ∞-Lie algebra extensions
supergravity Lie 6-algebra $\to$ supergravity Lie 3-algebra $\to$ super Poincaré Lie algebra
$\mathfrak{sugra}_6 \to \mathfrak{sugra}_3 \to \mathfrak{siso}(10,1) \,.$
Super Lorentzian spacetime manifolds
The base space $X$ on which a supergravity field is a super Lie $n$-algebra valued connection on an ∞-bundle is a supermanifold.
In particular, for constructing the action functional of supergravity we want $X$ to locally look like super Minkowski space.
Field configuration and field strength
A local field configuration on a supermanifold $X$ in the classical field theory is a morphism
$T X \stackrel{(A, F_A)}{\to} inn(\mathfrak{sugra}(10,1))$
from the tangent Lie algebroid to the inner-derivation Lie 4-algebra $inn(\mathfrak{sugra}(10,1))$ (defined as the formal dual of the Weil algebra of $\mathfrak{sugra}$). So dually this is a morphism
of dg-algebras from the Weil algebra $W(\mathfrak{sugra}(10,1))$ to the deRham dg-algebra $\Omega^\bullet(X)$ of $X$:
$\Omega^\bullet(X) \leftarrow W(\mathfrak{sugra}(10,1)) : (A,F_A) \,.$
This is ∞-Lie algebroid valued differential form data with ∞-Lie algebroid valued curvature that is explicitly given by:
• connection forms / field configuration
□ $E \in \Omega^1(X, \mathbb{R}^{10,1})$ - the vielbein
□ $\Omega \in \Omega^1(X, \mathfrak{so}(10,1))$ - the spin connection
□ $\Psi \in \Omega^1(X, S)$ - the spinor
□ $C \in \Omega^3(X)$ - the 3-form
• curvature forms / field strengths
□ $T = d E + \Omega \cdot E + \Gamma(\bar \Psi \wedge \Psi) \in \Omega^2(X,\mathbb{R}^{10,1})$ - the torsion
□ $R = d \Omega + [\Omega \wedge \Omega] \in \Omega^2(X, \mathfrak{so}(10,1))$ - the Riemann curvature
□ $\rho = d \Psi + (\Omega \wedge \Psi) \in \Omega^2(X, S)$ – the covariant derivative of the spinor
□ $G = d C + \mu_4(\psi, E) \in \Omega^4(X)$ – the 4-form field strength
Gauge transformations
A gauge transformation of a field configuration
$\phi : T X \to inn(\mathfrak{g})$
is a diagram
$\array{ \Omega^\bullet(X \times \Delta^{1})_{vert} &\stackrel{A_{vert}}{\leftarrow}& CE(\mathfrak{a}) &&& gauge\;transformation \\ \uparrow && \uparrow \\ \Omega^\bullet(X \times \Delta^{1}) &\stackrel{A}{\leftarrow}& W(\mathfrak{a}) &&& field \\ \uparrow && \uparrow \\ \Omega^\bullet(X) &\stackrel{\langle F_A\rangle}{\leftarrow}& inv(\mathfrak{g}) &&& gauge\;invariant\;observable }$
Given a 1-morphism in $\exp(\mathfrak{g})(X)$, represented by $\mathfrak{g}$-valued forms
$\Omega^\bullet(U \times \Delta^1) \leftarrow W(\mathfrak{g}) : A$
consider the unique decomposition
$A = A_U + ( A_{vert} \coloneqq \lambda \wedge d t) \,,$
with $A_U$ the horizontal differential form component and $t : \Delta^1 = [0,1] \to \mathbb{R}$ the canonical coordinate.
We call $\lambda$ the gauge parameter . This is a function on $\Delta^1$ with values in 0-forms on $U$ for $\mathfrak{g}$ an ordinary Lie algebra, plus 1-forms on $U$ for $\mathfrak{g}$ a Lie
2-algebra, plus 2-forms for a Lie 3-algebra, and so forth.
We describe now how this encodes a gauge transformation
$A_0(s=1) \stackrel{\lambda}{\to} A_U(s = 1) \,.$
The condition that all curvature characteristic forms descend to $U$ in that $A$ completes to a diagram
$\array{ \Omega^\bullet(U \times \Delta^k) &\stackrel{A}{\leftarrow}& W(\mathfrak{a}) \\ \uparrow && \uparrow \\ \Omega^\bullet(U) &\stackrel{\langle F_A\rangle}{\leftarrow}& inv(\mathfrak{g}) }$
is solved by requiring all components
$\Omega^\bullet(U \times \Delta^1) \stackrel{A}{\leftarrow} W(\mathfrak{g}) \stackrel{r^a}{\leftarrow} \wedge^1 \mathfrak{g}^* \;:\; F_A^a$
of the curvature forms to vanish when evaluated on the vector field $\partial_s$.
By the nature of the Weil algebra we have
$\frac{d}{d s} A_U = d_U \lambda + [\lambda \wedge A] + [\lambda \wedge A \wedge A] + \cdots + (F_A)(\partial_s, \cdots) \,,$
so that this condition is a system of ordinary differential equations of the form
$\frac{d}{d s} A_U = d_U \lambda + [\lambda \wedge A] + [\lambda \wedge A \wedge A] + \cdots \,,$
where the sum is over all higher brackets of the ∞-Lie algebra $\mathfrak{g}$.
Define the covariant derivative of the gauge parameter to be
$\nabla \lambda \coloneqq d \lambda + [A \wedge \lambda] + [A \wedge A \wedge \lambda] + \cdots \,.$
In this notation we have
• the general identity
(1)$\frac{d}{d s} A_U = \nabla \lambda + (F_A)_s$
• the horizontality constraint or second Ehresmann condition
(2)$\frac{d}{d s} A_U = \nabla \lambda \,.$
This is known as the equation for infinitesimal gauge transformations of an $\infty$-Lie algebra valued form.
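For orientation: when $\mathfrak{g}$ is an ordinary Lie algebra, all higher brackets vanish and this is the familiar infinitesimal gauge transformation of Yang-Mills theory,

$\delta_\lambda A \;=\; \mathrm{d} \lambda + [A \wedge \lambda] \;=\; \nabla \lambda \,,$

whose finite version (up to sign conventions) is $A \mapsto g^{-1} A g + g^{-1} \mathrm{d} g$ with $g = \exp(\lambda)$.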
By Lie integration we have that $A_{vert}$ – and hence $\lambda$ – defines an element $\exp(\lambda)$ in the ∞-Lie group that integrates $\mathfrak{g}$.
The unique solution $A_U(s = 1)$ of the above differential equation at $s = 1$ for the initial values $A_U(s = 0)$ we may think of as the result of acting on $A_U(0)$ with the gauge transformation $\exp(\lambda)$.

Rheonomy
In the formulation here the fields of supergravity are modeled by super differential forms on a supermanifold $\tilde X$, and this very fact serves to make local supersymmetry manifest, i.e., serves
to model geometry by higher supergeometric higher Cartan geometry.
But the actual fields of supergravity are supposed to be fields on actual spacetime $X$ (an ordinary smooth manifold) $X \hookrightarrow \tilde X$. Hence one is to impose a constraint that ensures
that the super differential forms used on $\tilde X$ are uniquely determined by their restriction to ordinary differential forms on $X$. This constraint is called rheonomy (Castellani-D’Auria-Fré 91,
vol 2, section III.3.3), alluding to the idea that the constraints allow the field data to “flow” from spacetime $X$ to the super spacetime $\tilde X$.
The idea here is analogous (Castellani-D’Auria-Fré 91, vol 2, p. 660, Fré-Grassi 08, p. 4) to how the Cauchy-Riemann equations impose the constraint for a function on the complex plane $\mathbb{C}$
to be a holomorphic function and hence to be already fixed by its values on the real line $\mathbb{R} \hookrightarrow \mathbb{C}$.
In (Castellani-D’Auria-Fré, vol 2, section III.3.3) this idea is formalized by the constraint that for the given super-$L_\infty$-algebra connection as above, those components of the curvature forms
which carry fermionic indices must be linear combinations of the components carrying no fermionic indices. (See also at L-∞ algebra valued differential forms – integration of transformation.)
This rheonomy constraint is equivalent to what elsewhere is called “superspace constraints”, see (AFFFTT 98, below (3.12)).
See also at rheonomy modality.
under construction
Let $\mathbf{H} =$SuperFormalSmooth∞Groupoids.
(super-L-∞ algebra valued super differential forms)
Let $\mathfrak{g}$ be a super L-∞ algebra and let $X$ be a super ∞-groupoid (for instance a supermanifold or an extended super Minkowski spacetime). Write
(3)$\Omega(X,\mathfrak{g}) \coloneqq Hom_{dgcSuperAlg}\big( W(\mathfrak{g}), \Omega^\bullet(X) \big) \;\in\; Set$
for the set of super-L-∞ algebra valued super differential forms on $X$, hence of homomorphisms of differential graded-commutative superalgebras from the Weil algebra of $\mathfrak{g}$
$W(\mathfrak{g}) \;\coloneqq\; \Big( \wedge^\bullet\big( \mathfrak{g}^\ast \oplus \underset{ \mathbf{d}\mathfrak{g}^\ast }{\underbrace{\mathfrak{g}^\ast[1]}} \big) ,\, \mathbf{d}_{W(\mathfrak{g})} = \mathbf{d}_{CE(\mathfrak{g})} + \mathbf{d} \Big)$
to the de Rham algebra of super differential forms on $X$, which is given (see at geometry of physics – supergeometry this example) by
$\Omega^\bullet(X) \;\coloneqq\; \flat \underline{\mathbf{H}} \Big( \underline{\mathbf{H}}\big( \mathbb{R}^{0\vert 1}, X \big), \mathbb{R} \Big)$
equipped with the differential graded-commutative superalgebra-structure induced by the action of $\mathbf{Aut}(\mathbb{R}^{0\vert 1})$ (see at odd line there)
The restriction, as a linear map, of such a homomorphism
$\Omega^\bullet(X) \overset{ \omega }{\longleftarrow} W(\mathfrak{g})$
along the canonical inclusion of $\wedge^1 \mathfrak{g}^\ast[1] = \mathfrak{g}^\ast[2]$ into the Weil algebra yields the curvature forms $F_\omega$ of $\omega$.
(4)$\array{ \Omega(X) && \overset{F_\omega}{\longleftarrow} && \mathfrak{g}^\ast[2] \\ & {}_{\mathllap{\omega}}\nwarrow && \swarrow \\ && W(\mathfrak{g}) }$
(restriction of super-L-∞ algebra valued super differential forms to bosonic subspace)
Given $X \in$SuperFormalSmooth∞Groupoids, write
(5)$X^{\rightsquigarrow} \overset{ \epsilon_X^{\rightsquigarrow} }{\longrightarrow} X$
for the inclusion of the underlying bosonic space (the counit morphism of the bosonic modality applied to $X$).
The pullback of the super differential forms in Def. along (5), is a function of the form
(6)$\array{ \Omega(X, \mathfrak{g}) &\overset{ \left( \epsilon^{\rightsquigarrow}_{X} \right)^\ast }{\longrightarrow}& \Omega(X^{\rightsquigarrow}, \mathfrak{g}) }$
If $X$ is a supermanifold and $U \subset X$ is a coordinate chart with coordinates $(x^a, \theta^\alpha)$ then restricted to this coordinate chart the pullback map (6) is given by evaluating
super-differential forms at $\theta^\alpha = 0$ and $\mathbf{d}\theta^\alpha = 0$
$\left( \epsilon_X^{\rightsquigarrow} \right)^\ast \omega_{\vert U} \;=\; \left. \omega_{\vert U}\right|_{ {\theta^\alpha = 0} \atop {\mathbf{d}\theta^\alpha = 0} }$
In this form this operation appears in Castellani-D’Auria-Fré 91, vol 2 (III.3.25).
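As an elementary example (ours, purely for illustration): on $\mathbb{R}^{1\vert 1}$ with coordinates $(x, \theta)$, a general 1-form $\omega = f(x)\, \mathbf{d}x + g(x)\, \theta\, \mathbf{d}x + h(x)\, \mathbf{d}\theta$ pulls back to

$\left( \epsilon_X^{\rightsquigarrow} \right)^\ast \omega \;=\; f(x)\, \mathbf{d}x \,,$

so the components $g$ and $h$ are precisely the data forgotten by restriction to the bosonic subspace.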
(rheonomic set of super differential forms)
We may say that a subset
$\widetilde \Omega(X, \mathfrak{g}) \subset \Omega(X, \mathfrak{g})$
of super-Lie algebra valued super differential forms (Def. ) is rheonomic if on this subset the restriction to the bosonic subspace from Def. (hence the pullback of differential forms along $\
epsilon_X^{\rightsquigarrow}$) is injective
$\widetilde \Omega(X,\mathfrak{g}) \overset{ \left( \epsilon_X^{\rightsquigarrow} \right)^\ast }{\hookrightarrow} \Omega\big( X^{\rightsquigarrow}, \mathfrak{g}\big)$
hence if every super differential form
$\mu \;\in\; \widetilde \Omega(X,\mathfrak{g})$
is, as an element of this subset, uniquely determined by its restriction to the bosonic submanifold $X^{\rightsquigarrow}$.
More specifically, let now $V$ be an extended super Minkowski spacetime, with $\mathfrak{g} = \mathrm{iso}(V)$ its super $L_\infty$-extension of the corresponding super Poincaré Lie algebra, let $X$
be a V-manifold, and consider the subset
(7)$\widetilde\Omega(X,\mathfrak{g}) \;\coloneqq\; \Omega_{Cartan}(X,\mathfrak{g}) \subset \Omega(X,\mathfrak{g})$
of globally defined Cartan connection-forms, meaning that their super vielbein component is constrained to be non-degenerate, establishing at each global point a linear isomorphism between its super
tangent space and $V$.
A sufficient condition for the subset (7) to be rheonomic (Def. ) is that the components of the curvature-forms with any odd-graded indices are linear combinations of the components of the curvature
forms without odd-graded indices.
(Castellani, D’Auria & Fré 1991, vol 2, (III.3.30))

Let
$\Omega^\bullet(X) \overset{\mu}{\longleftarrow} W(\mathfrak{g})$
be a given form. Choosing any basis $\{P_a, Q_\alpha\}$ of $\mathfrak{g}$, $\mu$ has components
$\begin{aligned} \mu & = \mu\big( (x^a), (\theta^\alpha) \big) \\ & = \mu_a\big( (x^a), (\theta^\alpha) \big) d x^a + \mu_\alpha\big( (x^a), (\theta^\alpha) \big) d \theta^\alpha \end{aligned} \,.$
We have to show, under the assumption that there exist linear maps
$\Big( \phi_{\alpha a}^{b c} \;\colon\; \mathfrak{g} \to \mathfrak{g} \Big)_{a,b,c, \alpha}$
$\big( F_{\mu}\big)_{\alpha a} \;=\; \phi_{\alpha a}^{b c} \left( \big(F_\mu\big)_{b c} \right) \,,$
that $\mu$ is uniquely determined already by the component $\mu_a\big( (x^a), (\theta^\alpha = 0) \big)$. For this it is sufficent to show that all component functions
$\mu_a\big( (x^a), (\theta^\alpha) \big)$
$\mu_\alpha\big( (x^a), (\theta^\alpha) \big)$
may be expressed as functions of the $\mu_a\big( (x^a), (\theta^\alpha = 0) \big)$.
We now first prove something weaker, namely that these functions are uniquely determined once we know not just $\mu_a\big( (x^a), (\theta^\alpha = 0) \big)$ but also $\mu_\alpha\big( (x^a), (\theta^\
alpha = 0) \big)$.
It seems to me that this weaker statement is all that Castellani-D’Auria-Fré 91, vol 2, III.3.3 really provide, for notice that the last line of their (III.3.29) still depends on $\mu_\alpha\big(
(x^a), (\theta^\alpha = 0) \big)$.
By the nilpotency of the odd-graded coordinates $\theta^\alpha$, we have that $\mu$ is a multilinear map in the $\theta^\alpha$.
Hence, by induction, assume that the $k$-linear part $\mu\big( (x^a), (\theta^\alpha)_{k lin} \big)$ in the $\theta^\alpha$ of $\mu\big( (x^a), (\theta^\alpha) \big)$ is fixed by $\mu\big( (x^a), (\
theta^\alpha = 0) \big)$. It is then sufficient to show that also the $(k+1)$-linear term $\mu_a\big( (x^a), (\theta^\alpha)_{(k+1) lin} \big)$ is fixed.
This is evidently equivalent to the statement that all the derivatives of $\mu\big( (x^a), (\theta^\alpha)_{(k+1) lin} \big)$ by any $\theta^{\alpha_{k+1}}$ evaluated at $\theta^{\alpha_{k+1}} = 0$
are fixed.
The key point is that by the assumption that we have a Cartan connection, these derivatives are proportional to a sum of $\big( F_\omega\big)_{\alpha_{k+1} a}$ with a linear combination of the $\mu$.
But by assumption, $\big( F_\omega\big)_{\alpha_{k+1} a}$ (which a priori depends on data at $\mathbf{d}\theta^\alpha \neq 0$) is a linear combination of the curvatures with bosonic indices, and these
are determined from the data at $d \theta^\alpha = 0$.
This is essentially the argument in Castellani-D’Auria-Fré 91, vol 2, (III.3.29)-(III.3.31), except that I have added the inductive argument, which seems necessary to really conclude beyond first
order in the odd coordinates.
This shows that $\mu\big( (x^a), (\theta^\alpha) \big)$ satisfies well-formed differential equations in the $\theta^\alpha$.
To conclude, we hence need to see that we have sufficient boundary data on $\mu\big( (x^a), (\theta^\alpha) \big)$ fixed to have the solution to this differential equation be unique.
Now the boundary data for $\mu_a\big( (x^a), (\theta^\alpha) \big)$ is clearly $\mu_a\big( (x^a), (\theta^\alpha = 0) \big)$, and if the differential equations did not also depend on $\mu_\alpha\big(
(x^a), (\theta^\alpha) \big)$ this would be the end of the story.
We do not know the analogous boundary data $\mu_\alpha\big( (x^a), (\theta^\alpha = 0) \big)$, since all of $\mu_\alpha$ is forgotten when restricting to $\mathbf{d}\theta^\alpha = 0$.
I think this is a real gap in the general argument for rheonomy. It is not a real problem in special situations, though…
11d-SuGra from Super C-Field Flux Quantization
We discuss (Thm. below, following GSS24, §3) how the equations of motion of D=11 supergravity — on an $11\vert\mathbf{32}$-dimensional super-torsion-free super spacetime $X$ with super vielbein $(e,
\psi)$ (the graviton/gravitino-fields) — follow from just the requirement that the duality-symmetric super-C-field flux densities $(G_4^s, G_7^s) \,\in\, \Omega^4_{dR}(X) \times \Omega^7_{dR}(X)$:
1. satisfy their Bianchi identities
(8)$\begin{array}{l} \mathrm{d} \, G_4^s \;=\; 0 \\ \mathrm{d} \, G_7^s \;=\; \tfrac{1}{2} G_4^s \, G_4^s \end{array}$
2. are on any super-chart $U \hookrightarrow X$ of the locally supersymmetric form
(9)$\begin{array}{l} G_4^s \;=\; \tfrac{1}{4!} (G_4)_{a_1 \cdots a_4} e^{a_1} \cdots e^{a_4} \,-\, \tfrac{1}{2} \big(\overline{\psi}\Gamma_{a_1 a_2} \psi\big) e^{a_1} \, e^{a_2} \\ G_7^s \;=\; \
tfrac{1}{7!} (G_7)_{a_1 \cdots a_7} e^{a_1} \cdots e^{a_7} \,-\, \tfrac{1}{5!} \big(\overline{\psi}\Gamma_{a_1 \cdots a_5} \psi\big) e^{a_1} \cdots e^{a_5} \mathrlap{\,.} \end{array}$
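Notice that the two identities (8) are mutually consistent: since $G_4^s$ has even degree, applying $\mathrm{d}$ to the second identity gives

$0 \;=\; \mathrm{d}\,\mathrm{d}\, G_7^s \;=\; \tfrac{1}{2}\big( \mathrm{d}G_4^s \, G_4^s + G_4^s \, \mathrm{d}G_4^s \big) \;=\; \mathrm{d}G_4^s \, G_4^s \,,$

which vanishes identically by the first identity.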
Up to some mild (but suggestive, see below) re-arrangement, the computation is essentially that indicated in CDF91, §III.8.5 (where some of the easy checks are indicated) which in turn is a mild
reformulation of the original claim in Cremmer & Ferrara 1980 and Brink & Howe 1980 (where less details were given). A full proof is laid out in GSS24, §3, whose notation we follow here.
The following may be understood as an exposition of this result, which seems to stand out as the only account that is (i) fully first-order and (ii) duality-symmetric (in that $G_7$ enters the EoMs
as an independent field, whose Hodge duality to $G_4$ is imposed by the Bianchi identity for $G_7^s$, remarkably).
Notice that the discussion in CDF91, §III.8 amplifies the superspace-rheonomy principle as a constraint that makes the Bianchi identities on (in our paraphrase) a supergravity Lie 6-algebra-valued
higher vielbein be equivalent to the equations of motion of D=11 SuGra. But we may observe that the only rheonomic constraint necessary is that (9) on the C-field flux density — and this is the one
not strictly given by rules in CDF91, p. 874, cf. around CDF91, (III.8.41) —; while the remaining rheonomy condition on the gravitino field strength $\rho$ is implied (Lem. below), and the
all-important torsion constraint (10) (which is also outside the rules of rheonomy constraints, cf. CDF91, (III.8.33)) is naturally regarded as part of the definition of a super-spacetime in the
first place (Def. below).
In thus recasting the formulation of the theorem somewhat, we also:
1. re-define the super-flux densities as above (9), highlighting that it is (only) in this combination that the algebraic form of the expected Bianchi identity (8) extends to superspace;
2. disregard the gauge potentials $C_3$ and $C_6$, whose role in CDF91, §III.8.2-4 is really just to motivate the form of the Bianchi identities equivalent to (8), but whose global nature is more
subtle than acknowledged there, while being irrelevant for just the equations of motion.
Indeed, the point is that, in consequence of our second item above, the following formulation shows that one may apply flux quantization of the supergravity C-field on superspace in formally the same
way as bosonically (for instance in Cohomotopy as per Hypothesis H, or in any other nonabelian cohomology theory whose classifying space has the $\mathbb{Q}$-Whitehead $L_\infty$-algebra of the
4-sphere), and in fact that the ability to do so implies the EoMs of 11d SuGra. Any such choice of flux quantization is then what defines, conversely, the gauge potentials, globally. Moreover, by the
fact brought out here, that the super-flux Bianchi identity already implies the full equations of motion, this flux quantization is thereby seen to be compatible with the equations of motion on all
of super spacetime.
For the present formulation, we find it suggestive to regard the all-important torsion constraint (10) as part of the definition of the super-gravity field itself (since it ties the auxiliary
spin-connection to the super-vielbein field which embodies the actual super-metric structure):
• $D \in \mathbb{N}_{\geq 1}$ a natural number
• $\mathbf{N} \in Rep_{\mathbb{R}}\big(Spin(1,D-1)\big)$ a real spin representation (“Majorana spinors”) of $\mathbb{R}$-dimension $N$
whose $Spin(1,D)$-equivariant bilinear pairing we denote
$\overline{(\text{-})}(\text{-}) \;\colon\; \mathbf{N} \otimes \mathbf{N} \longrightarrow \mathbb{R}^{1,D-1} \,,$
by a super-spacetime of super-dimension $D\vert \mathbf{N}$ we here mean:
1. a supermanifold
2. which admits an open cover by super-Minkowski supermanifolds $\mathbb{R}^{1,D-1\vert \mathbf{N}}$,
3. equipped with a super Cartan connection with respect to the canonical subgroup inclusion $Spin(1,D-1) \hookrightarrow Iso(\mathbb{R}^{1,D-1\vert\mathbf{N}})$ of the spin group into the super
Poincaré group, namely:
1. equipped with a super-vielbein $(e, \psi)$, hence on each super-chart $U \hookrightarrow X$
$\big( (e^a)_{a=0}^{D=1} ,\, (\psi^\alpha)_{\alpha=1}^N \big) \;\in\; \Omega^1_{dR}\big( U ;\, \mathbb{R}^{1,D-1\vert \mathbf{N}} \big)$
such that at every point $x \in \overset{\rightsquigarrow}{X}$ the induced map on tangent spaces is an isomorphism
$(e,\psi)_x \;\colon\; T_x X \overset{\sim}{\longrightarrow} \mathbb{R}^{1,D-1\vert \mathbf{N}} \,.$
2. and with a spin-connection $\omega$ (…),
4. such that the super-torsion vanishes, in that on each chart:
(10)$\mathrm{d} \, e^a - \omega^a{}_b \, e^b \;=\; \big( \overline{\psi} \,\Gamma^a\, \psi \big) \,,$
where $\Gamma^{(-)} \,\colon\, \mathbb{R}^{1,D-1} \longrightarrow End_{\mathbb{R}}(\mathbf{N})$ is a representation of $Pin^+(1,D-1)$, hence
$\Gamma_{a} \Gamma_b + \Gamma_{b} \Gamma_a \;=\; + 2\, diag(-, +, +, \cdots, +)_{a b} \,.$
(the gravitational field strength)
Given a super-spacetime (Def. ), we say that (super chart-wise):
1. its super-torsion is:
$T^a \;\coloneqq\; \mathrm{d}\, e^a \,-\, \omega^a{}_b \, e^b \,-\, \big( \overline{\psi}\Gamma^a\psi \big)$
2. its gravitino field strength is
$\rho \;\coloneqq\; \mathrm{d}\, \psi + \tfrac{1}{4} \omega_{a b}\Gamma^{a b}\psi \,,$
3. its curvature is
$R^{a}{}_b \;\coloneqq\; \mathrm{d}\, \omega^{a}{}_b \,-\, \omega^a{}_c \, \omega^c{}_b \,.$
(super-gravitational Bianchi identities)
By exterior calculus the gravitational field strength tensors (Def. ) satisfy the following identities:
(11)$\begin{array}{ccl} \mathrm{d} \, R^{a}{}_b &=& \omega^a{}_{a'} \, R^{a'}{}_b - R^{a}{}_{b'} \, \omega^{b'}{}_{b} \\ \mathrm{d} \, T^a &=& - R^{a}{}_b \, e^b + 2 \big( \overline{\psi} \,\Gamma^a\, \rho \big) \\ \mathrm{d} \, \rho &=& \tfrac{1}{4} R^{a b} \Gamma_{a b} \psi \end{array}$
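For illustration, the first of these identities follows by a one-line computation from the definition of the curvature (the remaining two follow analogously, using also the torsion constraint (10)):
$\mathrm{d}\, R^{a}{}_b \;=\; -\,\mathrm{d}\,\omega^{a}{}_c \, \omega^{c}{}_b \,+\, \omega^{a}{}_c \, \mathrm{d}\,\omega^{c}{}_b \;=\; \omega^{a}{}_c \, R^{c}{}_b \,-\, R^{a}{}_c \, \omega^{c}{}_b \,,$
where in the second step one inserts $\mathrm{d}\,\omega^{a}{}_b = R^{a}{}_b + \omega^{a}{}_c\,\omega^{c}{}_b$ and observes that the two trilinear terms in $\omega$ cancel.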
Write now $\mathbf{32} \in Rep_{\mathbb{R}}\big(Spin(1,10)\big)$ for the unique non-trivial irreducible real $Spin(1,10)$-representation.
(11d SuGra EoM from super-flux Bianchi identity) Given
1. (super-gravity field:) an $11\vert\mathbf{32}$-dimensional super-spacetime $X$ (Def. ),
2. (super-C-field flux densities:) $(G^s_4,\, G^s_7)$ as in (9)
then the super-flux Bianchi identity (8) (the super-higher Maxwell equation for the C-field)
$\begin{array}{l} \mathrm{d} \, G_4^s \;=\; 0 \\ \mathrm{d} \, G_7^s \;=\; \tfrac{1}{2} G_4^s \, G_4^s \end{array}$
is equivalent to the joint solution by $\big(e, \psi, \omega, G_4^s,\, G_7^s\big)$ of the equations of motion of D=11 supergravity.
This is, in some paraphrase, the result of CDF91, §III.8.5. We indicate the proof, broken up into the following Lemmas.
In all of the following lemmas one expands the Bianchi identities in their super-vielbein form components.
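Concretely, recall from (9) that on a super-chart
$G^s_4 \;=\; \tfrac{1}{4!} (G_4)_{a_1 \cdots a_4} \, e^{a_1} \cdots e^{a_4} \;-\; \tfrac{1}{2} \big( \overline{\psi} \,\Gamma_{a_1 a_2}\, \psi \big) \, e^{a_1}\, e^{a_2} \,,$
so that each super-form equation decomposes into a system of component equations, graded by the number of $\psi$-factors appearing.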
The Bianchi identity for $G^s_4$ (8) is equivalent to
1. the closure of the ordinary 4-flux density $G_4$
2. the following dependence of $\rho$ on $G_4$
shown in any super-chart:
(12)$\begin{array}{l} \mathrm{d}\, G^s_4 \;=\; 0 \\ \;\Leftrightarrow\; \left\{ \begin{array}{l} \big( \nabla_{a} (G_4)_{a_1 \cdots a_4} \big) e^{a} \, e^{a_1} \cdots e^{a_4} \;=\; 0 \\ \rho \;=\; \rho_{a b} \, e^{a} \, e^b \,+\, \underset{ H_a }{ \underbrace{ \Big( -\tfrac{1}{6} \, \tfrac{1}{3!} (G_4)_{a b_1 b_2 b_3} \,\Gamma^{b_1 b_2 b_3} \,+\, \tfrac{1}{12} \, \tfrac{1}{4!} (G_4)^{b_1 \cdots b_4} \,\Gamma_{a b_1 \cdots b_4} \Big) } } \psi \, e^a \\ \Big( \tfrac{1}{4!} \psi^\alpha \nabla_\alpha (G_4)_{a_1 \cdots a_4} \;+\; \big( \overline{\psi} \Gamma_{a_1 a_2} \rho_{a_3 a_4} \big) \Big) e^{a_1} \cdots e^{a_4} \;=\; 0 \,. \end{array} \right. \end{array}$
This is essentially the claim in CDF91 (III.8.44-49 & 60b); full proof is given in GSS24, Lem. 3.2.
The general expansion of $\rho$ in the super-vielbein basis is of the form
$\rho \;\coloneqq\; \rho_{a b} \, e^a\, e^b + H_a \psi \, e^a + \underset{ = 0 }{ \underbrace{ \overline{\psi} \,\kappa\, \psi } } \,,$
where the last term is taken to vanish (…).
Therefore, the Bianchi identity has the following components:
(13)$\mathrm{d} \Big( \tfrac{1}{4!} (G_4)_{a_1 \cdots a_4} \, e^{a_1} \cdots e^{a_4} - \tfrac{1}{2} \big( \overline{\psi} \Gamma_{a_1 a_2} \psi \big) \, e^{a_1}\, e^{a_2} \Big) \;=\; 0 \;\;\Leftrightarrow\;\; \left\{ \begin{array}{l} \big( \nabla_{a} (G_4)_{a_1 \cdots a_4} \big) e^{a}\, e^{a_1} \cdots e^{a_4} \;=\; 0 \\ \Big( \tfrac{1}{4!} \psi^\alpha \big( \nabla_\alpha (G_4)_{a_1 \cdots a_4} \big) \;+\; \big( \overline{\psi} \Gamma_{a_1 a_2} \rho_{a_3 a_4} \big) \Big) e^{a_1} \cdots e^{a_4} \;=\; 0 \\ \tfrac{1}{3!} (G_4)_{a b_1 b_2 b_3} \big( \overline{\psi} \,\Gamma^a\, \psi \big) \, e^{b_1}\, e^{b_2}\, e^{b_3} + \big( \overline{\psi} \,\Gamma_{a_1 a_2}\, H_b \psi \big) e^{a_1} \, e^{a_2} \, e^b \;=\; 0 \,, \end{array} \right.$
where we used that the quartic spinorial component vanishes identically, due to a Fierz identity (here):
$- \tfrac{1}{2} \big( \overline{\psi} \Gamma_{a_1 a_2} \psi \big) \big( \overline{\psi} \Gamma^{a_1} \psi \big) e^{a_2} \;=\; 0 \,.$
To solve the last line in (13) for $H_a$ (this is CDF91 (III.8.43-49)) we expand $H_a$ in the Clifford algebra (according to this Prop.), observing that for $\Gamma_{a_1 a_2} H_{a_3}$ to contain a
summand proportional to a single $\Gamma_a$ the matrix $H_a$ needs to have a $\Gamma_{a_1}$-summand or a $\Gamma_{a_1 a_2 a_3}$-summand. The former does not admit a Spin-equivariant linear combination with
coefficients $(G_4)_{a_1 \cdots a_4}$, hence it must be the latter. But then we may also need a component $\Gamma_{a_1 \cdots a_5}$ in order to absorb the skew-symmetric product in $\Gamma_{a_1 a_2} H_a$. Hence $H_a$ must be of this form:
(14)$H_a \;=\; \mathrm{const}_1 \, \tfrac{1}{3!} (G_4)_{a b_1 b_2 b_3} \Gamma^{b_1 b_2 b_3} + \mathrm{const}_2 \, \tfrac{1}{4!} (G_4)^{b_1 \cdots b_4} \Gamma_{a b_1 \cdots b_4} \,.$
With this, we compute:
(15)$\begin{array}{ll} \big( \overline{\psi} \Gamma_{a_1 a_2} H_{a_3} \psi \big) e^{a_1} \, e^{a_2} \, e^{a_3} & =\; \mathrm{const}_1 \, \tfrac{1}{3!} (G_4)_{a_3 b_1 b_2 b_3} \, \big( \overline{\psi} \Gamma_{a_1 a_2} \Gamma^{b_1 b_2 b_3} \psi \big) e^{a_1} \, e^{a_2} \, e^{a_3} \\ & \;\;\;+\; \mathrm{const}_2 \, \tfrac{1}{4!} \, (G_4)^{b_1 \cdots b_4} \, \big( \overline{\psi} \Gamma_{a_1 a_2} \Gamma_{a_3 b_1 \cdots b_4} \psi \big) e^{a_1} \, e^{a_2} \, e^{a_3} \\ & \;=\; 1 \cdot \mathrm{const}_1 \, \tfrac{1}{3!} \, (G_4)_{a_3 b_1 b_2 b_3} \big( \overline{\psi} \,\Gamma_{a_1 a_2}{}^{b_1 b_2 b_3}\, \psi \big) e^{a_1} \, e^{a_2} \, e^{a_3} \\ & \;\;\;+\; 6 \cdot \mathrm{const}_1 \, \tfrac{1}{3!} \, (G_4)_{b_3 a_1 a_2 a_3} \big( \overline{\psi} \,\Gamma^{b_3}\, \psi \big) e^{a_1} \, e^{a_2} \, e^{a_3} \\ & \;\;\;+\; 8 \cdot \mathrm{const}_2 \, \tfrac{1}{4!} \, (G_4)^{b_1 \cdots b_3 a_3} \, \big( \overline{\psi} \Gamma^{a_1 a_2}{}_{b_1 \cdots b_3} \psi \big) e^{a_1} \, e^{a_2} \, e^{a_3} \,. \end{array}$
Here the multiplicities of the nonvanishing Clifford-contractions arise via this Lemma:
$\begin{array}{l} 1 \;=\; 0!\, \binom{2}{0} \binom{3}{0} \\ 6 \;=\; 2!\, \binom{2}{2} \binom{3}{2} \\ 8 \;=\; 1!\, \binom{2}{1} \binom{4}{1} \,, \end{array}$
and all remaining contractions vanish inside the spinor pairing by this lemma.
Now using (15) in (13) yields:
$\begin{array}{l} \mathrm{const}_1 = -1/6 \,, \\ \mathrm{const}_2 = - 4!/3! \, \mathrm{const}_1 / 8 = + 1/12 \,, \end{array}$
as claimed.
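Both the multiplicities and the resulting values of $\mathrm{const}_1, \mathrm{const}_2$ are elementary arithmetic and can be checked mechanically; the following is a small verification sketch (the variable names are ours, chosen for illustration):

```python
from fractions import Fraction
from math import comb, factorial

# Multiplicities of the non-vanishing Clifford contractions, as stated above:
assert factorial(0) * comb(2, 0) * comb(3, 0) == 1
assert factorial(2) * comb(2, 2) * comb(3, 2) == 6
assert factorial(1) * comb(2, 1) * comb(4, 1) == 8

# Matching (15) against (13): the single-Gamma terms cancel iff 1 + 6*const_1 = 0,
# and the rank-5 Gamma terms cancel iff const_1/3! + 8*const_2/4! = 0.
const_1 = Fraction(-1, 6)
const_2 = -Fraction(factorial(4), factorial(3)) * const_1 / 8
assert 1 + 6 * const_1 == 0
assert const_1 / factorial(3) + 8 * const_2 / factorial(4) == 0
assert const_2 == Fraction(1, 12)
```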
Given the Bianchi identity for $G^s_4$ (12), the Bianchi identity for $G^s_7$ (8) is equivalent to
1. the Bianchi identity for the ordinary flux density $G_7$,
2. its Hodge duality to $G_4$,
3. another condition on the gravitino field strength:
(16)$\mathrm{d} \, G^s_7 \;=\; \tfrac{1}{2} G^s_4 \, G^s_4 \;\;\Leftrightarrow\;\; \left\{ \begin{array}{l} \big( \nabla_{a_1} \tfrac{1}{7!} (G_7)_{a_2 \cdots a_8} \big) e^{a_1} \cdots e^{a_8} \;=\; \tfrac{1}{2} \big( \tfrac{1}{4!} (G_4)_{a_1 \cdots a_4} \, \tfrac{1}{4!} (G_4)_{a_5 \cdots a_8} \big) e^{a_1} \cdots e^{a_8} \\ (G_7)_{a_1 \cdots a_7} \;=\; \tfrac{1}{4!} \epsilon_{a_1 \cdots a_7 b_1 \cdots b_4} (G_4)^{b_1 \cdots b_4} \\ \Big( \tfrac{1}{7!} \psi^\alpha \nabla_\alpha (G_7)_{a_1 \cdots a_7} \;+\; \tfrac{2}{5!} \big( \overline{\psi} \Gamma_{a_1 \cdots a_5} \rho_{a_6 a_7} \big) \Big) e^{a_1} \cdots e^{a_7} \;=\; 0 \end{array} \right.$
This is essentially CDF91, (III.8.50-53).
The components of the Bianchi identity are
$\begin{array}{l} \mathrm{d} \, G_7^s \;=\; \tfrac{1}{2} G_4^s \, G_4^s \\ \;\Leftrightarrow\; \mathrm{d} \Big( \tfrac{1}{7!} (G_7)_{a_1 \cdots a_7} \, e^{a_1} \cdots e^{a_7} - \tfrac{1}{5!} \big( \overline{\psi} \Gamma_{a_1 \cdots a_5} \psi \big) e^{a_1} \cdots e^{a_5} \Big) \\ \;\;\;\;\;=\; \tfrac{1}{2} \Big( \tfrac{1}{4!} (G_4)_{a_1 \cdots a_4} e^{a_1} \cdots e^{a_4} - \tfrac{1}{2} \big( \overline{\psi} \Gamma_{a_1 a_2} \psi \big) e^{a_1} e^{a_2} \Big) \Big( \tfrac{1}{4!} (G_4)_{b_1 \cdots b_4} e^{b_1} \cdots e^{b_4} - \tfrac{1}{2} \big( \overline{\psi} \Gamma_{b_1 b_2} \psi \big) e^{b_1} e^{b_2} \Big) \\ \;\Leftrightarrow\; \left\{ \begin{array}{l} \Big( \nabla_{a_1} \tfrac{1}{7!} (G_7)_{a_2 \cdots a_8} \;-\; \tfrac{1}{2}\, \tfrac{1}{4!} (G_4)_{a_1 \cdots a_4} \, \tfrac{1}{4!} (G_4)_{a_5 \cdots a_8} \Big) e^{a_1} \cdots e^{a_8} \;=\; 0 \\ \Big( \tfrac{1}{7!} \psi^\alpha \nabla_\alpha (G_7)_{a_1 \cdots a_7} + \tfrac{2}{5!} \big( \overline{\psi} \Gamma_{a_1 \cdots a_5} \rho_{a_6 a_7} \big) \Big) e^{a_1} \cdots e^{a_7} \;=\; 0 \\ \left. \begin{array}{l} \tfrac{1}{6!} (G_7)_{a_1 \cdots a_6 b} \big( \overline{\psi} \,\Gamma^b\, \psi \big) e^{a_1} \cdots e^{a_6} \\ \;\;+\; \tfrac{2}{12} \, \tfrac{1}{5!} \, \tfrac{1}{4!} \, (G_4)^{b_1 \cdots b_4} \big( \overline{\psi} \, \Gamma_{a_1 \cdots a_5} \, \Gamma_{a b_1 \cdots b_4}\, \psi \big) e^a \, e^{a_1} \cdots e^{a_5} \\ \;\;-\; \tfrac{2}{6}\, \tfrac{1}{5!}\, \tfrac{1}{3!}\, (G_4)_{a b_1 b_2 b_3} \big( \overline{\psi} \,\Gamma_{a_1 \cdots a_5}\, \Gamma^{b_1 b_2 b_3} \psi \big) e^{a} \, e^{a_1} \cdots e^{a_5} \\ \;\;-\; \Big( \tfrac{1}{2} \big( \overline{\psi} \Gamma_{a_1 a_2} \psi \big) e^{a_1} \, e^{a_2} \Big) \tfrac{1}{4!} (G_4)_{b_1 \cdots b_4} \, e^{b_1} \cdots e^{b_4} \;=\; 0 \end{array} \right\} \;\Leftrightarrow\; (G_7)_{a_1 \cdots a_6 b} = \tfrac{1}{4!} \epsilon_{a_1 \cdots a_6 b\, b_1 \cdots b_4} (G_4)^{b_1 \cdots b_4} \end{array} \right. \end{array}$
(i) in the quadratic spinorial component we inserted the expression for $\rho$ from (12), then contracted $\Gamma$-factors using again this Lemma, and finally observed that of the three spinorial
quadratic forms (see there) the coefficients of $\big(\overline{\psi}\Gamma_{a_1 a_2} \psi\big)$ and of $\big(\overline{\psi}\Gamma_{a_1 \cdots a_6} \psi\big)$ vanish identically, by a remarkable
cancellation of combinatorial prefactors:
• $\underset{= 0 }{\underbrace{\bigg(- \tfrac{2}{12} \tfrac{1}{5!} \tfrac{1}{4!}\, 4!\, \binom{5}{4} \binom{4}{4} \;+\; \tfrac{2}{6} \tfrac{1}{5!} \tfrac{1}{3!}\, 3!\, \binom{5}{3} \binom{3}{3} \;-\; \tfrac{1}{2} \tfrac{1}{4!} \bigg) } } \; (G_4)_{a_2 \cdots a_5} \big( \overline{\psi} \,\Gamma_{a a_1}\, \psi \big) e^{a} \, e^{a_1} \cdots e^{a_6}$ (check)
• $\underset{ = 0 }{ \underbrace{ \bigg( \tfrac{2}{12} \tfrac{1}{5!} \tfrac{1}{4!}\, 2\, \binom{5}{2} \binom{4}{2} \;-\; \tfrac{2}{6} \tfrac{1}{5!} \tfrac{1}{3!}\, 1\, \binom{5}{1} \binom{3}{1} \bigg) } } \; (G_4)_{a_1 a_2 b_1 b_2} \big( \overline{\psi} \,\Gamma_{a_3 \cdots a_6}{}^{b_1 b_2}\, \psi \big) e^{a_1} \cdots e^{a_6}$ (check)
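Since these two vanishing claims are pure rational arithmetic, they can be verified mechanically; a minimal check of just the bracketed prefactors (our code, for illustration only):

```python
from fractions import Fraction as F
from math import comb, factorial

# Coefficient of (psibar Gamma_{a a_1} psi) in the first item:
first = (
    - F(2, 12) / factorial(5) / factorial(4) * factorial(4) * comb(5, 4) * comb(4, 4)
    + F(2, 6) / factorial(5) / factorial(3) * factorial(3) * comb(5, 3) * comb(3, 3)
    - F(1, 2) / factorial(4)
)

# Coefficient of (psibar Gamma_{a_3 ... a_6}^{b_1 b_2} psi) in the second item:
second = (
    F(2, 12) / factorial(5) / factorial(4) * 2 * comb(5, 2) * comb(4, 2)
    - F(2, 6) / factorial(5) / factorial(3) * 1 * comb(5, 1) * comb(3, 1)
)

assert first == 0 and second == 0
```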
(ii) the quartic spinorial component holds identically, due to the Fierz identity here:
$-\tfrac{1}{4!} \big( \overline{\psi} \,\Gamma_{a_1 \cdots a_5}\, \psi \big) \big( \overline{\psi} \Gamma^{a_1} \psi \big) e^{a_2} \cdots e^{a_5} \;=\; \tfrac{1}{8} \Big( \big( \overline{\psi} \,\Gamma_{a_1 a_2}\, \psi \big) e^{a_1} e^{a_2} \Big) \Big( \big( \overline{\psi} \,\Gamma_{b_1 b_2}\, \psi \big) e^{b_1} e^{b_2} \Big) \,.$
Therefore the only spinorial component of the Bianchi identity which is not automatically satisfied is (with $\Gamma_{0 1 2 \cdots} = \epsilon_{0 1 2 \cdots}$, see there) the vanishing of
$\tfrac{1}{6!} \Big( (G_7)_{a_1 \cdots a_6 b} - \tfrac{1}{4!} (G_4)^{b_1 \cdots b_4} \epsilon_{b_1 \cdots b_4 a_1 \cdots a_6 b} \Big) \big( \overline{\psi} \,\Gamma^b\, \psi \big) \,,$
which is manifestly the claimed Hodge duality relation.
Given the Bianchi identities for $G_4^s$ (12) and $G_7^s$ (16), the supergravity fields satisfy their Einstein equations with source the energy momentum tensor of the C-field:
(17)$\begin{array}{l} \mathrm{d}\, G_4^s \;=\; 0 \,, \;\;\; \mathrm{d}\, G_7^s \;=\; \tfrac{1}{2} G_4^s \, G_4^s \\ \;\Rightarrow\; \left\{ \begin{array}{l} R^{a m}{}_{b m} - \tfrac{1}{2} \delta^a_b\, R^{m n}{}_{m n} \;=\; - \tfrac{1}{12} \Big( (G_4)^{a c_1 c_2 c_3} (G_4)_{b c_1 c_2 c_3} - \tfrac{1}{8} (G_4)^{c_1 \cdots c_4} (G_4)_{c_1 \cdots c_4}\, \delta^a_b \Big) \;\;\;\; ({\color{darkblue}\text{Einstein equation}}) \\ \Gamma^{b a_1 a_2} \rho_{a_1 a_2} \;=\; 0 \;\;\;\; ({\color{darkblue}\text{Rarita-Schwinger equation}}) \end{array} \right. \end{array}$
Cf. e.g. CDF91, (III.8.54-60); full details are given in GSS24, Lem. 3.8.
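As a quick consistency check (our computation, not spelled out in the cited references): contracting the Einstein equation in (17) with $\delta^b_a$ in $D = 11$ gives $-\tfrac{9}{2}\, R^{m n}{}_{m n}$ on the left, while on the right the two terms combine to $-\tfrac{1}{12}\big(1 - \tfrac{11}{8}\big)\,(G_4)^{c_1 \cdots c_4} (G_4)_{c_1 \cdots c_4}$, so that the scalar curvature is
$R^{m n}{}_{m n} \;=\; -\tfrac{1}{144}\, (G_4)^{c_1 \cdots c_4} (G_4)_{c_1 \cdots c_4} \,,$
proportional to the square of the 4-flux density.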
Lagrangian densities
Cosmo-cocycle equations
We discuss how action functionals for supergravity theories arise as special cases of this construction.
In the first-order formulation of gravity, where the field of gravity is encoded in a vielbein $E$ and a spin connection $\Omega$, the Einstein-Hilbert action takes the Palatini form
$\mathcal{L} : (E,\Omega) \mapsto \int_X R^{a_1 a_2} \wedge E^{a_3} \wedge \cdots \wedge E^{a_d} \epsilon_{a_1 \cdots a_d} + \cdots \,,$
where $R^{a b} = \mathbf{d} \Omega^{a b} + \Omega^{a c}\wedge \Omega_c{}^b$ are the components of the curvature of $\Omega$ and
$\epsilon_{a_1 \cdots a_n} = sgn(a_1, \cdots, a_n)$
is the signature of the index-permutation.
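For instance, in $d = 4$ the leading term is the tetradic (Palatini) Einstein-Hilbert Lagrangian
$\epsilon_{a_1 a_2 a_3 a_4}\, R^{a_1 a_2} \wedge E^{a_3} \wedge E^{a_4} \,,$
which, upon solving the algebraic equation of motion of $\Omega$ (vanishing torsion), reduces to the ordinary Einstein-Hilbert action up to a constant prefactor.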
If $E$ and $\Omega$ are components of an ∞-Lie algebroid-valued form $\Omega^\bullet(X) \leftarrow W(\mathfrak{g}) : A$ then such a Palatini term is of the form of a Chern-Simons element
$\Omega^\bullet(X) \stackrel{A}{\leftarrow} W(\mathfrak{g}) \stackrel{cs}{\leftarrow} W(b^{n-1}\mathbb{R}) : cs(A)$
on $W(\mathfrak{g})$. We now discuss, following D’Auria-Fré, how the action functionals of supergravity are related to ∞-Chern-Simons theory for Chern-Simons elements on certain super $\infty$-Lie algebras.
We discuss a system of equations that characterizes a necessary condition on Chern-Simons elements in the Weil algebra $W(\mathfrak{g})$. This condition is called the cosmo-cocycle condition in DAuriaFre.
To do so, we work in a basis $\{t^a\}$ of $\mathfrak{g}^*$. Let $\{r^a\}$ be the corresponding shifted basis of $\mathfrak{g}^*[1]$. Write $\{\frac{1}{n}C^a{}_{b_0 \cdots b_n}\}$ for the structure
constants in this basis, so that the differential in the Weil algebra acts as
$d_W : t^a \mapsto \sum_{n \in \mathbb{N}} \frac{1}{n} C^a{}_{b_0 \cdots b_n} t^{b_0} \wedge \cdots \wedge t^{b_n} + r^a \,.$
Write a general element in $W(\mathfrak{g})$ as
$cs \coloneqq \lambda + r^a \wedge u_a + r^a \wedge r^b \wedge u_{a b} + \cdots + r^{a_1} \wedge \cdots \wedge r^{a_d} \wedge u_{a_1 \cdots a_d} \,,$
where $\lambda, u_a, u_{a b}, \cdots \in CE(\mathfrak{g})$.
The condition that $d_{W(\mathfrak{g})} (cs)$ has no terms linear in the curvatures $r^a$ is equivalent to the system of equations
\begin{aligned} \iota_{t_a} \lambda + \nabla u_a & \coloneqq \iota_{t_a} \lambda + d_{CE(\mathfrak{g})} u_a + (-1)^{|t_a|} \sum_n C^c{}_{a b_1 \cdots b_n} t^{b_1} \wedge \cdots \wedge t^{b_n} \wedge u_c \\ & = 0 \end{aligned} \,,
for all $t_a \in \mathfrak{g}$.
In DAuriaFre, p. 9, this system of equations is called the cosmo-cocycle condition.
This follows straightforwardly from the definition of the Weil algebra-differential $d_{W(\mathfrak{g})}$:
We have $d_{W(\mathfrak{g})} = d_{CE(\mathfrak{g})} + \mathbf{d}$, where $\mathbf{d} : t^a \mapsto r^a$. So
$d_{W(\mathfrak{g})} \lambda = d_{CE(\mathfrak{g})} \lambda + \mathbf{d} \lambda = d_{CE(\mathfrak{g})} \lambda + \sum_a r^a \wedge \iota_{t_a} \lambda \,.$
Here the first term contains no curvatures, while the second is precisely linear in the curvatures.
Moreover, by the Bianchi identity we have
$d_{W(\mathfrak{g})} r^a = \sum_n C^a{}_{b_0 \cdots b_n} r^{b_0} \wedge t^{b_1} \wedge \cdots \wedge t^{b_n} \,.$
Therefore the condition that all terms in $d_{W} cs$ that are linear in $r^a$ vanish is
\begin{aligned} & r^a \wedge \iota_{t_a} \lambda + (-1)^{|t_a|}\, r^a \wedge d_{CE(\mathfrak{g})} u_a + r^a \wedge \sum_n C^c{}_{a b_1 \cdots b_n} t^{b_1} \wedge \cdots \wedge t^{b_n} \wedge u_c \\ & = r^a \wedge \Big( \iota_{t_a} \lambda + d_{CE(\mathfrak{g})} u_a + (-1)^{|t_a|} \sum_n C^c{}_{a b_1 \cdots b_n} t^{b_1} \wedge \cdots \wedge t^{b_n} \wedge u_c \Big) \\ & = 0 \end{aligned} \,.
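As a minimal illustration (our remark, only making explicit what is implicit above): for an element with no dependence on the shifted generators, $cs = \lambda \in CE(\mathfrak{g})$, all the $u_a, u_{a b}, \cdots$ vanish and the condition reduces to $\iota_{t_a} \lambda = 0$ for all $a$, forcing $\lambda$ to be of degree 0. Hence any nontrivial Chern-Simons element necessarily involves the curvature generators $r^a$.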
Minimal 4-dimensional $N=2$ supergravity
5-Dimensional Supergravity
$11$-Dimensional Supergravity
Let $\mathfrak{g} = \mathfrak{sugra}_6$ be the supergravity Lie 6-algebra.
In the Weil algebra, the differential acts on the degree-3 generator $c$ as
$d_{W} c = \frac{1}{2} \bar \psi \wedge \Gamma^{a b} \psi \wedge e_a \wedge e_b + r^c \,.$
The Bianchi identity for the corresponding curvature is
$d_W r^c = \bar \psi \wedge \Gamma^{a b} \rho \wedge e_a \wedge e_b - \bar \psi \wedge \Gamma^{a b} \psi \wedge \theta_a \wedge e_b \,.$
The element that gives the action is
\begin{aligned} \ell_{11} &= -\tfrac{1}{9} R^{a_1 a_2} \wedge e^{a_3} \wedge \cdots \wedge e^{a_{11}} \epsilon_{a_1 \cdots a_{11}} \\ & + \cdots \\ & + \cdots \\ & + \cdots \\ & + \cdots \\ & + \cdots \\ & + 840\, r^c \wedge \bar \psi \Gamma^{a b} \psi \wedge e_a \wedge e_b \wedge c \\ & + \cdots \\ & + \tfrac{1}{4}\, \bar \psi \wedge \Gamma^{a_1 a_2} \psi \wedge \bar \psi \Gamma^{a_3 a_4} \psi \wedge e^{a_5} \wedge \cdots \wedge e^{a_{11}} \epsilon_{a_1 \cdots a_{11}} \\ & - 14 \cdot 15\, \bar \psi \wedge \Gamma^{a_1 a_2} \psi \wedge \bar \psi \Gamma^{a_3 a_4} \psi \wedge e_{a_1} \wedge \cdots \wedge e_{a_4} \wedge C \\ & - 840\, r^c \wedge r^c \wedge c \end{aligned}
This is DAuriaFre, page 26.
The first term gives the Palatini action for gravity.
The last term is the Chern-Simons term for the supergravity C-field.
The two terms before that constitute the cocycle $\Lambda$.
We find that the $d_W$-differential of this Lagrangian term is
\begin{aligned} d_{W} \ell_{11} & = r^c \wedge r^c \wedge r^c \\ & - R^{a_1 a_2} \wedge \theta^{a_3} \wedge \cdots \wedge e^{a_{11}} \epsilon_{a_1 \cdots a_{11}} \\ & + \cdots \\ & + 840\, \Big\{ \sigma\big( r^c \wedge \bar \psi \Gamma^{a b} \psi \wedge e_a \wedge e_b \wedge c \big) + \big( d_{W}(r^c \wedge r^c) \big) \wedge c = 0 \Big\} \\ & + 840\, r^c \wedge r^c \wedge \bar \psi \Gamma_{a b} \psi \wedge e_a \wedge e_b \;-\; 48\, i\, r^c \wedge \sigma\big( \bar \psi \wedge \Gamma^{a_1 \cdots a_5} \psi \wedge e_{a_1} \wedge \cdots \wedge e_{a_5} \big) \\ & + \cdots \end{aligned} \,.
This fails to sit in the shifted generators by the terms coming from the translation algebra. For the degree-3 element $c$ however it does produce the expected term $r^c \wedge r^c \wedge r^c$.
The formulation of supergravity on super spacetime supermanifolds (“superspace”) and the relevance of the Bianchi identities originates with:
paralleled by these maybe overlooked articles that make the underlying Cartan geometry more explicit:
• N. S. Baaklini, Spin 3/2 field and Cartan’s geometry, Lett. Math. Phys. 2 (1977) 43-47 [doi:10.1007/BF00420670]
• N. S. Baaklini, Cartan’s geometrical structure of supergravity, Lett. Math. Phys. 2 (1977) 115-117 [doi:10.1007/BF00398576]
The use in this context of super L-∞ algebras (implicitly, in their formal dual incarnation as semifree super-graded commutative dg-algebras, as pointed out later in FSS15, FSS18a, FSS18b, Sc19) was
suggested originally in
• Yuval Ne'eman, Tullio Regge, Gravity and supergravity as gauge theories on a group manifold, Physics Letters B 74 1–2 (1978) 54-56 [doi:10.1016/0370-2693(78)90058-8, spire:6328]
also: Rivista del Nuovo Cimento 1 5 (1978) 1–43
• Peter van Nieuwenhuizen, Free Graded Differential Superalgebras, in: Group Theoretical Methods in Physics, Lecture Notes in Physics 180, Springer (1983) 228–247 [doi:10.1007/3-540-12291-5_29, spire:182644]
The original articles that introduced specifically the D’Auria-Fré-formalism for discussion of supergravity in this fashion (“geometric supergravity”):
• Riccardo D'Auria, Pietro Fré, Tullio Regge, Geometrical formulation of supergravity as a theory on a supergroup manifold, contribution to Supergravity Workshop, Stony Brook (1979) 85-92 [
• Riccardo D'Auria, Pietro Fré, Tullio Regge, Supergravity and Cohomology Theory: Progress and Problems in $D = 5$, in Unification of the Fundamental Particle Interactions, Ettore Majorana
International Science Series, 7 Springer (1980) 171-185 [doi:10.1007/978-1-4613-3171-1_12, pdf]
• Riccardo D'Auria, Pietro Fré, Tullio Regge, Graded Lie algebra, cohomology and supergravity, Riv. Nuov. Cim. 3 fasc. 12 (1980) [spire:156191]
• Riccardo D'Auria, Pietro Fré, About bosonic rheonomic symmetry and the generation of a spin-1 field in $D=5$ supergravity, Nuclear Physics B
173 3 (1980) 456-476 [doi:10.1016/0550-3213(80)90013-9]
(on D=4 N=1 supergravity)
• Pietro Fré, Extended supergravity on the supergroup manifold: $N=3$ and $N=2$ theories, Nuclear Physics B
186 1 (1981) 44-60 [doi:10.1016/0550-3213(81)90092-4]
(on D=4 N=2 and D=4 N=3 supergravity)
• Riccardo D'Auria, Pietro Fré, A. J. da Silva, Geometric structure of $N=1$, $D=10$ and $N=4$, $D=4$ super Yang-Mills theory, Nuclear Physics B 196 2 (1982) 205-239 [doi:10.1016/0550-3213
(on D=10 and D=4 N=4 super Yang-Mills theory with emphasis on Fierz identities)
• Riccardo D'Auria, Pietro Fré, E. Maina, Tullio Regge, A New Group Theoretical Technique for the Analysis of Bianchi Identities and Its Application to the Auxiliary Field Problem of $D=5$
Supergravity, Annals Phys. 139 (1982) 93 [doi:10.1016/0003-4916(82)90007-0, spire:167640]
(on D=5 supergravity with emphasis on Fierz identities)
• Riccardo D'Auria, Pietro Fré, Geometric Supergravity in D=11 and its hidden supergroup, Nuclear Physics B 201 (1982) 101-140 [doi:10.1016/0550-3213(82)90376-5, errata]
(in D=11 supergravity)
• Leonardo Castellani, Pietro Fré, F. Giani, Krzysztof Pilch, Peter van Nieuwenhuizen, Gauging of $d = 11$ supergravity?, Annals of Physics 146 1 (1983) 35-77 [spire:11998, doi:10.1016/
• Riccardo D'Auria, Pietro Fré, Paul Townsend, Invariance of actions, rheonomy and the new minimal $N=1$ supergravity in the group manifold approach, Ann. Phys. 155 (1984) 423-446 [
cds:143990, doi:10.1016/0003-4916(84)90007-1, pdf]
Early review with an eye towards more mathematical language:
• Leonardo Castellani, Riccardo D'Auria, Pietro Fré, Supergravity and Superstrings - A Geometric Perspective, World Scientific (1991) [doi:10.1142/0224]
• Pietro Fré, §6 in: Gravity, a Geometrical Course, Volume 2: Black Holes, Cosmology and Introduction to Supergravity, Springer (2013) [doi:10.1007/978-94-007-5443-0]
See also:
Discussion of gauged supergravity in this way:
Independent discussion of rheonomy in the guise of “superspace constraints”:
The interpretation of the D’Auria-Fré-formulation as identifying supergravity fields as ∞-Lie algebra valued differential forms is in
The Lie integration of that to genuine principal ∞-connections is in
The super L-∞ algebras that govern the construction are interpreted in the higher gauge theory of an ∞-Wess-Zumino-Witten theory description of the Green-Schwarz sigma-model-type $p$-branes in
Apart from that the first vague mention of the observation that the “FDA”-formalism for supergravity is about higher categorical Lie algebras (as far as I am aware, would be grateful for further
references) is page 2 of
An attempt at a comprehensive discussion of the formalism in the context of cohesive (∞,1)-topos-theory for smooth super ∞-groupoids is in the last section of
To compare D’Auria-Fré with our language here, notice the following points in their book:
• The statement that a supergravity field is a morphism $\phi : T X \to inn(\mathfrak{g})$ or dually a morphism $\Omega^\bullet(X) \leftarrow W(\mathfrak{g}) : \phi$ out of the Weil algebra of the
supergravity Lie 3-algebra or similar is implicit in $(I.3.122)$ (but it is evident, comparing with the formulas at Weil algebra) – notice that these authors call $\phi$ here a “soft form”.
• What we identify as gauge transformations and shifts, via the characterization of curvature forms on the cylinder object $U \times \Delta^{1|p}$, is their equation (I.3.36).
Some more references:
• Pietro Fré, M-theory FDA, twisted tori and Chevalley cohomology (arXiv)
• Pietro Fré, Pietro Antonio Grassi, Pure spinors, free differential algebras, and the supermembrane (arXiv:hep-th/0606171)
• Pietro Fré, Pietro Grassi, Free differential algebras, rheonomy, and pure spinors (arXiv:0801.3076)
Discussion in this formalism of the Green-Schwarz action functional for the M2-brane sigma-model with a target space 11-dimensional supergravity background is in
• Gianguido Dall'Agata, Davide Fabbri, Christophe Fraser, Pietro Fré, Piet Termonia, Mario Trigiante, The $Osp(8|4)$ singleton action from the supermembrane, Nucl. Phys. B 542 (1999) 157-194 &
• Pietro Fré, Pietro Antonio Grassi, Pure Spinors, Free Differential Algebras, and the Supermembrane, Nucl. Phys. B 763:1-34, 2007 (arXiv:hep-th/0606171)
Relation to pure spinor-formalism:
See also:
• S. Salgado, Non-linear realizations and invariant action principles in higher gauge theory [arXiv:2312.08285]