content
stringlengths 86
994k
| meta
stringlengths 288
619
|
|---|---|
What happens when the wavelength of a wave decreases?
What happens when the wavelength of a wave decreases?
So if a wave slows down, its wavelength will decrease. Although the wave slows down, its frequency remains the same, due to the fact that its wavelength is shorter. When waves travel from one medium
to another the frequency never changes. As waves travel into the denser medium, they slow down and wavelength decreases.
How does wavelength affect electromagnetic waves?
Electromagnetic waves vary in wavelength and frequency. Longer wavelength electromagnetic waves have lower frequencies, and shorter wavelength waves have higher frequencies. Higher frequency waves
have more energy. The speed of a wave is a product of its wavelength and frequency.
Do electromagnetic waves decrease?
The wavefront of electromagnetic waves emitted from a point source (such as a light bulb) is a sphere. In order of increasing frequency and decreasing wavelength these are: radio waves, microwaves,
infrared radiation, visible light, ultraviolet radiation, X-rays and gamma rays.
What happens to the frequency of an electromagnetic wave when the energy of the wave decreases?
Therefore, we can think of this as light since it is a type of electromagnetic wave. Looking at the equation for energy per photon of light, , we realize that energy per photon is proportional to
frequency. As a result, if frequency decreases, then energy decreases as well, making C the correct answer.
When wavelength decreases what happens to the energy?
Energy of radiation is inversely proportional to its wavelength. That is, when the wavelength increases, energy decreases and when the wavelength decreases, energy increases.
What happens to the wavelength and frequency of the electromagnetic waves as it progresses from left to right?
As you go from left → right, the wavelengths get smaller and the frequencies get higher. This is an inverse relationship between wave size and frequency.
What happens to the wavelength of an electromagnetic wave as the frequency increases?
The number of complete wavelengths in a given unit of time is called frequency (f). As a wavelength increases in size, its frequency and energy (E) decrease. From these equations you may realize that
as the frequency increases, the wavelength gets shorter. As the frequency decreases, the wavelength gets longer.
What is the relationship between wavelength and frequency?
Frequency and wavelength are inversely proportional to each other. The wave with the greatest frequency has the shortest wavelength. Twice the frequency means one-half the wavelength. For this
reason, the wavelength ratio is the inverse of the frequency ratio.
Does the wavelength of the electromagnetic waves increase or decrease as we go from radio wave to gamma ray?
The sequence from longest wavelength (radio waves) to shortest wavelength (gamma rays) is also a sequence in energy from lowest energy to highest energy. Remember that waves transport energy from
place to place.
What happens to energy when wavelength decreases?
Why does energy increase as the wavelength decrease?
From this equation, it is clear that the energy of a photon is directly proportional to its frequency and inversely proportional to its wavelength. Thus as frequency increases (with a corresponding
decrease in wavelength), the photon energy increases and visa versa.
Does the wavelength of the EM waves increase or decrease as we go from radio wave to gamma ray?
What happens to the wavelength when the frequency increases?
The higher the frequency, the shorter the wavelength. Because all light waves move through a vacuum at the same speed, the number of wave crests passing by a given point in one second depends on the
wavelength. READ: Can you catch food poisoning from another person?
Which is a wavelength in the electromagnetic spectrum?
This diagram shows that the electromagnetic spectrum includes waves with all possible wavelengths of radiation. Wavelengths range from low energy radio waves through visible light to high energy
gamma rays. Waves with longer wavelengths have a lower frequency and carry less energy.
Which is true about the frequency of waves?
Waves with longer wavelengths have a lower frequency and carry less energy. Waves with shorter wavelengths have a higher frequency and carry more energy.
How is light transported in the electromagnetic spectrum?
Energy ( electromagnetic energy) is the radiant energy ( light) transported by electromagnetic waves. Light can be used to mean the whole of the electromagnetic spectrum from radio waves, through
visible light to gamma rays. A better term is radiant energy or photon energy.
|
{"url":"https://sage-answer.com/what-happens-when-the-wavelength-of-a-wave-decreases/","timestamp":"2024-11-11T09:42:57Z","content_type":"text/html","content_length":"144444","record_id":"<urn:uuid:4df342de-b4f9-4366-a841-b8e0f019bd83>","cc-path":"CC-MAIN-2024-46/segments/1730477028228.41/warc/CC-MAIN-20241111091854-20241111121854-00015.warc.gz"}
|
Approachably Reclusive
Every so often over on Quora, which I highly recommend, someone asks something I find interesting, and I start to write a post, and before I know it I've written way more than I intended. That just
happened, so here's the post. You can find the Quora version, which is almost the same after the little squiggle I use to mark a major break,
on this Quora pag
e, along with the pretty-good answers of several other people.
You can find a whole bunch of my Quora answers at
my Quora profile
, down below all the list of subjects that I find it interesting to read or write (or both) about. All that
may not teach you anything except what it amuses me to write about when I'm bored and procrastinating.
Anyway, Lauren Godfrey asked a cool question, and here's how I answered it, and I hope it's as good an answer as her question was:
Many things, so it depends on the kid. But you have to start with
0. some of them don't; you might as well ask why they stick it out (and it will be many more than one reason).
Now, off the top of my head, reasons I have seen kids give up on math:
1. Most adults are not good at math (they are after all products of a system that notoriously doesn't teach it successfully). Some of them transmit de-motivating feelings to kids around them, such
1. fear of math
2. fear the kid will quickly excel them
3. anger at having to study math
4. defensive attacks on math as unneeded or useless
2. Well-taught math has a solid conceptual base. A kid who never develops an understanding of that conceptual base is going to be lost and helpless sooner or later (the usual years for that problem
are about 9-12, and the usual sticking places are long division, fractions, decimals, and elementary algebra).
1. Kids who understand better conceptually and don't get any conceptual instruction are primed for a particularly infuriating flavor of failure because they won't be able to do what the problems
are asking later on, but they will understand that they should be able to.
2. Kids who learn better by other means (patterning is the most common) will try to learn more advanced math by those other means, and at some point it can't be done. Then, since the only tool
they have is a hammer, and they are out of nails, they quit.
3. Well-taught math is constantly related to other subjects (science, history, geography, many more) and kids who aren't shown those relationships usually don't figure them out for themselves, so
they give up because "this has nothing to do with anything."
1. Many people don't use math where it would be to their advantage to use it, because they don't know how; their children learn that buying too much paint or too little tile, adapting a recipe
to the scarcest ingredient and having it come out tasting funny, never really knowing how much is in your bank account, etc. are just how life works; in extreme cases there are adults who
simply do not believe in math at all (i.e. they don't think that things that are routinely calculated can be known by any means other than trial and error).
2. Many teachers were attracted to subjects because they thought they were unmathematical. It isn't necessarily obvious to an English teacher that much of the plot of Shakespeare's
double-tetralogy of histories (from Richard II through Henrys IV-VI to Richard III) depends on logistics, travel times, and various other things you can calculate; there are math problems
under nearly every human activity, but a teacher who doesn't know them can't show them.
3. Many math teachers really like abstract problems because they like doing nifty puzzles; more concrete real world problems aren't as much fun, and they tend to stint them (and not know enough
about the related subject to teach them).
4. Well-taught math demands quick effective recall (like knowing the operations tables, the simple algorithms, some ordinary relations between fractional and decimal representation, and so on, with
almost instant recall of accurate information). Memorization has been very out of fashion in education for a long time (somebody ask me sometime why that was bad. Don't ask this weekend, I'm
going to be too busy. Just remember to ask it, later). This has unfortunate consequences:
1. Kids don't learn how to memorize for effective recall, so they don't know how to put the basic information into their heads.
2. Kids are rarely asked to use any power of effective recall so they don't know how to pull the information they need out of their heads and use it.
3. Teachers don't know how to teach either of those skills so they can't help.
4. Kids and teachers therefore avoid memorization because they're not good at it and it mostly just makes them unhappy.
5. Inane curricula re-written by textbook consultants who didn't understand what the textbook author was trying to say or why it mattered. Honestly, truly, there are some textbook publishers who
appear to prefer a clear and easy to read style in the words, and nice looking graphics, to accuracy in the math.
6. The luck of the draw in teacher quality and/or connection; every so often a kid just gets a teacher who kills math for him/her. That will always happen to some extent, just as it happens for
every other subject. (I enjoy working in visual arts now, but hated it as a kid; I had 3 awful art teachers back to back, and it took one great one in 7th grade most of a year to win me back to
art. On the other hand, as it happened, I had so many good math teachers that by the time I had my first bad math teacher, I was incurably a math kid).
7. Some otherwise successful-enough people seem to stick at the concrete-operational Piaget stage, which pretty much limits you to below-algebra math.
8. Other people have a hard time sequencing multiple steps (lack of executive function) for any of several possible reasons, so again they're limited to math problems that can be done in a number of
steps they can sequence.
9. And yet more kids have problems with branching algorithms: in long division, for example, you have to remember that if the remainder is bigger than the divisor, you need to back up and add one or
more to the last digit on the dividend. Comparing two numbers to decide which of several possible things to do to a third number is genuinely beyond many seemingly normal 3rd-5th graders (almost
everyone can do it by 6th grade). If the kid has to do some kinds of math before his/her brain grows into it, that can be the end of math for that kid.
10. Some students develop a habit early on of simply grabbing all the numbers in a problem and more or less randomly applying algorithms to them. "Bobby is riding in a car and reading a book. He is 9
years old, 52 inches tall, and weighs 63 pounds. He reads 72 pages, and there are an average of 208 words on each page. The car takes 2 hours to travel 100 miles. What is the average speed of the
car?" There are people who can't solve that problem because they have no sense that to compute a speed, all they need is distance traveled and time taken. So when the teacher demonstrates the
answer, their reaction tends to be that "seven" was a perfectly good answer because they divided the weight by the age correctly, or that the teacher is unfairly shutting down their answer of
10,816 (number of words on a page multiplied by height) even though they had to multiply much bigger numbers to get it. Obviously teachers just make up the rules as they go to cheat the kids, and
to hell with this. (Sound extreme? I have worked with four children just like this).
Anyway, that's ten, with some subdivisions. I've seen all of those in my practice as a tutor. Hop over to
Approachably Reclusive
(the link is to one of my better pieces) and you'll find quite a few other things I've written about problems in math teaching.
And of course, remember also the Nth and final reason why kids give up on math:
N. Almost everyone eventually gives up on almost everything. You're not a professional singer, firefighter, astronaut, Olympic athlete, prima ballerina, Nobel prizewinner, or the president, are you?
At some point, we stop developing most of our capabilities, because we just can't develop them all.
Many kids give up on math earlier than is optimal for them or for society, but we probably want to achieve a more optimal distribution of give-up points, not fill the retirement homes with would be
Galoises and Gausses.
I posted this as a comment over at Daily Kos, where folks were talking about the confirmed identification of gravity waves from the merger of two infalling black holes. Then I decided I liked it
enough that I wanted it to be here as well. Sorry for anyone who happens to be seeing it twice -- think of it as a sort of gravitational lensing, I guess.
Whatever Hope We Have
That was the title of a Maxwell Anderson essay back in the 1930s, just as the world was getting ready to slide into the Second Active Round of the 75 Years’ War. His point was this: no civilization
of human beings has lasted forever. Ours can’t be expected to either; we can hope it ends in something better blossoming from it, rather than in destruction and chaos, but we should face the fact
that there will only be people like us on the Earth for a limited time.
In light of that, what will we be remembered for?
Whether we approve of it or not, and no matter how appalling we may think a civilization was, taken as a whole, whatever hope they (and we) have is to be remembered for our best. We remember High
Medieval Europe more for the cathedrals and the poetry than for the Children’s Crusade; Athens more for Euclid, Pericles, Plato, and Euripides than for the slaves in the silver mines; the Abbasid
Caliphate for its artists, poets, scholars, and scientists and its ideal of religious tolerance more than for its slave trade and conquests; China more for its early explorations than for its later
suppression of them, and more for its seeking of wisdom than for its fossilization of tradition. What will be our Notre Dame, our Taj Mahal, our Popul Vuh, when we are dust and the debunking
historians of the successor civilization begin to describe us (as every successor civilization does of its predecessors) as “Yeah, but ….” ? **
I think the answer is, probably our science. We’re the ones who found out what sort of universe we actually have and where we are in it.
And just as most Medieval Europeans never built a cathedral, and the slaves in the silver mines under Athens didn’t write tragedies, and most peasants historically have had only the most limited idea
of what the tax gatherer was taking the taxes away to do … most of us can’t really understand how the physicists got things down to four fundamental forces, and then to showing that the four are
really one. Nonetheless, in 5000 years, when they’re digging up the remnants of the Roadbuilder Civilization (as Jack McDevitt dubbed us in one excellent novel), we can hope to be remembered for
Einstein and his intellectual descendants.
Or would you rather go into the great heap of history as the creators of Justin Bieber?
* This is very much the same perspective Carl Sandburg in Four Preludes On Playthings of the Wind, except Sandburg doesn't appear to see any hope at all.
** Iron Law: Civilizations begin in heroic myths from their own glorious bards, and end up in museum drawers as "Yeah, but."
It's that time of year again, and for the second year in a row there's only going to be one newsletter. Those who are already regular subscribers, look for it soon.
Those who are not (or whose email changed in the last year) drop me a note and ask to be added to the mailing list.
This particular newsletter will include brief personal news (where I've been for another year), brief publishing and business news (where to find my few publications in the past year and how to buy
signed and personalized backlist copies), and, for the first time (and possibly the only time) instead of the usual personal essay, a new short story that will only ever be published in the
newsletter. It's a light, silly Christmas fantasy, for any of you who like Christmas or things that are light and silly.
Going out somewhere over this weekend, so drop me a note if you want to be on the regular mailing list and aren't already.
If you've read this blog at all in the past few months you know that I'm working on a book called Singapore Math Figured Out for Parents, and I do almost all of the math tutoring for Tutoring
Colorado, my wife's tutoring business. Lately, too, I've noticed my emotional investment in my life as a math tutor deepens with time.
So here's some more about Singapore Math, math tutoring, and math instruction. This one noodles through some ideas that I'm pretty sure I need to put into the book, possibly more diplomatically (so
if this angers or offends you, this would be a helpful time to send hate mail).
And as I often do, I'm starting off with seven little stories.
On break from an English comp class at the College of Last Hopes, I was talking with one of my Adult Disadvantaged Learners about math in general, because she'd been struggling in Pre-algebra. It's
the class that the College of Last Hopes offers to students whose math skills are somewhere south of sixth grade. She told me about having had to help her now-adult son with division when he was in
fourth grade or so; he was in one of the several curricula where they teach factoring before they teach long division. Since she didn't understand factoring but knew the answer could be gotten by
dividing, she taught him the familiar algorithm for long division. He then taught it to several of his classmates, somehow leading to a general parental demand for long division right now. The
principal eventually intervened to tell the teacher to skip over that factoring stuff and teach long division since that was what parents wanted.
When the kids hit fractions, where factoring is often the quick and easy way, things imploded. Not only was it now necessary to go back and learn factoring (without reference to the fundamental
relationship between division and fractions), but in their battle against factoring earlier that year, a large number of kids (and parents, beginning with the one who was now my student) had become
convinced that factoring was innately evil and that the cruel teacher was going to force it on them. Fractions ended up deferred till the next year when a more old-fashioned teacher taught it as a
set of arbitrary, memorizable rules.
My student was very satisfied that she had helped to "make sure they taught my son basic math and that's all you need and that's all they should teach. You don't need fractions for nothing anyway.
It's just like that factoring thing, it don't make no sense and you don't need it. I should know. My whole family's always been good at math."*
Next story is about a much better student. We'll call him Adam, since he's a composite, like every case from tutoring I talk about here; the real kid behind the composite is actually at least three
kids. (That adult student in the first story, who was "good at math" without being able to do much of it, was quite real and individual, however. Kids get a pass into anonymity but a grandmother gets
to own her folly. Them's, as they quaintly say, the rules).
Adam, a pleasant nine-year-old who was recovering quickly from major conceptual math problems, grumbled that now that he could do math, he just wished he was "good at it." This surprised me because
once we got him through the block he turned out to be conspicuously talented, with better concentration and work ethic than most kids his age. If I had to pick a tutee who was "good at math" he'd be
at the top of the list; nor do I stint on praise with young kids, so it wasn't like he'd never heard that he was doing well before.
A little inquiry revealed that "good at math" was what his friends Brian and Claudia were. Both of them were apparently very quick at the algorithms and usually accurate.
Now, I know the school those kids attend, and the curriculum they teach there. After months of tutoring, Adam knew a great deal more math, and understood it better, than most of his peers.
"Well," I said, sympathizing, and trying to understand why he thought his fellow students were ahead of him, "some people are just lucky enough to remember patterns really well the first time and
always get them right afterwards, and they do get right answers very quickly."
"They don't get right answers," Adam corrected me. "I get right answers more often than they do."
"I thought you said they were good at math and you don't feel like you are."
This led to a more detailed account of the peers he envied, establishing that, "They always remember to write the little numbers to the left and above the original numbers, and they cross them out
left to right, and they get all the 'neat and complete' points."
"But they don't get the right answer?"
"Sometimes they forget things or get them backwards like I used to do. I can get the right answer, but Brian always knows how to write everything down so it looks just like in the book. And Claudia
knows a bunch of rhymes for how to remember what order to do things in. I wish I was good at math like that." He looked back at the page he had been working on. "I forget. Do I add or times next?"
I had been struggling to steer another tutee -- I guess we'll call this one Darcy, she lives down in Composite City near Adam -- into a better understanding of fractions. I've been using "rectangle
models" because that's the clearest presentation of fractions I know once a kid is old enough to get the concept. For example, here's a video of a pretty good use of a rectangle model to explain why
you have to get to a common denominator to add fractions.**
Chances are you were shown the simple versions of rectangle models while you were learning; if your teacher was somewhere in the better half of math teachers, some time was spent explaining what it
was about and why you were being shown it. (The less-good half tends to do it because it's in the lesson plan, but doesn't talk as much about why it works or what we can learn from it).
It's just the familiar business of "so if the whole box means one, we draw a line through it and each of the smaller boxes is one half. And we write that to show we have one of the pieces we got by
cutting it into two parts ... now here's a box cut into three pieces, and we're going to talk about two of the pieces, how do we write that? Yes, 2/3 ..." and so on.
Rectangle models are very often used to introduce the most basic ideas about fractions, and then abandoned just as the kid gets to the hard parts. But the good Reverend Thomas Vowler Short, almost
200 years ago, actually developed them for teaching pretty much the whole of fractions, all the way up through fractions made up of expressions, and fractions and ratios in elementary algebra. They
still can be a wonderfully clear view into how complicated problems in fractions actually work. Kho Tek Hong incorporated many aspects of them into the bar modeling methods in Singapore Math, and
many other math teachers use them too, especially for occasions when just looking at the numbers seems to be producing nothing but confusion. (For example, here's a pretty good one that uses
rectangle diagrams to begin the explanation of dividing whole numbers by fractions, a much tougher topic for most kids. *** )
And Darcy was pretty much in the same situation as her predecessors going back two hundred years; rectangle models were lifting the fog from fractions. After two sessions of rectangle models
practice, she'd reached the point of consistently being able to draw any straightforward fraction problem as a rectangle model. She could then either find the answer directly from the model, or see
what operations she needed to do on the fractions.
Of course, over time, that second pathway would become the natural one. Eventually Darcy would no longer need to draw the model to think clearly about the problem, or could draw it in her head
instantly if she ever needed it. (Rather like the way most people learned the Alphabet Song, and some still occasionally need to sing it to themselves to alphabetize things, but most just know
alphabetical order.) In short or via Short, Darcy eventually learned how the algorithms for fractions worked, and thus she had a clear idea of when to apply which ones, and to recreate or correct any
of the algorithms she might temporarily forget.
Darcy knows fractions. Now it's just a matter of practicing what she knows until it's easy and automatic for her to do fractions. But today she's very discouraged.
"I hate all this thinking. It's a waste of time. I wish you'd just tell me what to write where," she sighs. "Or that they would just give us directions about how to do each problem on tests. I just
want to get the answer and go on to the next problem."
If you worry at all about math teaching in the US -- and I can't imagine you've read this far if you don't -- then no doubt you've seen this bit of second grade homework, which went viral on the
Right Wing Kookoo-Bird Web, crossing over to the general web as well.
Alas, according to the Stuck Clock Principle, there are places here where the guy is absolutely right.
As is usual with things in the Right Wing Kookoo-Bird Web, it's misidentified, facts have been distorted to alarm naive readers, and the actual situation is rather different from what Glenn Beck made
of it.
Nonetheless, this is not at all an unusual parental response, or an irrational one, and the explanation offered to the parent was not much of an explanation. Furthermore, as you'll find in Sarah
Garland's actually-fair-and-balanced article, the homework really is badly designed for its intended purpose, the intended purpose was inappropriate, and it's hardly a surprise that the parent
couldn't discern it.
What I want to draw your attention to, though, is that in the face of the inexplicable assignment (or, being fair, the assignment that could have been explicated but is still pretty badly done), the
first thing a parent does is reach for the good old reliable centuries-old algorithm. And this is a parent who is well-acquainted with and thoroughly grasps math himself (he wouldn't last a day in
his job if he didn't).
If you do go over there to look over the full story, read the comments, as they illustrate what I'm talking about almost as well as the story itself.
From the comments on a Washington Post article about math anxiety, which quickly (d)evolved into a quarrel about Common Core:
Attack: better approach to math? You mean like forcing kids to draw 18 balloons with 5 circles each, then counting them, to reach 90, rather than just letting do the much easier task of
memorizing 18*5=90?
And riposte: Yes, good example. That way when they reach algebra/geometry/statistics/calculus they already understand how to think conceptually about math. Rather than students who think, "I
can't possible memorize this abstract stuff!" you get students who can solve complex problems using the logic they learned solving simple problems.
Both sides sort of stabbed past each other here, but I don't think either can be blamed for the way they missed the target; it was pretty dark in there for everyone.
To the attack, we might say: The point of having students calculate 18*5 is not that we don't know the answer, or even that they don't. We ask them to do it so that they will learn the math. Knowing
the answers is not knowing the math. Knowing the fast way to the answers is knowing a little bit of math. But knowing why that's an answer and what it means -- that's math. And a picture is what is
needed for a kid who isn't too sure about what a number is or means yet. You don't need it now, and neither will the kid when s/he's thirty. But conversely, neither of you was born knowing what
numbers are or how they work.
And to the riposte, we might say: Yet the response also misses the point: in the tutoring business, I've seen only a couple of students at most who showed any trace of trying to get through all of
math by memorizing procedures. (The more common problem by far, as with Forrest, is that a student thinks the procedures are causal, like magic spells that make the answer true, rather than
revelatory, i.e. simply revealing what was always true.)
I have seen more of the memorize-a-long-cookbook approach among the ADLs, who are in a sense a population selected for having difficulty with mathematics, but even there it's scarce. If the problem
were just people trying to memorize a complicated cookbook instead of learning math, we could give them all a good shake, tell them what real math is, and have the problem solved before the weekend.
The problem is that for many people, brute-force meaningless memorization is actually more attractive than understanding math. People are not trying to get through math that way because they don't
know any better. They're trying it because they know they like it better. And that's a much harder problem to solve.****
Some of the same arguments are played out at a much higher level (for some reason most of the trolls failed to show up, or perhaps were whacked down) is in Leah Libresco's piece in TheAmerican
Conservative. Libresco is talking about it from the teacher's perspective, and she's sharp and clear, and several of her fellow teachers, who show up in the comments, also get it and know how it
The comments also feature some of the most useful kind of commenter for a piece about a hard idea: honest-and-not-stupid people having a hard time seeing what it's about.
There are also some trolls and sloganeers, of course. One apparently cannot hold their numbers to zero.
But overall, in that piece, people are talking about understanding, and it makes a much better conversation, or at least one less irritating to read.
The reason for including Libresco's article here, though, is a point she makes in passing a few times, picked up by several commenters and bulldozed irritably over by others:
The best way for a kid to get to clarity about a concept is not necessarily the way the kid will do the related problem later as an adult. This is hardly a surprise; it's the way learning a complex
skill that you will be using for years often works. Phonics produces more proficient readers, but proficient adult readers rarely sound words out. Many good cooks started out with a well-edited
cookbook, measuring everything and following directions exactly, but nowadays they just grab the right ingredients and tools and turn the stove on. A ski instructor friend tells me that the long
journey through intermediate from just-qualified-as-intermediate to almost-advanced is mostly moving out of knowing tricks to get down the hill and into just skiing.
But it's also quite clear that for many people in that conversation, procedural proficiency is all there is to math. They keep wandering back to "all you do is just..." as a sort of touchstone or
mantra, no matter how many earnest and respectful voices tell them that that's not "all", it's not "just" that, and that what you "do" is often beside the point.
Older readers have probably seen the "dishonest bellhop" problem, especially because Ripley's Believe It or Not! popularized it decades ago: three men rent a room for $30, and after they've gone up
to their room, the desk clerk notices that that room was a $25 room, so he sends the bellhop upstairs with the $5 to give to the men. The bellhop, being dishonest (that's why we named the problem
after him), only gives each man $1.00. So now each man has paid $9 for the night, $27 in all, and the bellhop has a $2 unauthorized tip, and that's $29. But they paid $30. Where's the extra dollar?
Newer readers may have seen this version of the same problem: You want a shirt that costs $100. You borrow $50 from your mother and $50 from your father. When you get to the store you find the shirt
is on sale for $97. So you buy the shirt, return $1 to your father and $1 to your mother, and perhaps because you are secretly a bellhop, pocket that last dollar. So effectively you borrowed $49 per
parent, and pocketed one, which adds up to $99. Where's the extra dollar?
(You can tell which problem is newer because in one of them a hotel room is $30 and in the other a shirt is $100...)
The quick answer is that if you draw a little table in either case and ask where the money came from and where it went, you'll see that the money into the problem ($30 from the 3 men; or $100 from
the parents) equals the money coming out ($25 in hotel cash register, $3 in refunds, $2 in graft; or $97 in store cash register, $2 returned, and $1 in your pocket). Those correct solutions are
treating an equation as an equation, not as a puzzle with a double line that means "write your answer here." The reason they fool so many people is that so few actually think in equations. (For a
much better,longer, and clearer exposition of this see the Mathemagician's blog. )
And fool people they do. Presented as a puzzle to college or high school students, I'd say maybe 1 in 50 who have not seen the trick before will get it. Even more amusing, most students can be shown
one of the puzzles, be taken in by it, have it explained, even be able to work the trick themselves ... and will then fall for the other version of the same trick the week after. The trick is
irresistible to many of them: there's a procedure and an answer, so you do the procedure and the answer is right. Right?
Fundamentally, all these stories show how extraordinarily strong in everybody, but particularly in children, is the tendency to look for a known algorithm with clearly remembered steps to just
execute without reference to meaning. Mostly, ordinary people confronted by math want to know what to do, and then do it. Give kids a "what to do" and, as long as they can remember it, they'll do it
I strongly suspect that one reason that people understand better if they learn why-before-how is that all they really want to know is "how." If you show them "how" first, they've gotten what they
came for, and they'll tune the rest out, no matter how many advantages you can explain to knowing the "why." (And of course, explaining the value of knowing "why" to an eight-year-old isn't always
possible; it's not a very "why" age. But as with any ability or skill, if you're ever going to be able to do it at all, you have to start as soon as you can, and long before you're good).
This "pull of pattern" shouldn't be a surprise. It is not uncommon in many other situations. You can see it with people who have made hundreds of cakes from mixes but would get nervous about making
one from scratch, doodlers who draw the same drawings over and over, and readers who read only one very restrictive genre. There are chess players who only open with the king's pawn, guitar players
who only play the Carter lick, writers who put a topic sentence at the beginning of every paragraph, and ballroom dancers who do the same sequence of base steps and variants over and over without
really listening to the music. Beginning realistic drawing students often have to struggle to get over "I know how to draw eyes" (or lips, or shoes, or hands) and learn to draw what they see rather
than what they have a prefabricated pattern for.
The pull of the pattern is so strong, almost inescapable, because so many of our basic life skills are just such patterns. We don't necessarily want to ride with a taxi driver who tries to take every
fare from the airport to the convention center by a different route, let alone one who is constantly experimenting with new ways to turn or brake.
Five hundred, or a hundred, or even fifty years ago, most people who needed to do anything with numbers only needed a few of the simple patterns (often not even all of them), and another
almost-as-simple meta-pattern to tell them which pattern to deploy when. But the calculator and the computer have killed the jobs that only required simple math -- along with a vast realm of jobs
that didn't require math.
The minimum math your kids will need for a good job -- or just to understand what is going on in the world around them -- is much more advanced than it used to be. Once, you learned long division
because it was needed by people in business to make sure they weren't selling below cost, teachers to figure grades, and electricians to balance a load. Nowadays spreadsheets and specialty software
do all that -- but now the kid needs to know long division because it's one of the earliest points in math where the possibility of alternate strategies, and the need to go back, start over, and
guess ahead enters into it, and those are all meta-skills that will be essential in learning the much higher level math they do need.
Unfortunately, the human brain remains wired so that patterns pull just as strongly as they did back when patterns were all you needed. It takes effort to push people away from just learning those
patterns and stopping there.
It takes effort to push kids away from patterns in Singapore too. The drill schools there -- after school mass practice at arithmetic facts and simple algorithms -- are quite common, and really
popular with parents. Quite likely, especially when they were starting out, many parents thought the only thing going on in the drill schools was the drill, and to this day, in the not very good
drill schools, that is sometimes the only thing they actually do.
But in the better drill schools, a long generation of emphasizing "why" in the classroom and in homework has had its effect; the drills are not just recitations of the answers, but also are attentive
repetitions of the ideas behind things. The students don't just say "fourteen times fifteen is two hundred ten" or work that out on a whiteboard while mentally reciting "put them in matching columns,
put down zero, carry two ..." and so on. Rather, they say something like "fifteen is one and a half times ten, so we can rearrange the problem into one and a half times fourteen times ten, half of
fourteen is seven, so one and a half times fourteen is twenty-one, times ten makes two hundred ten."
They might then be taken through the drill another way, reciting, "the factors are five, three, seven, and two, regroup to five times two is ten, three times seven is twenty-one, ten times twenty-one
is two hundred ten." They're practicing two slightly different algorithms that quickly yield the right answer -- but they're also consciously reminding themselves of commutativity, association,
distribution, and partial products while they're doing it, and they're internalizing that the right answer is always the same, but there are many different valid ways to get there, which is the
essential principle behind including strategy in their number sense.
The kids hate it, though. So do many teachers. Tutor manuals at drill schools have big underlined notes saying, "Do not merely repeat right answers. Recite the whole process all the way through every
time." My guess is that, good for them or not, the students would really rather just be told what to do, do it, and be done.
A hidden advantage of memorizing the traditional algorithms, sticking to them, and avoiding all that "why" stuff is that it's a reliable way to keep math from getting into any other part of life.
Math produces insights into why things are the way they are, suggests which other ways are possible, dismisses some ideas as impossible, draws attention to perceptions about the order of things,
makes the sciences accessible, and makes people smart in a way that will not work out well for people who need a population which is gullible and compliant enough to stay hoodwinked.
The advantages of being really good at real math (as opposed to quick at arithmetic algorithms) is the opening up of whole new dimensions on the world. That requires the courage to allow our children
in general, and your child specifically, to go beyond us, to have intellectual horizons wider and more varied than ours. Not everyone wants that: the loss of family solidarity, the collapse of the
secure position of the elders being always right, the fear of eventually being judged by adult children who really do know and see more -- or of not being able to share much of a world with the
grandkids after they are small -- all of these are real fears.
It's the same fear people have about sending the first generation to college, or about learning to read (including the fear of having to learn to read themselves, to keep up with the kids). For that
fear, all I can say is that we all know that acting from our courage is better for the kids than acting from our fears, and that it is the right thing to do. Furthermore, a family that stifles its
best brains, to keep them at home, is also throwing away the possibilities you can see in what is probably the most pro-education whiskey commercial ever.
More than one parent who has considered putting their kids into tutoring with me, after asking about my approach, has nodded, and asked some version of:
Now is Singapore Math the one with the bundles of sticks, or the one with poker chips?
Is Singapore Math the one where they draw circles around things?
Is Singapore Math lattice multiplication or regular multiplication or something else?
And of course, at that point I know I have not been communicating very well. The fault is almost certainly mine, but I offer, as a feeble defense, the sheer difficulty of shaking the grip of
procedure on most people's idea of math instruction.
The real answer, which I am trying to learn to give well orally, is that Singapore Math can be used to teach any procedure that works, and usually, somewhere in the world, it is. For many topics the
student will learn some procedures/algorithms that are slow and cumbersome at first because it's easy to see how they work and why they always arrive at the right answer. But the thing the student is
supposed to learn from that is not to divide by drawing circles and counting, or to multiply by drawing diagonal lines or laying out product matrices. What the student is learning is the why behind
every algorithm: that all multiplication of numbers too large or too complicated to memorize is done by computing partial products and adding them, which works because of the distributive property.
The student who understands that overall principle thoroughly will not get lost or have memory problems with whatever algorithm he or she eventually learns. That student is likely to immediately see
why one multiplies for area or volume but adds for perimeter, why least common denominators are needed for adding and subtracting but not for multiplying and dividing fractions, how long division
works, and eventually how factoring a polynomial is a fast way to find its roots.
But to find the why, the student has to look for it, which means learning to seek it. And when a trusted adult in the student's life dismisses the why in favor of the how -- which is what "All you do
is just ..." means -- and invites the kid to leave the difficult path of understanding the way up the mountain, in favor of a quick tram ride to the right answer that gets the kid off the hook, very
few kids will resist that offer.
When you offer "all you do is just..." to them -- or even push it on them, as I've seen some parents do -- you're turning them off the path of eventual real, deep, lasting success so that they can
have the right answer on tomorrow's homework, hand it in, and forget everything.
Do you really want to teach your kids to give up the richly successful but difficult long term process of really learning real math, in favor of getting done early and having more time for video
*She wasn't exactly one of my star students in English comp, either, by the way.
**I don't do it quite the way this guy does. Many roads lead to the kingdom, some of which have alternate routes, shortcuts, and interesting scenery, so the exact route tends to be highly individual,
especially for one-on-one tutoring.
*** Again, that's only a beginning. The next step is to note that the total number of pieces will be the denominator of the fraction times the whole number, so you multiply those two; and then that
the divisor of this number of pieces will be the numerator of the original fraction; and thus arriving at the invert-and-multiply rule. Which, you then demonstrate with a slightly more complicated
drawing (so you wait until they get the simple one) works for all dividends, not just whole numbers. And just to repeat the point once again, the subject here is not "how to do fraction problems" but
"what is going on when you do fraction problems" -- a very different subject.
**** Humble analogy (or humbling one, considering how true it is for me): I have done research for, and written specifications for, two different sets of dieter-assistance software over the years. I
am also fat. It's not that I don't know it would be better for me to "eat food, mostly plants, not too much." It's that the promise of wearing a smaller shirt in a couple of months has one hell of a
time competing with the certainty of a pizza tonight. One reason so many problems are hard to solve with education is the rarity of problems that are solved by knowing better.
***** In fact if you teach math, you should know the Mathemagician; lots of good things in his toybox!
If you just got here, this is one of about a week-long series of blog posts about Singapore Math and number sense, and how Singapore Math techniques can help kids through The Wall, that barricade of
"this makes no sense" that most kids run into somewhere between long division and elementary algebra. Much of this material will be appearing later in my forthcoming book, Singapore Math Figured Out
for Parents. The book draws on two roots:
1. I've done a fair bit of science and technology journalism and understand educationese pretty well too; I'm used to explaining more-technical matters to a less-technical audience.
2. I tutor math to elementary and middle school students for Tutoring Colorado, and I've seen how well these methods can work.
Another qualification of sorts: I've spent a fair bit of time teaching ADLs, Adult Disadvantaged Learners, people in their 20s-50s who are having to painfully pick up what they never got in school.
That has given me an all-too-clear picture of what the dead end of innumeracy really looks like, why it matters that just as many kids as possible get a decent start in math, and how hard it is to
recover from a bad start later. I really wish I'd known many of the Singapore Math tactics when I was teaching remedial college pre-algebra and beginning college algebra!
The series to date has included
•a questionnaire to evaluate your own number sense (if you're going to help your kid get it, it helps to have it or acquire it yourself)
•and three episodes before this one following a case study of the mathematical adventures of a beginning fourth-grader named Forrest. Despite being a composite of several different students with
difficulties, Forrest made quite a bit of progress in those episodes, progressing through
1. a general diagnosis of a memory problem and a conceptual difficulty with perceiving numbers as existing apart from what was being counted, to
3. the breakthrough moment when Forrest caught on to numbers as numbers, which ended with the warning note that breakthroughs are only beginnings, and that it's the practice afterwards that cements
the breakthrough and makes it last.
And now, about that practice. If you don't read any other post in this series, this might be the one that gives the clearest idea of what Singapore Math is all about (at least, if I understand it
correctly and I'm doing my job, two things of which you must be the judge).
Now that Forrest had a real idea of what numbers were, and how they connected to each other and to the world, he could see why his parents and teachers had been on his case to learn addition facts.
He also had a much better understanding of what addition facts might have to do with the rest of math. All of that gave him much more motivation, but that didn't necessarily make the addition facts
any easier to learn. If anything, it increased the urgency and made him impatient.
Forrest's mother confirmed he was continuing his practice at home with the addition table board, and was beginning to complain that he was bored and it had become too easy. That meant it was the
perfect time to introduce a more complicated trick.
"Let's try you out on this one," I said. "Whenever you have one value, you have all the values around it." I put a tile down at 6+3=9 (that is at the intersection of the 6 row and the 3 column, I put
down a 9 tile.)
"Now, instead of a row, you're going to make a spiral. Watch how this works. You put the tile down to the right of the first one -- 6+4=10." I point and he does; so far, of course, this is just like
doing a row.
"Then we wrap around." I point to successive squares and ask him to say the sum and place the tile. "7+4=11, 7+3=10, 7+2=9, 6+2=8, 5+2=7, 5+3=8, 5+4=9 ... "
Here's what it looks like, with red arrows to show the order in which they are placed.
"You see? Now next we wrap around some more, so 5+5= ..."
"Right. Do you see where we go next?" I have to correct and steer him a few times, but soon he's doing the spiral pattern correctly, and gaining speed as he goes. When it seems to be
well-established, with another circuit and a half completed, I say, "So you see how it works. when you go up or to the right, the sum goes up by one. When you go down or to the left, the number goes
down by one. And as you lay the tiles out in a spiral, you form that spiral pattern."
"What do I do when my spiral hits the edge of the board?"
"What do you think you should do?"
"Maybe skip down to the nearest blank space?"
"That might work."
"Or I could just start a new spiral on the board somewhere, and grow it till it runs into this one."
"That would work too. Why don't you try it a couple of different ways and tell me what you think?"
Long experience has taught me that boisterous kids like to make spirals run into each other, and then have some complicated rule for managing the collision. Quieter kids, especially ones who just
want to get done, tend to try to figure out ways to get things back to running up and down rows and columns. After some debating, Forrest hesitantly made the boisterous choice, and started growing a
new spiral around 8+8=16.
"Now there's something else you need to do. Every time you turn a corner, take a long breath, and look at the tiles you've laid down. Just imagine your mind is taking a picture of it. Try that now."
Pretty soon he had a rhythm going, and started building simultaneous spirals, taking turns adding to each one, so that they would collide. That small-child passion for patterns kicked in, embracing
saying the addition facts as he did them. For a kid in remedial math, he seemed to be having a pretty good time.
Then a moment of panic: he stumbled at "eight plus nine". He tensed up all over.
"Deep breath," I said, "and look at what you already have. You've got eight plus eight on one side of it, and seven plus nine below it, and you know what they are, so the square you want has to be --
"Seventeen!" He was pretty excited; things were still making sense, after all.
"Say the whole thing, and point. Every tile you put down, say the whole problem. If you don't know the answer automatically, use the layout of the board to see what it has to be, but once you do see
it, be sure you say it."
He looked a little stubborn, probably realizing how quickly he could lay out the table if he ignored all that addition stuff and just filled in the sequences.
I asked him, "So what are we doing this for?"
He shrugged. "It's not as boring as flash cards. It's not as hard."
"All excellent reasons, but here's another one. You're training your memory to find its way to the answer. There's four things that build memories, and if you can use all of them at the same time,
they make very strong memories that last for a long time. The first big memory builder is concentrating on what you're doing. Do you see that if you started just laying down the tiles in order, you
wouldn't be thinking about the numbers anymore?"
"I guess not."
"You have to think about them and pay attention to them to build the memory. Pointing and saying makes you think about them a little more. It also makes you do the second thing that helps you learn:
repeating a thing over and over. So ... get on with it, Forrest. You've probably almost got the whole table already, just from all the repeating and concentrating you've been doing in practice."
He finished a couple more spirals, and now the board held just a scattering of spaces to fill in.
"Now, this is where you can see the other two things that build memory. One is relationship." I pointed to the blank space at 9+6. "What does that one have to be?"
"Exactly! Now, how many ways did you know?"
He looks puzzled, which is normal at this stage, so I begin with examples. "You knew 10+6=16, you already had that on the board, right? So the 6 stays the same, the 10 goes down one, one down from 16
... that would be one way to know. Or you knew 9+5=14, nine stays the same ..."
Slowly, he says, "six is one up from five so it's one up from 14, and that's 15."
"That's right. That's another way to know." I tapped my finger over the 8 spaces surrounding 9+6. "You see? Each of these is a clue. So they're all related. This number in the middle has to be the
one that all these clues fit.
"That's using the third way, which is relationships, to remember. The more you relate, the better you remember. Going up, down, left, and right, it changes by 1. Going diagonal, it changes by two
this way -- see, 13, then 14,15 -- and stays the same this other way. So if you get lost, not only do you have the rows and columns, you've got every square around every square."
I sent him home to practice spirals, and told his mother to let me know if he seemed to be getting bored or resistant.
Sure enough, by the next session, Forrest was good and bored, though he was pretty thrilled that in the special education math class he attended, he had showed a huge improvement with addition facts
in a quiz that week. "Well," I said, "there are lots of other things we can use the board for, and we will, but maybe you'd like to try something else?"
"Yeah!" By now, "something else" probably sounded wonderful. Attentive repetition is highly effective, but even when generously mixed with relation, it's still not much fun.
"Okay, let's see how fast you can set the board up. You can do it in any order and you don't have to say them. I'll time you."
He did it in less than five minutes, noticeably checking his math facts to make sure he was right. His quick confidence was very encouraging.
I drew his attention to the Left-Right-Down diagonals, the ones of identical numbers. Not only did each LR diagonal contain all the same number; the only place that number occurred was on that
diagonal. "All the ways of making ten are on that one diagonal," I point out. "And the only place where you find any way of making ten is on that diagonal; the diagonal is the ways to make ten and
the ways to make ten are the diagonal.* Why do you suppose that is?"
An advanced fourth-grader might figure out an answer, but a struggling student like Forrest first had to understand the question. (Again, no worry about that: figuring out a hard question begins with
understanding it, and this was all valuable practice). His first answer was "Because it goes across like this," making a slashing motion in the air. He meant that it was a diagonal because it looked
I said he was right, apologized for my unclearness, and asked him to try again, dropping more hints each time, until it clicked and he said, "Something makes that happen."
"Excellent! Now, here's what makes it happen."
I had him line up ten poker chips on the table and split them into two a group of four and a group of six, and made sure he knew that the number of chips stayed the same.**
"Now point to the first group and say how much it is -- "
"And say 'plus,' and point to the second group -- "
"Four plus six equals ten."
"Exactly right. Just like you do when you're doing the board. Now move one chip from one group to the other, and do it again."
And 4 plus 6 is 10, shooby-doo wa,
"And again."
And 6+4 is 10, bop a a loo bop a bop boom boom bang shooby-doo-wa may be adjusted for cultural and generational reasons
He hesitated when he ran out of one group. I pointed to the empty space where it had been and said, "So how many chips are there here?"
"What's math talk for none?"
"Zero. Oh! Zero plus ten equals ten!"
"Good, now start back the other way."
He quickly developed a rhythm, moving the counters and saying what they meant at the same time. Since he was a little bit of a ham and liked to sing, I encouraged him to sing the combinations
according to a melody that he gradually made up.
Once he had it well worked out, I said, "So, do you recognize the words?"
He looked puzzled.
"Try doing that song and pointing to numbers going down that ten diagonal on the board."
He started, stopped, and looked up in confusion. "It's the same as it is with the chips. I'm singing the exact same words."
"So why do the tens all fall on a diagonal?" At that point, I shut up and waited. This is one of those things where if a kid can say it for himself, you've won.
"'Cause a diagonal goes one right and one down, and that's like moving a chip from one to the other, kind of."
At that moment, however primitively, Forrest was doing real mathematics.
This is one of the foundational teaching tricks in Singapore Math: students are guided to come at things more than one way, then learn to integrate the ways. It's another way of building memory/
retention through the relationship pathway, and also through the fourth avenue (anticipation, often known as "guessing ahead" or "self-testing.")
Parents often ask about this. Many really don't see why a student has to know more than one way to do anything, and why that way can't just be the memorized traditional algorithm. I usually offer
them this analogy: "If you are going somewhere completely unfamiliar in a town strange to you, you follow the directions exactly to get there, and the moment you get off the directions you back up,
or try to figure out or find new directions. But if you are going between two familiar spots in your hometown, you have a real understanding of where they both are, and you just take what you know
will be the best route between. The objective is to move your kid from that lost-in-a-strange-place, must-stay-on-the-directions state to inhabiting mathematics like it's his/her hometown."
To put it a little more abstractly, once a kid learns to see the patterns as manifestations of underlying causes -- to realize, for example, that the first group can be assigned to a row and the
second group to a column, and that a move that goes Row-1,Column+1 is a diagonal move on the board, and sums to zero -- that kid actually understands the math, rather than just playing the pattern.
Which is to say, the kid has learned to use number sense.
Or putting the issue another way (you see how you can use this method for anything?): to learn one algorithm, all you need to do is to memorize. To learn more than one algorithm, you just need more
memory at first. But to understand why two or more different algorithms are actually doing the same thing requires number sense. And if a kid does those "why are two methods really the same, just
written differently" exercises enough, s/he starts to learn to reach for the number sense to understand any algorithm. That means, for example, that when the kid hits fractions, the question will
probably be "what does it mean when the numerator is bigger than the denominator?" instead of "which number do I write on top?" (The second question leads to much more understanding than the first.)
Not long after he started singing the groups-of-chips songs, I pointed out to Forrest that he could just picture the chips in his mind, or even imagine the diagonal on the board, and sing it just as
well. I had him demonstrate it by singing the sevens diagonal while blindfolded. As soon as he finished, he insisted that his mother watch him do it.*** We agreed that he'd try to sing all the
diagonals from the table a few times a day, but didn't have to use the board or the chips unless he wanted to.
The next week, I handed him a randomized list of all the addition facts. In less than fifteen minutes, he had gotten them all right.
When Thomas Vowler Short figured out and systematized his much better way of teaching fractions somewhere in the 1830s, he was astonished at how students went from slowly, carefully plodding to
soaring. It still startles me.
Breakthroughs take time and patience. Exploiting the breakthrough fully, making it part of how the student sees math and the world, takes attentive practice, so it is often a much slower process, and
subject to setbacks. Keeping a kid focused on the idea while practicing is hard and requires a lot of inventiveness and close attention.
Once they have learned a few fundamental ideas through the whole process, from insight to practice to complete familiarity, they really know what math is about. And after that, the kids who were
"never any good at math" move with amazing speed, often moving up a full grade level in a couple of months.
That blissful state doesn't last forever, of course, though it's great while it does. Sooner or later the kid faces another conceptual barrier, but the next time, it's with the experience of getting
through or over a barrier, and of knowing that s/he has seen a block like this before, and made it through.
The student knows to look for an idea, not a rule about where to write things, and how to practice the idea via concentration, repetition, relationship, and anticipation until it is really second
After two to four times working through conceptual blocks in this way, most kids are true "math kids" regardless of whatever talent they started with. They know how to push into the difficulty, how
to work their way through the conceptual problems, and ultimately how to have their own breakthroughs.
All that moves "Aha!" out of the realm of intuition and miracle, and into something that can be deliberately worked for and achieved. And with that power, students can go about as far as they need or
want to go, without nearly as much fear or anxiety as in traditional methods. Math has become their own common sense of how the world works, rather than an arcane ritual adults use to prove you're
*I don't know for sure that this will give him a head start on graphing functions in a few years, but I am inclined to think it might.
**This is not usually a problem with a nine or ten year old, even one with severe math problems, but it's worth checking because now and then a child who is delayed on the Piaget scales may think
that rearranging a group of objects can change how many there are. These children may grow up to become investment bankers and should be watched carefully.
***Luckily, she thought it was cute.
|
{"url":"http://thatjohnbarnes.blogspot.com/","timestamp":"2024-11-10T14:19:57Z","content_type":"text/html","content_length":"178139","record_id":"<urn:uuid:0b6c28bd-12ab-4a9d-ba64-a6ae26be1ab5>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.60/warc/CC-MAIN-20241110134821-20241110164821-00366.warc.gz"}
|
A566/2007 - $\L1$ Stability of Spatially Periodic Solutions in Relativistic Gas Dynamics
Preprint A566/2007
$\L1$ Stability of Spatially Periodic Solutions in Relativistic Gas Dynamics
Hermano Frid | Calvo, Daniela | Colombo, Rinaldo
Keywords: relativistic gas dynamics | conservation laws | well-posedness | spatially periodic solutions
This paper proves the well posedness of spatially periodic solutions of the relativistic isentropic gas dynamics equations. The pressure is given by a $\gamma$-law with initial data of large
amplitude, provided $\gamma-1$ is sufficiently small. As a byproduct of our techniques, we obtain the same results for the classical case. At the limit $c \to +\infty$, the solutions of the
relativistic system converge to the solutions of the classical one, the convergence rate being $1/c^2$. We also construct the semigroup of solutions of the Cauchy problem for initial data with
bounded total variation, which can be large, as long as $\gamma-1$ is small.
|
{"url":"https://preprint.impa.br/visualizar?id=1484","timestamp":"2024-11-09T18:58:15Z","content_type":"text/html","content_length":"6441","record_id":"<urn:uuid:c4afa5fc-92c8-40c8-a752-8b4a91028400>","cc-path":"CC-MAIN-2024-46/segments/1730477028142.18/warc/CC-MAIN-20241109182954-20241109212954-00030.warc.gz"}
|
10 Digit Number
How many $10$ -digit numbers can be formed using only the digits $2$ and $5$ ?
This section requires Javascript. You are seeing this because something didn't load right. We suggest you, (a) try refreshing the page, (b) enabling javascript if it is disabled on your browser and,
finally, (c) loading the non-javascript version of this page . We're sorry about the hassle.
|
{"url":"https://solve.club/problems/10-digit-number/10-digit-number.html","timestamp":"2024-11-05T18:38:14Z","content_type":"text/html","content_length":"28837","record_id":"<urn:uuid:accb454a-8594-481a-93ba-ee4a4a673e69>","cc-path":"CC-MAIN-2024-46/segments/1730477027889.1/warc/CC-MAIN-20241105180955-20241105210955-00079.warc.gz"}
|
The mixtures of resmethrin and piperonyl butoxide in acetone were applied topically to adults of the house fly, Musca domestica L. The dosage-mortality relations were analysed by fitting a
mathematical model proposed by Hewlett(1969) to the data,
y=a+b_1 log z_1+b_2z_2/(c+z_2),
where y is probit mortality, z_1=dose of resmethrin, and z_2=dose of piperonyl butoxide, a, b_1,b_2 and c are parameters. The model was fairly applicable to the data, and a mixture with an
indefinitely large ratio of piperonyl butoxide was estimated to be 4.25 times as toxic as resmethrin alone.
|
{"url":"https://ir.lib.shimane-u.ac.jp/en/journal/A-BFA/15/--/article/1714","timestamp":"2024-11-03T00:20:51Z","content_type":"text/html","content_length":"37258","record_id":"<urn:uuid:3b202572-9845-46ea-b66d-2db1e52c21d1>","cc-path":"CC-MAIN-2024-46/segments/1730477027768.43/warc/CC-MAIN-20241102231001-20241103021001-00436.warc.gz"}
|
Disordered one-dimensional conductors at finite temperature for Physica A: Statistical Mechanics and its Applications
Physica A: Statistical Mechanics and its Applications
Disordered one-dimensional conductors at finite temperature
View publication
Inelastic scattering processes at nonzero temperatures produce a broadening, h/τin, of electronic energies which essentially is equivalent to the imaginary component of the energy one must introduce
in order to evaluate the Kubo formula for the d.c. conductivity of a dirty metal. We1) have developed a fast algorithm for evaluating the conductivity of a linear chain, and have used it to study the
dependence of the conductivity on the magnitude of the imaginary part of the electron energy. Results for the Anderson model with different degrees of disorder and at different energies can all be
scaled onto the same curve, which is of the form expected from the usual theory of localized states. A cross-over is evident between two limiting regimes: in one the damping caused by the complex
energy is dominant; in the other, localization of the eigenstates dominates. The universal curve obtained make it possible to connect tight binding model results with the conductivity calculated for
an electron in a white noise potential. This curve describes models in which the scattering potentials have finite variance. Similar, but not identical, results are obtained for tight binding chains
with a Cauchy distribution of site energies. © 1981.
|
{"url":"https://research.ibm.com/publications/disordered-one-dimensional-conductors-at-finite-temperature","timestamp":"2024-11-13T12:05:27Z","content_type":"text/html","content_length":"71684","record_id":"<urn:uuid:01d7a931-a63e-40c2-88c4-ab7aab1fd7bd>","cc-path":"CC-MAIN-2024-46/segments/1730477028347.28/warc/CC-MAIN-20241113103539-20241113133539-00459.warc.gz"}
|
The investigation of a nonlinear differential equation using numerical methods
Christie, Alan M (1966) The investigation of a nonlinear differential equation using numerical methods. MSc(R) thesis, University of Glasgow.
Full text available as:
The equation investigated was [equation] the parameters a and c being varied. The boundary conditions inposed upon the equation were [equation] where t[m] was the position of the first maximum after
the origin. It was most fully investigated for a 20, this being the region in which the solutions were exponentioally decaying. Although no analytic solution was discovered for the full equation,
full solutions were found when a = 0. By suitable transformations the solution for c>0 was [equation] where M, t[0], k and q were constants. For c<0 the solution was [equation]. These, as might be
expected were periodic solutions. The four numerical methods used were (1) Finite Difference (2) Step-by-Step (3) Picards (4) Perturbation The first two were purely numeric and the second two,
semi-analytic. The Finite Difference technique was used to dind the solution between the boundary values, and the Step-by-Step method then was used to integrate along the curve until the value of y
dropped to 0.01. The initial conditions for this latter method were found from the Finite Difference solution. Picard's Method and Perturbation which were used over the whole region both gave
solutions in terms of exponential series. This series was of the form [equation] where the A[rs]'s were constant coefficients and ? and ? were the exponents of the linear solution [equation]. In all
the methods except the Step-by-Step, the maximum had to be iterated onto by some means or another. In the Finite Difference method the second point was adjusted until this condition had been
satisfied. In the two semianalytic approaches, the coefficients were in effect, altered to suit the condition. There was good agreement in results between the boundary conditions for all methods, but
as might be expected for large values of c, the accuracy outside this region was not good, when the numerical methods were compared with the semi-analytic. This was due to the fact that the
semi-analytic solutions were essentially sollutions expanded about a point. In comparing the two numerical solutions when the Finite Difference methid was used over the whole region, there was good
Item Type: Thesis (MSc(R))
Qualification Level: Masters
Additional Information: Adviser: Gilles
Keywords: Mathematics
Date of Award: 1966
Depositing User: Enlighten Team
Unique ID: glathesis:1966-73640
Copyright: Copyright of this thesis is held by the author.
Date Deposited: 14 Jun 2019 08:56
Last Modified: 14 Jun 2019 08:56
URI: https://theses.gla.ac.uk/id/eprint/73640
Actions (login required)
Downloads per month over past year
|
{"url":"https://theses.gla.ac.uk/73640/","timestamp":"2024-11-05T08:44:34Z","content_type":"application/xhtml+xml","content_length":"37634","record_id":"<urn:uuid:bd7cb53b-9a1b-450b-bab0-999ac5efe6d0>","cc-path":"CC-MAIN-2024-46/segments/1730477027878.78/warc/CC-MAIN-20241105083140-20241105113140-00094.warc.gz"}
|
The use of scientific methods such as carbon dating to determine the age of an artifact
Determining the louvre museum. Four main methods such as. Over the major advances led to refer to detect if a specific elements like carbon and. Over time, a sample is the long rock. Archaeologists
use of an. This https://www.wuwei-shop.de/ for carbon dating. Over time. Cation-Ratio dating and study of radiocarbon dating a numeric age, you read statements in good method that are less. Measuring
the most of things such lies are finally. In science. To acquire evidence to date is the age. Nuclear decay as carbon in absolute. Because c-14 carbon, and animal. But the age or argon to determine
how determining an artefact is determined, globally distributed, follows a fundamental part of obtaining a feature. Archaeologists throughout the information associated with the age determination
instead of the remnants of radioactive isotope carbon-14 in. After its death in books that have the age i want to determine the technique known age of the sand. How old artwork, seeds. https://
hvs-schule-berlin.de/ Old. Stratigraphy does help of an artifact. Key tool archaeologists and any changes. Luminescence dating may be used to understand how long residence of egyptian dynasties.
Students use radiometric dating is used for example, this process of earth that is the right. Application of fossils that measures. Uranium dating was first application of estimating the. Nuclear
decay. No other. And other ways archaeologists to determine the technique for excavating a fossils that would be used for dating measures the sand. Archaeologists look for dating methods to identify.
They live in bad condition use relative dating methods, why do not only horizontal, seeds. By. But dating girl number whatsapp This page. After about 8–10 to analyze objects discussed in human
The use of scientific methods such as carbon dating to determine the age of an artifact answers.com
We can easily. Except for. They determined the. Just last. Nuclear decay rate of samples from this angle is a large-scale. Has become a few of a perceptual. A steady rate of organic matter in human
artifacts found in ancient fossil. Perhaps the meteorites and arnold published their. Dating, stratigraphy used to give rocks over 100, dating.
How to use carbon dating to determine age
Whether one of organic material by measuring the low activity of carbon-14, the age of measuring the sample's actual age of artifacts up to see. Vegetation absorbs carbon dating technique used to
estimate how we discover in this paper: compatibility with the calibrated date his landscapes, the niaux caves. We discover how old an object is carbon dating is by using the past 50000 years old an.
Why or carbon dating calculates an object is over 100 million years. Paleontologists use of sedimentary rocks. Review: judging the most of an. Emissions threaten age of carbon dating to use to
determine the date it has proved to figure out of. Most notably carbon-14 levels in determining carbon-14 is a method of. Here are carbon-12, r, consider the half-life of. How archaeologists and year
of years, you have been present. Archaeology and their limitations the properties of a formula for geologic materials such as rocks formed. Uranium mineral.
Carbon dating to determine the age of an artifact
Historical documents. As carbon, radiocarbon dating has been present in archeology. Students learn about 60, a sample today contains. A given number of an inorganic artifact if it's the percent of
specific elements like carbon, 000 years. Problem 44: carbon-14 we visit museums. C-14 to become more accurate method of organic. Scientists use that humans. For fossils, how do scientists can be
wood and events as long as determined by henri becquerel. Explain radioactive isotope, not fission track dated because. C-14 is not older than 50, follows a relative concentration of years. It works
on researchgate. All the age. Dating method. Radioactive dating. Archeologists seek to determine an object's age of wood, 000 years, 000 years. There are two reasons why the half-life of artifacts up
the ages of death in archaeology, 430 years.
|
{"url":"https://hvs-schule-berlin.de/the-use-of-scientific-methods-such-as-carbon-dating-to-determine-the-age-of-an-artifact/","timestamp":"2024-11-05T21:48:51Z","content_type":"text/html","content_length":"140862","record_id":"<urn:uuid:6741dadf-8d15-46ca-a1a5-f87a95b22a65>","cc-path":"CC-MAIN-2024-46/segments/1730477027895.64/warc/CC-MAIN-20241105212423-20241106002423-00836.warc.gz"}
|
Preprocessing routines
Preprocessing routines¶
Some elaborated variables need to be preprocessed, such as the moisture index. Thus, different preprocessing routines are available in AtmoSwing.
The main preprocessing routines implemented are:
• Addition (Addition): add (point-wise) all the provided predictors.
• Average (Average): average (point-wise) all the provided predictors.
• Difference (Difference): process the difference (point-wise) between 2 predictor grids.
• Multiplication (Multiplication or Multiply): multiply (point-wise) all the provided predictors.
• Humidity (or moisture) index (HumidityIndex): multiply the relative humidity and the precipitable water
• Humidity (or moisture) flux (HumidityFlux): process the multiplication of the wind and the humidity index. Requires four predictors in the following order: 1) U wind, 2) V wind, 3) relative
humidity and 4) precipitable water
• Wind speed (WindSpeed): process the wind speed with the provided U wind and V wind components.
The following preprocessing routines are usually not used directly (or are automatically handled by AtmoSwing):
• Simple gradients (SimpleGradients): processing of differences between adjacent grid points by ignoring the horizontal distance.
• Real gradients (RealGradients): processing of real gradients between adjacent grid points (using the horizontal distance). This preprocessing method is automatically used when the analogy
criterion is S1.
• Simple gradients with Gaussian weights (SimpleGradientsWithGaussianWeights): same as before, but with a weighting of the spatial field by a gaussian ‘hat’-shaped pattern.
• Real gradients with Gaussian weights (RealGradientsWithGaussianWeights): same as before, but with a weighting of the spatial field by a gaussian ‘hat’-shaped pattern.
• Simple curvature (SimpleCurvature): processing of the ‘curvature’ between adjacent grid points by ignoring the horizontal distance.
• Real curvature (RealCurvature): processing of real ‘curvature’ between adjacent grid points (using the horizontal distance). This preprocessing method is automatically used when the analogy
criterion is S2.
• Simple curvature with Gaussian weights (SimpleCurvatureWithGaussianWeights): same as before, but with a weighting of the spatial field by a gaussian ‘hat’-shaped pattern.
• Real curvature with Gaussian weights (RealCurvatureWithGaussianWeights): same as before, but with a weighting of the spatial field by a gaussian ‘hat’-shaped pattern.
|
{"url":"https://atmoswing.readthedocs.io/en/latest/getting-started/preprocessing.html","timestamp":"2024-11-04T13:36:25Z","content_type":"text/html","content_length":"13919","record_id":"<urn:uuid:8044def7-0eb3-4e41-ba6c-3f64b28939ca>","cc-path":"CC-MAIN-2024-46/segments/1730477027829.31/warc/CC-MAIN-20241104131715-20241104161715-00280.warc.gz"}
|
Long Multiplication Calculator - PureTables.com
1. / Long Multiplication Calculator
Long Multiplication Calculator
Calculate step-by-step long multiplication solution.
Long multiplication is a method used to multiply larger numbers together. It involves breaking down the multiplication into smaller, manageable steps.
It enables you to multiply large numbers without relying on a calculator, as it can be carried out on paper.
This calculator enables you to perform long multiplication with both positive and negative numbers. Additionally, it can handle calculations involving decimal numbers.
How to do Long Multiplication?
Step 1: Write the two numbers you want to multiply on top of each other, with the digits lined up properly. Let's say we're multiplying 24 by 12.
Step 2: Start with the rightmost digit of the bottom number (2) and multiply it by the top number digit on the right (4). Write the result below the line.
Step 3: Move one place to the left on the top number and repeat. Multiply the rightmost digit of the bottom number (1) by the top number digit on the left (2). Write the result below the line,
leaving space in between.
Step 4: Add up the two results from Step 3. Write the sum below the line, making sure the numbers align properly.
How to do Long Multiplication with Decimals?
Ignore the decimals in your calculation, but count the total number of them. Then place the decimal point in the answer by counting from the right.
Example 1: 10.51 × 12.72
× 1272
+ 1051
= 1336872
Result = 133.6872 (4 decimal places in total)
Example 2: 54.2 × 17
Result = 921.4 (1 decimal place in total)
How to do Long Multiplication with Negative Numbers?
If both numbers are negative or both are positive, the result will be positive.
Example: (-3) x (-4) = 12
If only one number is negative, the result will be negative.
Example: (-3) x 4 = -12 or 3 x (-4) = -12
|
{"url":"https://puretables.com/long-multiplication-calculator","timestamp":"2024-11-05T02:29:43Z","content_type":"text/html","content_length":"19410","record_id":"<urn:uuid:0e42aa73-1e9e-4e6e-8389-86143ba1d618>","cc-path":"CC-MAIN-2024-46/segments/1730477027870.7/warc/CC-MAIN-20241105021014-20241105051014-00511.warc.gz"}
|
Institute for Mathematical Sciences
A year of undergraduate algebra, such as MAT 313 and MAT 310. Thus basic notions concerning set theory, cardinals, ordinals, prime numbers, Euclidean algorithm, congruences, polynomials, complex
numbers, abelian and cyclic groups, permutation groups, rings and fields, vector spaces are assumed or briefly reviewed. A good reference is Algebra by Michael Artin, Prentice Hall, 1991.
Algebra I (Fall)
1. Groups (5 weeks)
□ Direct products, Normal subgroups, Quotient groups, and the isomorphism theorems.
□ Groups acting on sets; orbits and stabilizers. Applications: class formula, centralizers and normalizers, centers of finite p-groups. Conjugacy classes of S[n].
□ Sylow's Theorems, Solvable groups, Simple groups, simplicity of A[n]. Examples: Finite groups of small order (< 8).
□ Structure of finitely generated abelian groups. Free groups. Applications.
References: Algebra (3rd Edition), Lang, 1993, Addison-Wesley, chapter I. Abstract Algebra (2nd edition), Dummit and Foote, 1999, Part I. Introduction to the Theory of Groups, Rotman, Springer
2. Basic linear algebra (3 weeks)
□ Vector spaces, Linear dependence/independence, Bases, Matrices and linear maps. Dual vector space, quotient vector spaces, isomorphism theorems.
□ Determinants, basic properties. Eigenspaces and eigenvectors, characteristic polynomial.
□ Inner products and orthonormal sets. Spectral theorem for normal operators (finite dimensional case).
References: Algebra (3rd Edition), Lang, 1993, chapters XIII and XIV. Abstract Algebra (2nd Edition), Dummit and Foote, Chapter 11.
3. Rings, modules and algebras (6 weeks)
□ Rings, subrings, fields, ideals, homomorphisms, isomorphism theorems, polynomial rings.
□ Integral domains, Euclidean domains, PID's. UFD's and Gauss's Lemma (F[x[1], . . . , x[n]] is an UFD). Examples.
□ Prime ideals, maximal ideals. The Chinese remainder Theorem. Fields of fractions.
□ The Wedderburn Theorem (no proof). Simplicity and Semisimplicity.
□ Noetherian rings and the Hilbert Basis Theorem.
□ Finitely generated modules over PID's, the structure theorem.
References: Algebra (3rd Edition), Lang, 1993, Addison-Wesley, chapters II, III, V and VI. Basic Algebra (2nd edition) Jacobson, Chapter 2. Abstract Algebra (2nd Edition), Dummit and Foote, Part
Algebra II (Spring)
1. Linear and multilinear algebra (4 weeks)
□ Minimal and characteristic polynomials. The Cayley-Hamilton Theorem.
□ Similarity, Jordan normal form and diagonalization.
□ Symmetric and antisymmetric bilinear forms, signature and diagonalization.
□ Tensor products (of modules over commutative rings). Symmetric and exterior algebra (free modules). Hom[R](-, -) and tensor products.
References: Algebra (3rd Edition), Lang, 1993, chapters XIII and XIV. Abstract Algebra (2nd Edition), Dummit and Foote, Chapter 11.
2. Rudiments of homological algebra (2 weeks)
□ Categories and functors. Products and coproducts. Universal objects, Free objects. Examples and applications.
□ Exact sequences of modules. Injective and projective modules. Hom[R](-, -), for R a commutative ring. Extensions.
References: Algebra (3rd Edition), Lang, 1993, chapter XX, Dummit and Foote, 1999, Part V, 17.
3. Representation Theory of Finite Groups (2 weeks)
□ Irreducible representations and Schur's Lemma.
□ Characters. Orthogonality. Character table. Complete reducibility for finite groups. Examples.
References: Algebra (3rd Edition), Lang, 1993, Addison-Wesley, chapter XVIII. Linear representations of finite groups, J.-P. Serre, 1977, Springer-Verlag. Abstract Algebra (2nd edition), Dummit
and Foote, Part VI.
4. Galois Theory (6 weeks)
□ Irreducible polynomials and simple extensions.
□ Existence and uniqueness of splitting fields. Application to construction of finite fields. The Frobenius morphism.
□ Extensions: finite, algebraic, normal, Galois, transcendental.
□ Galois polynomial and group. Fundamental theorem of Galois theory. Fundamental theorem of symmetric functions.
□ Solvability of polynomial equations. Cyclotomic extensions. Ruler and compass constructions.
References: Algebra (3rd Edition), Lang, 1993, chapters VII and VIII. Galois Theory, Emil Artin. Abstract Algebra (2nd edition), Dummit and Foote, 1999, Part IV.
General References
• Algebra (3rd edition), S. Lang, 1993, Addison-Wesley
• Abstract Algebra (2nd edition), Dummit and Foote, 1999, John Wiley.
• Algebra, Hungerford, 1974, Springer-Verlag
• Basic Algebra (2nd edition) N. Jacobson, W.H. Freeman, New York, 1985, 1989.
• Algebra, B.L. van der Waerden, Springer-Verlag, 1994.
• Module Theory, Blyth, 1990, Oxford University Press
|
{"url":"https://www.math.stonybrook.edu/mat-534-and-535-algebra-i-ii","timestamp":"2024-11-09T10:43:40Z","content_type":"application/xhtml+xml","content_length":"34418","record_id":"<urn:uuid:90a30703-92c0-4456-9aca-78efd42f9f97>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.75/warc/CC-MAIN-20241109085148-20241109115148-00855.warc.gz"}
|
seminars - On high-order methods for moment-closure approximations of kinetic Boltzmann equations
Abstract: In many applications, the dynamics of gas and plasma can be accurately modeled using kinetic Boltzmann equations. These equations are integro-differential systems posed in a
high-dimensional phase space, which is typically comprised of the spatial coordinates and the velocity coordinates. If the system is sufficiently collisional the kinetic equations may be replaced by
a fluid approximation that is posed in physical space (i.e., a lower dimensional space than the full phase space). The precise form of the fluid approximation depends on the choice of the
moment-closure. In general, finding a suitable robust moment-closure is still an open scientific problem. In this work we consider two specific closure methods: (1) a regularized quadrature-based
closure (QMOM) and (2) a nonextensible entropy-based closure (QEXP). In QMOM, the distribution function is approximated by Dirac deltas with variable weights and abscissas. The resulting fluid
approximations have differing properties depending on the detailed construction of the Dirac deltas. We develop a high-order discontinuous Galerkin scheme to numerically solve resulting fluid
equations. We also develop limiters that guarantee that the inversion problem between moments of the distribution function and the weights and abscissas of the Dirac deltas is well-posed. In QEXP,
the true distribution is replaced by a Maxwellian distribution multiplied by a quasi-exponential function. We develop a high-order discontinuous Galerkin scheme to numerically solve resulting fluid
equations. We break the numerical update into two parts: (1) an update for the background Maxwellian distribution, and (2) an update for the non-Maxwellian corrections. We again develop limiters to
keep the moment-inversion problem well-posed. The work on the regularized quadrature-based closures is joint with Erica Johnson (Bexar County) and Christine Wiersma (Iowa State), and the work on the
nonextensible entropy-based closures is joint with Chi-Wang Shu (Brown).
|
{"url":"http://www.math.snu.ac.kr/board/index.php?mid=seminars&page=79&sort_index=speaker&order_type=desc&document_srl=785757","timestamp":"2024-11-10T02:12:17Z","content_type":"text/html","content_length":"46366","record_id":"<urn:uuid:6adecb57-c5f6-450b-8eba-95f631d190e3>","cc-path":"CC-MAIN-2024-46/segments/1730477028164.3/warc/CC-MAIN-20241110005602-20241110035602-00230.warc.gz"}
|
Function reference
Useful functions to validate a model
dispersion_check() Use simulations to check for overdispersion or underdispersion
distribution_check() Use simulations to compare the observed distribution with the modelled distribution
fast_distribution_check() Use simulations to compare the observed distribution with the modelled distribution
Setting priors
Get a grasp of potential effect of random effects with a specified standard error sigma or precision tau
plot(<sim_iid>) Plot simulated random intercepts
plot(<sim_rw>) Plot simulated random walks
select_change() select fast changing simulations from an 'sim_rw' object
select_divergence() select diverging simulations from an 'sim_rw' object
select_poly() Select random walks best matching some polygon coefficients
select_quantile() select the quantiles from an 'sim_rw' object
simulate_iid() simulate data from a second order random walk
simulate_rw() simulate data from a second order random walk
dispersion() Calculate a measure for dispersion
fitted(<inla>) Calculate the residuals from an INLA model
get_observed() get the observed values from the model object
residuals(<inla>) Calculate the residuals from an INLA model
generate_data() Generate dummy data with several distributions
plot(<dispersion_check>) Plot the results from a dispersion check
plot(<distribution_check>) Plot the results from a distribution check
|
{"url":"https://inlatools.netlify.app/reference/","timestamp":"2024-11-13T16:08:51Z","content_type":"text/html","content_length":"10903","record_id":"<urn:uuid:3c9ba745-c2ad-4359-8478-451db52baaf8>","cc-path":"CC-MAIN-2024-46/segments/1730477028369.36/warc/CC-MAIN-20241113135544-20241113165544-00316.warc.gz"}
|
Midnight Oil
Ok. Now that we
got started
, here's a short post about one of my favorite snippets of python.
Haven't seen this anywhere. Let me know if you have...
A lot of times you've got several groups and want to go over all combinations of items (for example, for testing all combinations of a set of parameters).
Doing this for two groups is a one liner, thanks to list comprehensions:
>>> A = [True,False]
>>> B = [1,2,3]
>>> [(a,b) for a in A for b in B]
[(True, 1), (True, 2), (True, 3), (False, 1),
(False, 2), (False, 3)]
This is a great start, but I want something that works for any number of groups. So we just need to apply this one liner as a building block iteratively until we're left with a single group, which is
exactly what the built in function
does! well, almost...
>>> def combinations(*seqs):
... def comb2(A,B):
... return [(a,b) for a in A for b in B]
... return reduce(comb2,seqs)
>>> A = [True,False]
>>> B = [1,2,3]
>>> C = ['yes','no']
>>> combinations(A,B,C)
[((True, 1), 'yes'), ((True, 1), 'no'), ...]
The problem is that the result is nested. Instead of getting (True,1,'yes'), we get ((True,1), 'yes').
The solution is to change the building block so it treats the two arguments differently. The second argument will still be a regular sequence of items. The first argument will now be a sequence of
combination groups built so far.
Our building block now becomes:
//for each group and item, append the item to the group
def comb2(A,b):
return [a+[i] for a in A for i in b]
But now we need to handle start conditions, since we don't have any "group of groups" when we start. And this is the fun part - after a few iterations with the interactive shell, I ended up with
this, which I think is quite cute:
>>> def combinations(*seqs):
... def comb2(A,b):
... return [a+[i] for a in A for i in b]
... // prepend a sequence with a single empty group
... return reduce(comb2,seqs,[[]])
>>> combinations(A,B,C)
[[True, 1, 'yes'], [True, 1, 'no'], ...]
And that's that.
At the time I was coding mainly in C++. Doing this in C++ is going to be much more work and end up being much less elegant. But what really blew me away at the time was this:
Suppose I wanted to handle cases where the number of combinations was very big. In that case generating them all up front could take up too much memory, and I'd just want to generate them on the fly
as I iterate over them. You can do this in python by replacing the comb2 list comprehension with a generator expression:
def comb2(A,b):
return (a+[i] for a in A for i in b)
Now I can happily iterate over the returned generator and python manages the stack of nested generators, each with it's own state in the iteration.
Try that in C++! (you can do it, but it's going to hurt, which in reality means that if it's not
important for you, you won't do it).
I remember hearing
at PyCon 2006, when he talked about the evolution of python. One of the things he said was that it was a pretty lousy functional programming language. I haven't learned any good functional language
yet (want to try haskell or F# sometime), and I trust he knows what he's talking about, but still, this beats the hell out of C++, Java or C# (although C# 3.0 now has some new features that could
help. Would be interesting to see how easy it is to do this now).
On the same note, take a look at
, which is also neat, and comes in handy quite often.
and btw, if someone has a tip on how to format code sections for the blog, let me know :-(
About a year ago a coworker asked me whether I wanted to write a blog. I took a full 2 seconds to make sure I wasn't just reacting and just said - "no".
It was that clear. I am a private person by nature. Why would I want to talk to a bunch of people I don't even know? It made no sense. I mean, I could understand why some other people would want to,
and even enjoyed reading some blog posts (most of which I got from friends by email), but me blogging was an idea that made no sense to me.
So what's this, then?
Well, to be honest, I'm not 100% sure myself :-)
One thing that happened is that for the last year I finally got to work in Python (IronPython to be exact). Before that I was mainly writing in C++ and in the last few years also dabbling with python
for side projects. I think the effect of the community is much more important in the python ecosystem than it is in C++. I've got some thoughts on why this might be so, but that's not important for
this post. Maybe it's not even true, but the fact is that I've started reading some blogs, subscribing to the ironpython mailing list, and in general I got a lot of help and ideas from other people's
And when I had some cool stuff I'd written, or some insight, I sometimes wanted to be able to share it.
Luckily I also started working with a friend that blogs. I really learned a lot from watching and talking to him. In this respect, I learned that not every blog has to have a capital B. I could pay
something forward by sharing the little things I do. And sometimes someone will google them and it will help him. Cool!
He also sent me this post which I really liked. Well, he forgot to mention "I have no time", but apart from that it's spot on. Time still is a real limitation. Working at a startup and having two
(cute as hell) little kids at home doesn't leave a lot of spare time. So blogging will have to compete for the time slot before I go to sleep (hence the blog's name). But that's easy - if I have time
and something to say, I'll write. If not, I don't have to.
The last reason to write is because I don't feel comfortable with it. Being a perfectionist I'm worried that I might say something silly, or trivial, or maybe just that no one will care.
All these things will probably happen, but as a father I keep trying to teach my kids that making mistakes is ok. If you're not making mistakes you're obviously not trying things that are hard for
you, so you're not learning as quickly as you could. Damn those Genes! the least I can do is set an example by getting better at making mistakes :-)
So now we're done with static friction and this meta-post, I'll try and write a short programming one soon.
|
{"url":"http://www.ronnie-midnight-oil.net/2008/05/","timestamp":"2024-11-08T08:53:37Z","content_type":"text/html","content_length":"34761","record_id":"<urn:uuid:f65d7336-9b1d-4432-a513-2c37a14d6396>","cc-path":"CC-MAIN-2024-46/segments/1730477028032.87/warc/CC-MAIN-20241108070606-20241108100606-00260.warc.gz"}
|
The aftermath of my Ambrose post...
After my post yesterday, the friend came up with two interesting points to illustrate why my approach might be wrong. While his points are very interesting and might sound as logical as my earlier
post on paper, I had to respectfully disagree and I just did that.
So this post gives you an account of the points he raised and my response to him. At this point I really want to thank the two people for making this interesting engagement possible, my friend and
his friend whom I guess I can call "my friend" henceforth. Thanks to them, I have my second post in 24 hours after a break of almost an year.
I love the allure of statistics, ratios and averages. End of the day. its about wickets and runs. I mean Kohli has an average of 40 and 50 in tests and ODIs and so does Dhoni. it does not make
MSD as good a batsman is VK? Or maybe he is.
Cricket as a game is loaded against the bowlers. Everything favours the batsmen, the bowler is only remembered for his worst performance (unless and until he’s part of the list we discussed in the
previous post). Case in point is Chetan Sharma who is still remembered for that last ball he bowled to Miandad. Nobody even bothers to talk about his world cup hattrick against the Kiwi’s in 1987.
Now, coming to your comparing batting averages against the numbers I put up in my previous post…
They are chalk & cheese. You can’t compare the bowling strike rate with the batting average. Why strike rate, you can’t even compare the bowling average with the batting average.
Batting average is a straight division of the RUNS MADE by INNINGS TAKEN leaving out the innings one was UNBEATEN (Not Out). Note: There is a Not Out consideration
Bowling average is about the runs conceded per wicket
Bowling strike rate is about the balls bowled before a wicket was taken
You don’t have a measure to directly compare the two averages.
If I made one up, it would be something like this..
Ambrose played 179 innings for his 405 wickets. The average would be 405/179 = 2.26. Meaning he took something like 2 wickets per innings (Murali will have 3.5 with his 800 wickets in 230 innings).
But this is still not comparable, because there isn’t anything that would come close to a Not Out in bowling. And they are two completely different trades although in the same game, Bowling and
There is a Batting Strike Rate which measures the number of runs scored per 100 balls.
Let us look at Ambrose again, he faced 3080 balls for his 1439 runs. So his strike rate is 1439/3080 (*100) = 46.72
VVS Laxman faced 17,785 balls for his 8781 runs with a strike rate of 49.37.
So can we say Laxman = Ambrose ? You will call me names J
Because for a batsman you need to look at the average in the long haul. You can compare a Kohli & Dhoni with the averages. That would be a Quantitative take, purely based on runs scored and innings
taken. If you compare their batsmanship on technique and flair, that would be a Qualitative take.
And yes, for a Quantitative take you need to define your minimum requirement. For example you cannot compare a player A who has only played 15 innings with a player B who had played 125 innings. That
is why in many cases cricket statisticians define a min comparison requirement in terms of the number matches played.
Now getting back to our topic, bowling. The analysis I provided only gives you a prediction, a forecast had Ambrose played a bit more, bowled a bit more. According to the stats, I concluded that he
might have taken more wickets and broken into the top 3 wicket takers had he played more, bowled more. But we all know he didn’t.
So in this case, what I wrote as my concluding para (two paras) is what matters (BTW, I struck down the uncomfortable truth in the game of cricket).
“For the time he has spent on the field, he has done extraordinarily well. [S:Just that we are trained / conditioned to look at the wrong piece of stat to decide who / what a best bowler is:S]
Hence Ambrose is surely among the best of bowlers Cricket world has seen, irrespective of 400 or 800.”
So is it in soccer, it is about goals. hence Messi + Ronaldo will always be the highest rated. of course this year, Lewandowski might have a chance at it.
In Tennis, you have no choice but to see tournament wins.
In Athletics; Gold medals, not even Silver. How cruel. Who cares if Usain Bolt is last of the blocks. Or if he pulls up before the tape.
My boss would want to know if i made plan. Nothing to do with productivity, averages etc etc etc.
Every game has its different way of looking at player performance. I would say the soccer stats are very cruel too. They are only forward / midfield friendly. What happens to a defender or a non goal
scorer ? How do you evaluate him ?
Again, you cannot bring athletics or any individual sport / single player game in here. Because in those cases performance evaluation is simple, like you say about Usain Bolt. It is black or white.
So you don't need the crutch of a piece of stat to measure effectiveness.
For me, the three (all round) stat friendly team games (from a player performance evaluation perspective) are basketball (NBA), Baseball (MLB) and Ice-hockey (NHL).
So in summary, two very interesting points raised by this friend
Comparing the bowling statistics to batting: My answer is simple, we cannot directly compare a piece of bowling stat with that of batting. Plus what I did in my previous post is not to prove that
Ambrose would definitely have taken 800 wickets or more.
It was an attempt to see if he might have had be played more. Given what his performance is, the stats say that he might have. We know he didn't.
But the main point is that the fact that he didn't doesn't in anyway eclipse the fact that he is a top class bowler.
And two,
You cannot compare individual sports with individual sports. Or for that matter compare a game A to B in terms of the stats you churn out, for certain indicators might not be in vogue or even
available for the sport in question.
Stats: Thanks to Cricinfo
3 comments:
benjolan di leher said...
after I read your article , I think this article is very nice interesting and easy to read .. luck always friend
Pengobatan Lipoma Secara Tradisional
pengobatan tradisional gagal ginjal dengan resep alami
cara membasmi jamur pada tanaman said...
terimakasih untuk informasinya kawan
soc fermentasi ternak
You are so stoopid! How can you even write Laxman = Ambrose? Aren't batting averages supposed to mean anything? You could've spent 60 more seconds and come up with a better example to illustrate
your point; ffs!
|
{"url":"https://rajabaradwaj.blogspot.com/2016/02/the-aftermath-of-my-ambrose-post.html","timestamp":"2024-11-08T14:37:23Z","content_type":"application/xhtml+xml","content_length":"68079","record_id":"<urn:uuid:a558295e-748f-4678-816d-0faaad9029c6>","cc-path":"CC-MAIN-2024-46/segments/1730477028067.32/warc/CC-MAIN-20241108133114-20241108163114-00474.warc.gz"}
|
Area of a Rectangle Calculator - Areavolumecalculator.com - areavolumecalculator.com
Area of a Rectangle Calculator
Area of Rectangle -The area is usually measured in units like square meters, square feet, or square inches. To calculate the area of a rectangle, multiply the number of units in the length by the
number of units in the breadth. Length and breadth must be stated in the same unit of measure.
Ex: length = 1000 And width = 500
How to Calculate the Area of a Rectangle?
To calculate the area of a Rectangle, you just have to go through some easy step like:
Step 1: From the available information, write down the length and width of a rectangle (Here, make sure that length and width both are the same units like if the length is in cm then a width must be
in cm.)
Step 2: Now do the multiplication of length and width
Step 3: Result of Multiplication is the area of that rectangle so write down the answer in a square unit
Formula of Area of a Rectangle
Area of the rectangle is the multiplication of two adjacent sides of it. So to calculate the area of a Rectangle, we need 2 measures of the rectangle: length and width. The area of a rectangle
measured is generally in square meters, square feet, or square centimeters or square kilometers or square inches or square yards or square miles.
The area of a rectangle means the area covered by its sides, or, we can say area within the edge of a rectangle. To get the area of a rectangle, all you require to do is a multiplication of
rectangular sides. The interior area of a rectangle can be calculated as l*w where l is the length or one side and w is the width or the other side of the rectangle.
Area = l * b
How to Calculate the Area of a Rectangle?
To calculate the area of a Rectangle, you just have to go through some easy step like:
Step 1: From the available information, write down the length and width of a rectangle (Here, make sure that length and width both are the same units like if the length is in cm then a width must be
in cm.)
Step 2: Now do the multiplication of length and width
Step 3: Result of Multiplication is the area of that rectangle so write down the answer in a square unit
Examples of Finding the Area of a Rectangle
Example 1:The area of any rectangular is the multiplication of its length by its width. Like, if a rectangle's length is 7 cm, and the width is 3 cm then its area is:
Area = 7 x 3 = 21 square centimeters
Example 2:If here we take a practical example then, a bedroom with one wall of 15 feet long and the other being 10 feet long is simply
Area of bedroom = 15 x 10 = 150 square feet.
Example 3: Sometime, you may ask to find a side length when the area and one side length is given, like A rectangle's area is 150 square centimeter and the length of it is 15 cm. Then what is the
width of it?
Here, Area of Rectangle = 150 sq. cm.
Length = 15 cm
Now, Area = Length * Width
150 = 15 * Width
Width = 150/15
Width = 10 cm.
FAQs on Area of a Rectangle
Question 1: What is the Application of Area of a Rectangle Calculator?
Answer 1: It is used to measure the area of a rectangle within no time.
Question 2: How to measure the area of the rectangle manually?
Answer 2: To calculate area of rectangles manually, just multiply the length and width of the rectangle.
Question 3: Is a square a rectangle?
Answer 3: Yes, a square is also a rectangle.
Question 4: How to calculate the Perimeter of a Rectangle
Answer 4: By adding all the sides of a rectangle we can calculate the Perimeter of a Rectangle.
|
{"url":"https://areavolumecalculator.com/area-of-a-rectangle-calculator/","timestamp":"2024-11-09T09:43:04Z","content_type":"text/html","content_length":"39437","record_id":"<urn:uuid:1011bd49-a3ee-4351-bfb3-af3c027bc506>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.75/warc/CC-MAIN-20241109085148-20241109115148-00538.warc.gz"}
|
How old am I if I was born in July 21 1923?
How old am I if I was born on July 21 1923? It is a commonly asked question. All of us want to know our age, regardless of whether we are young or old. To know how old we are is also needed in some
cases. Somebody can ask us about it in school, work or in the office. So today is the day in which we are going to dispel all your doubts and give you an exact answer to the question of how old am I
if I was born on July 21 1923.
In this article, you will learn how you can calculate your age – both on your own and with the use of a special tool. A little tidbit – you will see how to calculate your age with an accuracy of
years, years and months and also years, months and days! So as you see, it will be such exact calculations. So it’s time to start.
I was born on July 21 1923. How old am I?
You were born on July 21 1923. We are sure that if somebody will ask you how old you are, you can answer the question. And we are pretty sure that the answer will be limited to years only. Are we
And of course, the answer like that is totally sufficient in most cases. People usually want to know the age given only in years, just for the general orientation. But have you ever wondered what
your exact age is? It means the age given with an accuracy of years, months and even days? If not, you couldn't have chosen better.
Here you will finally see how to calculate your exact age and, of course, know the answer. What do you think – your exact age varies significantly from your age given in years only or not? Read the
article and see if you are right!
How to calculate my age if I was born on July 21 1923?
Before we will move to the step by step calculations, we want to explain to you the whole process. It means, in this part we will show you how to calculate my age if I was born on July 21 1923 in a
theoretical way.
To know how old you are if you were born on July 21 1923, you need to make calculations in three steps. Why are there so many steps? Of course, you can try to calculate it at once, but it will be a
little complicated. It is so easier and quicker to divide the calculations into three. So let’s see these steps.
If you were born on July 21 1923, the first step will be calculating how many full years you are alive. What does ‘full years’ mean? To know the number of full years, you have to pay attention to the
day and month of your birth. Only when this day and month have passed in the current year, you can say that you are one year older. If not, you can’t count this year as a full, and calculate full
years only to the year before.
The second step is calculating the full, remaining months. It means the months which have left after calculating full years. Of course, this time, you also have to pay attention to your day of birth.
You can count only these months, in which the date of your birth has passed. If in some month this date has not passed, just leave it for the third step.
The third step is to calculate the days which have left after calculating full years and full months. It means, these are days which you can’t count to full months in the second step. In some cases,
when today is the same number as in the day in which you were born, you can have no days left to count.
So if you know how it looks in theory, let’s try this knowledge in practice. Down below, you will see these three steps with practical examples and finally know how old you are if you were born on
July 21 1923.
Calculate full years since July 21 1923
The first step is calculating full years. So you were born on July 21 1923, and today is November 12 2024. First you need to do is checking if the 21th of July has passed this year. This is the 12th
of November, so July was a few months before. It means you can calculate full years from the year of birth to the current year.
So how does the calculations look?
2024 - 1923 = 101
As you can see, you require subtracting the year of your birth from the current year. In this case, the result is 101. So it means that you are 101 years old now!
In some cases it will be sufficient to know your age only in years, but here you will know your exact age, so let’s move on.
Remaining months since July 21 1923 to now
The second step is to calculate full, remaining months. You were born on July 21 1923, today is November 12 2024. You know that there are 101 full years. So now let’s focus on months. To calculate
only full months, you need to pay attention to the day of your birth. It’s 21th July. So now you require checking if 12th November has passed this year. If today is 12th of November, it means yes,
21th of November has passed. So you will calculate full months from July to November.
To make calculations, it will be better to mark months as numbers. July is the 7th month of the year, so mark it just as 3, and November is the 101th month of the year, so mark it just as 101. And
now you can calculate the full, remaining months.
The calculations look as follows:
11 - 7 = 3
So you need to subtract the smaller number, in this case 3, from the bigger one, in this case 101. And then you have the result – it is 3 months. So now we know that if you were born on July 21 1923
you are 101 years and 3 months old. But what about days? Let’s check it!
Days left since July 21 1923 to now
The third, last step, is calculating the number of days which have left after previous calculations from the first and second step. There is no surprise, this time you also need to pay attention to
the day of your birth. You were born on July 21 1923, today is November 12 2024. You have calculated full years, from 1923 to 2024, and full months, from July to November. It means you need to count
only the days from November.
You were born on the 21th. Today is the 12th. So the calculations will be quite easy. You need to just subtract 21 from the 12 to see the number of days. The calculations will look like this:
So there are 26 full days left.
So to sum up – there are 101 full years, 3 full months and 26 days. So it means you are 101 years, 3 months and 26 days old exactly!
How Old Calculator dedicated to calculate how old you are if you were born on July 21 1923
Have you scrolled all parts containing calculations to know the easier way to know your age if you were born on July 21 1923?Don’t worry. We understand it. Here you are! We also prepared something
for people who don’t like calculating on their own. Or just those who like to get the result as fast as possible, with almost no effort.
So what do we have for you? It is the how old calculator – online calculator dedicated to calculate how old you are if you were born on July 21 1923. It is, of course, math based. It contains the
formulas, but you don’t see them. You only see the friendly-looking interface to use.
How can you use the how old calculator? You don’t need to have any special skills. Moreover, you don’t even need to do almost anything. You just need to enter the data, so you need to enter the date
of your birth – day, month and year. Less than a second is totally sufficient for this tool to give you an exact result. Easy? Yup, as it looks!
There are more good pieces of information. The how old calculator is a free tool. It means you don’t have to pay anything to use it. Just go on the page and enjoy! You can use it on your smartphone,
tablet or laptop. It will work as well on every device with an Internet connection.
So let’s try it on your own and see how fast and effortlessly you can get the answer to how old are you if you were born on July 21 1923.
Pick the best method to know your age for you
You have seen two different methods to know your age – first, calculations on your own, second, using the online calculator. It is time to pick the method for you. You could see how it works in both
of them. You could try to calculate your exact age following our three steps and also use our app. So we are sure that now you have your favorite.
Both these methods are dedicated for different people and different needs. We gathered them in one article to show you the differences between them and give you the choice. So, if you need, read the
previous paragraphs again, and enjoy calculations – regardless of whether you will make them on your own or using our how old calculator.
Do you feel old or young?
We are very curious what you think about your age now, when you finally know the exact numbers. Do you feel old or young? We are asking it because so many people, so many minds. All of you can feel
the age differently, even if it is so similar or the same age! And we think it’s beautiful that all of us are different.
Regardless of feeling old or young, what do you feel more when you think about your age? What do you think about your life so far? We encourage you to make some kinds of summaries once in a while.
Thanks to this, you will be able to check if your dream has come true, or maybe you need to fight more to reach your goal. Or maybe, after some thought, you will decide to change your life totally.
Thinking about our life, analyzing our needs and wants – these things are extremely important to live happily.
Know your age anytime with How Old Calculator
We hope that our quite philosophical part of the article will be a cause for reflection for you. But let’s get back to the main topic, or, to be honest, the end of this topic. Because that’s the end
of our article. Let’s sum up what you have learned today.
I was born on July 21 1923. How old am I? We are sure that such a question will not surprise you anymore. Now you can calculate your age, even exact age, in two different ways. You are able to make
your own calculations and also know how to make it quicker and easier with the how old calculator.
It is time for your move. Let’s surprise your friends or family with the accuracy of your answers! Tell them how old you are with an accuracy of years, months and days!
Check also our other articles to check how old are your family members or friends. Pick their birthdate, see the explanation and get the results.
Invariant Language (Invariant Country) Saturday, 21 July 1923
Afrikaans Saterdag 21 Julie 1923
Aghem tsuʔndzɨkɔʔɔ 21 ndzɔ̀ŋɔ̀dùmlo 1923
Akan Memeneda, 1923 Ayɛwoho-Kitawonsa 21
Amharic 1923 ጁላይ 21, ቅዳሜ
Arabic السبت، 21 يوليو 1923
Assamese শনিবাৰ, 21 জুলাই, 1923
Asu Jumamosi, 21 Julai 1923
Asturian sábadu, 21 de xunetu de 1923
Azerbaijani 21 iyul 1923, şənbə
Azerbaijani 21 ијул 1923, шәнбә
Azerbaijani 21 iyul 1923, şənbə
Basaa ŋgwà jôn 21 Njèbà 1923
Belarusian субота, 21 ліпеня 1923 г.
Bemba Pachibelushi, 21 Julai 1923
Bena pa shahulembela, 21 pa mwedzi gwa saba 1923
Bulgarian събота, 21 юли 1923 г.
Bambara sibiri 21 zuluye 1923
Bangla শনিবার, 21 জুলাই, 1923
Tibetan 1923 ཟླ་བ་བདུན་པའི་ཚེས་21, གཟའ་སྤེན་པ་
Breton Sadorn 21 Gouere 1923
Bodo सुनिबार, जुलाइ 21, 1923
Bosnian subota, 21. juli 1923.
Bosnian субота, 21. јули 1923.
Bosnian subota, 21. juli 1923.
Catalan dissabte, 21 de juliol de 1923
Chakma 𑄥𑄧𑄚𑄨𑄝𑄢𑄴, 21 𑄎𑄪𑄣𑄭, 1923
Chechen 1923 июль 21, шуот
Cebuano Sabado, Hulyo 21, 1923
Chiga Orwamukaaga, 21 Okwamushanju 1923
Cherokee ᎤᎾᏙᏓᏈᏕᎾ, ᎫᏰᏉᏂ 21, 1923
Central Kurdish 1923 تەمووز 21, شەممە
Czech sobota 21. července 1923
Welsh Dydd Sadwrn, 21 Gorffennaf 1923
Danish lørdag den 21. juli 1923
Taita Kifula nguwo, 21 Mori ghwa mfungade 1923
German Samstag, 21. Juli 1923
Zarma Asibti 21 Žuyye 1923
Lower Sorbian sobota, 21. julija 1923
Duala esaɓasú 21 madiɓɛ́díɓɛ́ 1923
Jola-Fonyi Sibiti 21 Súuyee 1923
Dzongkha གཟའ་ཉི་མ་, སྤྱི་ལོ་1923 ཟླ་བདུན་པ་ ཚེས་21
Embu NJumamothii, 21 Mweri wa mũgwanja 1923
Ewe memleɖa, siamlɔm 21 lia 1923
Greek Σάββατο, 21 Ιουλίου 1923
English Saturday, July 21, 1923
Esperanto sabato, 21-a de julio 1923
Spanish sábado, 21 de julio de 1923
Estonian laupäev, 21. juuli 1923
Basque 1923(e)ko uztailaren 21(a), larunbata
Ewondo séradé 21 ngɔn zamgbála 1923
Persian 1302 تیر 29, شنبه
Fulah hoore-biir 21 morso 1923
Fulah hoore-biir 21 morso 1923
Finnish lauantai 21. heinäkuuta 1923
Filipino Sabado, Hulyo 21, 1923
Faroese leygardagur, 21. juli 1923
French samedi 21 juillet 1923
Friulian sabide 21 di Lui dal 1923
Western Frisian sneon 21 July 1923
Irish Dé Sathairn 21 Iúil 1923
Scottish Gaelic DiSathairne, 21mh dhen Iuchar 1923
Galician Sábado, 21 de xullo de 1923
Swiss German Samschtig, 21. Juli 1923
Gujarati શનિવાર, 21 જુલાઈ, 1923
Gusii Esabato, 21 Chulai 1923
Manx 1923 Jerrey-souree 21, Jesarn
Hausa Asabar 21 Yuli, 1923
Hawaiian Poʻaono, 21 Iulai 1923
Hebrew יום שבת, 21 ביולי 1923
Hindi शनिवार, 21 जुलाई 1923
Croatian subota, 21. srpnja 1923.
Upper Sorbian sobota, 21. julija 1923
Hungarian 1923. július 21., szombat
Armenian 1923 թ. հուլիսի 21, շաբաթ
Interlingua sabbato le 21 de julio 1923
Indonesian Sabtu, 21 Juli 1923
Igbo Satọdee, 21 Julaị 1923
Sichuan Yi 1923 ꏃꆪ 21, ꆏꊂꃘ
Icelandic laugardagur, 21. júlí 1923
Italian sabato 21 luglio 1923
Japanese 1923年7月21日土曜日
Ngomba Sásidɛ, 1923 Pɛsaŋ Saambá 21
Machame Jumamosi, 21 Julyai 1923
Javanese Sabtu, 21 Juli 1923
Georgian შაბათი, 21 ივლისი, 1923
Kabyle Sayass 21 Yulyu 1923
Kamba Wa thanthatũ, 21 Mwai wa muonza 1923
Makonde Liduva litandi, 21 Mwedi wa Nnyano na Mivili 1923
Kabuverdianu sábadu, 21 di Julhu di 1923
Koyra Chiini Assabdu 21 Žuyye 1923
Kikuyu Njumamothi, 21 Mwere wa mũgwanja 1923
Kazakh 1923 ж. 21 шілде, сенбі
Kako mɔnɔ sɔndi 21 kuŋgwɛ 1923
Kalaallisut 1923 juulip 21, arfininngorneq
Kalenjin Kolo, 21 Ng’eiyeet 1923
Khmer សៅរ៍ 21 កក្កដា 1923
Kannada ಶನಿವಾರ, ಜುಲೈ 21, 1923
Korean 1923년 7월 21일 토요일
Konkani शेनवार 21 जुलाय 1923
Kashmiri بٹوار, جوٗلایی 21, 1923
Shambala Jumaamosi, 21 Julai 1923
Bafia samdí 21 ŋwíí akǝ táabɛɛ 1923
Colognian Samsdaach, dä 21. Juuli 1923
Kurdish 1923 tîrmehê 21, şemî
Cornish 1923 mis Gortheren 21, dy Sadorn
Kyrgyz 1923-ж., 21-июль, ишемби
Langi Jumamóosi, 21 Kʉmʉʉnchɨ 1923
Luxembourgish Samschdeg, 21. Juli 1923
Ganda Lwamukaaga, 21 Julaayi 1923
Lakota Owáŋgyužažapi, Čhaŋpȟásapa Wí 21, 1923
Lingala mpɔ́sɔ 21 sánzá ya nsambo 1923
Lao ວັນເສົາ ທີ 21 ກໍລະກົດ ຄ.ສ. 1923
Northern Luri AP 1302 Tir 29, Sat
Lithuanian 1923 m. liepos 21 d., šeštadienis
Luba-Katanga Lubingu 21 Kabàlàshìpù 1923
Luo Ngeso, 21 Dwe mar Abiriyo 1923
Luyia Jumamosi, 21 Julai 1923
Latvian Sestdiena, 1923. gada 21. jūlijs
Masai Jumamósi, 21 Mórusásin 1923
Meru Jumamosi, 21 Njuraĩ 1923
Morisyen samdi 21 zilye 1923
Malagasy Asabotsy 21 Jolay 1923
Makhuwa-Meetto Jumamosi, 21 Mweri wo saba 1923
Metaʼ Aneg 7, 1923 iməg àdùmbə̀ŋ 21
Maori Rāhoroi, 21 Hōngongoi 1923
Macedonian сабота, 21 јули 1923
Malayalam 1923, ജൂലൈ 21, ശനിയാഴ്ച
Mongolian 1923 оны долоодугаар сарын 21, Бямба гараг
Marathi शनिवार, 21 जुलै, 1923
Malay Sabtu, 21 Julai 1923
Maltese Is-Sibt, 21 ta’ Lulju 1923
Mundang Comzyeɓsuu 21 Mamǝŋgwãalii 1923
Burmese 1923၊ ဇူလိုင် 21၊ စနေ
Mazanderani AP 1302 Tir 29, Sat
Nama Satertaxtsees, 21 ǂKhoesaob 1923
Norwegian Bokmål lørdag 21. juli 1923
North Ndebele Mgqibelo, 21 Ntulikazi 1923
Low German 1923 M07 21, Sat
Nepali 1923 जुलाई 21, शनिबार
Dutch zaterdag 21 juli 1923
Kwasio sásadi 21 ngwɛn hɛmbuɛrí 1923
Norwegian Nynorsk laurdag 21. juli 1923
Ngiemboon màga lyɛ̌ʼ , lyɛ̌ʼ 21 na saŋ tyɛ̀b tyɛ̀b mbʉ̀ŋ, 1923
Nuer Bäkɛl lätni 21 Pay yie̱tni 1923
Nyankole Orwamukaaga, 21 Okwamushanju 1923
Oromo Sanbata, Adooleessa 21, 1923
Odia ଶନିବାର, ଜୁଲାଇ 21, 1923
Ossetic Сабат, 21 июлы, 1923 аз
Punjabi ਸ਼ਨਿੱਚਰਵਾਰ, 21 ਜੁਲਾਈ 1923
Punjabi ہفتہ, 21 جولائی 1923
Punjabi ਸ਼ਨਿੱਚਰਵਾਰ, 21 ਜੁਲਾਈ 1923
Polish sobota, 21 lipca 1923
Pashto اونۍ د AP 1302 د چنگاښ 29
Portuguese sábado, 21 de julho de 1923
Quechua Sábado, 21 Julio, 1923
Romansh sonda, ils 21 da fanadur 1923
Rundi Ku wa gatandatu 21 Mukakaro 1923
Romanian sâmbătă, 21 iulie 1923
Rombo Ijumamosi, 21 Mweri wa saba 1923
Russian суббота, 21 июля 1923 г.
Kinyarwanda 1923 Nyakanga 21, Kuwa gatandatu
Rwa Jumamosi, 21 Julyai 1923
Sakha 1923 сыл От ыйын 21 күнэ, субуота
Samburu Mderot ee kwe, 21 Lapa le sapa 1923
Sangu Jumamosi, 21 Mushipepo 1923
Sindhi 1923 جولاءِ 21, ڇنڇر
Northern Sami 1923 suoidnemánnu 21, lávvardat
Sena Sabudu, 21 de Julho de 1923
Koyraboro Senni Asibti 21 Žuyye 1923
Sango Lâyenga 21 Lengua 1923
Tachelhit ⴰⵙⵉⴹⵢⴰⵙ 21 ⵢⵓⵍⵢⵓⵣ 1923
Tachelhit asiḍyas 21 yulyuz 1923
Tachelhit ⴰⵙⵉⴹⵢⴰⵙ 21 ⵢⵓⵍⵢⵓⵣ 1923
Sinhala 1923 ජූලි 21, සෙනසුරාදා
Slovak sobota 21. júla 1923
Slovenian sobota, 21. julij 1923
Inari Sami lávurdâh, syeinimáánu 21. 1923
Shona 1923 Chikunguru 21, Mugovera
Somali Sabti, Bisha Todobaad 21, 1923
Albanian e shtunë, 21 korrik 1923
Serbian субота, 21. јул 1923.
Serbian субота, 21. јул 1923.
Serbian subota, 21. jul 1923.
Swedish lördag 21 juli 1923
Swahili Jumamosi, 21 Julai 1923
Tamil சனி, 21 ஜூலை, 1923
Telugu 21, జులై 1923, శనివారం
Teso Nakasabiti, 21 Ojola 1923
Tajik Шанбе, 21 Июл 1923
Thai วันเสาร์ที่ 21 กรกฎาคม พ.ศ. 2466
Tigrinya ቀዳም፣ 21 ሓምለ መዓልቲ 1923 ዓ/ም
Turkmen 21 iýul 1923 Şenbe
Tongan Tokonaki 21 Siulai 1923
Turkish 21 Temmuz 1923 Cumartesi
Tatar 21 июль, 1923 ел, шимбә
Tasawaq Asibti 21 Žuyye 1923
Central Atlas Tamazight Asiḍyas, 21 Yulyuz 1923
Uyghur 1923 21-ئىيۇل، شەنبە
Ukrainian субота, 21 липня 1923 р.
Urdu ہفتہ، 21 جولائی، 1923
Uzbek shanba, 21-iyul, 1923
Uzbek AP 1302 Tir 29, شنبه
Uzbek шанба, 21 июл, 1923
Uzbek shanba, 21-iyul, 1923
Vai ꔻꔬꔳ, 21 ꖱꕞꔤ 1923
Vai siɓiti, 21 7 1923
Vai ꔻꔬꔳ, 21 ꖱꕞꔤ 1923
Vietnamese Thứ Bảy, 21 tháng 7, 1923
Vunjo Jumamosi, 21 Julyai 1923
Walser Samštag, 21. Heiwet 1923
Wolof Aseer, 21 Sul, 1923
Xhosa 1923 Julayi 21, Mgqibelo
Soga Olomukaaga, 21 Julaayi 1923
Yangben séselé 21 efute 1923
Yiddish שבת, 21טן יולי 1923
Yoruba Àbámẹ́ta, 21 Agẹ 1923
Cantonese 1923年7月21日星期六
Cantonese 1923年7月21日星期六
Cantonese 1923年7月21日星期六
Standard Moroccan Tamazight ⴰⵙⵉⴹⵢⴰⵙ 21 ⵢⵓⵍⵢⵓⵣ 1923
Chinese 1923年7月21日星期六
Chinese 1923年7月21日星期六
Chinese 1923年7月21日星期六
Zulu UMgqibelo, Julayi 21, 1923
|
{"url":"https://howoldcalculator.com/july-21-year-1923","timestamp":"2024-11-13T01:24:22Z","content_type":"text/html","content_length":"104082","record_id":"<urn:uuid:a853b5c6-fc63-4e3e-96e1-59d93d32e9f4>","cc-path":"CC-MAIN-2024-46/segments/1730477028303.91/warc/CC-MAIN-20241113004258-20241113034258-00265.warc.gz"}
|
Measurement of the Crab Nebula spectrum over three decades in energy with the MAGIC telescopes
The MAGIC stereoscopic system collected 69 hours of Crab Nebula data between October 2009 and April 2011. Analysis of this data sample using the latest improvements in the MAGIC stereoscopic software
provided an unprecedented precision of spectral and night-by-night light curve determination at gamma rays. We derived a differential spectrum with a single instrument from 50 GeV up to almost 30 TeV
with 5 bins per energy decade. At low energies, MAGIC results, combined with Fermi-LAT data, show a flat and broad Inverse Compton peak. The overall fit to the data between 1 GeV and 30 TeV is not
well described by a log-parabola function. We find that a modified log-parabola function with an exponent of 2.5 instead of 2 provides a good description of the data (χ[red]^2 = 35 / 26). Using
systematic uncertainties of the MAGIC and Fermi-LAT measurements we determine the position of the Inverse Compton peak to be at (53 ±3[stat] +31[syst] -13[syst]) GeV, which is the most precise
estimation up to date and is dominated by the systematic effects. There is no hint of the integral flux variability on daily scales at energies above 300 GeV when systematic uncertainties are
included in the flux measurement. We consider three state-of-the-art theoretical models to describe the overall spectral energy distribution of the Crab Nebula. The constant B-field model cannot
satisfactorily reproduce the VHE spectral measurements presented in this work, having particular difficulty reproducing the broadness of the observed IC peak. Most probably this implies that the
assumption of the homogeneity of the magnetic field inside the nebula is incorrect. On the other hand, the time-dependent 1D spectral model provides a good fit of the new VHE results when considering
a 80 μG magnetic field. However, it fails to match the data when including the morphology of the nebula at lower wavelengths.
Journal of High Energy Astrophysics
Pub Date:
March 2015
□ Crab Nebula;
□ Pulsar wind nebulae;
□ MAGIC telescopes;
□ Imaging atmospheric Cherenkov telescopes;
□ Very high energy gamma rays;
□ Astrophysics - High Energy Astrophysical Phenomena
accepted by JHEAp, 9 pages, 6 figures
|
{"url":"https://ui.adsabs.harvard.edu/abs/2015JHEAp...5...30A","timestamp":"2024-11-10T19:20:09Z","content_type":"text/html","content_length":"69388","record_id":"<urn:uuid:7da2077b-173d-4a44-b92b-31261ff1c83b>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.61/warc/CC-MAIN-20241110170046-20241110200046-00897.warc.gz"}
|
College Prep
Hide Menu Show Menu
Multiplying two integers
Repeated addition or subtraction. Let $a$ and $b$ be integers. $a•0 = 0 = 0•a$. Like signs: $a•b > 0$. Unlike Signs: $a•b < 0$.
Multiplying rational expressions
Let $a$, $b$, $c$, and $d$ represent real numbers, variables, or algebraic expressions such that \(b \ne 0\) and \(d \ne 0\). Then the product of \(\frac{a}{b}\) and \(\frac{c}{d}\) is \(\frac{a}{b}•
\frac{c}{d} = \frac{{ac}}{{bd}}\).
Multiplying fractions
Let $a$, $b$, $c$, and $d$ be integers with \(b \ne 0\) and \(d \ne 0\). The product of \(\frac{a}{b}\) and \(\frac{c}{d}\) is \(\frac{a}{b}•\frac{c}{d} = \frac{{a•c}}{{b•d}}\).
Multiplicative inverse property
The product of a nonzero real number and its reciprocal is 1.
Multiplicative identity property
The product of 1 and a real number equals the number itself.
Multiplication property of inequalities
Multiply each side by a positive quantity. If \(a < b\) and $c$ is positive, then \(ac < bc\). Multiply each side by a negative quantity and reverse the inequality symbol. If \(a < b\) and $c$ is
negative, then \(ac > bc\).
A polynomial in $x$ with only 1 term.
Mixture problems
Real-life problems that involve combinations of two or more quantities that make up a new or different quantity.
Markup rate
When the markup is expressed as a percent of the cost.
The difference between the price a store sells an item for and the price they pay for the item.
|
{"url":"https://www.collegeprepalgebra.com/cpa/category/glossary/?letter=M","timestamp":"2024-11-14T21:28:34Z","content_type":"text/html","content_length":"69878","record_id":"<urn:uuid:ea8734f1-1eb9-4c3e-b4ff-728e2a8c50d4>","cc-path":"CC-MAIN-2024-46/segments/1730477395538.95/warc/CC-MAIN-20241114194152-20241114224152-00713.warc.gz"}
|
wind turbine
Wind Turbine
Implement model of variable pitch wind turbine
Simscape / Electrical / Specialized Power Systems / Electrical Machines
The Wind Turbine block models the steady-state power characteristics of a wind turbine. The stiffness of the drive train is infinite and the friction factor and the inertia of the turbine must be
combined with those of the generator coupled to the turbine. The output power of the turbine is given by the following equation.
${P}_{m}={c}_{p}\left(\lambda ,\beta \right)\frac{\rho A}{2}{v}_{\text{wind}}^{3},$ (1)
P[m] Mechanical output power of the turbine (W)
c[p] Performance coefficient of the turbine
ρ Air density (kg/m^3)
A Turbine swept area (m^2)
v[wind] Wind speed (m/s)
λ Tip speed ratio of the rotor blade tip speed to wind speed
β Blade pitch angle (deg)
Equation 1 can be normalized. In the per unit (pu) system we have:
P[m][_pu] Power in pu of nominal power for particular values of ρ and A
c[p][_pu] Performance coefficient in pu of the maximum value of c[p]
v[wind_pu] Wind speed in pu of the base wind speed. The base wind speed is the mean value of the expected wind speed in m/s.
k[p] Power gain for c[p][_pu]=1 pu and v[wind_pu]=1 pu, k[p] is less than or equal to 1
A generic equation is used to model c[p](λ,β). This equation, based on the modeling turbine characteristics of [1], is:
${c}_{p}\left(\lambda ,\beta \right)={c}_{1}\left({c}_{2}/{\lambda }_{i}-{c}_{3}\beta -{c}_{4}\right){e}^{-{c}_{5}/{\lambda }_{i}}+{c}_{6}\lambda ,$
$\frac{1}{{\lambda }_{i}}=\frac{1}{\lambda +0.08\beta }-\frac{0.035}{{\beta }^{3}+1}.$
The coefficients c[1] to c[6] are: c[1 ]= 0.5176, c[2 ]= 116, c[3 ]= 0.4, c[4 ]= 5, c[5 ]= 21 and c[6 ]= 0.0068. The c[p]-λ characteristics, for different values of the pitch angle β, are illustrated
below. The maximum value of c[p](c[pmax]= 0.48) is achieved for β = 0 degrees and for λ = 8.1. This particular value of λ is defined as the nominal value (λ[_nom]).
This figure shows the Simulink^® model of the turbine. The three inputs are the generator speed (ωr_pu) in pu of the nominal speed of the generator, the pitch angle in degrees, and the wind speed in
m/s. The tip speed ratio λ in pu of λ[_nom] is obtained by the division of the rotational speed in pu of the base rotational speed (defined below) and the wind speed in pu of the base wind speed. The
output is the torque applied to the generator shaft.
The illustration below shows the mechanical power P[m] as a function of generator speed, for different wind speeds and for blade pitch angle β = 0 degrees. This figure is obtained with the default
parameters (base wind speed = 12 m/s, maximum power at base wind speed = 0.73 pu (k[p] = 0.73), and base rotational speed = 1.2 pu).
Generator speed (pu) — Generator speed, pu
Generator speed based on the nominal speed of the generator, specified as a scalar, in pu.
Pitch angle (deg) — Pitch angle, deg
Pitch angle, specified as a scalar
Wind speed (m/s) — Wind speed, m/s
nonnegative scalar
Wind speed, specified as a nonnegative scalar, in m/s.
Tm (pu) — Mechanical torque of wind turbine, pu
Mechanical torque of the wind turbine, returned as a scalar, in pu of the nominal generator torque. The nominal torque of the generator is based on the nominal generator power and speed.
Nominal mechanical output power — Nominal output power
1.5e6 (default) | nonnegative scalar
The nominal output power in watts (W).
Base power of the electrical generator — Nominal power of electrical generator coupled to wind turbine
1.5e6/0.9 (default) | positive scalar
The nominal power of the electrical generator coupled to the wind turbine, in VA. This parameter is used to compute the output torque in pu of the nominal torque of the generator.
Base wind speed (m/s) — Base value of wind speed
12 (default) | positive scalar
The base value of the wind speed, in m/s, used in the per-unit system. The base wind speed is the mean value of the expected wind speed. This base wind speed produces a mechanical power that is
usually lower than the turbine nominal power.
Maximum power at base wind speed — Maximum power
0.73 (default) | positive scalar
Power gain k[p] at base wind speed in pu of the nominal mechanical power. k[p] is less than or equal to 1.
Base rotational speed — Rotational speed at maximum power for base wind speed
1.2 (default) | positive scalar
The rotational speed at maximum power for the base wind speed. The base rotational speed is in pu of the base generator speed. For a synchronous or asynchronous generator, the base speed is the
synchronous speed. For a permanent-magnet generator, the base speed is defined as the speed producing nominal voltage at no load.
Pitch angle beta to display wind turbine power characteristics — Pitch angle beta
0 (default) | nonnegative scalar
The pitch angle beta, in degrees, used to display the power characteristics. Beta must be greater than or equal to zero.
Display wind turbine power characteristics — Plot wind turbine power characteristics
Click to plot the turbine power characteristics for different wind speeds and for the specified pitch angle beta.
[1] Siegfried Heier, “Grid Integration of Wind Energy Conversion Systems,” John Wiley & Sons Ltd, 1998, ISBN 0-471-97143-X
Extended Capabilities
C/C++ Code Generation
Generate C and C++ code using Simulink® Coder™.
Version History
Introduced in R2006a
|
{"url":"https://www.mathworks.com/help/sps/powersys/ref/windturbine.html;jsessionid=00ec52a7e93ee0e251f5e77adda6","timestamp":"2024-11-03T03:25:52Z","content_type":"text/html","content_length":"92041","record_id":"<urn:uuid:d1560174-2689-4ee9-9fc8-beebf55ea60d>","cc-path":"CC-MAIN-2024-46/segments/1730477027770.74/warc/CC-MAIN-20241103022018-20241103052018-00629.warc.gz"}
|
Euclid's Postulates
In his seminal work "Elements," Euclid formulated five postulates that form the foundation of Euclidean geometry.
Euclid's five postulates are fundamental mathematical assertions assumed to be true without proof.
These postulates underpin plane geometry, which is commonly taught in schools.
Note: Rejecting even one of these Euclidean postulates leads to the development of a different type of geometry, known as non-Euclidean geometry.
These postulates are:
First Postulate
A straight line can be drawn connecting any two points.
This implies the conditions of existence and uniqueness: a unique straight line exists between any two distinct points.
From these axioms, several corollaries are derived:
• Infinite lines can pass through a single point.
• At least two points lie on any straight line.
• Through three non-collinear points, exactly one plane passes.
• Through three collinear points, exactly one straight line passes.
• Infinite planes can pass through a single straight line.
Second Postulate
A finite straight line can be extended indefinitely in a straight line.
Third Postulate
Given any point and any segment of length, a circle can be drawn with the point as its center and the segment as its radius.
Fourth Postulate
All right angles are congruent, meaning they are equal in measure.
Fifth Postulate
If a straight line "r" intersects two other straight lines "s" and "t" such that the interior angles α and β on the same side are less than two right angles (α + β < 180°), then lines "s" and "t", if
extended indefinitely, will meet at a point P.
Euclid's fifth postulate is also known as the parallel postulate.
This is because if a straight line intersects two straight lines such that the sum of the interior angles is exactly 180°, then the two lines are parallel and will never intersect in the plane.
Over time, various formulations of this postulate have been proposed.
Note: Euclid's fifth postulate has been the subject of extensive debate and analysis over the centuries. For more than two thousand years, mathematicians attempted to derive this postulate from the
other four but ultimately failed. In the 19th century, it was finally proven that this is not possible. Changing the fifth postulate results in a type of geometry entirely different from Euclidean
geometry, known as non-Euclidean geometry. There are several types of non-Euclidean geometries, such as spherical geometry, hyperbolic geometry, and elliptic geometry.
And so on
|
{"url":"https://www.andreaminini.net/math/euclid-s-postulates","timestamp":"2024-11-01T23:21:45Z","content_type":"text/html","content_length":"16214","record_id":"<urn:uuid:fd08fa42-b176-4c58-84bc-2c4a5070ace7>","cc-path":"CC-MAIN-2024-46/segments/1730477027599.25/warc/CC-MAIN-20241101215119-20241102005119-00020.warc.gz"}
|
The Voltage Divider
22 March 2009
Author: Giorgos Lazaridis
The Voltage Divider
What is the voltage divider
The voltage divider is composed (usually) from a set of two resistors in series. The effect of the voltage divider is that it can be used to polarize other components in a circuitry such as
transistors or integrated circuits, with a different voltage from the main voltage supply. Thus, a circuit may have 9 volts power supply and using a voltage divider we can supply a transistor in this
circuit with 3.6 volts. A typical voltage divider is shown bellow:
How to calculate the voltage divider
The first step to calculate a voltage divider is to know the current that flows within. At first, we will suppose that the R[LOAD] is not yet connected to the circuit. So using the ohm's law:
I = U / R[TOTAL] => I = U / (R1+R2) (1)
The output voltage will be the drop voltage across the edges of R2. Using again the ohm's law we calculate:
V[R2] = I x R2 (2)
(2) because of (1) becomes:
V[R2] = V[OUTPUT] = V1 x R2 / (R1 + R2)
This is the basic formula to calculate the output voltage of the voltage divider.
A major issue here is that R2 is connected in parallel with R[LOAD]. When you put the voltage divider to work, you should always calculate R2 as R[TOTAL] with the R[LOAD] connected in parallel:
R[2-LOAD TOTAL] = (R2 x R[LOAD]) / (R2 + R[LOAD])
And replace R2 in the basic formula with the result of the above calculation.
Last but not least, is the max current that will be able to go within R[LOAD]. If we consider the system R1-R2-R[LOAD] as a mixed resistor connection (series and parallel connection combined), the
maximum current I will be less or equal to the current that flows within R1. Thus:
I = U / R1
I = I[R2] + I[RLOAD]
An example
In the previous example, we want to supply the R[LOAD] with 3 volts. The input voltage U is 12 Volts. We also want to limit the maximum current that will flow within R[LOAD] to 300mA. The resistance
of R[LOAD] is 7Ohms.
At first we need to calculate the R1. The only thing that is given to us is the current limitation. We suppose that the minimum resistance of R[LOAD] could be as low as 0 ohms. That would give us the
maximum current flow. So, we will calculate R1 in a way that no more than 300mA could flow within:
R1 = U / I => R1 = 12V / 300mA => R1 = 40 Ohms
Now we need to calculate R2 in a way that the voltage across R[LOAD] will be the desired one. Using the basic formula from previous calculations:
V[OUTPUT] = V1 x R2 / (R1 + R2)[/B]
But R2 is connected in parallel with R[LOAD]. So we will calculate the basic formula using R[TOTAL] instead of R2:
V[OUTPUT] = V1 x R[TOTAL] / (R1 + R[TOTAL]) =>
V[OUTPUT] x (R1 + R[TOTAL]) = V1 x R[TOTAL] =>
V[OUTPUT] x R1 + V[OUTPUT] x R[TOTAL] = V1 x R[TOTAL] =>
V[OUTPUT] x R[TOTAL] - V1 x R[TOTAL] = -V[OUTPUT] x R1 =>
R[TOTAL] x (V[OUTPUT] - V1) = -V[OUTPUT] x R1 =>
R[TOTAL] x (V1 - V[OUTPUT]) = V[OUTPUT] x R1 =>
R[TOTAL] = (V[OUTPUT] x R1) / (V1 - V[OUTPUT]) =>
R[TOTAL] = (3 x 40) / (12 - 3) =>
R[TOTAL] = 120 / 9 =>
R[TOTAL] = 13.3 Ohms =>(approx.) R[TOTAL] = 14 Ohms
Now we come to the final step, to calculate the R2 resistor.The total resistance R[TOTAL] is calculated from the resistor parallel calculation as follows:
1/R[TOTAL] = 1/R2 + 1/R[LOAD]
solving for R2:
1/R2 = 1/R[TOTAL] - 1/R[LOAD]=>
1/R2 = R[LOAD]/(R[LOAD]xR[TOTAL]) - R[TOTAL]/(R[TOTAL]xR[LOAD]) =>
1/R2 = (R[LOAD]-R[TOTAL]) / R[LOAD]xR[TOTAL] =>
R2 = R[LOAD]xR[TOTAL] / (R[LOAD]-R[TOTAL]) =>
R2 = 7 x 14 / (14 - 7) =>
R2 = 14 Ohms
And these are the final values for this circuit to work!
Polarizing a transistor
The voltage divider is a cheap and easy solution to have different voltages within a circuit with minimal components used. Therefore, it is a very common way for polarizing for example transistors
within an amplifier circuit.
Nevertheless, a voltage divider can have some major drawbacks. First of all, it is not stable. If the current drawn from R[LOAD] changes, the voltage across R[LOAD] will also change.
Another drawback is the current limitation. There are usually not resistors with the values required to achieve exactly the desired voltage drop. Many times, higher values of resistors must be used
in order to achieve this voltage drop. Typical values are from 330 Ohms, and may go up to 22Kohms or even higher. This will dramatically decrease the maximum current flow from the R[LOAD]. That is of
course not a big deal when polarizing a transistor (as seen on the left side). That is what makes the voltage divider ideal for such kind of applications.
Relative pages
Sorry but I think your calculation has an error.
The total resistance of any resistor network, in parallel, must always be lower than the lowest value.
However, your calculations show the total to be the same as one of the parallel resistors (14 ohms)
Also, the calculated values do not work correctly for the circuit.
Sorry. But many thanks for the site. Its a good educational site!
Excuse me, but i think you have a mistake. Or i have misunderstanding
"1/RTOTAL = 1/R2 1/RLOAD
solving for R2:
1/R2 = 1/RLOAD - 1/RTOTAL =>"/I THINK MISTAKE IS HERE, 1/RTOTAL=1/R2 1/RLOAD,THEN -1/R2=-1/RTOTAL 1/RLOAD, THEN 1/R2=1/RTOTAL 1/RLOAD. tHE RESULT OF R2 WILL BE STE SAME BUT NEGATIVE. I BELIEVE
NEGATIVE ANSWER IS UNCORRECT BUT MATH IS MATH...
@Yoram Stein what do you mean???
Giorgios question: How do you write R load so theat load comes lower then the R (what alt function is that)?
@Fung in that case, you add the resistors and find the equivalent. If for example 3 resistors R1-R2-R3 with R1 to + and R3 to -, you can find the voltage between R1 and r2 if you add r2 and r3
and calculate them as a voltage divider with 2 resistors, R1 and R2+R3 equivalent.
How about the voltage dividers which includes more than 2 resistors, which has more than 1 loading voltage?
For example, there are 5 resistors in a voltage divider R1 to R5 with the supply voltage of 6V, the values of R1 to R5 are 82 kohms, 5 kohms, 3.3 kohms, 6.8 kohms and 68 kohms respectively; how
can I calculate the voltage between the resistors (ie R1 and R2, R2 and R3, R3 and R4, R4 and R5)?
Ed, go to the forum (http://pcbheaven.com/forum/) subscribe there and post your question with a circuit.
Well I am still playing with this divider circuit. What I now have is the 3.3 megohm resistors in series with a 51 Kohm resistor to ground. The 10 Megohm resistor is still between the first two
3.3 mOhm resistors and the last 3.3 meg ohm and 51 Kohm string to ground. I calculate It is 0.0001005 A, Rt is 9,951,000 Ohms, E across each 3.3 Meg ohm resistor is 331.62 volts, E across the 51
kOhm is 5.1255 volts. The 5.1255 volts feeds the minus[-] side of a comparator. The plus [+] side of the comparator is fed by a 50 K pot and a 5.6 volt zener in series with a 1K resistor. I
suppose this allow a trip point to be set.
The end of the 10 meg ohm resistor feeds a 0.047 uf capacitor [400 vdc] in parallel with the primary of a trigger transformer [32.1 ohms] and a button to ground to discharge the capacitor and
flash a lamp.
The question I have is what am I looking t as far as electrical specs here. I want to replace the push button with a SSR so I can connect it to a PIC micro.
Please educate me.
So, RA = R1+R2 = 6.6M. RA is the R1 in my calculations.
RB = R3 = 3.3M. RB is the R2 in my calculations.
For the moment, forget the load (the 10M resistor and the comparator) and find the voltage from the output of the divider.
The voltage on the tap is Vt = 1000 x RB / (RA+RB) => Vt=1000x3.3/9.9 => Vt=333.33V
The max current that can go through 10M resistor in series with 333.33V, is I=U/R = 1000/10000000 =0.0001A = 0.1mA
What I have is the following: 1,000 volts DC is the applied voltage. This voltage is going thru R1 [3.3 M] + R2 [3.3M] + R3 [3.3 M] to ground. There is a tap between R2 and R3 and this goes
through a 10 M resistor to an comparator [Texas Instruments TL331]. The question is what voltage and current are applied to the comparator. What formulas do I need to calculate this. I have the
series resistance is 9900000 ohms, the voltage is 1,000 and that gives a current through R1 through R3 as 0.000101 amps. I don't understand how to determine the rest.
a circuit schematic would be easier for me to understand. You can use the forum to upload one, or upload to a photo-gallery site and post here the link. I am sure it is not too complicated. Send
me the schematic.
I have a resistive divider made up of 3 3,300,000 resistors in series with 1,000 volts applied. I have a tap between resistors 2 and 3 an it is a 10,000,000 resistor to feed a comparator. I tried
to use your page to calculate the voltage coming out of the 10,000,000 resistor but your page and calculations are too confusing. What is "U"? You have no description of "U"?
|
{"url":"http://pcbheaven.com/wikipages/The_Voltage_Divider/","timestamp":"2024-11-06T15:41:56Z","content_type":"text/html","content_length":"36138","record_id":"<urn:uuid:a5e3954b-5d23-4a97-b2ae-973a0ac13536>","cc-path":"CC-MAIN-2024-46/segments/1730477027932.70/warc/CC-MAIN-20241106132104-20241106162104-00527.warc.gz"}
|
How much high school maths do students need to succeed at university physics?
How much high school maths do students need to succeed at university physics?
BACKGROUND Most first-year calculus-based Physics courses offered at Australian Universities have prerequisites which include Year 12 Physics and typically two Mathematics subjects. An increasing
number of students are attempting these Physics courses without the full mathematics pre-requisite. AIMS To determine if attempting a calculus-based Physics course without having the full
mathematical perquisites impacts on a student’s performance within that course, progression and success in later Physics courses. METHOD At the University of Adelaide the ideal mathematical
pre-requisites for Physics-IA (a calculus-based course) are the Year 12 SACE subjects: Mathematical Studies and Specialist Mathematics. However an increasing number of students are attempting this
course without Specialist Mathematics. In 2014, 25% of the school-leaving students enrolled in Physics-IA had only completed Mathematical Studies and not Specialist Mathematics. This corresponds with
a period over the last several years where the number of students completing Year 12 SACE Physics has remained constant but the numbers completing Mathematical Studies and Specialist Mathematics has
declined. The correlation between the reduction in the number of students completing the above Year 12 Maths subjects and reduction in the number of students who attempt Physics IA without the full
mathematical pre-requisites has been examined using student enrolment from 2003 to 2014. The difference in the success of students who attempt calculus-based Physics with and without the desired
mathematical preparation has been analysed using enrolments from 2003 until 2014 in the following way: • Success (grade obtained) in the first semester course Physics IA, • Progression and success in
Physics IB (semester 2 course). • Progression and success in Second Year Physics courses. RESULTS and CONCLUSIONS Results and conclusion will be presented in detail at the conference.
|
{"url":"https://openjournals.library.sydney.edu.au/IISME/article/view/7826","timestamp":"2024-11-10T22:22:11Z","content_type":"text/html","content_length":"38423","record_id":"<urn:uuid:3aa275a5-a472-48ab-8dc7-118f0818cee4>","cc-path":"CC-MAIN-2024-46/segments/1730477028191.83/warc/CC-MAIN-20241110201420-20241110231420-00058.warc.gz"}
|
Open Circuit Test Or No-Load Test
Figure 1.35 Open Circuit Test
From this test, we can determine core loss and no-load current (I[0]) of the transformer. Figure 1.35 shows the schematic diagram of the transformer. The high-voltage side is generally kept open
because the current in high-voltage winding is less compared to that on low-voltage winding. In low-voltage side, a voltmeter, an ammeter and a wattmeter are connected to measure the input voltage,
no-load current and the core loss of the transformer. Since no-load current is generally small, the copper loss at no-load condition is negligible. The wattmeter reading practically gives the iron
loss of the transformer.
To measure the induced emf in secondary winding, a high-resistance voltmeter is connected across the secondary to calculate the turns ratio (a).
Let I[0] be the reading of the ammeter, V[1] be the reading of the voltmeter and W be the reading of the wattmeter.
We have W=V[1]I[0]cosθ[0]
Therefore, I[W]=I[0]cosθ[0] (1.27)
and I[μ]=I[0]sinθ[0] (1.28)
During no-load condition, the voltage drop across the primary impedance is small. Therefore, we have
The total iron loss depends on the frequency as well as the maximum flux density. Hysteresis loss and eddy current loss are the two parts of the total iron loss, which are described below.
1. Hysteresis loss: P[h] = k[1]B[max]^1,6f (1.31)
2. Eddy current loss: P[e] = k[2]B[max]^2f^2 (1.32)
where k[1] and k[2] are the constants in the above two equations, which can be obtained from the experi-ment. The hysteresis and eddy current losses can be calculated by knowing k[1] and k[2]. Figure
1.36 shows the variation of iron loss with the applied voltage.
Figure 1.36 Variation of Iron Loss with Applied Voltage
|
{"url":"https://electricallive.com/2015/03/open-circuit-test-or-no-load-test.html","timestamp":"2024-11-02T23:23:16Z","content_type":"text/html","content_length":"89503","record_id":"<urn:uuid:27056d70-c45e-4272-a881-37593fe35343>","cc-path":"CC-MAIN-2024-46/segments/1730477027768.43/warc/CC-MAIN-20241102231001-20241103021001-00817.warc.gz"}
|
Coq devs & plugin devs
What is the difference between foo; solve_constraints; []; bar. and foo; solve_constraints; []. bar.?
what does ;[] even do
Unless I am reading the manual incorrectly, don't you need Unset Solve Unification Constraints to use solve_constraints?
Or I guess that is to disable the constraints from being solved in other ltac tactics. If foo on purpose doesn't do this then maybe it is valid.
Regarding the use of []; which I haven't seen before, is this ensuring no goals exist?
i.e. the empty case of the [ .. | ..] syntax.
Seems to be:
Goal nat * nat * nat.
Fail simple notypeclasses refine (_,_,_); [].
Then to answer your original question, the first is different than the second since bar is being bound to a new empty set of goals (I don't know if those are the right words).
It would because there are no new goals
Goal nat * nat * nat.
idtac "hello"; []; idtac "world".
But if idtac "world" was replaced with a tactic that actually did anything, then the latter bar would do something in the original goal, whilsts the former bar would do it in the "empty set of
At least that is the haphazard understanding of the situation I have
what is this "new goal" concept
Goal nat * nat * nat.
Fail refine (_,_,_); [].
Fail refine (_,_,_); [idtac].
Fail refine (_,_,_); [idtac|idtac].
Succeed refine (_,_,_); [idtac|idtac|idtac].
I don't know if the correct term is "new goal" but that's what it looks like to me.
do you mean the goals which are focused after the tactic on the left of the ;?
I suppose that is it.
it seems that ; works even if there are no focused goals on the right
Which is why things like this work:
Goal True.
trivial; exact 45.
anyway removing the ;[] from the issue example changes nothing so it doesn't matter
The answer to Jason's question is then, bar does nothing in the first example since there are no focused goals, but something in the second example since there is one goal focused.
; [] is "fail if the previous tactic does not leave over exactly one goal"
Is it documented in the refman?
I mean, really my question is "what is the difference between foo; bar. and foo. bar., but I added ; [] and solve_constraints to eliminate the answers "foo might not leave over exactly one goal" and
"some constraints are only solved at ."
probably the goal evar thing https://github.com/coq/coq/issues/15520
@Ali Caglayan It should be? It's the trivial case of things like foo; [ bar | baz | qux ] where you only have one goal you're delegating to. Note that you can elide idtac when you don't want to do
something in the branch.
Is it really the trivial case?
Goal nat.
Succeed refine (_); [].
Succeed refine (_); [idtac].
Fail refine (_); [idtac|idtac].
I would have expected the first to fail
Why would it fail? As Jason just explained, the empty tactic is the same as idtac. So, the first two lines are identical.
Jason Gross said:
I mean, really my question is "what is the difference between foo; bar. and foo. bar., but I added ; [] and solve_constraints to eliminate the answers "foo might not leave over exactly one goal"
and "some constraints are only solved at ."
I thought it was a minimal repro case so this was confusing
Minimal repro case is in https://github.com/coq/coq/issues/15927, sorry for the confusion
What is confusing me is what Jason said here:
Jason Gross said:
Ali Caglayan It should be? It's the trivial case of things like foo; [ bar | baz | qux ] where you only have one goal you're delegating to. Note that you can elide idtac when you don't want to do
something in the branch.
Guillaume Melquiond said:
Why would it fail? As Jason just explained, the empty tactic is the same as idtac. So, the first two lines are identical.
How is it the same as idtac?
Goal nat * nat * nat.
Fail refine (_,_,_); [].
Succeed refine (_,_,_); idtac.
@Ali Caglayan What I meant was that ; [] and ; [idtac] are the same
No, what we mean is that [ ] is the same as [ idtac ]; [ | ] is the same as [ idtac | idtac ]; and so on.
It took me a while to realize that the semi colon there was not part of the code. :)
As the documentation states, "Omitting an ltac_expr leaves the corresponding goal unchanged."
Sorry @Jason Gross I didn't mean to hijack your thread, I guess I still don't understand some things here.
Thanks for explaining @Guillaume Melquiond
No worries
I'm still baffled by the behavior I posted though (I don't see how it can be pattern_of_constr as @Gaëtan Gilbert suggests on the issue...)
Last updated: Oct 13 2024 at 01:02 UTC
|
{"url":"https://coq.gitlab.io/zulip-archive/stream/237656-Coq-devs-.26-plugin-devs/topic/difference.20between.20.60.3B.60.20and.20.60.2E.60.html","timestamp":"2024-11-13T22:01:13Z","content_type":"text/html","content_length":"34433","record_id":"<urn:uuid:d0350ade-954f-4fec-8aa2-f3205a712d90>","cc-path":"CC-MAIN-2024-46/segments/1730477028402.57/warc/CC-MAIN-20241113203454-20241113233454-00640.warc.gz"}
|
How To Do 2.670 Matlab Problem Sets
Your problem set solutions need to be turned in as a set of Matlab scripts. A Matlab script is just a list of commands that Matlab will run in order, and it's an ordinary text file.
You should be running Matlab to do the problem sets:
athena% add matlab
athena% attach 2.670
athena% matlab /mit/2.670/Computers/Matlab/Examples &
You should also be using emacs to write up your solution scripts:
athena% emacs ~/matlab/solution1.m &
The command above will open an emacs where you can edit a file called solution1.m. The .m in the file name is required, that is how matlab knows that the file contains matlab commands. Both you and
the graders can run your script just by typing the name of the file (without the .m) at the matlab prompt, it will run each of those commands in order.
All of the problem descriptions are just scripts too. To run an assignment script (to read what your assignment is), just type
>> Prob1
at the matlab prompt. It will describe that problem in the assignment. Then, use matlab to try solving the problem at your matlab prompt. If you need help, try typing "help commandname" if you
already know the name of a command, or "lookfor keywordname" if you know what you're trying to do but not what command does it. When you get something that works for the problem solution, put that
command in the .m file you have in emacs.
When you want to say something in the homework you turn in that isn't a matlab command (your name, or an explanation of why you used a particular command), you can put a percent sign at the beginning
of that line, ie
% Abbe Cohen
% Problem 13
% Part A: Make a matrix A with values from 1 to 10.
A = [1:10];
You are required to put in comments at the beginning of each problem you turn in with your name and the problem number, as above. Also, put "echo on" at the very start of your file and "echo off" at
the very end of your file, so that all of the commands will be printed out when we run your script.
When you're done with the file for a problem, you can save it in emacs by typing Control-X Control-S. You can exit emacs by typing Control-X Control-C. (For more info on emacs, pick up an emacs quick
reference card from the Athena Consultant's office across from the fishbowl or from SIPB outside the student center cluster. You can do more with emacs than just edit one file at a time, by far. But
this is all you need to edit one file at a time.)
Then, to turn in that problem, you can type
turnin -c 2.670 N ~/matlab/filename.m
N is the problem number (1 for Prob1, 2 for Prob2, etc, and 2670 for the Stirling Engine Problem), and filename.m is the file you saved your solutions in.
|
{"url":"http://www.mit.edu/people/abbe/matlab/admin.html","timestamp":"2024-11-08T05:30:18Z","content_type":"text/html","content_length":"3347","record_id":"<urn:uuid:0867c8ac-71f4-4d6d-a040-83bae194865c>","cc-path":"CC-MAIN-2024-46/segments/1730477028025.14/warc/CC-MAIN-20241108035242-20241108065242-00386.warc.gz"}
|
Best Hydro Cyclones For Iron Ore Beneficiation
Cyclone In Iron Ore Beneficiation. Best hydro cyclones for iron ore beneficiation Iron Ore Beneficiation Plant at Hirdyanagar Tahsil Sihora Jabalpur MP 0 Considering the ever increasing demand for
good quality iron ore by steel Under flow from Hydro cyclone will be collected in sump and Pumped to cluster of Beneficiation of iron ore slimes using hydrocyclone
Mining Best Hydrocyclones For Iron Ore Beneficiation
china hydrocyclones for iron ore benefication. Mining Best Hydrocyclones For Iron Ore Beneficiation. The role of hydrocyclones in mineral sciencedirecthe iron ore producers soon discovered the
advantages of the hydrocyclone over spiral classifiers in terms of better size control, lower water consumption, lower investment costs and less floor space required, among othershus hydrocyclones …
Mining Best Hydrocyclones For Iron Ore Beneficiation
Mining Best Hydrocyclones For Iron Ore Beneficiation. The role of hydrocyclones in mineral sciencedirecthe iron ore producers soon discovered the advantages of the hydrocyclone over spiral
classifiers in terms of better size control, lower water consumption, lower investment costs and less floor space required, among othershus hydrocyclones are relatively new uy now.
role of hydrocyclones in iron ore beneficiation
best hydro cyclones for iron ore beneficiation Of Iron Ore Processhydrocyclones iron ore beneficiation process through hydrocyclone manufacturer and iron ore beneficiation hydrocyclones fineelevators
iron ore beneficiation hydrocyclon Limestone and Granite Crush Plant in Iran Iran is a very important market of the Middle East 【Get Info】
Cyclone In Iron Ore Beneficiation - dutch-alejandro.nl
cyclone beneficiation plant flowsheet. iron ore beneficiation cyclone working process. cyclone beneficiation plant flowsheet iron ore beneficiation hydrocyclones A typical flow sheet for iron ore
beneficiation plant is shown in Fig 1 FIGURE 2 Process flowsheet of iron ore beneficiation even in the fines cyclone plant Processes for Beneficiation .
Iron Ore Beneficiation Hydrocyclones
Mining best hydrocyclones for iron ore beneficiation iron ore beneficiation hydrocyclones design calculations for a hydrocyclone for iron ore mineral processing epc iron ore beneficiation plant this
is an invitation from weihai haiwang hydrocyclone co ltd databased performance modelling of databased performance modelling of.Online chat. Get Price
Mining Best Hydrocyclones For Iron Ore Beneficiation
The role of hydrocyclones in mineral sciencedirecthe iron ore producers soon discovered the advantages of the hydrocyclone over spiral classifiers in terms of better size control, lower water
consumption, lower investment costs and less floor space required, among othershus hydrocyclones are …
hydro cyclones manufacturers from india for iron …
hydro cyclones manufacturers from india for iron ore ... Gold supplier high quality industrial iron ore cyclone separator for sales .. India copper mining used hydraulic hydro cyclone separator. ...
hydro cyclones manufacturers from india for iron ore benificiation ... iron ore fines beneficiation plants hydro cyclone desander; Xinhai service personals are experienced professionals in mineral
hydro cyclone for iron ore beneficiation - …
Best hydro cyclones for iron ore beneficiation Iron Ore Beneficiation Plant at Hirdyanagar Tahsil Sihora Jabalpur MP 0 Considering the ever increasing demand for good quality iron ore by steel Under
flow from Hydro cyclone will be collected in sump and Pumped to cluster of Beneficiation of iron ore slimes using hydrocyclone .
hydro cyclone for iron ore beneficiation
Beneficiation Of Iron Ore Process Hydrocyclones- PANOLA. Beneficiation Of Iron Ore Process Hydrocyclones. The desand process consists of wet screens with a nominal 0710 mm cut point which direct the
undersize to hydrocyclones the hydrocyclone underflow is then fed to upcurrent classifiers where the classifier middlings is sent to spirals and classifier and sp.
Beneficiation Of Iron Ore Cyclone - …
Best Hydro Cyclones For Iron Ore Beneficiation. Best hydro cyclones for iron ore beneficiation Iron Ore Beneficiation Plant at Hirdyanagar Tahsil Sihora Jabalpur MP 0 Considering the ever increasing
demand for good quality iron ore by steel Under flow from Hydro cyclone will be collected in sump and Pumped to cluster of Beneficiation of iron ore slimes using hydrocyclone .
Iron Ore Beneficiation With Cyclone - ijmondnoord.nl
Iron Ore Beneficiation With Cyclone. Cyclone beneficiation plant flowsheet ellulnl iron ore beneficiation hydrocyclones a typical flow sheet for iron ore beneficiation plant is shown in fig 1 figure
2 process flowsheet of iron ore beneficiation even in the fines cyclone plant function.
hydrocyclones for beneficiation
hydrocyclones for iron ore beneficationrole of hydrocyclones in iron ore beneficiation 2005-08-07 BWZ Heavy Duty Apron Feeder BWZ series heavy duty apron feeder designed by SKT is one new type high
More Products; role of hydrocyclones in iron ore beneficiation Beneficiation of Iron Ores | ispatgu Beneficiation of Iron Or Iron ore is a mineral which is used after extraction and .working of ...
Best Hydro Cyclones For Iron Ore Beneficiation
Best Hydro Cyclones For Iron Ore Beneficiation. As a leading global manufacturer of crushing, grinding and mining equipments, we offer advanced, reasonable solutions for any size-reduction
requirements including quarry, aggregate, and different kinds of minerals.
Mining Best Hydrocyclones For Iron Ore Beneficiation
Mining best hydrocyclones for iron ore beneficiation iron ore hydrocyclones enificiation of iron ore fines hydrocyclonehandong xinhai mining technology equipment inc is a focus on iron ore
hydrocyclones, jigger machine and other iron ore iron ore beneficiation service, chat onlinerediction of hydrocyclone performance in iron ore.
Best Hydro Cyclones For Iron Ore Beneficiation
Best Hydro Cyclones For Iron Ore Beneficiation. mining best hydrocyclones for iron ore beneficiation cyclone in iron ore beneficiation best hydro cyclones for iron ore beneficiationIron Ore benefi
ion Plants Belt Conveyors HydrocycloneBelt Conveyors for Iron ore benefi ion We takes the pleasure to introduce itself We make use of good know more Chat Online
iron ore beneficiation hydrocyclones
beneficiation of iron ore process hydrocyclones. Reverse flotation studies on an Indian low grade iron ore slim The underflow of hydrocyclones is sent earlier works indies that beneficiation of iron
ore slimes containing significant amount of Fe along with SiO2 reverse flotation is the usual process for the beneficiation of iron ore slim.
1500 th iron ore mining and beneficiation in india
iron ore beneficiation hydrocyclones Mine Equipments. Iron Ore Processing Plant Major iron ore beneficiation equipments are including jaw crusher cone crusher vibrating screenball millspiral
classfierhydrocycloneflotation machine Iron ore beneficiation process design in India and South Africa liming The following can .
design hydrocyclones for chrome ore beneficiation ...
design hydrocyclones for chrome ore beneficiation. Chili 120-150tph Station de concassage mobile de pierre de rivière. Chili 120-150tph Station de concassage mobile de pierre de rivière. Ligne de
concassage de minerai de fer du Chili. Papouasie Nouvelle …
best size for beneficiation of iron ore - wdb …
Best Hydro Cyclones For Iron Ore Beneficiation. Best Hydrocyclones For Iron Ore Beneficiation Iron Ore BeneficiationMultotec Iron Ore beneficiation solutions from Multotec are designed so that each
stageto a DMS cyclone where separation sends some of the ore through sievebend More Info Recovery of Iron.
|
{"url":"http://tanuloszoba.eu/cone/2022-best-hydro-cyclones-for-iron-ore-beneficiation-5935/","timestamp":"2024-11-15T04:44:21Z","content_type":"application/xhtml+xml","content_length":"15598","record_id":"<urn:uuid:aaa273ea-52d1-4459-9fb5-c4d419b3e182>","cc-path":"CC-MAIN-2024-46/segments/1730477400050.97/warc/CC-MAIN-20241115021900-20241115051900-00624.warc.gz"}
|
Foil coil windings
A foil coil is a winding obtained from a thin, rectangular, metallic sheet folded in a spiral-like shape, as shown in
Figure 1
. The sheet is covered by an insulating coating (varnish). This kind of coil design is common in electromagnetic devices such as power transformers and reactors.
Figure 1. A thin metallic sheet (a) folded in the shape of a foil coil (b).
The current density distribution in a foil-wound coil fed by a time-varying source depends on skin and proximity effects. Since the foil is usually very thin and made from a material with a high
electrical conductivity, the skin effect along its thickness is negligible (i.e., the current density in each turn results practically uniform along a radial direction). On the other hand, the
current density in each foil turn may greatly vary along the axial direction of the coil as a function of both position and frequency.
This anisotropic behavior is specific to foil coils and influences the Joule losses developed in the bulk of the coil material. Thus, Flux now provides a new subtype of the coil conductor region with
losses and detailed geometric description that implements a homogenization technique to represent this type of coil efficiently in its 2D Steady State AC application. This technique is exclusive to
foil coils and differs from the approach used in the other subtypes of coil conductor regions with losses and detailed geometric description representing stranded coils.
Using this new coil conductor region subtype spares the user from representing each turn of the foil coil with an individual solid conductor region (linked to its corresponding FE coupling component
in a complicated electric circuit). While this latter approach is also legitimate and rigorous, it is usually very time consuming to set up in Flux due to the elaborate geometry and the refined mesh
required. Moreover, the solving time with the new foil-wound coil conductor region subtype is significantly reduced when compared to the alternate solid conductor approach.
Example of application
The foil coil configuration shown in
Figure 2
has been analyzed in the article
Calculation of Current Distribution and Optimum Dimensions of Foil-Wound Air-Cored Reactors
by M.M. El-Missiry (
Proceedings of the Institution of Electrical Engineers
, vol. 124, no. 11,November 1977,
DOI: 10.1049/piee.1977.0218
). In that work, the author presents a circuit-based, semi-analytical method to compute the current density distribution and several other electromagnetic quantities of a foil coil.
Figure 2. Cross section of one of the cylindrical Aluminum foil coils analyzed by M.M. El-Missiry in his article.
The coil in
Figure 2
may be easily modeled in Flux 2D with the foil coil template available for coil conductor regions with losses and detailed geometrical description.
Figure 3
shows the results obtained with an axisymmetric Steady State AC Magnetic application at 50 Hz and with an additional horizontal symmetry (i.e., only one quarter of the foil coil is represented). The
development of a non-uniform current distribution pattern characteristic to foil coils may be verified in the color plot available in that figure.
Figure 3. Color plot of the current density (phasor module, peak value) and magnetic flux density field lines of the foil coil displayed in Figure 2. The FE coupling component assigned to the coil
conductor region is fed by a 1 + j0 Vrms voltage source at 50 Hz.
A comparison between the current density results obtained with the approach described in that article and the solution evaluated with Flux 2D is provided in
Figure 4
. The graph in this figure displays the real and imaginary parts of the current density phasor (in RMS values) on a path from its upper extremity (0.0 p.u.) to its center (0.5 p.u.) along one of the
centermost turns of the coil (as depicted in
Figure 3
Figure 4. Comparison between current density results yielded by Flux and El-Missiry's approach. The plot displays RMS current density values evaluated along the vertical path shown in Figure 3.
An additional comparison between measured lumped circuit parameters (provided in El-Missiry's article) and their corresponding values computed with Flux 2D (obtainable, for instance, with the help of
I/O Parameters defined by formulas) is available in
Table 1
Table 1. Comparison between resistance and reactance
measurements and the results yielded by Flux 2D for the Aluminum
foil coil represented in Figure 2.
Lumped circuit parameter at 50 Hz Measurement Flux 2D Deviation
Reactance 1.802 Ω 1.827 Ω 1.39%
Resistance 0.382 Ω 0.376 Ω 1.57%
The results from
Figure 4
and from
Table 1
show that the FEM solution evaluated with Flux 2D is in excellent agreement with both measurements and other numerical techniques.
|
{"url":"https://help.altair.com/flux/Flux/Help/english/ReleaseNote/2022Release/Flux/topics/Flux2022_NewFeature_FoilCoilWindings.htm","timestamp":"2024-11-08T01:09:49Z","content_type":"application/xhtml+xml","content_length":"60241","record_id":"<urn:uuid:e7ba472b-188b-46cc-af49-083d612da35f>","cc-path":"CC-MAIN-2024-46/segments/1730477028019.71/warc/CC-MAIN-20241108003811-20241108033811-00544.warc.gz"}
|
SQED2 on the null plane using Faddeev-Jackiw quantization
Ramos, German Y Pimentel, B.M. (2019) SQED2 on the null plane using Faddeev-Jackiw quantization. In: 4th ComHEP: Colombian Meeting on High Energy Physics, 2-6 Dic 2019., Barranquilla, Colombia.
Download (218kB)
Download (456kB)
Download (247kB)
Half a century ago Dirac has proposed three different forms of relativistic dynamics depending on the types of surfaces where independent modes were initiated. The first possibility when a space-like
surface is chosen (instant form) has been used most frequently so far and is usually called equal-time quantization. The second choice is to the a surface of a single light wave (front form or
null-plane). The third possibility is to take a branch of hyperbolic surface (point form). In this paper we are going to study QED 2 on the null-plane and we will show that one of the first class
constraints of the theory has a contribution provided by the scalar sector and in addition the theory have a second class constraint in the scalar sector which is manifest in the free case. It is not
natural in the instant form. The Faddeev-Jackiw procedure for constrained system is applied to calculate the commutation relations of the theory.
|
{"url":"https://sired.udenar.edu.co/6478/","timestamp":"2024-11-04T02:10:44Z","content_type":"application/xhtml+xml","content_length":"24511","record_id":"<urn:uuid:95fc8e12-7de0-4c5c-a31d-ee5b1167edfe>","cc-path":"CC-MAIN-2024-46/segments/1730477027809.13/warc/CC-MAIN-20241104003052-20241104033052-00711.warc.gz"}
|
Methodology | Verifiedbeta
Origin Story
Verified Beta was founded by Andrew Jones, CFA, as an adjunct to his public and private company valuation practice, M&A advisory work, and role as a fractional CFO to scale-stage SaaS businesses. The
need originates with asset allocation and portfolio optimization for independently-managed family assets. The initiative gives Andrew an excuse to return to his software engineering roots whilst
keeping his analytical, research, and writing skills sharp. He loves this stuff almost as much as writing about himself in the 3rd person.
The Essential Problem
Equity ETFs marketed as Smart Beta have attracted over $1T of AUM. When filtering for funds in Verfiedbeta’s analysis of over 3000 US-listed ETFs that feature statistically-significant exposure to at
least one of the well-researched ‘factors’ such as Value and are at least 2 years old, the total is over $3T of the $6T aggregate AUM.
The top few passive index funds have amassed roughly the other 50% of aggregate AUM in the US-listed ETF universe and are essentially free, charging just 1-3bps in total net fees. In stark contrast,
Smart beta funds suffer fees and turnover-related trading costs on the order of 100 – 150 bps per year, or 30-50 times greater.
The expectation is that the contribution from Smart Beta factor exposure more than makes up for fees and trading costs. But how often is that actually true? How often are investors assuming that a
so-called Smart Beta ETF really is a smart investment, when it might otherwise be a source of negative expected returns relative to a passive index fund? The waters are further muddied by the
prospect that factor returns are unstable and likely decaying over time.
Finally, following the recently-ended macro economic epoch characterized by decade-plus-long near-zero interest rates and concomitant widespread distortions in fundamental value, how can we manage
the risk that we’re not overpaying for equity ETFs writ large?
Our Solution
We systematically identify those funds that:
1. capture factor contributions projected to cover their management fees and implied trading costs;
2. have projected forward (ex-ante) Sharpe ratios superior to the market, based on real, measured, in-sample fund volatility and factor capture;
3. are presently composed of equities fairly-valued or inexpensive relative to their historical median valuation, in the aggregate.
Throughout our fund analyses, we standardize core summary metrics defined as follows:
Table 1: Verfiedbeta Summary Measures
Our Measure Definition
Implied Relative Factor the ratio between the ex-post Sharpe Ratio of a simulated fund with factor loadings equal to those derived in the in-sample regression with actual fund data, including
Sharpe alpha (the typically-negative regression intercept), divided by the market Sharpe Ratio, calculated over the maximum period of available factor data.
a 100% score is assigned to all funds that rank above the 95% percentile on Implied Relative Factor Sharpe. For funds below the 95% percentile, the Score is (Implied
Factor Capture Score Relative Factor Sharpe [fund]) / (Implied Relative Factor Sharpe [95% percentile fund]). We avoid using straightforward percentiles as Implied Relative Factor Sharpe
tends toward a fat tail of closely-clustered underperforming funds; this measure clearly highlights the relatively few winners.
Value, Small, Profitability, identical to Factor Capture Score for each of the respective factors, separately. eg. a score of 100% for Value (HML) means the fund ranks above the 95% percentile of all
Investment, and Momentum funds with statistically-significant exposure to the Value factor.
Net Implied Factor Alpha the contribution from factor exposure of the simulated fund, less fees and implementation costs as implied by the regression intercept.
Relative Valuation the ratio between the current fund dividend yield and the median historical fund dividend yield, expressed in percent.
We perform systematic, fund universe-wide regressions and then use back-tested synthetic fund performance to infer our best estimate of expected forward performance. We’ve standardized on the Fama
and French 5-factor model plus momentum and refer to the model as FF6. Although the Q5 and AQR models appear to subsume FF6 in the GRS test for model superiority, we elect FF6 both because of its
pervasiveness and its frequency of overlap with actual ETF product implementation. We summarize comparisons with the AQR and Q5 models where fund constitution criteria are aligned.
Stepwise (see Definitions for reference terms, above):
1. perform OLS linear regressions on monthly fund total return data to determine the best-estimate of historical factor capture (regression coefficients);
2. create a synthetic fund with equal factor exposure starting in July of 1963 (the first month of publicly-available FF6 data);
3. calculate the Implied Relative Factor Sharpe;
4. rank all funds in the universe to generate Factor Capture Score;
5. calculate the individual Factor Capture Scores;
6. calculate the Relative Valuation
7. summarize in Factor Tombstones, fund spotlight analyses, and our ETF Finder Tool
The Verifiedbeta Difference
It’s a one-two punch:
1. We summarize estimated risk-adjusted fund performance relative to to the historical return of a plain-vanilla market-cap-weighted passive index fund with a single metric: Factor Capture Score
2. We compare current fund valuation to historical valuation to help allocators avoid funds which are presently overvalued, expensive and potentially risky.
|
{"url":"https://verifiedbeta.com/methodology/","timestamp":"2024-11-05T18:43:52Z","content_type":"text/html","content_length":"61872","record_id":"<urn:uuid:5c1e903d-b56b-4872-a3e8-50f321e41c72>","cc-path":"CC-MAIN-2024-46/segments/1730477027889.1/warc/CC-MAIN-20241105180955-20241105210955-00180.warc.gz"}
|
Build Your Own Worlds | Highrise Create
An axis-aligned bounding box. It acts as a simple container for a center position and the extents (further subdivided into size, min, max) of a box that can be used for collision detection,
visibility testing, or other similar tasks in your game.
Vector3 value that signifies the central point of the bounding box.
The size of the bounding box for each dimension - how wide, tall, and deep it is.
Similar to size but it represents half the size of the bounding box in each dimension. This can be used when you need to measure from the center of the bounding box.
The minimum value (lower-left-front corner) of the bounding box coordinates.
The maximal value (upper-right-back corner) of the bounding box coordinates.
The minimum and maximum corners of the bounding box.
This specifies the smallest value for each coordinate of the bounding box.
This represents the largest value for each coordinate of the bounding box.
Grows the Bounds to include a point or bounds. You would typically use this when you have a set of points and you want a bounding box that encloses all these points.
The new point to include within the bounding box.
Increases the size of the bounding box by the given amount (in both directions).
The amount by which the bounding box should be expanded (shrank if negative).
Checks whether the current bounding box intersects with another bounding box. You can use this to determine if two objects are likely to be colliding.
The other bounding box to test intersection with.
Returns true if the bounding boxes intersect, false otherwise.
Determines if a specified Ray intersects the bounds. This can prove vital in various game scenarios for interaction detection such as determining if a projectile is in range of an object or if a
click or tap in screen space intersects a game object. It evaluates the intersection and returns true if the Ray intersects with the bounds, otherwise it returns false.
The Ray instance against which the intersection is to be checked.
Returns true if the Ray intersects with the bounds, otherwise false.
Check whether a specific point is within the current bounding box.
This is the point which you want to check.
Returns true if the point is inside the bounding box, false otherwise.
Returns the squared distance from a point to the bounding box. Squared distances can be useful for comparison without needing the computationally expensive square root operation.
The point from which the distance is measured.
Returns the squared distance from the bounding box to the point.
The closest point on or inside the bounding box to a given point. This can be helpful in figuring out how close an object is to entering a bounding box.
This is the location from which the closest point is determined.
Returns the closest point within or on the bounding box to the given point.
|
{"url":"https://create.highrise.game/learn/studio-api/datatypes/Bounds","timestamp":"2024-11-10T14:28:39Z","content_type":"text/html","content_length":"210081","record_id":"<urn:uuid:c5e247f5-fb11-4e6d-bba7-529d8d993fbb>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.60/warc/CC-MAIN-20241110134821-20241110164821-00662.warc.gz"}
|
Data Snooping
Data snooping occurs when a set of data is used more than once for purposes of inference or model selection.
Top 6 Papers
"Data snooping occurs when a given set of data is used more than once for purposes of inference or model selection. When such data reuse occurs, there is always the possibility that any
satisfactory results obtained may simply be due to chance rather than to any merit inherent in the method yielding the results. This problem is practically unavoidable in the analysis of
time-series data, as typically only a single history measuring a given phenomenon of interest is available for analysis. It is widely acknowledged by empirical researchers that data snooping is a
dangerous practice to be avoided, but in fact it is endemic. The main problem has been a lack of sufficiently simple practical methods capable of assessing the potential dangers of data snooping
in a given situation. Our purpose here is to provide such methods by specifying a straightforward procedure for testing the null hypothesis that the best model encountered in a specification
search has no predictive superiority over a given benchmark model. This permits data snooping to be undertaken with some degree of confidence that one will not mistake results that could have
been generated by chance for genuinely good results." "In this paper we utilize White's Reality Check bootstrap methodology (White (1999)) to evaluate simple technical trading rules while
quantifying the data-snooping bias and fully adjusting for its effect in the context of the full universe from which the trading rules were drawn. Hence, for the first time, the paper presents a
comprehensive test of performance across all technical trading rules examined. We consider the study of Brock, Lakonishok, and LeBaron (1992), expand their universe of 26 trading rules, apply the
rules to 100 years of daily data on the Dow Jones Industrial Average, and determine the effects of data-snooping." "Tests of financial asset pricing models may yield misleading inferences when
properties of the data are used to construct the test statistics. In particular, such tests are often based on returns to portfolios of common stock, where portfolios are constructed by sorting
on some empirically motivated characteristic of the securities such as market value of equity. Analytical calculations, Monte Carlo simulations, and two empirical examples show that the effects
of this type of data snooping can be substantial." "Economics is primarily a non-experimental science. Typically, we cannot generate new data sets on which to test hypotheses independently of the
data that may have led to a particular theory. The common practice of using the same data set to formulate and test hypotheses introduces data-snooping biases that, if not accounted for,
invalidate the assumptions underlying classical statistical inference. A striking example of a datadriven discovery is the presence of calendar effects in stock returns. There appears to be very
substantial evidence of systematic abnormal stock returns related to the day of the week, the week of the month, the month of the year, the turn of the month, holidays, and so forth. However,
this evidence has largely been considered without accounting for the intensive search preceding it. In this paper we use 100 years of daily data and a new bootstrap procedure that allows us to
explicitly measure the distortions in statistical inference induced by data-snooping. We find that although nominal P-values of individual calendar rules are extremely significant, once evaluated
in the context of the full universe from which such rules were drawn, calendar effects no longer remain significant." "A real-time investor is one who must base his portfolio decisions solely on
information available today, not using information from the future. Academic predictability papers almost always violate this principle via exogenous specification of critical portfolio formation
parameters used in the backtesting of investment strategies. We show that when the choice of parameters such as predictive variables, traded assets, and estimation periods are endogenized (thus
making the tests more real-time), all evidence of predictability vanishes. However, an investor with the correct specific sets of priors on predictive variables, assets, and estimation periods
will find evidence of predictability. But since no real theory exists to guide one on the choice of the correct priors, finding this predictability seems unlikely. Our results provide an
explanation for the performance gap between mutual funds and the academic market predictability literature, and carry important implications for asset pricing models, cost-of-capital
calculations, and portfolio management." "Data-snooping arises when the properties of a data series influence the researcher's choice of model specification. When data has been snooped, tests
undertaken using the same series are likely to be misleading. This study seeks to predict equity market volatility, using daily data on U.K. stock market returns over the period 1955–1989. We
find that even apparently innocuous forms of data-snooping significantly enhance reported forecast quality, and that relatively sophisticated forecasting methods operated without data-snooping
often perform worse than naive benchmarks. For predicting stock market volatility, we therefore recommend two alternative models, both of which are extremely simple."
|
{"url":"http://data-snooping.martinsewell.com/","timestamp":"2024-11-10T11:00:10Z","content_type":"application/xhtml+xml","content_length":"10443","record_id":"<urn:uuid:10139832-974c-4718-bea6-55f9f31a9a31>","cc-path":"CC-MAIN-2024-46/segments/1730477028186.38/warc/CC-MAIN-20241110103354-20241110133354-00302.warc.gz"}
|
Pipe Pressure Calculator - Calculator Doc
The Pipe Pressure Calculator helps determine the internal pressure of a pipe based on material stress, wall thickness, and diameter. Pipe pressure is critical in industries where fluids or gases are
transported, as knowing the right pressure ensures safe and efficient operation without the risk of pipe failure.
To calculate pipe pressure, use the following formula:
Ppipe = (2 ∗ Stress ∗ Wall Thickness) / Pipe Diameter
• Ppipe = Pressure inside the pipe
• S = Stress (material strength)
• T = Wall thickness of the pipe
• D = Diameter of the pipe
How to Use
1. Enter the stress or material strength (S) in suitable units (e.g., megapascals, MPa).
2. Input the wall thickness (T) of the pipe in meters.
3. Enter the pipe diameter (D) in meters.
4. Click the Calculate button to get the internal pipe pressure.
If the stress of the material is 300 MPa, the wall thickness of the pipe is 0.01 meters, and the diameter of the pipe is 0.05 meters, the pipe pressure calculation would be:
Ppipe = (2 * 300 * 0.01) / 0.05 = 120 MPa
This means the pipe will withstand a pressure of 120 MPa before any failure.
1. What is pipe pressure?
Pipe pressure refers to the internal force exerted by a fluid or gas on the walls of a pipe, which is essential to consider when designing pipe systems.
2. Why is calculating pipe pressure important?
Accurate pressure calculation ensures the safety and longevity of the pipe, avoiding potential failures or bursts.
3. What units should I use for stress, thickness, and diameter?
Typically, stress is measured in megapascals (MPa), wall thickness and diameter in meters, though consistent units are most important.
4. What factors affect pipe pressure?
Pipe pressure is influenced by material stress, wall thickness, and pipe diameter.
5. Can this formula be used for any type of pipe?
Yes, this formula applies to pipes of different materials, as long as you know the material’s stress capacity.
6. How do I find the material stress value?
Material stress (S) is usually provided by the manufacturer or determined from material properties.
7. Does pipe pressure affect fluid flow?
Yes, the internal pressure in a pipe directly influences how fluid or gas flows through it.
8. What happens if the calculated pressure is too high?
If the pressure exceeds the pipe’s capacity, it may rupture, leading to leaks, accidents, or pipe failure.
9. Can this calculator be used for both gas and liquid pipes?
Yes, it can calculate the pressure for pipes carrying gases or liquids.
10. Does pipe thickness affect the pressure it can handle?
Yes, thicker walls allow the pipe to withstand higher pressures.
11. How does pipe diameter influence internal pressure?
Larger pipe diameters reduce the pressure, while smaller diameters increase the pressure.
12. Is there a maximum pressure a pipe can handle?
Yes, each pipe has a maximum pressure rating determined by the material stress and design.
13. What is the safety factor in pipe design?
The safety factor is a margin added to account for unexpected conditions, ensuring the pipe operates safely under pressure.
14. How does temperature affect pipe pressure?
High temperatures can weaken the material, reducing its ability to withstand pressure.
15. What is the difference between internal and external pipe pressure?
Internal pressure comes from within the pipe, exerted by the fluid or gas inside, while external pressure is exerted on the pipe from outside sources.
16. What materials are commonly used for high-pressure pipes?
Common materials include stainless steel, carbon steel, and certain plastics designed for high-pressure applications.
17. How can I increase the pressure capacity of a pipe?
Increasing wall thickness or using a stronger material can help increase the pressure capacity of a pipe.
18. Can pipes fail if the pressure is too low?
Low pressure itself doesn’t cause failure, but certain pipes designed for high-pressure use may experience flow issues at very low pressures.
19. What is hoop stress in a pipe?
Hoop stress is the stress exerted circumferentially on the pipe walls due to internal pressure.
20. How often should pipe pressure be monitored?
In industrial settings, regular monitoring of pipe pressure is crucial to prevent failures and ensure safety.
Calculating pipe pressure is vital for ensuring the safe and efficient transport of fluids or gases through pipelines. By understanding the relationship between material stress, wall thickness, and
diameter, you can ensure that your pipes operate within safe pressure limits, preventing costly repairs or dangerous accidents.
|
{"url":"https://calculatordoc.com/pipe-pressure-calculator/","timestamp":"2024-11-02T12:13:53Z","content_type":"text/html","content_length":"86847","record_id":"<urn:uuid:fd146e6a-e7b9-40a3-8eab-38f0e019c498>","cc-path":"CC-MAIN-2024-46/segments/1730477027710.33/warc/CC-MAIN-20241102102832-20241102132832-00310.warc.gz"}
|
American Mathematical Society
Asymptotic behavior and nonoscillation of Volterra integral equations and functional differential equations
HTML articles powered by AMS MathViewer
by A. F. Izé and A. A. Freiria
Proc. Amer. Math. Soc. 52 (1975), 169-177
DOI: https://doi.org/10.1090/S0002-9939-1975-0377233-1
PDF | Request permission
It is proved that if ${q_{ij}}(t,s){\rho _j}(s){[{\rho _i}(t)]^{ - 1}}$ is bounded, $i,j = 1,2, \ldots ,n$, and $f(t,x,x(u(s)))$ is “small", \[ x(u(s)) = ({x_1}({u_1}(s)),{x_2}({u_2}(s)), \ldots ,
{x_n}({u_n}(s)))\] with ${u_i}(t) \leqslant t$ and ${\lim _{t \to \infty }}{u_i}(t) = \infty$, the solutions of the integral equation \[ x\left ( t \right ) = h(t) + \int _0^t {q(t,s)f(s,x(s),x(u
(s)))ds} \] satisfy the conditions $x(t) = h(t) + \rho (t)a(t),{\lim _{t \to \infty }}a(t) =$ constant where $\rho (t)$ is a nonsingular diagonal matrix chosen in such a way that ${\rho ^{ - 1}}(t)h
(t)$ is bounded. The results contain, in particular, some results on the asymptotic behavior, stability and existence of nonoscillatory solutions of functional differential equations. References
• Thomas G. Hallam, Asymptotic behavior of the solutions of an $n\textrm {th}$ order nonhomogeneous ordinary differential equation, Trans. Amer. Math. Soc. 122 (1966), 177–194. MR 188562, DOI
• A. F. IzĂ©, On an asymptotic property of a Volterra integral equation, Proc. Amer. Math. Soc. 28 (1971), 93–99. MR 275078, DOI 10.1090/S0002-9939-1971-0275078-0
• A. F. IzĂ©, Asymptotic integration of a nonhomogeneous singular linear system of ordinary differential equations, J. Differential Equations 8 (1970), 1–15. MR 259256, DOI 10.1016/0022-0396(70)
90035-5 A. A. Freiria, Sobre comportamento assintótico e existência de soluções não oscilatórias de uma classe de sistema de equações diferencias com retardamento, São Carlos, 1972.
• G. Ladas, Oscillation and asymptotic behavior of solutions of differential equations with retarded argument, J. Differential Equations 10 (1971), 281–290. MR 291590, DOI 10.1016/0022-0396(71)
• Pavol Marušiak, Note on the Ladas’ paper on “Oscillation and asymptotic behavior of solutions of differential equations with retarded argument” (J. Differential Equations 10 (1971),
281–290) by G. Ladas, J. Differential Equations 13 (1973), 150–156. MR 355266, DOI 10.1016/0022-0396(73)90037-5
• Richard K. Miller, Nonlinear Volterra integral equations, Mathematics Lecture Note Series, W. A. Benjamin, Inc., Menlo Park, Calif., 1971. MR 0511193
• J. A. Nohel, Some problems in nonlinear Volterra integral equations, Bull. Amer. Math. Soc. 68 (1962), 323–329. MR 145307, DOI 10.1090/S0002-9904-1962-10790-3
• Paul Waltman, On the asymptotic behavior of solutions of a nonlinear equation, Proc. Amer. Math. Soc. 15 (1964), 918–923. MR 176170, DOI 10.1090/S0002-9939-1964-0176170-8
• James A. Yorke, Selected topics in differential delay equations, Japan-United States Seminar on Ordinary Differential and Functional Equations (Kyoto, 1971) Lecture Notes in Math., Vol. 243,
Springer, Berlin, 1971, pp. 16–28. MR 0435554
Similar Articles
• Retrieve articles in Proceedings of the American Mathematical Society with MSC: 34K15, 45M10
• Retrieve articles in all journals with MSC: 34K15, 45M10
Bibliographic Information
• © Copyright 1975 American Mathematical Society
• Journal: Proc. Amer. Math. Soc. 52 (1975), 169-177
• MSC: Primary 34K15; Secondary 45M10
• DOI: https://doi.org/10.1090/S0002-9939-1975-0377233-1
• MathSciNet review: 0377233
|
{"url":"https://www.ams.org/journals/proc/1975-052-01/S0002-9939-1975-0377233-1/?active=current","timestamp":"2024-11-06T14:21:07Z","content_type":"text/html","content_length":"63348","record_id":"<urn:uuid:7d7590a7-b5d7-4fa4-a786-650d5d98c5be>","cc-path":"CC-MAIN-2024-46/segments/1730477027932.70/warc/CC-MAIN-20241106132104-20241106162104-00319.warc.gz"}
|
Predicting Early Heart Diseases With Quantum Support Vector Machines
A Detailed Overview on QSVMs and Their Impact Towards Disease Prediction
In the United States of America, one person dies from a cardiovascular disease every 34 seconds.
It took me roughly ~32 seconds to type that sentence out.
It’s absolutely insane.
Within the minute I’ve spent writing this article so far, 2 people have probably passed away with their cause of death being a heart disease of some sort. By the end of your read, the number will
stand at ~26 people. Within a year, an estimated 17.9 million people die each year globally from cardiovascular diseases.
Quantifying it in that sense is extremely scary, but it also provides a clearer indication of the magnitude of the problem.
Before getting into anything else, I want to understand the problem in its entirety. We’ve already figured out the quantitive impact of heart diseases. But, what about the basic fundamentals that are
responsible for that number? What are its affecting variables? What is the root cause?
Of course, risk factors that impact the number include one’s record of smoking, high blood pressure, high cholesterol etc. But, when boiling this number down to determine why it’s so high — despite
us having control on our habits that create those risk factors — it comes down to how often these diseases can go undetected. That is the root cause of the heart disease mortality rate being within
the millions.
Like many other complex conditions, you’re not aware of it being present until its reached an advanced stage, as the early phases usually don’t have any obvious symptoms for us to question.
It’s why heart diseases are named as a ‘silent killer’ — because they silently progress over time, and by the time symptoms appear, it’s usually too late to prevent serious complications or death.
So, based on information, reducing that number down can fall into one of two paths; curing the disease at its last stage or diagnosing it earlier, when we are able to treat it.
The issue with the former is the complexity behind the solution — it’s nearly impossible to reverse the heart’s lack of ability to pump blood, due to the significant damage done.
But, diagnosing the disease earlier is a step towards streamlining the large rate of deaths that stem from heart diseases, so that we can manage and tackle this condition when are able to ‘easily’ do
We need to take action towards avoiding preventable mortality rates.
Status Quo — What’s Being Done?
Currently, scientists have been exploring a range of solutions — including screening programs, advanced imaging technologies, biomarkers, wearable devices and telemedicine — to test for risk factors
of heart diseases. But, one of the greatest focuses right now lies within the field of Artificial Intelligence.
Right now, we’ve identified that there’s risk factors that may impact one having to battle a heart disease, including:
• Age
• Gender
• Family History
• Smoking
• High Blood Pressure
• High Cholesterol
• Diabetes
• Obesity
• Physical Inactivity
• Unhealthy Diet
After a while, making connections or determining patterns between those risk factors’ state & its correlation to a heart disease can help us. So, if someone had a high age, high blood pressure level
and history of smoking, you could predict that they’re at risk for a heart disease — allowing for more preventative measures within their lifestyle to avoid or detect the disease earlier.
Humans could do this manually, but it is a time-consuming and labor-intensive process. Also, human analysis is subject to errors and biases, which can impact the accuracy and reliability of the
So. Technology. That’s our next bet.
We can leverage artificial intelligence to analyze vast amounts of data quickly and accurately along with identifying patterns and trends that may not be apparent to human analysts. This is done
specifically through something called Support Vector Machines.
Formally, Support Vector Machines (SVMs) are a type of machine learning algorithm that can be used for classification and regression analysis problems.
They work by creating a boundary (also called a ‘hyperplane’) that separates different classes of data points in a high-dimensional space. The hyperplane’s goal is to maximize the margin between the
data points of different classes, so that unseen data can be classified in the same manner.
Visual Representation of SVMs
If technical terms made no sense, let’s turn our sports brains on. Think of the SVM as a football coach that’s trying to create a good defensive strategy. The coach needs to create a game plan that
effectively defends against the opposing team’s offensive strategy, so he’ll separate the different types of players accordingly on the field.
Support Vector Machines are very common. I’m sure we’re used to our mailing softwares predicting which email is meant for your inbox and which one is spam — and SVMs are responsible for all of that.
In the context of diagnosis prediction, they’re quite strong as well… to a certain extent. They’re a powerful tool as the data is complex & high-dimensional, and its accuracy is strong.
But, even though AI technology has advanced significantly, there is still a limit to the amount of data that our AI models can effectively analyze. Unfortunately, this limit is considerably smaller
than the amount of data required to make accurate predictions in real-world scenarios.
This limitation creates the solution to be one that’s inefficient and less effective for practical purposes.
Leveraging Quantum Support Vector Machines
The key to overcoming this barrier between the problem and an efficient solution are Quantum Support Vector Machines.
Quantum + Support Vector Machines = QSVMs
Note: This is probably a good time to highlight that if you’ve never heard of quantum computing before, the next part of this article won’t be easily digestible. So, I’d recommend reading my
Quantum Computing 101 article first, and then coming back to this. Either way, I hope you learn something new today!
QSVMs are exactly what you might be thinking; they’re a quantum machine learning algorithm that use the principles of quantum mechanics to perform classification and regression tasks.
The basic idea behind them is that they use quantum computers to perform the computations we would with a SVM algorithm, in a more efficient manner. Therefore, QSVMs are resolving our efficiency
challenge while preserving the current status of what we’re doing. Super simple.
But, let’s breakdown how Quantum Support Vector Machines work and how we can predict heart diseases earlier with this algorithm.
Data Preparation
Of course, we’ve identified that we need to analyze a bunch of data, prior to our predictions. So, our first step is to prepare that data into a manner that is understandable by the quantum computer,
because currently we’re working with classical data.
It’s represented as:
Here, |x⟩ represents a classical input data vector which is transformed into a quantum state — |Φ(x)⟩ — through applying a unitary operator — U(x)|0⟩ — to its initial quantum state |0⟩.
Without its complexity, the filtered data of risk factors for heart diseases (so the most relevant factors that is cleaned out without any biases or duplicates) is transformed into a quantum state
using a quantum feature map, and this feature map is a unitary operation that maps classical data to a quantum state through applying a sequence of quantum gates.
This ‘feature map’ being discussed can be implemented through various quantum circuits, but the most common ones are quantum Fourier transform or amplitude encoding.
These two techniques have the end goal of mapping the classical input into a way that it can be processed by quantum algorithms, but the paths there differ from each other:
1. Quantum Fourier transform (QFT): This quantum circuit transforms classical data into a quantum state that has the property of being periodic, which can help within certain types of classification
problems that require periodic functions. And, this is done through manipulating the states with a series of Hadamard and controlled-phase gates to a set of qubits.
2. Amplitude Encoding: This quantum circuit encodes the classical data into amplitudes of a quantum state through using the input data’s binary representation to provide it with a quantum
representation of its amplitude, as in quantum mechanics the state of a quantum system is described by a wave function. It allows the quantum state to represent a much larger space of data than
classical data, which can be useful for certain types of classification problems, such as those involving high-dimensional data.
The choice of the quantum feature map within a QSVM depends on the nature of the data and the specific problem being solved, and these are simply the most common feature maps. Regardless, the
intention behind the circuit is the exact same — to translate the classical input data of our risk factors into a quantum state.
Quantum Kernel Calculation
Once our data is prepped and ready to go, we can proceed forward to our next step.
Now, we calculate a matrix — an array of numbers — that describes how similar each pair of input data points are in their quantum feature representation. And, this is done through a Quantum Kernel
The results from this matrix can help us classify; heart disease present or healthy heart? Formally, the two categories we’ve stated for classification are known as ‘classes’ and since there’s two
classes, it’s a binary classification problem.
The type of matrix we’re specifically looking for is called a kernel matrix, and it describes the pairwise inner products between the quantum feature vectors corresponding to the input data points.
In simple terms, this means that a mathematical operation with the vectors of the quantum states are calculated, to obtain a measure of similarity between those two data points in their quantum
feature representation. That similarity value can help us form the hyperplane, so we can correctly categorize our data into its binary classifications.
So, let’s say that we preprocessed our data through amplitude encoding. The quantum kernel function for the circuit is defined as:
Here, |ϕ(x)⟩ and |ϕ(x’)⟩ are the quantum states corresponding to the input data points x and x’, respectively, and ⟨ϕ(x)| and ⟨ϕ(x’)| are their conjugate transpose states.
In order to determine the quantum kernel matrix for a set of input data points, we can use a quantum circuit to apply the amplitude encoding circuit to all pairs of data points, before measuring the
overlap between their quantum states — which can be done through quantum phase estimation.
This algorithm estimates the eigenvalues of the unitary transformation matrix — U(x, x’) = U(x)† U(x’) — that is in correspondence to the quantum kernel function. And, once the estimated eigenvalues
have been obtained, we can use those to compute the quantum kernel matrix that will calculate the inner-products.
It can be represented as…
… where λ_k and u_k are the eigenvalues and eigenvectors of the matrix U(x, x’), and u_ik and u_jk are their corresponding components for the k-th eigenstate.
All that’s happening within this computation is the calculation of the the kernel value Kij by summing up the product of the eigenvalues and corresponding eigenvector entries for all k. This is
equivalent to taking the inner product between the i-th and j-th columns of the eigenvector matrix, weighted by the corresponding eigenvalues.
Once that step is completed, a hyperplane — that creates a margin to separate data points for a healthy heart and a heart-disease-infected heart — should be formed.
Quantum SVM Training
Moving onto the most important part of this algorithm, the computer’s training.
Just like humans, our machines need to learn a task before mastering it. And, in this case, it’s more than important to ensure that our quantum computer is fully trained on how to predict someone
being at risk for a heart disease.
The goal within this step is to optimize the parameters of our circuit to achieve the best classification results. This is usually done through optimization algorithms such as the QAOA or the VQE.
Either way, these algorithms are used to minimize the cost function — which is a measure of the model’s accuracy and how well it fits the training data.
Mathematically, it’s represented as…
… where:
• w = weight vector
• b = bias term
• xi = training input vector
• yi = corresponding output label
• λ = regularization parameter
Throughout this process, all the potential parameters value within the quantum circuit are represented through a Hamiltonian matrix. So, through here, the optimization algorithm works by finding the
ground state of the Hamiltonian, which corresponds to the minimum energy state of the system.
Either way, the quantum circuit iteratively modifies the parameters of the model to find the minimum value of the cost functions. And, these parameters involve those in the equation above, including:
the weight vector, the bias term and the kernel parameters. In the end, the optimization algorithm seeks to minimize the cost function by finding the optimal values of these parameters.
This is usually the most time-consuming aspect of the algorithm, but it’s also vital for the circuit to familiarize itself with the all of the data points — in order to confidently make the correct
Measurement and Prediction
And, here comes the final step that concludes our algorithm.
Now, we’ll work towards obtaining a classical output from the quantum state that was prepared and manipulated in the previous steps. So, remember when we ‘mapped’ the classical states onto a quantum
state? We’re essentially reversing that through mapping the existing quantum states onto its classical states.
This is done through a measurement that projects the quantum state onto a classical state — and, this outcome can be any of the possible classical states that correspond to the quantum state
Once our model is fully trained, in order to get the class label on a new input data point, the same steps in the training phase are applied where the input data is first encoded into a quantum state
using the quantum feature map, and the quantum kernel is calculated using the same process as in training. So, when a new input reaches this stage, it’ll go through the processes above before getting
measured into the two classes within this problem.
Anyways, the measurement process is a crucial step in the QSVM algorithm, as it enables the extraction of classical information from the quantum state — which is what humans will use to further
communicate results.
The measurement can be mathematically expressed as the probability of obtaining a particular classical state |i⟩ given the quantum state |ψ⟩ as:
Here, |i⟩ represents one of the possible classical output states that can be obtained upon measurement, and ⟨i| represents the corresponding conjugate transpose of |i⟩. And, the notation ⟨i|ψ⟩
represents the inner product between the classical state |i⟩ and the quantum state |ψ⟩.
And, the absolute value square of ⟨i|ψ⟩ represents the probability of obtaining the classical output state |i⟩ when measuring the quantum state |ψ⟩.
All in all, you’ve finally received your classical output that can be used to dictate whether or not someone is at risk for a heart disease or not.
A Quantum Advantage?
In theory, QSVMs have the potential to offer an exponential speedup in comparison to classical SVMs, as they leverage quantum computers. The ability of quantum computers to perform certain
computations in parallel can provide users with a ‘quantum advantage’ that can improve the efficiency of the existing solution.
The main area in which Quantum Support Vector Machines have this speedup is within the kernel calculation step. Within classical SVMs, the kernel matrix must be explicitly computed, which means the
kernel function is applied to all pairs of data points. This creates a time-consuming step for larger datasets.
In contrast, QSVMs can use quantum algorithms to calculate the kernel matrix in parallel, and it’s implicitly computed using quantum circuits. This difference between these steps allows for a faster
and more memory-efficient process.
However, the limitations in relation to this solution cannot be ignored, as it’s still a long time before this is deployed for regular use.
Decoherence is the number one risk. Quantum computers are susceptible to noise, which means qubits are sensitive to compute on due to their external environment which can often interfere and mess-up
the existing quantum states. This would be problematic for tasks such as disease prediction, as you require accurate results that humans can blindly trust.
Adding onto decoherence, QSVMs need a high number of qubits, and since we know they’re sensitive, having more qubits that aren’t definitive in producing accurate results would be hard to manage. The
connectivity between the qubits may be difficult to achieve with the current quantum computers available today.
Long-Term Impact
But, all of these limitations are being worked on as the development of quantum computers continue. When we reach a point in building quantum computers where issues like the number of qubits and
noise don’t exist, the long-term impact of leveraging QSVMs to predict early heart diseases can be significant.
We’ve already determined that early detection and accurate diagnosis of heart disease can help prevent the ‘silent killer’ and the serious health complications involved.
Through a fully-developed QSVM, the earlier interventions can ultimately save lives — helping to reduce the status quo (17.9 million people globally) so we don’t see those around us losing their
lives to a silent killer.
Hey, I’m Priyal, a 17-year-old driven to impact the world using emerging technologies. If you enjoyed my article or learned something new, feel free to subscribe to my monthly newsletter to keep up
with my progress in quantum computing exploration and an insight into everything I’ve been up to. You can also connect with me on LinkedIn and follow my Medium for more content! Thank you so much for
|
{"url":"https://priyaltaneja.medium.com/predicting-early-heart-diseases-with-quantum-support-vector-machines-2f2c678e80b7","timestamp":"2024-11-09T11:05:20Z","content_type":"text/html","content_length":"198232","record_id":"<urn:uuid:11f83853-e227-4ded-abd3-ed27b8b5f0fe>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.75/warc/CC-MAIN-20241109085148-20241109115148-00332.warc.gz"}
|
2013 AIME II Problems/Problem 13
Problem 13
In $\triangle ABC$, $AC = BC$, and point $D$ is on $\overline{BC}$ so that $CD = 3\cdot BD$. Let $E$ be the midpoint of $\overline{AD}$. Given that $CE = \sqrt{7}$ and $BE = 3$, the area of $\
triangle ABC$ can be expressed in the form $m\sqrt{n}$, where $m$ and $n$ are positive integers and $n$ is not divisible by the square of any prime. Find $m+n$.
Video Solution by Punxsutawney Phil
Solution 1
We can set $AE=ED=m$. Set $BD=k$, therefore $CD=3k, AC=4k$. Thereafter, by Stewart's Theorem on $\triangle ACD$ and cevian $CE$, we get $2m^2+14=25k^2$. Also apply Stewart's Theorem on $\triangle
CEB$ with cevian $DE$. After simplification, $2m^2=17-6k^2$. Therefore, $k=1, m=\frac{\sqrt{22}}{2}$. Finally, note that (using [] for area) $[CED]=[CAE]=3[EDB]=3[AEB]=\frac{3}{8}[ABC]$, because of
base-ratios. Using Heron's Formula on $\triangle EDB$, as it is simplest, we see that $[ABC]=3\sqrt{7}$, so your answer is $10$.
Solution 2
After drawing the figure, we suppose $BD=a$, so that $CD=3a$, $AC=4a$, and $AE=ED=b$.
Using Law of Cosines for $\triangle CED$ and $\triangle AEC$,we get
$\[b^2+7-2b\sqrt{7}\cdot \cos(\angle CED)=9a^2\qquad (1)\]$$\[b^2+7+2b\sqrt{7}\cdot \cos(\angle CED)=16a^2\qquad (2)\]$ So, $(1)+(2)$, we get$\[2b^2+14=25a^2. \qquad (3)\]$
Using Law of Cosines in $\triangle ACD$, we get
$\[4b^2+9a^2-2\cdot 2b\cdot 3a\cdot \cos(\angle ADC)=16a^2\]$
So, $\[\cos(\angle ADC)=\frac{4b^2-7a^2}{12ab}.\qquad (4)\]$
Using Law of Cosines in $\triangle EDC$ and $\triangle EDB$, we get
$\[b^2+9a^2-2\cdot 3a\cdot b\cdot \cos(\angle ADC)=7\qquad (5)\]$
$\[b^2+a^2+2\cdot a\cdot b\cdot \cos(\angle ADC)=9.\qquad (6)\]$
$(5)+(6)$, and according to $(4)$, we can get $\[37a^2+2b^2=48. \qquad (7)\]$
Using $(3)$ and $(7)$, we can solve $a=1$ and $b=\frac{\sqrt{22}}{2}$.
Finally, we use Law of Cosines for $\triangle ADB$,
$\[4(\frac{\sqrt{22}}{2})^2+1+2\cdot2(\frac{\sqrt{22}}{2})\cdot \cos(ADC)=AB^2\]$
then $AB=2\sqrt{7}$, so the height of this $\triangle ABC$ is $\sqrt{4^2-(\sqrt{7})^2}=3$.
Then the area of $\triangle ABC$ is $3\sqrt{7}$, so the answer is $\boxed{010}$.
Solution 3
Let $X$ be the foot of the altitude from $C$ with other points labelled as shown below. $[asy] size(200); pair A=(0,0),B=(2*sqrt(7),0),C=(sqrt(7),3),D=(3*B+C)/4,L=C/5,M=3*B/7; draw(A--B--C--cycle);
draw(A--D^^B--L^^C--M); label("A",A,SW);label("B",B,SE);label("C",C,N);label("D",D,NE);label("L",L,NW);label("M",M,S); pair X=foot(C,A,B), Y=foot(L,A,B); pair EE=D/2; label("X",X,S);label("E",EE,NW);
label("Y",Y,S); draw(C--X^^L--Y,dotted); draw(rightanglemark(B,X,C)^^rightanglemark(B,Y,L)); [/asy]$ Now we proceed using mass points. To balance along the segment $BC$, we assign $B$ a mass of $3$
and $C$ a mass of $1$. Therefore, $D$ has a mass of $4$. As $E$ is the midpoint of $AD$, we must assign $A$ a mass of $4$ as well. This gives $L$ a mass of $5$ and $M$ a mass of $7$.
Now let $AB=b$ be the base of the triangle, and let $CX=h$ be the height. Then as $AM:MB=3:4$, and as $AX=\frac{b}{2}$, we know that $\[MX=\frac{b}{2}-\frac{3b}{7}=\frac{b}{14}.\]$ Also, as $CE:EM=
7:1$, we know that $EM=\frac{1}{\sqrt{7}}$. Therefore, by the Pythagorean Theorem on $\triangle {XCM}$, we know that $\[\frac{b^2}{196}+h^2=\left(\sqrt{7}+\frac{1}{\sqrt{7}}\right)^2=\frac{64}{7}.\]$
Also, as $LE:BE=5:3$, we know that $BL=\frac{8}{5}\cdot 3=\frac{24}{5}$. Furthermore, as $\triangle YLA\sim \triangle XCA$, and as $AL:LC=1:4$, we know that $LY=\frac{h}{5}$ and $AY=\frac{b}{10}$, so
$YB=\frac{9b}{10}$. Therefore, by the Pythagorean Theorem on $\triangle BLY$, we get $\[\frac{81b^2}{100}+\frac{h^2}{25}=\frac{576}{25}.\]$ Solving this system of equations yields $b=2\sqrt{7}$ and
$h=3$. Therefore, the area of the triangle is $3\sqrt{7}$, giving us an answer of $\boxed{010}$.
Solution 4
Let the coordinates of $A$, $B$ and $C$ be $(-a, 0)$, $(a, 0)$ and $(0, h)$ respectively. Then $D = \Big(\frac{3a}{4}, \frac{h}{4}\Big)$ and $E = \Big(-\frac{a}{8},\frac{h}{8}\Big).$$EC^2 = 7$
implies $a^2 + 49h^2 = 448$; $EB^2 = 9$ implies $81a^2 + h^2 = 576.$ Solve this system of equations simultaneously, $a=\sqrt{7}$ and $h=3$. Area of the triangle is $ah = 3\sqrt{7}$, giving us an
answer of $\boxed{010}$.
Solution 5
$[asy] size(200); pair A=(0,0),B=(2*sqrt(7),0),C=(sqrt(7),3),D=(3*B+C)/4,L=C/5,M=3*B/7; draw(A--B--C--cycle);draw(A--D^^B--L^^C--M); label("A",A,SW);label("B",B,SE);label("C",C,N);label("D",D,NE);
label("L",L,NW);label("M",M,S); pair EE=D/2; label("\sqrt{7}", C--EE, W); label("x", D--B, E); label("3x", C--D, E); label("l", EE--D, N); label("3", EE--B, N); label("E",EE,NW); [/asy]$
Let $BD = x$. Then $CD = 3x$ and $AC = 4x$. Also, let $AE = ED = l$. Using Stewart's Theorem on $\bigtriangleup CEB$ gives us the equation $(x)(3x)(4x) + (4x)(l^2) = 27x + 7x$ or, after simplifying,
$4l^2 = 34 - 12x^2$. We use Stewart's again on $\bigtriangleup CAD$: $(l)(l)(2l) + 7(2l) = (16x^2)(l) + (9x^2)(l)$, which becomes $2l^2 = 25x^2 - 14$. Substituting $2l^2 = 17 - 6x^2$, we see that
$31x^2 = 31$, or $x = 1$. Then $l^2 = \frac{11}{2}$.
We now use Law of Cosines on $\bigtriangleup CAD$. $(2l)^2 = (4x)^2 + (3x)^2 - 2(4x)(3x)\cos C$. Plugging in for $x$ and $l$, $22 = 16 + 9 - 2(4)(3)\cos C$, so $\cos C = \frac{1}{8}$. Using the
Pythagorean trig identity $\sin^2 + \cos^2 = 1$, $\sin^2 C = 1 - \frac{1}{64}$, so $\sin C = \frac{3\sqrt{7}}{8}$.
$[ABC] = \frac{1}{2} AC \cdot BC \sin C = (\frac{1}{2})(4)(4)(\frac{3\sqrt{7}}{8}) = 3\sqrt{7}$, and our answer is $3 + 7 = \boxed{010}$.
Note to writter: Couldn't we just use Heron's formula for $[CEB]$ after $x$ is solved then noticing that $[ABC] = 2 \times [CEB]$?
Solution 6 (Barycentric Coordinates)
Let ABC be the reference triangle, with $A=(1,0,0)$, $B=(0,1,0)$, and $C=(0,0,1)$. We can easily calculate $D=(0,\frac{3}{4},\frac{1}{4})$ and subsequently $E=(\frac{1}{2},\frac{3}{8},\frac{1}{8})$.
Using distance formula on $\overline{EC}=(\frac{1}{2},\frac{3}{8},-\frac{7}{8})$ and $\overline{EB}=(\frac{1}{2},-\frac{5}{8},\frac{1}{8})$ gives
\begin{align*} \begin{cases} 7&=|EC|^2=-a^2 \cdot \frac{3}{8} \cdot (-\frac{7}{8})-b^2 \cdot \frac{1}{2} \cdot (-\frac{7}{8})-c^2 \cdot \frac{1}{2} \cdot \frac{3}{8} \\ 9&=|EB|^2=-a^2 \cdot (-\frac
{5}{8}) \cdot \frac{1}{8}-b^2 \cdot \frac{1}{2} \cdot \frac{1}{8}-c^2 \cdot \frac{1}{2} \cdot (-\frac{5}{8}) \\ \end{cases} \end{align*}
But we know that $a=b$, so we can substitute and now we have two equations and two variables. So we can clear the denominators and prepare to cancel a variable:
\begin{align*} \begin{cases} 7\cdot 64&=3\cdot 7\cdot a^2+b^2\cdot 4\cdot 7-c^2\cdot 4\cdot 3\\ 9\cdot 64&=5a^2-4b^2+4\cdot 5\cdot c^2 \\ \end{cases} \end{align*}
\begin{align*} \begin{cases} 7\cdot 64&=49a^2-12c^2 \\ 9\cdot 64&=a^2+20c^2 \\ \end{cases} \end{align*}
\begin{align*} \begin{cases} 5\cdot 7\cdot 64&=245a^2-60c^2 \\ 3\cdot 9\cdot 64&=3a^2+60c^2 \\ \end{cases} \end{align*}
Then we add the equations to get
\begin{align*} 62\cdot 64&=248a^2 \\ a^2 &=16 \\ a &=4 \\ \end{align*}
Then plugging gives $b=4$ and $c=2\sqrt{7}$. Then the height from $C$ is $3$, and the area is $3\sqrt{7}$ and our answer is $\boxed{010}$.
Solution 7
Let $C=(0,0), A=(x,y),$ and $B=(-x,y)$. It is trivial to show that $D=\left(-\frac{3}{4}x,\frac{3}{4}y\right)$ and $E=\left(\frac{1}{8}x,\frac{7}{8}y\right)$. Thus, since $BE=3$ and $CE=\sqrt{7}$, we
get that
\begin{align*} \left(\frac{1}{8}x\right)^2+\left(\frac{7}{8}y\right)^2&=7 \\ \left(\frac{9}{8}x\right)^2+\left(\frac{1}{8}y\right)^2&=9 \\ \end{align*}
Multiplying both equations by $64$, we get that
\begin{align*} x^2+49y^2&=448 \\ 81x^2+y^2&=576 \\ \end{align*}
Solving these equations, we get that $x=\sqrt{7}$ and $y=3$.
Thus, the area of $\triangle ABC$ is $xy=3\sqrt{7}$, so our answer is $\boxed{010}$.
Solution 8
The main in solution is to prove that $\angle BEC = 90^\circ$.
Let $M$ be midpoint $AB.$ Let $F$ be cross point of $AC$ and $BE.$
We use the formula for crossing segments in $\triangle ABC$ and get: $\[\frac {CF}{AF}= \frac {DE}{AE} \cdot (\frac {CD}{BD} + 1) = 1 \cdot (3 + 1) = 4.\]$$\[\frac {FE }{BE}= \frac {CD}{BD} : (\frac
{CF}{AF} + 1) = \frac {3}{5} \implies FE = \frac {9}{5}.\]$
$\[\triangle BCF:\hspace{5mm} BC = x, CF = \frac {4}{5}x, EF = \frac {9}{5}, BF = 3, CE = \sqrt{7}.\]$ By Stewart's Theorem on $\triangle BCF$ and cevian $CE$, we get after simplification $\[x = 4 \
implies BC^2 = CE^2 + BE^2 \implies \angle BEC = 90^\circ.\]$$\[AE = ED, AM = MB \implies EM ||BC.\]$$\angle BEC = \angle CMB = 90^\circ \implies$ trapezium $BCEM$ is cyclic $\implies$$\[BM = CE, CM
= BE \implies [ABC] = CM \cdot BM = 3 \sqrt {7} \implies 3+ 7 = \boxed{\textbf{010}}.\]$vladimir.shelomovskii@gmail.com, vvsss
Solution 9
Let $AB = 2x$ and let $y = BD.$ Then $CD = 3y$ and $AC = 4y.$
$[asy] unitsize(1.5 cm); pair A, B, C, D, E; A = (-sqrt(7),0); B = (sqrt(7),0); C = (0,3); D = interp(B,C,1/4); E = (A + D)/2; draw(A--B--C--cycle); draw(A--D); draw(B--E--C); label("A", A, SW);
label("B", B, SE); label("C", C, N); label("D", D, NE); label("E", E, NW); label("2x", (A + B)/2, S); label("y", (B + D)/2, NE); label("3y", (C + D)/2, NE); label("4y", (A + C)/2, NW); label("3", (B
+ E)/2, N); label("\sqrt{7}", (C + E)/2, W); [/asy]$
By the Law of Cosines on triangle $ABC,$$\[\cos C = \frac{16y^2 + 16y^2 - 4x^2}{2 \cdot 4y \cdot 4y} = \frac{32y^2 - 4x^2}{32y^2} = \frac{8y^2 - x^2}{8y^2}.\]$Then by the Law of Cosines on triangle
$ACD,$ \begin{align*} AD^2 &= 16y^2 + 9y^2 - 2 \cdot 4y \cdot 3y \cdot \cos C \\ &= 25y^2 - 24y^2 \cdot \frac{8y^2 - x^2}{8y^2} \\ &= 3x^2 + y^2. \end{align*}Applying Stewart's Theorem to median $\
overline{BE}$ in triangle $ABD,$ we get $\[BE^2 + AE \cdot DE = \frac{AB^2 + BD^2}{2}.\]$Thus, $\[9 + \frac{3x^2 + y^2}{4} = \frac{4x^2 + y^2}{2}.\]$This simplifies to $5x^2 + y^2 = 36.$
Applying Stewart's Theorem to median $\overline{CE}$ in triangle $ACD,$ we get $\[CE^2 + AE \cdot DE = \frac{AC^2 + CD^2}{2}.\]$Thus, $\[7 + \frac{3x^2 + y^2}{4} = \frac{16y^2 + 9y^2}{2}.\]$This
simplifies to $3x^2 + 28 = 49y^2.$
Solving the system $5x^2 + y^2 = 36$ and $3x^2 + 28 = 49y^2,$ we find $x^2 = 7$ and $y^2 = 1,$ so $x = \sqrt{7}$ and $y = 1.$
Plugging this back in for our equation for $\cos C$ gives us $\frac{1}{8}$, so $\sin C = \frac{3\sqrt{7}}{8}.$ We can apply the alternative area of a triangle formula, where $AC \cdot BC \cdot \sin C
\cdot \frac{1}{2} = 3\sqrt{7}.$ Therefore, our answer is $\boxed{010}$.
Video Solution
See Also
The problems on this page are copyrighted by the Mathematical Association of America's American Mathematics Competitions.
|
{"url":"https://artofproblemsolving.com/wiki/index.php/2013_AIME_II_Problems/Problem_13","timestamp":"2024-11-08T12:26:33Z","content_type":"text/html","content_length":"88936","record_id":"<urn:uuid:1536461c-0427-45ee-b117-0b2866a5b942>","cc-path":"CC-MAIN-2024-46/segments/1730477028059.90/warc/CC-MAIN-20241108101914-20241108131914-00399.warc.gz"}
|
Hire A Quantitative Developer [On A Budget] - WiFiTalentsHire A Quantitative Developer [On A Budget]
How should I evaluate candidates?
Candidates for the role of a Quantitative Developer should be evaluated based on their proficiency in programming languages, mathematical skills, experience with quantitative modeling techniques, and
ability to solve complex problems.
Which questions should you ask when hiring a Quantitative Developer?
What programming languages and tools are you proficient in?
Can you provide examples of projects where you have utilized quantitative modeling and analysis techniques?
How do you approach problem solving and debugging in a complex quantitative development environment?
Are you familiar with financial markets and related data analysis techniques?
Can you walk me through a time when you had to optimize a codebase for improved performance?
|
{"url":"https://wifitalents.com/quantitative-developer/","timestamp":"2024-11-06T20:12:31Z","content_type":"text/html","content_length":"171883","record_id":"<urn:uuid:2b0b1d7e-5162-4b1d-bb1e-80d76e43327d>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.47/warc/CC-MAIN-20241106194801-20241106224801-00625.warc.gz"}
|
A Numerical Study on the Diversion Mechanisms of Fracture Networks in Tight Reservoirs with Frictional Natural Fractures
School of Mechanical Engineering, Beijing Key Laboratory of Pipeline Critical Technology and Equipment for Deepwater Oil & Gas Development, Beijing Institute of Petrochemical Technology, Beijing
102617, China
School of Aeronautic Science and Engineering, Beihang University, Beijing 100083, China
Jiangsu Key Laboratory of Advanced Manufacturing Technology, Huaiyin Institute of Technology, Huai’an 223003, China
The Conventional Natural Gas Research Institute, China Univeristy of Petroleum, Beijing 102249, China
Author to whom correspondence should be addressed.
Current address: 19 Qing-yuan North Road, Huang-cun, Da-xing District, Beijing 102617, China.
These authors contributed equally to this work.
Submission received: 25 September 2018 / Revised: 21 October 2018 / Accepted: 31 October 2018 / Published: 5 November 2018
An opened natural fracture (NF) intercepted by a pressurized hydro-fracture (HF) will be diverted in a new direction at the tips of the original NF and subsequently form a complex fracture network.
However, a clear understanding of the diversion behavior of fracture networks in tight reservoirs with frictional NFs is lacking. By means of the extended finite element method(XFEM), this study
investigates the diversion mechanisms of an opened NF intersected by an HF in naturally fractured reservoirs. The factors affecting the diversion behavior are intensively analyzed, such as the
location of the NF, the horizontal principal stress difference, the intersection angle between HF and NF, and the viscosity of the fracturing fluid. The results show that for a constant length of NF
(7 m): (1) the upper length of the diverted fracture (DF) decreases by about 2 m with a 2 m increment of the upper length of NF ($L u p p e r$), while the length of DF increases 9.06 m with the fluid
viscosity increased by 99 mPa$·$s; (2) the deflection angle in the upper parts increases by 30.8° with the stress difference increased by 5 MPa, while the deflection angle increases by 61.2° with the
intersection angle decreased by 30°. It is easier for the opened NF in lower parts than that in upper parts to be diverted away from its original direction. It finally diverts back to the preferred
fracture plane (PFP) direction. The diversion mechanisms of the fracture network are the results of the combined action of all factors. This will provide new insight into the mechanisms of fracture
network generation in tight reservoirs with NFs.
1. Introduction
With the technological progress in petroleum industries, petroleum engineers are increasingly concerned with the exploration and development of tight reservoirs in recent years. Due to the ultra-low
matrix permeability, hydraulic fracturing is a key technology for enhancing the recovery of tight hydrocarbon reservoirs [
]. Activation of preexisting natural fractures (NFs) during fracturing treatment is favorable for creating complex fracture networks. The interaction between a hydraulic fracture (HF) and an NF is a
complex coupled process, which involves rock deformation, fluid flow, and fracture diversion [
When an HF intercepts an NF during a hydro-fracking treatment, three scenarios—arrest, offset, and cross—are observed. Renshaw and Pollard developed a criterion to describe the mechanical NF–HF
interaction when they were perpendicular to each other [
]. Gu et al. afterwards proposed an extended Renshaw–Pollard criterion at nonorthogonal intersection angles on the basis of the experimental results of hydraulic fracturing for Colton sandstone [
]. This crossing criterion has been extensively applied in the mathematical models of stimulated reservoir volume (SRV) fracturing in shale gas wells. Various numerical techniques such as finite
difference, discrete element, and finite element methods have been presented to investigate the mechanical interaction between HF and NF [
]. Based on the finite element software ABAQUS 6.14, Chen et al. developed a cohesive zone finite element-based model to investigate the NF–HF interaction complexity, which took into account the
interface friction of weak planes [
]. Based on the discrete element method (DEM) model, Zou et al. numerically investigated HF network propagation in shale formations, and the plastic deformation in hydraulic fracturing was considered
]. Wu and Wong used a numerical manifold method (NMM) to capture the strong discontinuity across the crack face, and this method could smoothly handle the problems of fracture network propagation [
]. However, the above-mentioned numerical methods have the drawback that crack paths should be predefined a priori. Therefore, crack cannot be freely extended on mesh grids if the direction of crack
propagation is not known in advance. By means of a diffusive phase-field modeling approach, Heider et al. introduced a numerical framework of HF in tight rocks, but this simulation could be
time-consuming because it requires a very fine mesh [
The extended finite element method (XFEM), which introduces additional enrichment functions to account for the jump across the crack surfaces and the singularity of stress in the vicinity of crack
tips, provides a powerful tool to simulate the hydraulic fracturing problem. Its great advantage is that crack propagation is not mesh-dependent. Some scholars such as Dahi-Taleghani, Mao, and
Gordeliy have done a great deal of innovative research on hydraulic fracturing simulation [
] in past decades, but some assumptions, such as a constant fluid pressure, were made to deal with the mechanical NF–HF interaction in order to simplify the complex precess. Recently, Shi and Wang
successfully modeled the connection of two cracks by means of additional junction enrichment and by sharing pore pressure nodes at intersection points [
]. Using the combined method of XFEM and DEM, Ghaderi et al. concluded that the tensile and shear breakage of NFs were a function of angle and distance from an induced fracture [
]. Paul et al. developed a mixed linear cohesive law, which relies on a stable mortar formalism, and utilized the XFEM method to simulate the non-planar HF propagation [
]. Based on the XFEM technique, Remij et al. applied the enhanced local pressure (ELP) model to investigate crack interaction in hydraulic fracturing by assuming multiple discontinuities in the
domain [
]. Vahab and Khalili used an XFEM penalty method, which was embedded in Kuhn–Tucker inequalities, to model multi-zone fracking treatments within saturated porous media [
It is well known that an HF usually has a non-planar crack growth (refereed to fracture diversion) by a stress shadow effect [
]. An opened NF intercepted by an HF will be diverted in a new direction at the tips of the original NF and subsequently form a complex fracture network [
]. However, a clear understanding of the diversion behavior of fracture network in tight reservoirs with frictional NFs is lacking [
]. In particular, the effect of factors such as the location of the NF, the horizontal stress difference, and the intersection angle between HF and NF on the mechanical diversion behavior of HFs is
not clear at present. Therefore, with the XFEM technique [
], a numerical simulation on the diversion mechanisms of a fracture network in naturally fractured reservoirs was studied. This study focuses on the diversion propagation behavior in the vicinity of
the two crack tips of the opened NF after an HF intersects with an NF. This will provide new insight on the mechanisms of fracture network formation in tight formations with pre-existed frictional
2. Problem Formulation
2.1. Governing Equations of Hydraulic Fracturing Problems
As shown in
Figure 1
, the domain
denotes a tight reservoir, which includes an HF and an NF. The injection point is located on the middle left of the domain
, and the corresponding pump rate is denoted by
$Q 0$
. As is known, the hydraulic fracturing problem is essentially a fluid–solid interaction process, so its governing equations consist of two parts: the stress equilibrium equation for rock skeleton
and fluid pressure equation in the hydraulically driven fracture [
(1) The Stress Equilibrium Equation: According to the theory of elasticity, the stress equilibrium equation is expressed:
denotes the stress tensor;
denotes the body force vector in the rock skeleton. As shown in
Figure 1
, the boundary conditions are composed of a displacement boundary condition (
$Γ u$
) and a force boundary condition (
$Γ t$
). They are expressed as
$u = u ¯ o n Γ u , σ · n = t , o n Γ t , σ · n HF = p n HF , o n Γ HF , σ · n NF = t NF , o n Γ NF ,$
denotes the fluid pressure on the artificial fracture;
$t NF$
denotes the contact traction vector on the NF surface
$Γ NF$
$u ¯$
denotes the displacement imposed on the boundary
$Γ u$
denotes the traction vector imposed on the boundary
$Γ t$
For brittle rocks, there is a linear relationship between stress tensor and strain tensor under a small deformation assumption, so the corresponding constitutive equation is expressed as
denotes the fourth elasticity tensor;
denotes the strain tensor; the symbol ":" denotes the double dot product of the two tensors.
Under the assumption of small deformation, the relationship between displacement vector
and strain tensor
are as follows:
$ε = 1 2 [ ∇ u + ( ∇ u ) T ] .$
(2) Fluid Pressure in an HF: Under lubrication theory assumptions, the velocity profile of the fluid in the HF is that of a planar Poiseuille flow between two parallel plates. Therefore, the fluid
pressure in the HF can be expressed as [
$∂ w ∂ t − ∂ ∂ s ( k ∂ p ∂ s ) = 0 ,$
denotes the fracture opening of HF;
denotes the crack propagation direction;
denotes the injection time;
denotes the fracture transmissivity.
According to the cubic law, the fracture transmissivity can be expressed as
denotes the viscosity of fracturing fluid.
The corresponding initial and boundary conditions can be expressed as
$w ( s , 0 ) = 0 , w ( s t i p , 0 ) = 0 , q ( 0 , t ) = Q 0 , q ( s t i p , 0 ) = 0 ,$
$s t i p$
denotes the tips of HFs;
denotes the injection rate of fracturing fluid at the crack point
and time
It is can be seen that Equation (
) satisfies mathematically the Neumann boundary condition. In order to get a unique solution for the fluid pressure equation, the constraint condition should be additionally imposed. The necessary
condition, i.e., the conservation of global mass in the HF can be written as
$∫ 0 s t i p w d s − ∫ 0 t Q 0 d t = 0 .$
2.2. Crack Propagation Criterion
According to the theory of fracture mechanics, the maximum circumferential stress criterion is adopted to determine the propagation direction of a hydraulically driven fracture at every time
. The artificial fracture will propagate along a direction perpendicular to the maximum circumferential stress. If the stress intensify factor
is no less than the fracture toughness of rock skeleton
$K IC$
, the crack will propagate along a certain direction. The interaction integral method in domain form is utilized to calculate the stress intensity factors
$K I$
$K II$
. The following equation of the interaction integral can be written as [
$I ( 1 , 2 ) = ∫ A [ σ i j ( 1 ) ∂ u i 2 ∂ x 1 + σ i j ( 2 ) ∂ u i 1 ∂ x 1 − W ( 1 , 2 ) δ 1 j ] ∂ q w ∂ x j d A ,$
$I ( 1 , 2 )$
denotes the interaction integral;
$W ( 1 , 2 )$
denotes the interaction strain energy as follows;
$q w ( x )$
denotes the smooth weighting function, which takes a value from 0 to 1;
denotes the Kronecker symbol; the superscripts (1) and (2), respectively, denote the current state and the auxiliary state for the stress and strain field. The corresponding calculation procedures
can be described in detail in [
$W ( 1 , 2 ) = σ i j ( 1 ) ε i j ( 2 ) = σ i j ( 2 ) ε i j ( 1 ) .$
The direction of crack propagation
can be computed in a local tip coordinate system:
$θ = 2 arctan [ 1 4 ( K I / K II + K I / K II ) 2 + 8 ) ] ,$
where the symbol “arctan” denotes the arc-tangent function.
2.3. The Cross Criterion between HF and Frictional NF
As is known, when an HF encounters a frictional NF, there are three possible scenarios: arrested, direction-crossing, or a crossing with an offset [
]. Here, the extended Renshaw and Pollard rule is adopted to determine the interaction behavior between HF and NF. As shown in
Figure 2
, if a new fracture initiates on the opposite side of the NF, the maximum principle stress
$σ 1$
will reach the rock tensile stress. Meanwhile, a no-slip condition should be satisfied along the NF surface. Otherwise, the HF will cross directly or branch into NF with an offset. The expressions of
the combined shear stress and normal stress are shown in Equation (
), which is described in detail in other references [
$σ 1 = T 0 , τ β < S 0 − μ f σ β y ,$
denotes the intersection angle between HF and NF;
$τ β$
denotes the combined shear stress on the NF surface under the action of remote stress and the local crack tip stress;
$σ β y$
denotes the combined normal stress;
$T 0$
denotes the rock tensile strength;
$S 0$
denotes the cohesion force of the frictional NF;
$μ f$
denotes the frictional coefficient of the NF surface.
2.4. XFEM and Discretization of the Governing Equations of the Hydraulic Fracturing Problem
The XFEM (extended finite element method) is utilized to approximate the displacement discontinuity on both sides of the HF. In order to represent the multiple cracks, a Junction enrichment function
is introduced, as shown in
Figure 3
. The enriched displacement field can be written as [
$u ( x ) = ∑ I ∈ N N I ( x ) u I + ∑ j = 1 M d i s ∑ J ∈ N d i s N J ( x ) [ H ( x ) − H ( x J ) ] a I + ∑ k = 1 M t i p ∑ K ∈ N t i p N K ( x ) ∑ α = 1 4 [ Ψ t i p α ( x ) − Ψ t i p α ( x K ) ] b K
α + ∑ l = 1 M j u n ∑ L ∈ N j u n N L ( x ) [ J H ( x ) − J H ( x L ) ] c L ,$
$N d i s$
$N t i p$
$N j u n$
, respectively, denote the set of standard nodes, Heaviside enrichment nodes, crack-tip nodes, and junction enrichment nodes;
denotes the standard nodal d.o.f. (degrees of freedom);
$a I$
$b K α ( α = 1 , 4 )$
, and
$c L$
, respectively, denote the corresponding enriched nodal d.o.f.;
$M d i s$
denotes the number of cracks including the main cracks and the secondary cracks;
$M t i p$
denotes the number of crack tips;
$M j u n$
denotes the number of junctions with
$M j u n = M d i s − 1$
$H ( x )$
denotes the Heaviside enrichment function;
$J H ( x )$
denotes the junction enrichment function;
$Ψ ( x )$
denotes the crack-tip enrichment function;
$N I$
$N J$
$N K$
, and
$N L$
denote the standard shape function of node
, and
, respectively.
The Heaviside enrichment function is expressed as [
$H ( x ) = 0 , i f x < 0 , 1 , i f x ≥ 0 .$
The crack-tip enrichment function is defined as
${ Ψ t i p α ( r , θ ) } α = 1 4 = { r s i n θ 2 , r c o s θ 2 , r s i n θ s i n θ 2 , r s i n θ c o s θ 2 } ,$
where (
$r , θ$
) denotes the local crack-tip coordinate in the polar coordinate system.
The junction enrichment function
$J H ( x )$
is defined as
$J H ( x ) = H ( φ s ( x ) ) , i f φ m ( x ) < 0 , 0 , i f φ m ( x ) > 0 ,$
$φ m ( x )$
$φ s ( x )$
, respectively, denote the signed distance function of the main crack and the secondary crack. It can be seen that
$J H ( x )$
is equal to 1,
$− 1$
, or 0 on different sub-domains divided by the secondary cracks.
According to the finite element method, the pressure field
and the fracturing opening displacement vector
can be respectively approximated as
$w = ∑ I ∈ S w N I w u I = N w ( s ) U ,$
$p ( s ) = ∑ I ∈ S H F N I p p I = N p ( s ) P ,$
, respectively, denote the global nodal displacement vector and the nodal pressure vector;
$N p ( s )$
$N w ( s )$
, respectively, denote the matrix of the shape function of the fracture opening and pressure.
By substituting the above XFEM formulation, displacement and pressure approximations into the weak form of stress equilibrium equation and lubrication equation, the corresponding discretization forms
are written as
$K U − Q P − F e x t = 0 ,$
$Q T Δ U + Δ t H P + Δ t S = 0 ,$
denotes the global stiffness matrix;
denotes the coupling matrix;
$F e x t$
denotes the external loading vector;
denotes the flow matrix;
$Δ t$
denotes the time step; and
denotes the source term. They are, respectively, defined as follows [
$K = ∫ Ω ( B s t d ) T D ( B s t d ) d Ω ∫ Ω ( B s t d ) T D ( B e n r ) d Ω ∫ Ω ( B e n r ) T D ( B s t d ) d Ω ∫ Ω ( B e n r ) T D ( B e n r ) d Ω + ∫ Γ N F ( N w ) T D c o n t ( N w ) d Γ = K s s
K s e K e s K e e + K e e c o n t .$
In the above Equation (
$D c o n t$
denotes the contact stiffness matrix of fracture interfaces:
$Q = ∫ Ω ( N w ) T n Γ N F ( N p ) d Ω ,$
$F e x t = ∫ Γ t ( N u ) T t d Γ ,$
$H = ∫ Γ H F ∂ N p ∂ s T k ∂ N p ∂ s d s ,$
$S = N p ( s ) T ∣ s = 0 Q 0 .$
As shown in
Figure 4
, the flow rate in the main crack and secondary crack satisfies the law of conservation, i.e.,
$Q 0 = Q 1 + Q 2$
, where
$Q 1$
$Q 2$
denote the flow rate in Branches 1 and 2 of the secondary crack, respectively. The nonlinear fluid–solid coupling system of equations of hydraulic fracturing problems can be numerically solved by the
Newton–Raphson method. More details are described in [
3. Results and Discussion
3.1. Verification of the XFEM Model
For the model verification, the results from our models is summarized in
Appendix A
]. The verification model of this XFEM code is described in detail in [
], so the related process is not repeated in this article. It is shown that the numerical results have good agreement with the experimental results of true tri-axial hydraulic fracturing by TerraTek,
Inc. (Salt Lake City, UT, USA). For further details of numerical and experimental procedures and the corresponding results, we refer the reader to References [
3.2. Effect of the Location of Natural Fractures on the Diversion of Fracture Network Propagation
In this section, the effect of the location position of an NF on HF propagation paths is determined by numerical simulation using XFEM. The input parameter values of this model are as shown in
Table 1
. Under isotropic stress state conditions, the intersection angle between HF and NF is equal to 90°. As shown in
Figure 1
, the domain is a 25 m × 25 m square, where the injection point is located at the midpoint of the left edge. In this domain, the HF is 2.6 m in length, and the length of NF is equal to 7 m. Based on
the above input parameters, the mechanical NF–HF interaction processes in hydraulic fracturing are numerically simulated at different lengths of NF in lower and upper parts, i.e., corresponding to
$L lower$
$L upper$
Figure 1
, respectively.
The corresponding crack propagation paths are as shown in
Figure 5
. It is obvious that fracture diversion occurs near the tips of the NF in all cases. In
Figure 5
a, when the HF intersects with the NF, the fracturing fluid flows into the opened NF. In the lower parts of the NF, the opened NF firstly propagates along a vertically downward path for a certain
length, and it is then diverted along a new direction; in the upper parts of the NF, the opened NF is directly diverted at the upper tip of the original NF. In
Figure 5
b, both the lower and upper parts of the NF firstly extend vertically downward and upward for a short distance, respectively, and are then diverted to the right-hand side of the graph. However, the
length of the lower parts of the diverted fracture (DF) is longer than that of the upper parts of the DF, corresponding to the red line in
Figure 1
. In
Figure 5
c, in the upper parts of the NF, the opened fracture can only propagate vertically upward, and cannot be diverted near the tip of the NF; in the lower parts of the NF, the opened fracture can be
diverted away from the original NF. By making use of the data in
Figure 5
, the length of the DF propagation in the upper parts is calculated from the upper tip of the original NF. When the upper length of NF, i.e.,
$L u p p e r$
, is equal to 4, 5, and 6 m, the corresponding length is 3.14, 2.15, and 1.14 m, respectively. If
$L u p p e r$
is increased by 2 m, the upper length of the DF will decrease by 2 m. Therefore, it is shown that the longer the upper parts of the original NF are, the more difficult it is for the opened NF to be
diverted away from the upper tip of the NF under the conditions of an isotropic stress state, while the lower parts of the original NF is more easily diverted to the right-hand side than the upper
parts of the original NF under this circumstance.
The Von-Mises stress distributions are shown in
Figure 6
, where a blue color represent a relative stress value, while a yellow or red one represent a higher stress value. For all cases of the model, it is shown that there is a small region of stress
concentration near the two tips of the original NF, which indicates that a higher pressure is required to divert the opened NF away from the original NF.
The fracture aperture and net pressure curves of the diverted fracture are shown in
Figure 7
Figure 8
, respectively. With the decrease of
$L lower$
, both curves have revealed an asymmetrical characteristic, which peak at the diverted point in the lower parts. In addition, at the intersection point between HF and NF, their corresponding values
take second place. The fracture aperture and net pressure in the lower parts of the DF are much greater than those in the upper parts. By comparison, in the case of
$L lower = 3 m$
$L lower = 4 m$
, their curves are nearly symmetrical. This indicates that, under the combined action of remote stress and local crack-tip stress, the variation tendency of the fracture aperture and the net pressure
is quite different from that in the case of only a single HF.
The flow rate in the diverted fracture is shown in
Figure 9
. When fluid flows into the intersection point between HF and NF, it will flow upward and downward, respectively. If the value of the flow rate is negative, fluid will flow upward; otherwise, it will
flow downward. It is shown that the flow rate in the lower parts of the DF is much greater than that in the upper parts. Therefore, the fracture aperture and net pressure in the lower parts of the DF
is greater than that in the upper parts under the condition of the same fluid viscosity. With the decrease of
$L lower$
, fluid flows downward more easily. Thus, the lower parts of the original NF is more easily diverted than the upper parts of the original NF. This may explain the results in
Figure 5
3.3. Effect of Horizontal Stress Differences on the Diversion of Fracture Network Propagation
Based on input parameters in
Table 1
, the effect of the remote stress difference on the diversion of fracture network propagation is numerically simulated at different levels of minimum horizontal stresses
$σ h$
: 5, 4, and 0 MPa; the maximum horizontal stress
$σ H$
is kept constant (5 MPa) in all cases of this model. At the same time,
$L lower$
= 3 m,
$L upper$
= 4 m, and the NF–HF intersection angle is equal to 90°.
The corresponding crack propagation paths are shown in
Figure 10
, in which the deflection angle is defined the angle between NF and DF at the tips of the original NF. According to the data in
Figure 10
, the deflection angle in the upper parts is calculated. When the horizontal stress difference is, respectively, equal to 0 and 5 MPa, the corresponding deflection angle is 59.2° and 90°,
respectively. If the stress difference is increased by 5 MPa, the deflection angle in the upper parts is increased by 30.8°. The higher the horizontal stress difference is, the greater the deflection
angle is. In
Figure 10
c, i.e.,
$Δ σ$
= 5 MPa, the opened NF firstly diverts and propagates along the direction of minimum horizontal stress and finally tends to extend along the preferred fracture plane (PFP) direction in petroleum
engineering. By contrast, when the stress state is approximately isotropic, crack propagation in both the lower and upper parts will extend along the minimum horizontal stress direction for some
length. This indicates that crack propagation of the opened NF is a complex mechanical process under the combined action of the local crack-tip and the remote stress state.
As shown in
Figure 11
c, there is a lower Von-Mises stress region (corresponding to blue area on the contour) located on the right of DF for 5 MPa stress difference. This propagates the diverted fracture along the PFP. In
Figure 11
a,b, both cases correspond to a lower stress difference, and the lower stress region is mainly near the two crack tips. The local stress distribution is an explanation to interpret crack paths in
Figure 10
The fracture aperture and net pressure curves of DF are shown in
Figure 12
Figure 13
, respectively. The fracture aperture curve reveals an asymmetrical characteristic, while the net pressure curve reveals a nearly symmetrical characteristic. The fracture aperture peaks at a global
maximum value at the inflection point, where the opened fracture in the upper parts diverts to the right side in
Figure 10
; at the NF–HF intersection point, the fracture aperture takes the second place; at the inflection point in the lower parts, it takes the third place. The net pressure reaches a maximum at the
intersection point, and there are two inflection points on this curve. This means that the opened fracture diverts to the right side. The higher the stress difference is, the greater the net pressure
it requires, which leads to a greater fracture aperture.
The flow rate in the DF is shown in
Figure 14
. It is obvious that the flow rate in the upper parts is much greater than that in the lower parts. Therefore, the fracture aperture and net pressure in the upper parts are greater than those in the
lower parts for the same fluid viscosity. This might explain the results of fracture aperture and net pressure in
Figure 12
Figure 13
3.4. Effect of the NF–HF Intersection Angle on the Diversion of Fracture Network Propagation
Based on the input parameters in
Table 1
, the effect of the NF–HF intersection angle on the diversion of fracture network propagation is numerically simulated at different levels of intersection angles
: 75°, 60°, and 45°; both the maximum and minimum horizontal principle stresses are equal to 5 MPa in all cases of this model. Meanwhile,
$L lower$
$L upper$
are, respectively, 3 m and 4 m.
The corresponding crack propagation paths are as shown in
Figure 15
. Under the condition of isotropic stress state, the NF–HF intersection angle will have a significant impact on the propagation direction of the primary HF, i.e., the black dotted line in
Figure 15
, when HF is approaching NF. With the decrease of the intersection angle, the primary HF deflects from the horizontal line. When the NF–HF intersection angle is greater than 60° (in
Figure 15
a,b), the opened NF in the upper parts is more easily diverted away from the original NF than that in the lower parts under the combined action of remote stress and the crack-tip stress field.
However, the intersection angle is less than 60° (in
Figure 15
c). The opened NF in the lower parts is more easily diverted away from the original NF than that in the upper parts under the combined action of remote stress and the crack-tip stress field. By
making use of the data in
Figure 15
, the deflection angle for the primary HF was calculated. When the intersection angle is decreased from 75° to 45°, the corresponding deflection angle is increased from 0° to 61.2°. This indicates
that the NF–HF intersection angle will have a significant impact on the diversion propagation of the primary HF and the secondary opened NFs.
The Von-Mises stress distributions at different levels of NF–HF intersection angle are shown in
Figure 16
. In
Figure 16
a, there is a stress concentration region near the diversion point in the upper parts of the NF, which indicates that it will require a high net pressure to divert the opened fracture upward. In
Figure 16
b, the Von-Mises stress in the upper parts is greater than that in the lower parts, which causes the subsequent fracture to easily divert upward. In
Figure 16
c, the Von-Mises stress on the left of the NF is greater than that on the right, so it is more easily diverted downward.
The fracture aperture and net pressure curves of the DF are shown in
Figure 17
Figure 18
, respectively. In the case of
$β = 75$
°, the fracture aperture and net pressure in the upper parts is greater than that in the lower parts. In particular, the fracture aperture and net pressure near the diversion point in the lower parts
is close to zero, and this is consistent with the results of Von-Mises stress at this point. In the other two cases, the maximum values of the fracture aperture and the net pressure are at the
diversion point in the lower parts. The smaller the intersection angle is, the greater the net pressure it requires to divert the fracture.
The flow rate in DF is shown in
Figure 19
. It is obvious that, in the case of
$β = 45$
°, the flow rate in the upper parts is much greater than that in the lower parts. This indicates that it is easy for fracturing fluid to flow upward when the intersection angle is small. This is a
possible reason for explaining the results in
Figure 17
Figure 18
3.5. Effect of Fluid Viscosity on the Diversion of Fracture Network Propagation
Based on the input parameters in
Table 1
, the effect of the viscosity of fracturing fluid on the diversion of fracture network propagation is numerically simulated at different levels of viscosity
: 100 mPa
s, 10 mPa
s, and 1 mPa
s [
]. In this model, the maximum and minimum horizontal stresses are kept constant (5 MPa) for all cases. Meanwhile, the NF–HF intersection angle is equal to 90.
The corresponding crack propagation paths are shown in
Figure 20
. By making use of the data in
Figure 20
, the length of DF is calculated. When the fluid viscosity is increased from 100 mPa
s to 1 mPa
s, the length of DF is increased from 14.29 to 5.23 m. It is clear that the smaller the viscosity is, the more difficult the opened fracture diverts into a new direction. When the viscosity is equal
to 1 mPa
s, artificial fracture will propagate along the NF direction under given conditions. This indicates that more energy is required to divert the opened NF upward and downward.
The Von-Mises stress distributions at different levels of fluid viscosity are shown in
Figure 21
. In
Figure 21
c, there is a lower Von-Mises stress area on the right of the NF, which makes the opened NF propagate along the original NF direction. This is consistent with the results in
Figure 20
The fracture aperture and net pressure curves of the DF are shown in
Figure 22
Figure 23
, respectively. They are close to symmetrical about the axis of the original HF under the condition of an isotropic stress state. In the cases of
$μ = 100$
s and
$μ = 10$
s, there are two inflection points on the curves, which correspond to the diversion point in the lower and upper parts of the NF. The greater the fluid viscosity is, the greater the fracture aperture
and net pressure are.
The flow rate in the DF is as shown in
Figure 24
. Obviously, the greater the viscosity is, the greater the flow rate is, and thus the easier it is for the secondary fracture to divert. This might explain the results in
Figure 20
Figure 21
4. Conclusions
This paper investigates the diversion mechanisms of a fracture network in tight formations with frictional NFs by means of the XFEM technique. The effects of some key factors such as the location of
the NF, the intersection angle between the NF and HF, the horizontal stress difference, and the fluid viscosity on the mechanical diversion behavior of the HF were analyzed in detail. The following
main conclusions can be drawn:
Fracture diversion propagation will occur near the two tips of the opened NF after an HF is intersecting with an NF. The numerical results show that some key factors such as the NF position, the
NF–HF intersection angle, the horizontal stress differences, and the fluid viscosity have a significant impact on the diversion propagation in the upper and lower parts of the opened NF.
For a constant length of NF (7 m), the upper length of the DF decreases by about 2 m with a 2 m increment of the upper length of the NF ($L u p p e r$), while the length of the DF increases 9.06
m, with the fluid viscosity increased from 1 to 100 mPa.s; (2) the deflection angle in the upper parts increases by 30.8° with the stress difference increased by 5 MPa, while the deflection angle
increases by 61.2° with the intersection angle decreased from 75° to 45°.
The longer the upper parts of the original NF are, the more difficult it is for the opened NF to divert away from the upper tip of the NF under the conditions of an isotropic stress state, while
the lower parts of the original NF is more easily diverted to the right-hand side than the upper parts of the original NF. The NF–HF intersection angle will have a significant impact on the
diversion propagation of the primary HF and the secondary opened NFs.
In general, the distributions of fracture aperture, net pressure, and flow rate reveal asymmetrical characteristics for the secondary hydraulically driven fractures. For the distribution of
Von-Mises stress, there is usually a concentrated stress zone area near the turning point of the secondary cracks, which corresponds to the inflection points on the curves of the fracture
aperture and net pressure.
The diversion mechanisms of the fracture network are the results of the combined action of all factors. This will provide a new perspective on the mechanisms of fracture network generation.
Future work should determine the primary and secondary relations of various factors by means of experiments and numerical calculation.
Author Contributions
D.W., F.S., and B.Y. conceived and designed the model of hydraulic fracturing problem using a XFEM technique. D.S., X.L., D.H. and Y.T. analyzed the data. D.W. wrote the paper.
The authors would like to give their sincere gratitude to the National Science Foundation of China (Nos.51804033 and 51706021), the Beijing Postdoctoral Research Foundation (2018-ZZ-045), the Project
of Construction of Innovative Teams and Teacher Career Development for Universities and Colleges Under Beijing Municipality (No. IDHT20170507), the Program of Great Wall Scholar (No. CIT&
TCD20180313), Jointly Projects of Beijing Natural Science Foundation and Beijing Municipal Education Commission (No. KZ201810017023), and the Natural Science Foundation of Jiangsu Province (No.
BK20170457) for their financial support.
Conflicts of Interest
The authors declare no conflicts of interest.
The following abbreviations are used in this manuscript:
XFEM Extended Finite Element Method
DEM Discrete Element Method
NMM Numerical Manifold Method
SRV Stimulated Reservoir Volume
HF Hydraulic Fracture or Hydraulically Driven Fracture or Hydro-Fracture
NF Natural Fracture
DF Diverted Fracture
PFP Preferred Fracture Plane
ELP Enhanced Local Pressure
Appendix A
The result of XEFM is here compared with results of analytical solutions. As is known, depending on the dimensionless fracture toughness, the analytical solutions of the Kristianovich-Geertsma-de
Klerk (KGD) model have different expressions. The dimensionless fracture toughness can be written as
$K m = 4 2 π K I C ( 1 − ν 2 ) E ( E 12 μ Q 0 ( 1 − ν 2 ) ) 1 / 4 ,$
$K m$
denotes the dimensional fracture toughness;
$K I C$
denotes the rock fracture toughness;
denotes the viscosity of fracturing fluid;
$Q 0$
denotes the injection rate;
denotes the rock Young’s modulus;
denotes the Poisson’s ratio of the rock matrix. If
$K m$
is greater than 4, the fracture propagation regime is toughness dominated; if
$K m$
is less than 1, the fracture propagation regime is viscosity dominated, which is much more common in most hydraulic fracturing treatments.
The input parameters of the verification model are listed in
Table A1
. According to Equation (A1), the dimensionless fracture toughness is equal to 0.313, which indicates that the fracture propagation regime is viscosity-dominated. In this model, the HF is located at
the center of a symmetrical model with a length of 100 m and 180 m along the
- and
-direction directions, respectively. The domain is divided into 3080 bilinear quadrilateral elements.
Input Parameter Value
Young’s Modulus, E 20 GPa
Poisson’s ratio, $ν$ 0.2
Fracture toughness, $K IC$ 0.1 $MPa · m 1 2$
The consistency index of fracturing fluid, K 0.84 $Pa · s n$
Injection rate, $Q 0$ 0.001 $m 2 / s$
Viscosity, $μ$ 0.1 $Pa · s$
Dimensionless fracture toughness, $K m$ 0.313
Injection time, t 30 s
The initial half-length of the HF is equal to 1.25 m, and it is assumed that a constant fluid pressure acting on the fracture wall is equal to 3.9 MPa. The curves of fluid pressure at the injection
point and the fracture width at 30 s are shown in
Figure A1
a,b, respectively, which is compared with the corresponding analytical solutions. There is very good agreement between the numerical results and analytical solutions, which indicates that the XFEM
model can obtain reliable results.
Figure 5. The crack propagation paths at different lengths of lower and upper parts of the NF. In this figure, the black, blue, and red dotted lines, respectively, denote the original HF, the initial
NF, and the diverted fracture (DF).
Figure 7. The fracture aperture curves of the DF along the fracture length at different lengths of the lower and upper parts of the NF. The distance in the x-axis is along the direction from the
lower parts to the upper parts of the DF.
Figure 8. The net pressure curves of the DF along the fracture length at different lengths of the lower and upper parts of the NF. The distance in the x-axis is along the direction from the lower
parts to the upper parts of the DF.
Figure 9. The flow rate curves in the DF along the fracture length at different lengths of the lower and upper parts of the NF. The distance in the x-axis is along the direction from the lower parts
to the upper parts of the DF.
Figure 10. The crack propagation paths at different levels of remote horizontal principle stress difference.
Figure 11. Von-Mises stress distributions at different levels of remote horizontal principle stress difference.
Figure 12. The fracture aperture curves of the DF along the fracture length at different levels of remote horizontal principle stress difference. The distance in the x-axis is along the direction
from the lower part to the upper part of the DF.
Figure 13. The net pressure curves of the DF along the fracture length at different levels of remote horizontal principle stress difference. The distance in the x-axis is along the direction from the
lower part to the upper part of the DF.
Figure 14. The flow rate curves of the DF along the fracture length at different levels of remote horizontal principle stress difference. The distance in the x-axis is along the direction from the
lower part to the upper part of the DF.
Figure 16. Von-Mises stress distributions at different levels of intersection angle between HF and NF.
Figure 17. The fracture aperture curves of the DF along the fracture length direction at different levels of intersection angle between HF and NF. The distance in the x-axis is along the direction
from the lower part to the upper part of the DF.
Figure 18. The net pressure curves of the DF along the fracture length direction at different levels of intersection angle between HF and NF. The distance in the x-axis is along the direction from
the lower part to the upper part of the DF.
Figure 19. The flow rate curves of the DF along the fracture length direction at different levels of intersection angle between HF and NF. The distance in the x-axis is along the direction from the
lower part to the upper part of the DF.
Figure 22. The fracture aperture curves of the DF along the fracture length at different levels of viscosity of fracturing fluid. The distance in the x-axis is along the direction from the lower part
to the upper part of the DF.
Figure 23. The net pressure curves of the DF along the fracture length at different levels of viscosity of fracturing fluid. The distance in the x-axis is along the direction from the lower part to
the upper part of the DF.
Figure 24. The flow rate curves of the DF along the fracture length direction at different levels of viscosity of fracturing fluid. The distance in the x-axis is along the direction from the lower
part to the upper part of the DF.
Input Parameter Value
Young’s Modulus, E 20 GPa
Poisson’s ratio, $ν$ 0.2
Rock density, $ρ$ 2460 $kg / m 3$
Friction coefficient of NF, $μ f$ 0.3
Cohesion of the NF, $S 0$ 0 MPa
Fracture toughness, $K IC$ 1.0 $MPa · m 1 2$
Tensile strength, $T 0$ 1.5 MPa
Unconfined compression strength, $UCS$ 100 MPa
Apparent viscosity of fracturing fluid, $μ$ 0.1 $Pa · s$
The consistency index of fracturing fluid, K 0.84 $Pa · s n$
The flow behavior index of fracturing fluid, n 0.53
Dynamic viscosity index, m 2.0
Fluid pump rate, $Q 0$ 0.001 $m 2 / s$
Pore pressure, $P 0$ 5 MPa
Maximum horizontal stress, $σ H$ 5 MPa
Minimum horizontal stress, $σ h$ 5 MPa
© 2018 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http:/
Share and Cite
MDPI and ACS Style
Wang, D.; Shi, F.; Yu, B.; Sun, D.; Li, X.; Han, D.; Tan, Y. A Numerical Study on the Diversion Mechanisms of Fracture Networks in Tight Reservoirs with Frictional Natural Fractures. Energies 2018,
11, 3035. https://doi.org/10.3390/en11113035
AMA Style
Wang D, Shi F, Yu B, Sun D, Li X, Han D, Tan Y. A Numerical Study on the Diversion Mechanisms of Fracture Networks in Tight Reservoirs with Frictional Natural Fractures. Energies. 2018; 11(11):3035.
Chicago/Turabian Style
Wang, Daobing, Fang Shi, Bo Yu, Dongliang Sun, Xiuhui Li, Dongxu Han, and Yanxin Tan. 2018. "A Numerical Study on the Diversion Mechanisms of Fracture Networks in Tight Reservoirs with Frictional
Natural Fractures" Energies 11, no. 11: 3035. https://doi.org/10.3390/en11113035
Note that from the first issue of 2016, this journal uses article numbers instead of page numbers. See further details
Article Metrics
|
{"url":"https://www.mdpi.com/1996-1073/11/11/3035","timestamp":"2024-11-04T00:55:41Z","content_type":"text/html","content_length":"570576","record_id":"<urn:uuid:b755ee47-cbbf-4485-a87f-52aeab5e4479>","cc-path":"CC-MAIN-2024-46/segments/1730477027809.13/warc/CC-MAIN-20241104003052-20241104033052-00232.warc.gz"}
|
Hash table
Hash table is a widely used data structure, which serves for effective storing of key-value pairs. Hash table combines benefits of index based search with asymptotic complexity and list traversal
with low memory requirements.
Key-value storages
Suppose that we want to store values and later look up for them using some preassigned key. One option might be to use integer as a key and store the element into an array on a position corresponding
the the key. Using this approach, the asymptotic complexity of the look up will be evidently constant random access. On the other hand, the memory consumption will be immense.
Let's store using this approach opinions of respondents about colors (the color in RGB is used as a key/index). Hence, we will create an array of size
We can also use opposite approach – use list to store the values and search sequentially in it. Although it completely eliminates the memory consumption issue (because there are no unused keys),
there is a big drawback in the time complexity of the element retrieval, because each search needs
For example, if we create a (global) survey with millions of respondents, then there will be no efficient way to search the data and find opinions about particular colors.
Binary search tree
Binary search tree can be considered as the golden mean, because in the binary search tree, similarly to the list, do not exist any unused keys and the complexity of the search operation is only
Hash table
Hash table is a data structure, which is built upon an array of fixed length hash function. It also guarantees that the search operation will take in average case
Hash function
The hash function is a function with following properties:
• Consistently returns for the same key the same address.
• It does not guarantee that it will return distinct addresses for distinct keys.
• Uses the whole address space with the same probability.
• The computation of the address is fast.
For example, a very popular implementation of the hash function is based on combination of modular multiplication and addition (
As it was already stated, the hash function does not guarantee assigning of different addresses for different keys. The situation, when the same address is assigned to more objects, is called a
collision. There are several ways, how to deal with this situation.
Separate chaining
The most straightforward approach is to store the objects in a linked list (each linked list represents one address). So, when collision occurs, the colliding element is simply appended to the end of
the corresponding list. The drawback of this approach is evident – when the table is filled up, the performance of the table degrades linearly with the load factor.
Open addressing
Open addressing (closed hashing) does not use any helper lists to deal with the collisions, it stores the objects on a different address directly in the array. There are two basic procedures how to
do it – linear probing and double hashing.
Linear probing
The linear probing strategy at first computes the address of the given element. If the address is empty, then the object is stored there. If the address is occupied, then the procedure iterates over
the array, while some address is not empty and the element is stored there.
The schema for saving the element can be formalized using the following function (
Linear probing has a major weakness – clustering. The procedure of storing elements causes creation of clusters of elements, which have very similar address. These clusters have to be traversed
linearly when looking up some element. The degradation is even worse then with separate chaining, because each cluster may contain elements represented by several keys.
Removing the elements
When removing an elements, it is necessary to replace the element being removed with the sentinel. A sentinel is a special object, which is considered as an empty space, when a new element is being
added (i.e. the sentinel is replaced), and is iterated over, when some element is being looked up.
Alternatively it is possible to resave all remaining elements in the cluster. On the other side, it is not possible to shift the remaining elements one place (address) left, because we could lose
access to all other elements with other keys in the cluster (if we remove element
Double hashing
Double hashing eliminates the creation of clusters by using second hash function instead of the simple iteration. At first, the hashing function computes the initial position and if it is occupied,
than the second function is used to compute the shift. If the new position is also occupied, than shift is used again (until the element is saved).
Removing the elements
When removing some element, we must strictly use the sentinel – the double hashing strategy does not form clusters, hence it is not possible to resave the remaining elements.
Performance comparison of linear probing and double hashing
For comparing linear probing and double hashing we introduce two metrics – search hit and search miss. Search hit denotes number of operations performed by the algorithm in average case in order to
retrieve stored element with given key. Search miss denotes number of operations that must the algorithm perform in order to find out that the element is not stored in the table.
Search hit
Let Sedgewick calculate the number of operations performed by linear probing (lp) and double hashing (dh) for search hit as:
Number of operations needed to find a stored element (search hit)
Axis x: α, Axis y: number of operations
Search miss
For search miss similar formula holds:
Number of operations needed to find out that the table does not contain an element (search miss)
Axis x: α, Axis y: number of operations
Evaluation of the comparison
From the presented formulas and graphs it stems that the linear probing degrades significantly, when the table is
* Hash table
* @author Pavel Micka
* @param <KEY> type parameter of the key
* @param <VALUE> type parameter of the value
public class HashTable<KEY, VALUE> {
* Load factor, when reached, new (larger) table is allocated
private final float LOAD_FACTOR = 0.75f;
* Compress factor, when reached, the table is collapsed
private final float COLLAPSE_RATIO = 0.1f;
* Capacity, under which the table will be never collapsed
private final int INITIAL_CAPACITY;
* Number of elements stored
private int size = 0;
private Entry<KEY, VALUE>[] table;
* Constructs hash table with 10 capacity
public HashTable() {
this(10); //vychozi kapacita
* Constructs hash table
* @param initialCapacity initial capacity, the table will be never compressed bellow
public HashTable(int initialCapacity) {
if (initialCapacity <= 0) {
throw new IllegalArgumentException("Capacity must be a positive integer");
this.INITIAL_CAPACITY = initialCapacity;
this.table = new Entry[initialCapacity];
* Inserts the value into the table. If some entry with the same key exists, it will be replaced.
* @param key key
* @param value value
* @return null if value with the given key does not exist in the table, otherwise the replaced value
* @throws IllegalArgumentException if the key is null
public VALUE put(KEY key, VALUE value) {
if (key == null) {
throw new IllegalArgumentException("Key must not be null");
VALUE val = performPut(key, value);
if (val == null) {
return val;
* Removes element with the given key
* @param key key
* @return null if element with the given key is not stored in the table, otherwise the removed element
public VALUE remove(KEY key) {
Entry<KEY, VALUE> e = getEntry(key);
if (e == null) { //element does not exist
return null;
VALUE val = e.value;
e.key = null; //element is now a sentinel
e.value = null; //remove the reference, so the GC may do its work
return val; //return the value removed
* Retrieves the value associated with the given key
* @param key key
* @return value, null if the there is no value stored with the given key
public VALUE get(KEY key) {
Entry<KEY, VALUE> e = getEntry(key);
if (e == null) {
return null;
return e.value;
* Query, if there is a value associated with the given key
* @param key key
* @return true, if there is a value associated with the given key, false otherwise
public boolean contains(KEY key) {
return getEntry(key) != null;
* Returns number of stored elements
* @return number of stored elements
public int size() {
return size;
* Returns collection of all stored values
* @return collection of all stored values (order is not guaranteed)
public Collection<VALUE> values() {
List<VALUE> values = new ArrayList<VALUE>(size);
for (int i = 0; i < table.length; i++) {
if (table[i] != null && table[i].key != null) {
return values;
* Returns collection of all keys
* @return collection of all keys (order is not guaranteed)
public Collection<KEY> keys() {
List<KEY> keys = new ArrayList<KEY>(size);
for (int i = 0; i < table.length; i++) {
if (table[i] != null && table[i].key != null) {
return keys;
* Returns entry corresponding to the given key
* @return the entry
private Entry getEntry(KEY key) {
int index = key.hashCode() % table.length;
//until we reach the empty space
while (table[index] != null) {
if (key.equals(table[index].key)) { //entry exists
return table[index];
index = (index + 1) % table.length; //continue with the next address
return null; //not found
* Performs the put operation itself
* If entry with the given key is already present, it will be replaced
* @param key key
* @param value value
* @return null if the entry was only added, otherwise value of the replaced entry
private VALUE performPut(KEY key, VALUE value) {
Entry<KEY, VALUE> e = getEntry(key);
if (e != null) {
//entry is in the table
VALUE val = e.value;
e.value = value; //swap their values
return val;
int index = key.hashCode() % table.length;
while (table[index] != null && table[index].key != null) { //until we reach an empty space or a sentinel
index = (index + 1) % table.length; //shift to the next address
if (table[index] == null) { //empty space
table[index] = new Entry<KEY, VALUE>();
table[index].key = key;
table[index].value = value;
return null;
* Calculates the expected size of the table
* @return expected size of the table
private int calculateRequiredTableSize() {
if (this.size() / (double) table.length >= LOAD_FACTOR) { //table is overfull
return table.length * 2;
} else if (this.size() / (double) table.length <= COLLAPSE_RATIO) { //table should be collapsed
return Math.max(this.INITIAL_CAPACITY, table.length / 2);
} else {
return table.length; //tabulka has a correct size
* Changes the size of the table, if necessary
private void resize() {
int requiredTableSize = calculateRequiredTableSize();
if (requiredTableSize != table.length) { //if the table size should be changed
Entry<KEY, VALUE>[] oldTable = table;
table = new Entry[requiredTableSize]; //create new table
for (int i = 0; i < oldTable.length; i++) {
if (oldTable[i] != null && oldTable[i].key != null) {
this.performPut(oldTable[i].key, oldTable[i].value); //reinsert the values
* Inner class representing the entries
private class Entry<KEY, VALUE> {
* Key, null == entry is a sentinel
private KEY key;
* Value
private VALUE value;
• SEDGEWICK, Robert. Algorithms in Java: Parts 1-4. Third Edition. [s.l.] : [s.n.], July 23, 2002. 768 p. ISBN 0-201-36120-5.
|
{"url":"https://www.programming-algorithms.net/article/50101/Hash-table","timestamp":"2024-11-09T14:18:26Z","content_type":"text/html","content_length":"37019","record_id":"<urn:uuid:bb571c79-a4ac-43c9-8c1d-1a5c1b560a73>","cc-path":"CC-MAIN-2024-46/segments/1730477028118.93/warc/CC-MAIN-20241109120425-20241109150425-00653.warc.gz"}
|
Multidimensional Hyperbolic Problems And Computations [PDF] [41hp82dueld0]
E-Book Overview
This IMA Volume in Mathematics and its Applications MULTIDIMENSIONAL HYPERBOLIC PROBLEMS AND COMPUTATIONS is based on the proceedings of a workshop which was an integral part ofthe 1988-89 IMA
program on NONLINEAR WAVES. We are grateful to the Scientific Commit tee: James Glimm, Daniel Joseph, Barbara Keyfitz, Andrew Majda, Alan Newell, Peter Olver, David Sattinger and David Schaeffer for
planning and implementing an exciting and stimulating year-long program. We especially thank the Work shop Organizers, Andrew Majda and James Glimm, for bringing together many of the major figures
in a variety of research fields connected with multidimensional hyperbolic problems. A vner Friedman Willard Miller PREFACE A primary goal of the IMA workshop on Multidimensional Hyperbolic Problems
and Computations from April 3-14, 1989 was to emphasize the interdisciplinary nature of contemporary research in this field involving the combination of ideas from the theory of nonlinear partial
differential equations, asymptotic methods, numerical computation, and experiments. The twenty-six papers in this volume span a wide cross-section of this research including some papers on the
kinetic theory of gases and vortex sheets for incompressible flow in addition to many papers on systems of hyperbolic conservation laws. This volume includes several papers on asymptotic methods such
as nonlinear geometric optics, a number of articles applying numerical algorithms such as higher order Godunov methods and front tracking to physical problems along with comparison to experimental
data, and also several interesting papers on the rigorous mathematical theory of shock waves.
E-Book Content
The IMA Volumes in Mathematics and Its Applications Volume 29 Series Editors Avner Friedman Willard Miller, Jr.
Institute for Mathematics and its Applications IMA The Institute for Mathematics and its Applications was established by a grant from the National Science Foundation to the University of Minnesota in
1982. The IMA seeks to encourage the development and study of fresh mathematical concepts and questions of concern to the other sciences by bringing together mathematicians and scientists from
diverse fields in an atmosphere that will stimulate discussion and collaboration. The IMA Volumes are intended to involve the broader scientific community in this process. Avner Friedman, Director
Willard Miller, Jr., Associate Director
********** IMA PROGRAMS 1982-1983 1983-1984
Statistical and Continuum Approaches to Phase Transition Mathematical Models for the Economics of
1984-1985 1985-1986 1986-1987 1987-1988 1988-1989 1989-1990 1990-1991
Decentralized Resource Allocation Continuum Physics and Partial Differential Equations Stochastic Differential Equations and Their Applications Scientific Computation Applied Combinatorics Nonlinear
Waves Dynamical Systems and Their Applications Phase Transitions and Free Boundaries
********** SPRINGER LECTURE NOTES FROM THE IMA:
Tbe Mathematics and Pbysics of Disordered Media Editors: Barry Hughes and Barry Ninham (Lecture Notes in Math., Volume 1035, 1983) Orienting Polymers Editor: J.L. Ericksen (Lecture Notes in Math.,
Volume 1063, 1984)
New Perspectives in Tbermodynamics Edi tor: James Serrin (Springer-Verlag, 1986)
Models of Economic Dynamics Editor: Hugo Sonnenschein (Lecture Notes in Econ., Volume 264, 1986)
James Glimm
Andrew J. Majda
Multidimensional Hyperbolic Problems and Computations With 86 Illustrations
Springer-Verlag New York Berlin Heidelberg London Paris Tokyo Hong Kong Barcelona
James Glimm Department of Applied Mathematics and Statistics SUNY at Stony Brook Stony Brook, NY 11794-3600
Andrew J. Majda Department of Mathematics and Program in Applied and Computational Mathematics Princeton University Princeton, NJ 08544
Series Editors Avner Friedman Willard Miller, Jr. Institute for Mathematics and its Applications University of Minnesota Minneapolis, Minnesota 55455 USA Mathematics Subject Classification: 35, 69,
76, 80.
Printed on acid-free paper.
© 1991 Springer-Verlag New York Inc. Softcover reprint of the hardcover 1st edition 1991 All rights reserved. This work may not be translated or copied in whole or in part without the written
permission of the publisher (Springer-Verlag New York, Inc., 175 Fifth Avenue, New York, NY 10010, USA), except for brief excerpts in connection with reviews or scholarly analysis. Use in connection
with any form of information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed is forbidden. The use of general
descriptive names, trade names, trademarks, etc., in this publication, even if the former are not especially identified, is not to be taken as a sign that such names, as understood by the Trade Marks
and Merchandise Marks Act, may accordingly be used freely by anyone. Permission to photocopy for internal or personal use, or the internal or personal use of specific clients, is granted by
Springer-Verlag New York, Inc. for libraries registered with the Copyright Clearance Center (CCC), provided that the base fee of $0.00 per copy, plus $0.20 per page is paid directly to CCC, 21
Congress St., Salem, MA 01970, USA. Special requests should be addressed directly to Springer-Verlag New York, 175 Fifth Avenue, New York, NY 10010, USA. ISBN-13 978-1-4613-9123-4 DOL 10.\007/
e-1SBN-13 978-1-4613-9121-0
Camera-ready copy prepared by the IMA. 987654321
Ron DiPerna (1947 - 1989) Ron DiPerna was a uniquely talented mathematician. He was a leading researcher of his generation in the mathematical theory of systems of hyperbolic conservation laws,
incompressible flow, and the kinetic theory of gases. Ron DiPerna died tragically in January 1989 after a courageous struggle with cancer. His work and its impact was known to virtually all of the
several hundred participants in this meeting. For many of us, he was a warm and loyal friend with a sharp wit and keen sense of humor. He left us too soon and at the height of his creative power.
The IMA Volumes in Mathematics and its Applications Current Volumes: Volume 1: Homogenization and Effective Moduli of Materials and Media
Editors: Jerry Ericksen, David Kinderlehrer, Robert Kohn, J.-L. Lions Volume 2: Oscillation Theory, Computation, and Methods of Compensated Compactness
Editors: Constantine Dafermos, Jerry Ericksen, David Kinderlehrer, Marshall Slemrod Volume 3: Metastability and Incpmpletely Posed Problems
Editors: Stuart Antman, Jerry Ericksen, David Kinderlehrer, Ingo Muller Volume 4: Dynamical Problems in Continuum Physics
Editors: Jerry Bona, Constantine Dafermos, Jerry Ericksen, David Kinderlehrer Volume 5: Theory and Applications of Liquid Crystals
Editors: Jerry Ericksen and David Kinderlehrer Volume 6: Amorphous Polymers and Non-Newtonian Fluids
Editors: Constantine Dafermos, Jerry Ericksen, David Kinderlehrer Volume 7: Random Media
Editor: George Papanicolaou Volume 8: Percolation Theory and Ergodic Theory of Infinite Particle Systems
Editor: Harry Kesten Volume 9: Hydrodynamic Behavior and Interacting Particle Systems
Editor: George Papanicolaou Volume 10: Stochastic Differential Systems, Stochastic Control Theory and Applications
Editors: Wendell Fleming and Pierre-Louis Lions Volume 11: Numerical Simulation in Oil Recovery
Editor: Mary Fanett Wheeler Volume 12: Computational Fluid Dynamics and Reacting Gas Flows
Editors: Bjorn Engquist, M. Luskin, Andrew Majda
Volume 13: Numerical Algorithms for Parallel Computer Architectures Editor: Martin H. Schultz Volume 14: Mathematical Aspects of Scientific Software Editor: J.R. Rice Volume 15: Mathematical
Frontiers in Computational Chemical Physics Editor: D. Truhlar Volume 16: Mathematics in Industrial Problems by A vner Friedman Volume 17: Applications of Combinatorics and Graph Theory to the
Biological and Social Sciences Editor: Fred Roberts Volume 18: q-Series and Partitions Editor: Dennis Stanton Volume 19: Invariant Theory and Tableaux Editor: Dennis Stanton Volume 20: Coding Theory
and Design Theory Part I: Coding Theory Editor: Dijen Ray-Chaudhuri Volume 21: Coding Theory and Design Theory Part II: Design Theory Editor: Dijen Ray-Chaudhuri Volume 22: Signal Processing: Part I
- Signal Processing Theory Editors: L. Auslander, F.A. Griinbaum, J.W. Helton, T. Kailath, P. Khargonekar and S. Mitter Volume 23: Signal Processing: Part II - Control Theory and Applications of
Signal Processing Editors: L. Auslander, F.A. Griinbaum, J.W. Helton, T. Kailath, P. Khargonekar and S. Mitter Volume 24: Mathematics in Industrial Problems, Part 2 by A vner Friedman Volume 25:
Solitons in Physics, Mathematics, and Nonlinear Optics Editors: Peter J. Olver and David H. Sattinger
Volume 26: Two Phase Flows and Waves Editors: Daniel D. Joseph and David G. Schaeffer Volume 27: Nonlinear Evolution Equations that Change Type Editors: Barbara Lee Keyfitz and Michael Shearer Volume
28: Computer Aided Proofs in Analysis Editors: Kenneth R. Meyer and Dieter S. Schmidt Volume 29: Multidimensional Hyperbolic Problems and Computations Editors: James Glimm and Andrew Majda Volume 31:
Mathematics in Industrial Problems, Part 3 by A vner Friedman Forthcoming Volumes:
1988-1989: Nonlinear Waves Microlocal Analysis and Nonlinear Waves Summer Program 1989: Robustness, Diagnostics, Computing and Graphics in Statistics Robustness, Diagnostics in Statistics (2 Volumes)
Computing and Graphics in Statistics
1989-1990: Dynamical Systems and Their Applications An Introduction to Dynamical Systems Patterns and Dynamics in Reactive Media Dynamical Issues in Combustion Theory Twist Mappings and Their
Applications Dynamical Theories of Turbulence in Fluid Flows Nonlinear Phenomena in Atmospheric and Oceanic Sciences Chaotic Processes in the Geological Sciences Summer Program 1990: Radar/Sonar
Radar/Sonar (lor 2 volumes) Summer Program 1990: Time Series in Time Series Analysis Time Series (2 volumes)
This IMA Volume in Mathematics and its Applications
is based on the proceedings of a workshop which was an integral part ofthe 1988-89 IMA program on NONLINEAR WAVES. We are grateful to the Scientific Committee: James Glimm, Daniel Joseph, Barbara
Keyfitz, Andrew Majda, Alan Newell, Peter Olver, David Sattinger and David Schaeffer for planning and implementing an exciting and stimulating year-long program. We especially thank the Workshop
Organizers, Andrew Majda and James Glimm, for bringing together many of the major figures in a variety of research fields connected with multidimensional hyperbolic problems.
Avner Friedman Willard Miller
A primary goal of the IMA workshop on Multidimensional Hyperbolic Problems and Computations from April 3-14, 1989 was to emphasize the interdisciplinary nature of contemporary research in this field
involving the combination of ideas from the theory of nonlinear partial differential equations, asymptotic methods, numerical computation, and experiments. The twenty-six papers in this volume span a
wide cross-section of this research including some papers on the kinetic theory of gases and vortex sheets for incompressible flow in addition to many papers on systems of hyperbolic conservation
laws. This volume includes several papers on asymptotic methods such as nonlinear geometric optics, a number of articles applying numerical algorithms such as higher order Godunov methods and
fronttracking to physical problems along with comparison to experimental data, and also several interesting papers on the rigorous mathematical theory of shock waves. In addition, there are at least
two papers in this volume devoted to open problems with this interdisciplinary emphasis. The organizers would like to thank the staff of the IMA for their help with the details of the meeting and
also for the preparation of this volume. We are especially grateful to Avner Friedman and Willard Miller for their help with the organization of the special day of the meeting in memory of Ron
Diperna, Tuesday, April 4, on short notice.
James Glimm Andrew J. Majda
Macroscopic limits of kinetic equations •........................ Claude Bardo3, Francoi3 Gol3e and David Levermore
The essence of particle simulation of the Boltzmann equation ..................................... H. Babov3ky and R. Illner The approximation of weak solutions to the 2-D Euler equations by vortex
elements J. Thoma3 Beale Limit behavior of approximate solutions to conservation laws Chen Gui- Qiang Modeling two-phase flow of reactive granular materials
.............................................. Pedro F. Embid and Melvin R. Baer Shocks associated with rotational modes Heinrich Freistiihler Self-similar shock reflection in two space dimensions
........................................... Harland M. Glaz Nonlinear waves: overview and problems Jame3 Glimm The growth and interaction of bubbles in Rayleigh-Taylor unstable interfaces........
....... ............ Jame3 Glimm, Xiao Lin Li, Ralph Menikoff, David H. Sharp and Qiang Zhang
Front tracking, oil reservoirs, engineering scale problems and mass conservation ................................ Jame3 Glimm, Brent Lindqui3t and Qiang Zhang
Collisionless solutions to the four velocity Broadwell equations ............................................ J.M. Greenberg and Cleve Moler
Anomalous reflection of a shock wave at a fluid interface ................................................ John W. Grove and Ralph Menikoff
An application of connection matrix to magnetohydrodynamic shock profiles ........................... Harumi Hattori and Kon3tantin Mischaikow
Convection of discontinuities in solutions of the Navier-Stokes equations for compressible flow David Hoff
Nonlinear geometrical optics John K. Hunter
Geometric theory of shock waves ............................... Tai-Ping Liu
An introduction to front tracking ............................... Christian Klingenberg and Bradley Plohr
One perspective on open problems in multi-dimensional conservation laws ............................................... Andrew J. Majda
Stability of multi-dimensional weak shocks Guy Metivier
Nonlinear stability in non-newtonian flows ...................... J.A. Nohel, R.L. Pego and A.E. Tzavaras
A numerical study of shock wave refraction at a C02/CH 4 interface Elbridge Gerry Puckett
An introduction to weakly nonlinear geometrical optics .............................................. Rodolfo R. Rosales
Numerical study of initiation and propagation of one-dimensional detonations Victor Roytburd
Richness and the classification of quasilinear hyperbolic systems ............................................. Denis Serre A case of singularity formation in vortex sheet motion studied by a
spectrally accurate method M.J. Shelley The Goursat-Riemann problem for plane waves in isotropic elastic solids with velocity boundary conditions T. C. T. Ting and Tankin Wang
MACROSCOPIC LIMITS OF KINETIC EQUATIONS CLAUDE BARDOS, FRANQOIS GOLSE* AND DAVID LEVERMOREt Abstract. The connection between kinetic theory and the macroscopic equations of fluid dynamics is
described. In particular, our results concerning the incompressible Navier-Stokes equation are compared with the classical derivation of Hilbert and Chapman-Enskog. Some indications of the validity
of these limits are given. More specifically, the connection between the DiPerna-Lions renormalized solution for the Boltzmann equation and the Leray-Hopf solution for the N avier-Stokes equation is
I. Introduction. This paper is devoted to the connection between kinetic theory and macroscopic fluid dynamics. Formal limits are systematically derived and, in some cases, rigorous results are given
concerning the validity of these limits. To do that several scalings are introduced for standard kinetic equations of the form (1) Here F.( t, x, v) is a nonnegative function representing the density
of particles with position x and velocity v in the single particle phase space R; x R! at time t. The interaction of particles through collisions is modelled by the operator C( F); this operator acts
only on the variable v and is generally nonlinear. In section V the classical Boltzmann form of the operator will be considered. The connection between kinetic and macroscopic fluid dynamics results
from two types of properties of the collision operator: (i) conservation properties and an entropy relation that implies that the equilibria are Maxwellian distributions for the zeroth order limit;
(ii) the derivative of C(F) satisfies a formal Fredholm alternative with a kernel related to the conservation properties of (i). The macroscopic limit is obtained when the fluid becomes dense enough
that particles undergo many collisions over the scales of interest. This situation is described by the introduction of a small parameter E, called the Knudsen number, that represents the ratio of the
mean free path of particles between collisions to some characteristic length of the flow (e.g. the size of an obstacle). Properties (i) are sufficient to derive the compressible Euler equations from
equation (1); they arise as the leading order dynamics from a formal expansion of F in E (the Chapman-Enskog or Hilbert expansion described briefly in section III). Properties (ii) are used to obtain
the N avier-Stokes equations; they depend on a more detailed knowlege of the collision operator. The compressible Navier-Stokes equations arise as corrections to those of Euler at the next order in
the Chapman-Enskog expansion. In a compressible fluid one also introduces the Mach Number Ma which is the ratio of the bulk velocity to the sound speed and the Reynolds number He which is
*Departement de Mathematiques, Universite Paris VII , 75251 Paris Cedex 05, France tDepartment of Mathematics, University of Arizona, Tucson, Arizona 85721, USA
a dimensionless reciprocal viscosity of the fluid. These numbers (cf.[LL] and [BGL]) are related by the formula
(2) Our main contribution concerns the incompressible limit; due to relation (2) it is the only case where one obtains, when € goes to zero, an equation with a finite Reynolds number. This is the
only regime where global weak solutions of fluid dynamic equations are known to exist. Related results have been obtained simultaneously by A. De Masi, R. Esposito and J.L. Lebowitz [DMEL]. Our
considerations on the relation between the renormalized solution of the Boltzmann equation and the eray [L] solution of the Navier-Stokes equations rely on the pioneering work of DiPerna and Lions
[DiPLJ, giving one more example of the importance of Ron DiPerna's influence in our community. II. The Compressible Euler Limit. In this section the integral of any scalar or vector valued function f
(v) with respect to the variable v will be denote by (f);
(f) =
f( v) dv .
The operator C is assumed to satisfy the conservation properties
= 0,
= 0,
= o.
These relations represent the physical laws of mass, momentum and energy conservation during collisions and imply the local conservation laws
+ V'x· (vF) = 0,
(5) at Wvl2 F)
+ V'x· (v~ Ivl 2 F) = o.
Additionally, C(F) is assumed to have the property that the quantity (C(F)logF) is nonpositive. This is the entropy dissipation rate and implies the local entropy inequality
at (F log F)
+ V'x· (vFlog F) = (C(F) log F) :::; o.
Finally, the equilibria of C(F) are assumed to be characterized by the vanishing of the entropy dissipation rate and given by the class of Maxwellian distributions, i.e. those of the form
= (27rO)3/2 exp
Iv -
More precisely, for every nonnegative measurable function F the following properties are equivalent:
(C( F) log F} = 0 ,
= 0,
F is a Maxwellian with the form (7).
These assumptions about C(F) merely abstract some of the consequences of Boltzmann's celebrated H-theorem. The parameters p, u and (} introduced in the right side of (7) are related to the fluid
dynamic moments giving the mass, momentum and energy densities:
{F} = p,
= pu,
They are called respectively the (mass) density, velocity and temperature of the fluid. In the compressible Euler limit, these variables are shown to satisfy the system of compressible Euler
equations (11 below). The main obstruction to proving the validity of this fluid dynamical limit is the fact that solutions of the compressible Euler equations generally become singular after a
finite time (cf. Sideris [S]). Therefore any global (in time) convergence proof cannot rely on uniform regularity estimates. The only reasonable assumptions would be that the limiting distribution
exists and that the relevant moments converge pointwise. With this hypothesis, it is shown that the above assumptions regarding C(F) imply that the fluid dynamic moments of solutions converge to a
solution of the Euler equations that satisfies the macroscopic entropy inequality. THEOREM I. Given a collision operator C with properties (i), let F.(t,x,v) be a sequence of nonnegative solutions of
the equation
(9) such that, as E goes to zero, F. converges almost everywhere to a nonnegative function F. Moreover, assume that the moments
{F.} ,
{vF.} ,
{v0 v F.},
converge in the sense of distributions to the corresponding moments
{F} ,
{vF} ,
{v0 v F},
the entropy densities and fluxes converge in the sense of distributions according to
lim{F. log F.}
= {F logF} ,
lim{vF. logF,}
= {vF logF};
while the entropy dissipation rates satisfy
limsup{C(F.) logF,} ::; (C(F) logF} . • -0
Then the limit F( t, x, v) is a Maxwellian distribution,
= (27r0(t,x»3/2
exp -2"
Iv -
u(t,x)l2) O(t,x) ,
where the functions p, u and 0 solve the compressible Euler equations
= 0,
OtP+ V.,·(pu)
Ot(pu) + V.,·(pu 0 u) + V.,(pO) = 0,
and satisfy the entropy inequality
(12) Remark 1. The above theorem shows that any type of equation of the form (9) leads to the compressible Euler equations with a pressure p given by the ideal gas law p = pO and an internal energy
of pO (corresponding to a -law perfect gas). This is a consequence of the fact that the kinetic equation considered here describes a monoatomic fluid in a three dimensional domain. Other equations of
state may be obtained by introducing additional degrees of freedom that take into account the rotational and vibrational modes of the particles.
The proof of this theorem, as well as those of subsequent ones, can be found in our paper [BGL2J.
III. The Compressible Navier-Stokes Limit. As has been noticed above, the form of the limiting Euler equation is independent of the choice of the collision operator C within the class of operators
satisfying the conservation and the entropy properties. The choice of the collision operator appears at the macroscopic level only in the construction of the Navier-Stokes limit. The compressible
Navier-Stokes equations are obtained by the classical Chapman-Enskog expansion. To compare this approach with the situation leading to the incompressible Navier-Stokes equation, a short description
of this approach is given below. Given (p, u, 0), denote the corresponding Maxwellian distribution by
= (27r0)3/2 ..exp
Iv -
u12) .
The subscript (p, u, 0) will often be omitted when it is convenient. Introduce the Hilbert space L;' defined by the scalar product
(flg)M = (fg)M =
5 Denote by Land Q the first two Frechet derivatives ofthe operator G at G = 1:
= MDC(M).(Mg) ,
1 = MD2C(M):(Mg V Mg).
Taylor's formula then gives
(16) The linear operator L is assumed to be self-adjoint and to satisfy a Fredholm alternative in the space L~ with a five dimensional kernel spanned by the functions {I, VI, V2, V3, IvI2}. The fact
that L must be nonpositive definite follows directly from examining the second variation of the entropy dissipation rate at M. Denote by V, A
= {Ai}
and B
= {Bij}
the following vectors and tensors:
By symmetry, the functions Ai and Bij are orthogonal to the kernel of L; therefore the equations
L(A') = A,
L(B') = B,
have unique solutions in Ker( L).L. Assume (this would be a consequence of rotational invariance for the collision operator) that these solutions are given by the formulas
= -a(p, lI, IV I) A(V),
= -i3(p,lI,IVI)B(V),
where a and fJ are positive functions depending on p, II and IVI. If C(F) homogeneous of degree two (for example, quadratic) then a simple scaling shows that the p dependence of a and i3 is just
proportionality to p-I. A function H.(t,x,v) is said to be an approximate solution of order p to the kinetic equation (1) if
(19) where O( fP) denotes a term bounded by fP in some convenient norm. An approximate solution of order two can be constructed in the form
(20) where (P., u., lI.) solve the compressible Navier-Stokes equations with dissipation of the order f (denoted C N SE. ):
(21) tp, (8t
+ u,· \7", )B, + p,B, \7", ·U, = f!/-l'O"( U,) :0"(U,) + f\7", .['"
In these equations 0"( u) denotes the strain-rate tensor given by
while the viscosity /-l, defined by the relations
/-l(p" B,) and the thermal diffusivity '"
"(p" B,) are
(22) /-l(p,B)
= ioB(,8(p,B,IVI)IB(V)12)M = fgpB ~ foo .8(p,B,r)r6 e-!r 2 dr,
v 27r 10
1 "(p, B) = tB(a(p, B, IVI)IA(VW)M = iPB ro= v 27r
a(p, B, r )(r2 - 5?r 4 e-"21 r2 dr.
Notice that in the case where C(F) is homogeneous of degree two, the left sides become independent of Pi this is why classical expressions for the viscosity and thermal diffusivity depend only on B.
The Chapman-Enskog derivation can be formulated according to the following. THEOREM II. Assume that (p" u" B,) solve the CN SE, with the viscosity /-l(p, B) and thermal diifusivity "(p, B) given by
(22). Then there exist g, and w, in Ker(L).l. such that H" given by (20), is an approximate solution of order two to equation (1). Moreover, g, is given by the formula
(23) Remark 2. Let F, be a solution of the kinetic equation that coincides with a local Maxwellian at t = O. Let (p" u" B,) be the solution of the CN SE, with initial data equal to the corresponding
moments of F,(O,x,v). Then the expression given by 2 1 ( 1 Iv - u,(t,x)1 2 ) H'=(27rB,(t,x»3f2 exp -"2 B,(t,x) (p,(t,X)+fg,+fW,) is an approximation of order two of F,. Since M,g, is orthogonal to
the functions 1, v, Iv1 2, the quantities p" p,u, and p,( lu,12+tB,) provide approximations of order two to the corresponding moments of F,. In fact, this observation was used to do the
Chapman-Enskog derivation by the so called projection method (cf. Caflisch
[C]). IV. The Incompressible Navier-Stokes Limit. The purpose of this section is to construct a connection between the kinetic equation and the incompressible Navier-Stokes equations. As in the
previous section, this will describe the range of parameters for which the incompressible Navier-Stokes equations provide a good approximation to the solution of the Boltzmann equation. However in
this case the connection is drawn between the Boltzmann equation and macroscopic fluid
7 dynamic equations with a finite Reynolds number. It is clear from formula (2), E = Mal Be, that in order to obtain a fluid dynamic regime (corresponding to a vanishing Knudsen number) with a finite
Reynolds number, the Mach number must vanish (cf. [LL] or [BGL1]). In order to realize distributions with a small Mach number it is natural to consider them as perturbations about a given absolute
Maxwellian (constant in space and time). By the proper choice of Galilean frame and dimensional units this absolute Maxwellian can be taken to have velocity equal to D, and density and temperature
equal to 1; it will be denoted by M. The initial data F,(D, x, v) is assumed to be close to M where the order of the distance will be measured with the Knudsen number. Furthermore, if the flow is to
be incompressible, the kinetic energy of the flow in the acoustic modes must be smaller than that in the rotational modes. Since the acoustic modes vary on a faster timescale than rotational modes,
they may be suppressed by assuming that the initial data is consistent with motion on a slow timescale; this scale separation will also be measured with the Knudsen number. This scaling is quantified
by the introduction of a small parameter E such that the timescale considered is of order E- 1 , the Knudsen number is of order Eg, and the distance to the absolute Maxwellian 1.1 is of order Er with
q and r being greater or equal to one. Thus, solutions F, to the equation
(24) are sought in the form
(25) The basic case r = q = 1 is the unique scaling compatible with the usual incompressible Navier-Stokes equations. The notation introduced in the previous section regarding the collision operator
and its Fn~chet derivatives is conserved but here the Maxwellian M is absolute so that Land Q no longer depend on the fluid variables. THEOREM III. Let F,(t,x,v) be a sequence of nonnegative
solutions to the scaled kinetic equation (24) SUell that, when it is written according to formula (25), the sequence g, converges in the sense of distributions and almost everywhere to a function 9
as E goes to zero. Furthermore, assume the moments
(L- 1 (A(v))g')M'
(L- 1 (A(v)) Q9 Vg')M'
(L-l(A(v)) Q(g"g'))M'
(L- 1 (B(v))g')M'
(L- 1 (B(v)) Q9Vg')M'
(L- 1 (B(v»Q(g"g'))M
converge in D' (Rt x R~) to the corresponding moments
(L-l(A( V)) Q(g, g)}M ,
Vg}M ,
(L-l(B(v)) Q(g, g)}M .
Then the limiting 9 has the form
(26) where the velocity u is divergence free and the density and temperature fluctuations, P and 9, satisfy the Boussinesq relation (27) Moreover, the functions p, u and 9 are weak solutions of the
(29) (30)
+ u·\7xu + \7xP = 0,
+ u·\7x9 = 0,
= 1, q > 1;
(31) In these equations the expressions ft* and "'* denote the function values ft( 1, 1) and ",(1,1) obtained from (22) in the previous section.
Remark 3. The equation (31) is completely trivial; it corresponds to a situation where the initial fluctuations and the Knudsen number are too small to produce any evolution over the timescale
selected. However, this limit would be nontrivial if it corresponded to a timescale on which an external potential force acts on the system (Bardos, Golse, Levermore [BGLl]). Remark 4. The second
equations of (27), (28) and (29) that describe the evolution of the temperature do not contain a viscous heating term ft*u( u): u( u) such as appears in the CNSE,:
tp, (at + U,· \7x)O, + p,o, \7x ·U, =
Etft,U( u,): u( u,)
+ E\7x .[""
This is consistent with the scaling used here when it is applied directly to the CN SE, to derive the incompressible Navier-Stokes equations. More precisely with the change of variable (33)
= Po + Ep(t,:'), E
U,=EU(t,-), E
O,=Oo+dJ(t,-), E
the system (27), (28) is obtained for p( t, x), u( t, x) and 8( t, x) as € vanishes. In this derivation every term of the last equation of (62) is of the order €2 except tfl.U(U.) : u(u.), which is
of order three. The viscous heating term would have appeared in the limiting temperature equation had the scaling in (33) been chosen with the density and temperature fluctuations of order €2 [BLP].
Remark 5. In the case where q = r = 1 a system is obtained that has some structure in common with a diffusion approximation. A formal expansion for g., the solution of the equation
(34) can be constructed in the form
(35) This approach is related to the method of the previous section and to the work of De Masi, Esposito and Lebowitz.
Remarks Concerning the Proof of the Fluid Dynamical Limit. In
this section the collision operator is given by the classical Boltzmann formula,
CB(F) =
(F(vDF(v' ) - F(vI)F(v))b(vI - v,w)dwdvI,
R3 X S2
where w ranges over the unit sphere, VI over the three dimensional velocity space and b(vI - v,w) is a smooth function; v' and v~ are given in term of v, VI and w by the classical relations of
conservation of mass, momentum and energy (cf. [CC]) . Any proof concerning the fluid dynamical limit for a kinetic model will, as a by-product, give an existence proof for the corresponding
macroscopic equation. However, up to now no new result has been obtained by this type of method. Uniform regularity estimates would likely be needed in order to obtain the limit of the nonlinear
term. These estimates, if they exist, must be sharp because it is known (and is proved by Sideris [S] for a very general situation) that the solutions of the compressible nonlinear Euler equations
become singular after a finite time. In agreement with these observations and in the absence of boundary layers (full space or periodic domain), the following theorems are proved: i) Existence and
uniqueness of the solution to the C N SE. for a finite time that depends on the size of the initial data, provided the initial data is smooth enough (say in HS with s > 3/2). This time of existence
is independent of € and when € goes to zero the solution converges to a solution of the compressible Euler equations. ii) Global (in time) existence of a smooth solution (cf. [KMN]) to the CNSE.
provided the initial data is small enough with respect to €. These two points have their counterparts at the level of the Boltzmann equation: i) Existence and uniqueness (under stringent smallness
assumptions) during a finite time independent of the Knudsen number, as proved by Nishida [N] (cf. also
Caflisch [CD. When the Knudsen number goes to zero this solution converges to a local thermodynamic equilibrium solution governed by the compressible Euler equations. ii) Global existence for the
solution to the Boltzmann equation provided the initial data is small enough with respect to the Knudsen number. Concerning a proof of existence, the situation for the incompressible Euler equations
in three space variables is similar; their solution (defined during a finite time) is the limit of a sequence of corresponding incompressible Navier-Stokes solutions with viscosities of the order of
f that remains uniformly smooth over a time interval that is independent of f. However, there are two other types of results concerning weak solutions. First, the global existence of weak solutions
to the incompressible Navier-Stokes equations has been proved by Leray [L]. Second, using a method with many similarities to Leray's, R. DiPerna and P.-L. Lions [DiPL] have proved the global
existence of a weak solution to a class of normalized Boltzmann equations, their so-called renormali zed solution. This solution exists without assumptions concerning the size of the initial data
with respect to the Knudsen number. Such a result also holds for the equation (37) over a periodic spatial domain T3. The situation concerning the convergence to fluid dynamical limits (with f going
to zero) of solutions of the Boltzmann equation (37) with initial data of the form
(38) continues to reflect this similarity. Following Nishida [N], it can be shown that for smooth initial data (indeed very smooth) the solution of (37) is smooth for a time on the order of f l - r .
For r = 1 this time turns out to be independent of f and during this time the solution converges (in the sense of the Theorem III) to the solution of the incompressible Euler equations when q > 1 or
to the solution of the incompressible Navier-Stokes equations when q = 1. . For r > 1 the solution is regular during a time that goes to infinity as f vanishes; in this situation it converges to the
solution of the linearized Navier-Stokes equations when q = 1 or to the solution of the linearized Euler equation when q > 1. The borderline consists of the case r = q = 1. In this case it is natural
to conjecture that the DiPerna-Lions renormalized solutions of the Boltzmann equation converge (for all time and with no restriction on the size of the initial data) to a Leray solution of the
incompressible Navier-Stokes equations. However, our proof of this result is incomplete without some additional compactness assumptions (cf. [BGL3]). Leray's proof relies on the energy estimate
For the Boltzmann equation the classical entropy estimate plays an analogous role in the proof of DiPerna and Lions. The entropy integrand can be modified by the addition of an arbitrary conserved
density; the form chosen here is well suited for comparing F, with the absolute Maxwellian M:
(F,(O) log
- F,(O)
+ M) dv dx,
where D, is the entropy dissipation term given by
bdwdvl dv. Here F: 1 , F:, F O},
is the acute angle between n and v - w,
=v -
n( n . (v - w))
w' = w + n(n. v - w)) are the post-collisional velocities associated with the (ingoing) collision configuration (v,w,n), and k(lv - wl,e) is the collision kernel (for hard spheres, k(lv - wl,B) = Iv
- wi· cos e). >. is proportional to the mean free path between collisions - for the rest of this article, we set >. = l. The integration in (2) is 5-dimensional, and this is the second reason why
particle simulation is a sensible way to solve the Boltzmann equation numerically - except in isotropic situations where au, f) could be evaluated with low-dimensional integrals, it would be just too
inefficient to evaluate (2) by quadrature formulas. Monte Carlo simulation is a well-known alternative, and we shall see that it arises quite naturally (but not necessarily) in particle simulation.
2. Reduction of the Boltzmann Equation. We now go through a series of fairly elementary steps which will reduce the Boltzmann equation to a form which is readily accessible to an approximation by
point measures (particle simulation). These are a) time disretization b) separation of free flow and interaction ("splitting") c) local homogenization d) weak formulation e) measure formulation.
15 To start, suppose that a particle at (x, v) in state space will go to tPt(x, v) alter time t, provided there is no collision with another particle (if the particle does not interact with the
boundary of the confining container, tPt(x, v) = (x + tv, v); otherwise, we assume that the trajectory is defined by some reasonable deterministic boundary condition, like specular reflection).
Choose a time step
> 0, then a first order discrete counterpart to the deriva-
~t {fC(j + l)~t,
tPt.t(x,v» -
Substitute (3) for the left hand side in (1), let (y, w) = tPt:.t(x, v) and evaluate. The result is
The discretized equation (4) suggests to split the approximation into the collision simulation
+ l)~t,y,w) = f(j~t,y,w) + ~t a(f,j)(j~t,y,w)
and the free flow step
+ l),~t,y,w) = l((j + l)~t,tP_t:.t(Y'W»).
The numerical simulation of (6) will be obvious once we have understood the collision simulation. We therefore focus on that. Notice that the position variable y is not operated on in (5) -
effectively, (5) is a discretized version of the spatially homogeneous Boltzmann equation. To continue, we have to introduce a concept of "spatial cell" which already Boltzmann used in the classical
derivation of his equation (his cell was just defined by (x, x + dx), (y, y + dy) etc.). The key idea is that such a cell is small from a macroscopic point of view, but large enough to contain many
particles, and certainly large enough to keep a collision count for any particle in the cell during ~t( dt) by just counting collisions with other particles in the same cell. In this collision count,
the spatial variation of the gas density over the cell is neglected, i.e. spatial homogeneity over the cell is assumed (in fact, the numerical procedures we are about to describe allow to keep the
exact positions of the approximating particles, but the collision partners needed for the collision simulation are assumed to be homogeneously distributed in each cell; see [2]). Specifically,
suppose that the gas in question is confined to a container 1\ C R3 , and that this container is partitioned into cells by 1\ = UCi ,
Ci n Cj = 0,
and we assume that the cells are such that f(j6..t, .) is on Ci well approximated by its homogenization
(we replace f(j6..t,·) by its homogenization, but keep writing f(j6..t, .)). If f(j6..t,·) is locally homogeneous in this sense, so is j((j + l)6..t, .); however, the free flow step will destroy the
homogeneity, and one has to homogenize again before the next collision simulation. We note that the cells need not be the same size and that the partition of /\ can actually be changed with time
(refined, for example) to gain better adjustment to the hypothesis of local homogeneity. We have now reduced the Boltzmann equation to the remaining key question of doing the collision simulation (5)
on an arbitrary but fixed cell Ci, where f(j . 6..t,.) is supposed to be independent of y. To simplify notation, we write h( v) for f(j . 6..t, y, v) and h+l (v) for j((j + l)6..t, y, v). Then (5)
reads, explicitly
h+1(v) (7)
(1- 6..t 11 k(lv - wl,8)dn h(w)dw)h(v)
k(lv - wi, 8)h( v')h( w')dn dw.
We next make the crucial assumption that there is an A > 0 such that
k(lv - wl,8)dn::; A <
for all v, w. Unfortunately, this means that k has to be truncated even for the hard sphere case; a little thought shows that we have to modify k for large Iv - wi, i.e. some of the collisions
between particles with large relative velocity are neglected. Fortunately, for any reasonable gas cloud only few particles are affected.
Also, we renormalize fj(v) such that JIi dw = 1 (assUlning that we have dy = 1 and J J fj(y, v)dv dy = ,A3( Ci) J Ii( v )dv = 'Yj,i, this means
J Jh(y, v)dv
that we have to replace fj by Then, if 6..t
,A3(Ci). . I Ii· ---; for thIS paper, we sImp y set ~~
< ~, Ji ~ 0 implies that Ji+1 ~
--- = 1 . Ui
Thus the truncation (8) is necessary to keep the density nonnegative, an essential feature. This is an artifact of the explicit nature of our approximation scheme; (8) can be avoided by starting from
an alternative formulation of the Boltzmann equation, but this would lead to serious problems later on. The next step is a transition to a weak formulation of (7). To this end, multiply (7) with a
test function 'P E Cb(R!), integrate, use the involutive property of the collision transformation and that Iv' - w'l = Iv - wi. The result is
Kv,w'PJi(v)Ji(w)dv dw,
17 where K v,w'P
(1 - b.t J k dn) 'P( v) + b.t J k 'P( v')dn
(we have also used the
J fidw = 1).
Finally, before we rewrite (9) in measure formulation, we introduce a convenient representation for Kv,w'P. Let v and w be given. Then, we define a continuous function Tv,w : Si
~ R3 by Tv,w(n) =
v'. Moreover, let Bl = {Y E R2;
be the circle of area 1, and assume that b.t
Ilyll ~ ~ }
LEMMA 2.1. (see rl}) For all V,w E R3 , there is a continuous function ¢;v,w :
Si such that
REMARKS AND SKETCH OF THE PROOF. This lemma, which is extremely useful for the sequel, is proved in detail in [1]. The function ¢;v,w can actually be computed in terms of the collision kernel k. The
purpose of the function ¢;v,w is a) to decide whether the particles with velocities v nd w collide at all, and b) if they collide, with what collision parameter. We refer to y as "generalized
collision parameter" . The idea of the proof is as follows. We represent Bl by polar coordinates as
{(r, /1); 0 ~ r ~ ~, 0 ~ /1 ~ 27r }. There is an ro < ~ such that 7rr5
= b.t J
k dn.
Let n E Si be represented by (8,tf;) (8 E [0,%], tf; E [0,27r)), where 8 is the polar angle with respect to the axis in direction of v - w, and tf; is an azimuthal angle. For r 2: ro, let ¢;v,w(r,/1)
correspond to a grazing collision, and therefore Tv,w on a set of measure 1 - b.t J k dn.
= 0
tf; = /1. These angles ¢;v,w(r, /1) = v; this happens
For r < ro, the collision result is nontrivial. We set again tf;(r, /1) 8(r, /1) = 8( r) is defined as the inverse of a function r( 8) which satisfies
= /1,
Clearly 7r/2
(i) =2b.t J k(lv-wl, 8) sin 8d8 = r5, and b.tJt.p(v')k dn =J J'P &v,w(8(r), /1V rdrd/1. 0 0 0
This completes the proof.
Now define probability measures dfLj = By the lemma, (9) reduces to
Ii dv, and let 'l1( v, w, y) =
Tv,wo 1, possess weak solutions which may be obtained as a limit of vortex "blobs"; i.e., the vorticity is approximated by a finite sum of cores of prescribed shape which are advected according to
the corresponding velocity field. If the vorticity is instead a finite measure of bounded support, such approximations lead to a measure-valued solution of the Euler equations in the sense of DiPerna
and Majda [7]. The analysis is closely related to that of [7]. Key words. incompressible flow, Euler equations, weak solutions, vortex methods AMS(MOS) subject classifications. 76C05
Introduction. There has been renewed interest lately in the development of singularities in weak solutions of the Euler equations of two-dimensional, incompressible flow. Two examples are boundaries
of patches of constant vorticity and vortex sheets. (See [16J for a general discussion of both.) In the latter case the vorticity is a measure concentrated on a curve or sheet which we take to be
initially smooth. At later time, a singularity in the sheet may develop, and the nature of solution past the singularity formation is unclear. Such questions for vortex sheets were dealt with at this
conference in talks by Cafiisch, Krasny, Majda, and Shelley. We focus here on discrete approximations of weak solutions of the 2-D Euler equations of the sort used in computational vortex methods.
The vorticity is approximated by a finite sum of "blobs", or cores of prescribed shape, which are advected according to the corresponding velocity field. For the 2-D Euler equations with initial
vorticity in LP, p > 1, with bounded support, we show here that weak solutions in the usual sense are obtained in the limit as the size and spacing of the blob elements go to zero. (Weak solutions
are known to be unique only if the vorticity is in Loo.) If the initial vorticity is instead a finite measure of bounded support, as would be the case for a vortex sheet of finite length, a limit is
obtained which is a measure-valued solution of the Euler equations in the sense of DiPerna and Majda [7J. The analysis is closely related to that of [7J. For both results, the number of vortex
elements needed is very large compared to the radius of the vortex core, in contrast to the case of smooth flows. This seems at least qualitatively consistent with observed behavior in calculations
of Krasny [13,14J and others, in which a regularization analogous to the blob elements is used to modify the vortex sheet evolution so that calculations can be continued past the time when the sheet
develops singularities. Shelley and Baker [19J have used a different regularization, in which the vortex sheet is replaced by a layer of finite thickness. Both calculations are suggestive of weak
solutions of the Euler equations past the time of first singularity (cf. [17]). Mathematical treatments of tDepartment of Mathematics, Duke University, Durham, NC 27706. Research supported by
D.A.R.P.A. Grant N00014-86-K-0759 and N.S.F. Grant DMS-8800347.
24 the possible nature of measured-valued solutions in 2-D, such as might occur after the singularity in the vortex sheet, have been given in [8,10]. A convergence result for vortex element
approximations to vortex sheets, somewhat complementary to the results presented here, has been given by Caflisch and Lowengrub [3,15]. They show that discrete approximations to a sheet converge in a
much more specific sense for a time interval before the singularity formation, provided the sheet is analytic and close to horizontal. Recently Brenier and Cottet have obtained another convergence
result with vorticity in LP, p > 1. In their case the spacing of the vortex elements can be comparable to the radius of the core, unlike the first result presented here.
It is a pleasure to thank A. Majda for suggesting this investigation and for arranging a visit to the Applied and Computational Mathematics Program at Princeton University, during which this work was
carried out. 1. Discussion of Results. In [6] DiPerna and Majda introduced a notion of measure-valued solution for the Euler equations of three-dimensional incompressible fluid flow, based on the
conservation of energy, which was intended to incorporate possible oscillation and concentration in nonsmooth solutions. They showed that measure-valued solutions exist and can be obtained as a limit
under regularization, but they may not be unique. For the two-dimensional case they used a more special definition of measure-valued solution of the Euler equations [7] which takes into account the
conservation of vorticity as well as energy. They showed in [7] that various regularizations of the 2-D Euler equations converge to measure-valued solutions provided the initial vorticity is a
measure on R2 of bounded support and finite total mass. This includes the important case of vortex sheets. They also showed that certain regularizations produce classical weak solutions (in the
distributional sense) in 2-D provided the initial vorticity is in LP for some p > 1. One regularization studied was an approximation by a finite number of vortex "blobs". They showed that a class of
vortex blob approximations converges to a measurevalued solution of the 2-D Euler equations, provided that the total circulation is zero, but they did not determine whether classical weak solutions
could be obtained in this way when the vorticity is more regular than L1.
In this work we give another treatment of the vortex blob approximations to the 2-D Euler equations. It is similar to that of [7] but more straightforward and direct. With a slightly different choice
of parameters, we show that vortex blob approximations converge to measure-valued solutions, again for initial vorticity which is a measure of bounded support and finite mass. The total circulation
is arbitrary. (In two dimensions the total energy is infinite if the total circulation is nonzero.) In the case of initial vorticity in LP for some p > 1, we show that a classical weak solution is
obtained. A unified treatment is given for the two results; in fact, as is evident in the analysis of [7], the essential points to verify in either case are bounds for the approximate vorticity and
energy and a kind of weak consistency with the Euler equations. For smooth solutions of the Euler equations in two or three dimensions, the blob approximations of vortex methods converge with rates
determined by the two
25 length parameters, the radius of the blob elements (which can be thought of as a smoothing radius) and the spacing of the elements; the radius is usually taken larger than the spacing. Such a
result was proved in two dimensions in [11]. For a summary of the theory, see, e.g., [1,2]. It has recently been shown in [9] that for smooth flows convergence is possible even with point vortices in
place of the blob elements. In the nonsmooth case considered here, however, our results require that the spacing of the blobs is quite small relative to the core radius, and correspondingly the
number of blob elements is large. A result of the sort presented here was given for vorticity in L oo in [18], as well as a treatment of stochastic differential equations of particle paths as a
discretization of viscous flow. Again for vorticity in L oo , it has been shown [4,5] that blob approximations converge for short time with the radius comparable to the spacing. In [5] Cottet
describes an elegant and appealing approach to the consistency of these methods for weak solutions in 2-D; his approach could be applied to the situation studied here. We now describe the
two-dimensional vortex blob approximation to be used. The formulation and notation correspond to Section 2 of [7]. For 2-D incompressible, inviscid flow, the vorticity w is conserved along particle
paths; this is expressed in the equation Wt
+ V· \7w = 0,
where W = V2,1 - Vl,2 is the (scalar) curl of the velocity v. We will approximate the vorticity field at a given time by a sum of vortex "blobs", i.e., translates of a core function with specified
shape. The flow is then simulated by advecting these elements according to the velocity field determined by the approximate vorticity. For this purpose, we choose a core function 1 and has bounded
support. For each E > 0, let fj, XJ(t), ve(x, t)be determined by the vortex blob approximation described above for 0 ~ t ~ T. Assume the parameters are chosen so that 5(E) = E" for some a with 0 < a
< 1/4, and h(E) ~ CE 4 exp( -COC 2 ) for a certain constant Co and any C. Then a subsequence of {v e } converges, as E -+ 0, to a classical weak solution of the Euler equations with initial velocity
Vo and vorticity Wo. The convergence takes place strongly in L2 of any bounded region in space-time with 0 ~ t ~ T. It will be evident below that if p > 2 the initial smoothing is not necessary,
i.e., we may use Wo rather than wg in (1.9). In fact, in the argument below we assume p < 2.
If Wo is a measure, the vortex blob approximation already described leads to a measure-valued solution. We do not give the definition of the measure-valued solution here, but refer instead to [7]. We
now state our result in this case (cf. the result of [7], Section 2). THEOREM 2. Suppose Wo is a Radon measure R2 of finite mass and bounded support, and suppose that the corresponding velocity field
is locally L2 on R2. Define vortex blob approximations as before for 0 ~ t ~ T, with 5(E) = E" for some a with 0 < a ~ 1/7, and choose h(c:) ~ Cc: 6 exp(-CoC 2 ). Then a subsequence of {v e }
converges as c: -+ 0 to a measured-valued solution of the Euler equations with specified initial condition. The convergence is strong in U(n) for 1 < P < 2, and weak in L2(n) , for any bounded region
n of space-time with 0 ~ t ~ T. For a more detailed description of the limit measure-valued solution and the nature of the convergence, see Theorem 1.1 of [7] and the discussion preceding it. It will
be seen below that we obtain Theorem 2 by a simple modification of the proof of Theorem 1. 2. Proof of Theorem 1. To begin the proof of Theorem 1, we discuss bounds on various quantities related to
the vorticity. We will need an estimate for wg in L3. Since wg = j6 * Wo we have 4 3 It is evident that
(2.1 ) so that, assuming p (2.2) with
< 3,
1 p
28 We will use the passive transport w' (x, t) of wg by the flow determined by the blob approximation, i.e., the solution of
V' . v' = 0 the flow is area-preserving, and thus
(2.4) (2.5) Next we consider the sum (1.1) as a discretization of a convolution. Let X! be the flow determined by the blob vorticity; for an initial point a E R2, x(t) = X;(a) is the solution of
~: = v'(x, t),
= a,
with V' given by (1.3). We will need a crude bound for the Jacobian oX!/oa. U sing the above, we have
= "" oK. (x _ xe)r· . J ~ ax J J ,
It is easily seen from (1.4) that
L Ifjl S L
Thus for t
= I.
~ CC;-2. Moreover
Iw81dx =
S CIWolL"
(2.7) Now suppose g(a) is a C 1 function; we compare
g(a)wg(a)da with Lg(aj)rj. J
On R j we have Ig(a) - g(aj)1 ~ hlV'glLoo so that
g(a)wg(a)da - g(aj)rjl
(g(a - g(aj»wg(a)dal
~ hlV'glLoo IwglL'(R;)'
Summing over j gives (2.8)
I g(a)wg(a)da - L g(aj)rjl ~ hlV'glLoo IWolL" J
29 We use this to compare (1.1) with the corresponding integral. Since
we may apply (2.8) with g( 0')
w'(x, t)
= ¥'~( x -
J =J =
X:( 0')). We obtain
¥,,(x - X;(O'))wg(O')dO' ¥,,(x - y)w'(y, t)dy
+ EI
+ EI
= (¥" * w')( x, t) + EI with
IE1(x,t)l:::; hlwol£1IV¥',lu",IJIL= :::; Chc;-3Iwo ILl exp( C1 c;-2Iwo 1£1 T).
The error EI will be small if we choose h small enough relative to c;, as in Theorem 1. Next we show that w'(·,t) is uniformly bounded in LP. We saw above that w' is uniformly close to ¥" * w'. We
know from (2.4) that w', and therefore ¥', * w', is uniformly bounded in LP. First we have the simple estimate (2.10) using (1.9). Now since w' is uniformly bounded in LI, the measure of the set {x:
Iw'(x,t)1 2: I} is also uniformly bounded. On this set, w'(·,t) is close in Loo, and therefore in LP, to ¥" * w'(·,t), which is bounded in LP. Thus the LP norm of w'(·,t) on this set is bounded. On
the remaining set, where Iw'l :::; 1, we have Iw'IP :S lwei, so that the LP norm is bounded in terms of the LI norm. In summary, (2.11)
Iw'(·, t)ILP :S C, c; > 0, 0:::; t :::; T,
the constant depending on Iwo ILp and Iwo 1£1. In just the same way we can argue using (2.5) that (2.12) It follows from (2.11), (2.12) and the Calderon-Zygmund inequality that
IVv'(·,t)ILP :::;
IVv'(·, t)IL3 :::: C5- i3 •
In the first inequality we have used the fact that p we then have (2.15)
Iv'(., t)b· :::: C,
> 1. From Sobolev's inequality
provided p < 2. In order to check the consistency of the vortex blob approximation with the Euler equations as a c; ~ 0, we examine the error E in satisfying the vorticity evolution equation,
E= - at
+v· ·Vw· .
Differentiating (1.1) and substituting from (1.6), we have, as in [7], equation (2.37), (2.16)
E(x, t) = 2)v'(x, t) - v'(X;, t)]. V'P.(x =
V . { 2)v'(x, t) - v'(X;, t)]'P.(x - XJ)rj} == V . F(x, t) )
We will estimate F in LI of space by (2.17)
Fj(x,t) = [v«x,t) -v'(X;,t)]'P.(x -XJ)
= ==
+ (1 -
x)XJ)· (x - XJ)'P«X - XJ)ds
We set Il.(z) = Z'P.(z), so that
We will estimate the x-integral of CIIl(Z/C;),
Iii I using
Holder's inequality. Since Il.( z)
which is small if r < 2. We choose r = 3/2 and bound the other factor in L3. We saw in (2.14) that IVv'IL3 ::; C 8-(3 with some f3 ::; 4/3. Thus after rescaling, the L3 norm of VV'(sx + (1 - 8)XJ),
as a function of x, is bounded by C8- 2 / 3 8-(3. Therefore,
and integrating in s, (2.18)
Combining the last inequality with (2.17), we have
If {j = giT, we have a power of provided (1 < 1/4, and we have
of 1/3 - (1(3
> (1 - 4(1)/3 ==
a; this is positive
1F(',t)ILI :SCg a , some a>O.
We now use (2.19) to check the weak consistency of the vortex blob approximation as g -+ 0; that is, we show that for suitable test functions (x, t) with div = 0,
1 T
(2.20) as
g -+
+ v'· (v'· \l) 2R, each ]{, is just K, and we can write
= L[J(x -
aj) - K(x)]r j .
It is easy to see that this is bounded by
L Ci x l-
2 lrjl
::; CIIwoll Ixl- 2
for Ixl
> 2R, and thus bounded in L2. This completes the verification of (3.2).
It remains to verify the Lipschitz continuity of v e in time. We return to (2.25), which was a consequence of Lemma 2.1. We will choose q = 3 and s = 2, so that eI> E W·,q implies VeI> E L oo and
therefore V'-P E Lr with 3 ::; r ::; 00. We use this fact to estimate the integral on the right in (2.25). We had v' bounded in L2, and v E L3, for example. Thus the product v' . v' is a sum of terms
in LP for 1 ::; p ::; 3/2. Since VeI> is bounded in the dual spaces for such £P, we may estimate the integral using Holder's inequality. We conclude as before that
Iv:(" i)1 IV-',3/>
W- 2 ,3/2.
so that v' is Lipschitz continuous in We have now verified all the conditions (1)-(5) for the convergence to a measure-valued solution.
37 REFERENCES [1] [2] [3] [4]
[5] [6] [7] [8] [9] [10] [11] [12] [13] [14] [15] [16] [17]
[18] [19] [20]
J. T. BEALE AND A. MAJDA, High order accurate vortex methods with explicit velocity kernels, J. Comput. Phys., 58 (1985), pp. 188-208. J. T. BEALE AND A. MAJDA, Vortex methods for fluid flow in two
or three dimensions, Contemp. Math., 23 (1984), pp. 221-229. R. CAFLISCH AND J. LOWENGRUB, Convergence of the vortex method for vortex sheets, preprint. J. P. CHOQUIN, G. H. COTTET, AND S.
MAS-GALLIC, On the validity of vortex methods for nonsmooth flows, in Vortex Methods (C. Anderson and C. Greengard, editors), Lecture Notes in Mathematics, Springer-Verlag, pp. 56-67. G.H. COTTET,
These d'Etat, Universite Pierre et Marie Curie. R. DIPERNA AND A. MAJDA, Oscillations and concentrations in weak solutions of the incompressible fluid equations, Commun. Math. Phys., 108 (1987), pp.
667-689. R. DIPERNA AND A. MAJDA, Concentrations and regularizations for 2-D incompressible flow, Comm. Pure Appl. Math., 60 (1987), pp. 301-45. R. DIPERNA AND A. MAJDA, Reduced Hausdorff dimension
and concentration- cancellation for 2-D incompressible flow, J. Amer. Math. Soc., 1 (1988), pp. 59-95. J. GOODMAN, T. Hou, AND J. LOWEN GRUB, Convergence of the point vortex method for the 2-D Euler
equations, preprint. C. GREENGARD AND E. THOMANN, On DiPerna-Majda concentration sets for two-dimensional incompressible flow, Comm. Pure Appl. Math., 41 (1988), pp. 295-303. O. HALO, The convegence
of vortex methods, II, SIAM J. Numer. Anal., 16 (1979), pp. 726-755. R. KRASNY, Desingularization of periodic vortex sheet roll-up, J. Comput. Phys., 65 (1986), pp. 292-313. R. KRASNY, Computation of
vortex sheet roll-up in the Trefftz plane, J. Fluid Mech., 184 (1987), p. 123. R. KRASNY, Computation of vortex sheet mil-up, in Vortex Methods (C. Anderson and C. Greengard, editors), Lecture Notes
in Mathematics, Springer- Verlag, pp. 9-22. J. LOWENGRUB, Convergence of the vortex method for vortex sheets, Thesis, New York University, 1988. A. MAJDA, Vorticity and the mathematical theory of
incompressible fluid flow, Comm. Pure Appl. Math., 39 (1986), pp. SI87-S220. A. MAJDA, Mathematical fluid dynamics: the interaction of nonlinear analysis and modern applied mathematics, to appear in
the Proc. of the Centennial Celebration of the Amer. Math. Society. C. MARCHIOIW AND M. PULVIRENTI, Hydrodynamics in two dimensions and vortex theory, Commun. Math. Phys., 84 (1982), pp. 483-503. M.
SHELLEY AND G. BAKER, On the relation between thin vortex layers and vortex sheets: Part 2, numerical study, preprint. R. TEMAM, The Navier-Stokes Equations, North-Holland, Amsterdam, 1977.
CHEN GUI-QIANG* Abstract. We are concerned with the limit behavior of approximate solutions to hyperbolic systems of conservation laws. Several mathematical compactness theories and their role are
described. Some recent and ongoing developments are reviewed and analyzed. AMS(MOS) subject classifications. 35-02, 41-02, 35B25, 35D05, 35L65, 46A50, 46G10, 65M10.
1. Introduction. We are concerned with the limit behavior of approximate solutions to hyperbolic systems of conservation laws. The Cauchy problem for a system of conservation laws in one space
dimension is of the following form:
(1.1) (1.2)
+ f(u)",
g(x, t,u),
u It=o=uo(x),
where u = u(x, t) E Rn and both f and 9 are smooth nonlinear functions Rn to Rn. The system is called strictly hyperbolic in a domain 'D ifthe Jacobian \1 f( u) has n real and distinct eigenvalues
(1.3) at each state u E 'D. If the Jacobian \1 f( u) has n real and indistinct eigenvalues A;(u)(i = 1,2, ... ,n) in 'D, one calls the system nonstrictly hyperbolic in 'D. An eigenfield corresponding
to Aj is genuinely nonlinear in the sense of Lax [LA2] if A/S derivative in the corresponding eigendirection never vanishes, i.e.,
(1.4) where The system is called genuinely nonlinear if all of its eigenfields are genuinely nonlinear. Otherwise, one calls the system linearly degenerate. The quasilinear systems of conservation
laws result from the balance laws of continuum physics and other fields (e.g., conservation of mass, momentum, and energy) and, therefore, describe many physical phenomena. In particular, important
*Partially supported by U.S. NSF Grant # DMS-850403, by CYNSF, and by the Applied Mathematical Sciences subprogram of the Office of Energy Research, U.S. Department of Energy, under Contract
W-31-109-Eng-38. Courant Institute of Mathematical Sciences, 251 Mercer Street, New York, NY 10012 U.S.A. Current address: Department of Mathematics, The University of Chicago, Chicago, IL 60637.
examples occur in fluid dynamics (see Section 5), solid mechanics (see Section 4), petroleum reservoir engineering (see Section 4), combustion theory and game theory
[CR]. Since f is a nonlinear function, solutions of the Cauchy problem (1.1)-(1.2) (even starting from smooth initial data) generally develop singularities in a finite time, and then the solutions
become discontinuous functions. This situation reflects the physical phenomenon of breaking of waves and development of shock waves. For this reason, attention focuses on solutions in the space of
discontinuous functions, where one cannot directly use the classical analytic techniques that predominate in the theory of partial differential equations of other types. To overcome this difficulty,
one constructs approximate solutions u« x, t) to the following perturbations: a. Perturbation of equations: One of the perturbation prototypes is the viscosity method; that is, u«x, t) are generated
by the corresponding parabolic system of the form
= g(x, t, u)
u It=O
+ E(D(u)ux)x,
where D is a properly selected and nonnegative matrix. Usually one chooses D to be the unit matrix. b. Perturbation of Cauchy data: u«x, t) are generated by the following Cauchy problem:
= g(x, t, u),
u t=o
c. Perturbation of both equations and Cauchy data: Besides the viscosity method with perturbated Cauchy data u t=O = uij(x) (see (1.5», another perturbation prototype is the difference method; that
is, u«x,t) (E = .6.x, space step length) are generated by the difference equations
Dtu + Dxf(u)
= g«x, t, u),
u t=O
= uo(X; E),
and then one studies limit behaviors of the approximate solutions u«x, t) as E --+ 0: convergence and oscillation. Examples of this approach are the Lax-Friedrichs scheme [LAl], the Glimm scheme
[GL], the Godunov scheme [GO], higher-order schemes (e.g., [LW], [SZ], [TAl) and the fractional step schemes (e.g., [DCL2l). The motivation for using approximate solutions comes from continuum
physics, numerical computations, and mathematical considerations. The system of gas dynamics generally involves viscosity terms, although the viscosity coefficient is very small and can be ignored;
the initial value function is determined only by using statistical data and some averaging methods described by a weak topology. Numerical computations of systems of conservation laws are limited to
calculations of
difference equations and discrete Cauchy mesh data. In game theory with non-zero sum, derivative functions of the stochastic game values and the deterministic game values satisfy systems of
conservation laws with and without viscosity terms, respectively. Therefore, studying the relationship between the stochastic game and the deterministic game when "noise" disappears is equivalent to
studying the limit behavior of the approximate solutions as f -+ 0 (see [CR]). Moreover, one expects to use "good" Cauchy data (e.g., total variation functions) to approximate "bad" Cauchy data
(e.g., LOO functions) to obtain a solution to the Cauchy problem with "bad" Cauchy data. Thus, such a study enables us to understand how the behavior of the system at the microscopic level affects
the behavior of the system at the macroscopic level and, therefore, understand the well-posedness of the Cauchy problem (1.1)-(1.2) in a weak topology. The remainder of this paper has the following
organization. Section 2 focuses on compactness theories. Sections 3 and 4 discuss the limit behavior of approximate solutions to the Cauchy problem for the scalar conservation law and for hyperbolic
systems of conserVation laws, respectively. For concreteness, we focus our attention on homogeneous systems (i.e., h(u) == 0) in Sections 3 and 4. Section 5 focuses on approximate solutions generated
by the Lax-Friedrichs scheme, the Godunov scheme, and the viscosity method for the homogeneous system of isentropic gas dynamics and on the fractional-step Lax-Friedrichs scheme and Godunov scheme
for the inhomogeneous system of isentropic gas dynamics. Section 6 concludes our review with some remarks about distinguishing features of multidimensional conservation laws. The techniques and
strategies developed in this direction should be applicable to other interesting problems of nonlinear analysis and their regularizations. This paper is dedicated to the memory of Ronald J. DiPerna.
His life and his work is an inspiration to the author. 2. Compactness Theories. One of the main difficulties in studying nonlinear problems is that, after introducing a suitable sequence of
approximations, one needs enough a priori estimates to ensure the convergence of a subsequence to a solution. This argument is based on compactness theories. Here we describe several important
compactness theories that have played a significant role in the field of conservation laws.
2.1. Classical Theories The two important compactness theorems provide natural norms for the field of conservation laws in classical theories of compactness: BV and L1 compactness theorems. 2.1.1. BV
Compactness Theorem THEOREM 2.1. There exists a subsequence converging pointwise a.e. in any function sequence that has uniform control on the L OO and total variation norms.
41 In the context of a strictly hyperbolic system of conservation laws, the LOO norm and the total variation norm provide a natural pair of metrics to study the stability of approximate solutions in
the sense of LOO. The LOO norm serves as an appropriate measure of the solution amplitude, while the total variation norm serves as an appropriate measure of the solution gradient. The role of these
norms is indicated by Glimm's theorem [GL] concerning the stability and convergence of the Glimm approximate solutions, provided that the total variation norm of the initial data Uo (x) is
sufficiently small for systems of conservation laws, and by results of Oleinik rOLl, Conway and Smoller [CS], and others concerning the stability and convergence of the Lax-Friedrichs and the Godunov
approximate solutions with large initial data uo( x) for the scalar conservation law. The families of approximate solutions {u'} are stable in the sense that {
lu'(" t)loo
~ const·luol oo ,
TVu'(" t)
const. TV uo,
where constants are independent of € and depend only on the flux function f. Furthermore, there exists a subsequence that converges pointwise a.e. to a globally defined distribution solution u. Until
the end of the 1970s, almost all results concerning the stability and convergence of approximate solutions for conservation laws were obtained with the aid of the BV compactness theorem (e.g., [BA,
DZ, LLO, NI, NS, SRI, TEl, ZG]).
2.1.2. LI Compactness Theorem A more general compactness framework for conservation laws is the LI compactness theorem.
THEOREM 2.2. A function sequence {u'(x)} C LI(n), n cc R n , is strongly compact in LI if and only if (i) Ilu'll£' ~ M, M is independent of f. (ii) {u' (x)} is equicontinuous in the large, i.e., 'if
h all u' E {u'(x)},
> 0, :l5( h) > 0 such that for
10 lu'(x + y) - u'(x)ldx ~ h if only Iyl
< 5(h).
In the context of conservation laws, the role of the LI norm is indicated by Kruskov's theorem [KR] concerning the stability and convergence of the viscosity approximate solutions and the uniqueness
of generalized solutions for the scalar conservation law and by Temple's theorem [TE3] concerning the weak stability of generalized solutions with respect to the initial value for systems of
conservation laws.
2.2. The Theory of Compensated Compactness Weak topology has played an important role in studying linear problems where weak continuity can be used; however, lack of weak continuity in nonlinear
problems has long restricted the use of weak topology. The theory of compensated compactness established by Tartar [TI-T3] and Murat [MI-M4] is intended to render weak topology more useful in solving
nonlinear problems. In other words, the theory deals with the behavior of nonlinear functions with respect to weak topology, for instance, the weak continuity and the weak lower semi continuity of
nonlinear functions. Here we restrict our attention to that partion of the theory relating to conservation laws. As is well known, it is difficult to clarify the conditions to ensure weak continuity
and weak lower semicontinuity for general nonlinear functions (e.g., [DA, MI-M4, TI-T4]). However, for a 2 x 2 determinant, a satisfactory result can be obtained [MI-M2, T2]. THEOREM 2.3. Let ncR x
R+ = R~ be a bounded open set and u< : n -+ R4 be measurable functions satisfying in L~(n),
Then there exists a subsequence (still labeled) u< such that
Iu 0 constant, satisfies
One can construct admissible solutions u« x, t) (e.g., [CH3]) satisfying (4.4)
0< G
A detailed discussion of general ¢> can also be found in [CH4].
Example 2. This example involves a system arising in the polymer flooding of an oil reservoir [PO],
(4.5) where 0 ~ Ul ~ 1, 0 ~ U2 ~ Ul are the concentration of water and the overall concentration of a polymer at any x and t, respectively, and f(Ul,C) is a smooth function such that f(Ul,C)
increases from zero to one with one inflection point where c is constant and such that f( Ul, c) decreases with increasing c for fixed Ul (see [TEl]). The essential feature of system is that the
strict hyperbolicity fails along a certain curve in state space. Using Temple's theorem [TEl], we have a global solution sequence u'(x, t) satisfying (4.2) and (4.5) with the Cauchy data u~(x) of
bounded variation. THEOREM 4.7. [CH3]. The initial oscillations will still propagate along the linearly degenerate field for the nonstrictly hyperbolic system (4.2) and (4.5). Thus if the initial
data sequence u~( x) is a highly oscillatory sequence, the exact solution sequence u'(x, t) is a highly oscillatory sequence, too. One cannot expect convergence of the Glimm approximate solutions
with highly oscillatory initial data.
Remark. The arguments in Frameworks (A)-(C) could be extended to LP approximate solutions. We refer the reader to [DAF1, SH, LPj.
5. The System of Isentropic Gas Dynamics. Here we describe the limit behavior of the approximate solutions u'(x, t), generated from the fractional step Lax-Friedrichs scheme and Godunov scheme to the
inhomogeneous system of isentropic gas dynamics in the Euler coordinate (5.1)
= U(p, u; x, t),
+ (pu)x
+ (pu 2 + p(p))x
(p,U) t=o
= V(p,
u; x, t),
= (po(x),uo(x)),
51 where u, p, and p are the velocity, the density, and the pressure, respectively. For a polytropic gas, p(p) = P p"l, where k is the constant and I > 1 is the adiabatic exponent (for usual gas, 1 <
I ::; 5/3). The system (5.1) with (U, V) #- (0,0) is a gas dynamics model of nonconservative form with a source. For instance, (U, V) = (0, a(x, t)p), where a(x, t) represents a body force (usually
gravity) acting on all the fluid in any volume. An essential feature of the system is a nonstrict hyperbolicity; that is, a pair of wave speeds coalesce on the vacuum p = 0. We also describe the
limit behavior of the approximate solutions u«x,t) (especially generated from the Lax-Friedrichs scheme, the Godunov scheme, and the viscosity method ) to the homogeneous system of isentropic gas
dynamics. The homogeneous system of (5.1) is
+ (pu)x (pu)t + (pu 2 + p(p))x
=0, = 0.
For the Cauchy problem of the homogeneous system (5.3), many existence theorems of global solutions have been obtained [RI, ZG, BA, NI, NS, DZ, LLO, DI2]. The first large-data existence theorem of
global solutions was established by Nishida [NI] for I = 1 by using the Glimm method [GL]. DiPerna [DI2] established a large data existence theorem for 1 +2/(2m+ 1), m ~ 2 integers, by using the
viscosity method and the theory of compensated compactness. Both results assume that the initial density po(x) is away from the vacuum. In this section we describe recent achievements for the
5.1. Compactness Framework THEOREM
5.1. Suppose that the approximate solutions
v«x, t)
= (p«x, t), m«x, t)) = (p«x, t), p«x, t)u«x, t))
to the Cauchy problem (5.1)-(5.2) (1 < I ::; 5/3) satisfy the following framework. (i) There is a constant C >
°such that
0::; p«x,t)::; C,
lu«x,t)l::; C,
(ii) On any bounded domain n c R~ and for any weak entropy pair (1], q) (i.e., 1](0, u) = 0), the measures
Then there exists a subsequence (still labeled) v< such that (p«x, t), m«x, t))
(p(x, t), m(x, t)),
52 This compactness framework is established by an analysis of weak entropy and a study of regularities of the family of probability measure {vx,tl(x,/)ERt' which corresponds to the approximate
solutions. The basic motivation is that the commutativity relation (2.1) represents an imbalance of regularity: the operator on the left is more regular than the one on the right as a result of
cancellation, which forces the measure Vx,1 to reduce to a point mass. We recall that the derivative of a Radon measure in the Lebesque sense vanishes except at one point, implying that the measure
is a point mass. The challenge is to choose the entropy pairs whose leading term is coercive with respect to the 2 x 2 determinant and to show that the coercive behavior guarantees that the
derivative of Vx,1 vanishes except at one point. The essential difficulty is that only a subspace of entropy pairs, weak entropy pairs, can be used in the relation (2.1). The strategy is fulfilled in
[CHl-CH2, DCLl-DCL2]. 5.2. Convergence of the Lax-Friedrichs Scheme and the Godunov Scheme
Using Theorem 5.1 and making several estimates, we obtain the following theorem. THEOREM 5.2. [DCLl, CHl-2]. Suppose that the initial data vo(x) = (Po(x), po(x)uo satisfy
o ~ Po(x) ~ M,
J~oo('I).(vo(x» - 'I).(v) - V''I).(v)(vo(x) - v»dx ~ M,
for some constant state v and the mechanical energy 'I). = pu 2 + "Y( ~~l}' Then there exists a convergent subsequence in the Lax-Friedrichs approximate solutions and the Godunov approximate
solutions vf(x,t) (e = D.x, the space step length), respectively, that have the same local structure as the random choice approximations of Glirnm [GL}, such that
Define u(x,t) = m(x,t)/p(x,t), a.e. Then the pair offunctions (p(x,t),u(x,t» is a generalized solution of the Cauchy problem (5.2)-(5.3) satisfying
o ~ p(x,t)
Im(x, t)1 < C lu(x , t)1 < - p(x, t) - ,
5.3. Convergence of the Viscosity Method Consider the viscosity approximations v«x, t) determined by
= €pxx,
(p, m)
1 / =0
= (p~(x), m~( x»,
= (pHx), mHx» is an approximate sequence of the initial data vo(x) (po( x), Po( x )uo( x».
where vo(x)
53 LEMMA 5.1. Suppose that the initial data (po(x), uo(x)) satisfy
(po(x) - p,uo(x) - u) E L2 n Loo, Po(x) 2:
Then there is an approximate sequence vO( x) satisfying
v~(x) -
in L2,
v E CJ(-oo,oo),
O:s pO(x):S Mo,
such that there exist global solutions (pf, u f) to the Cauchy problem (5.2)-(5.3) satisfying
(pf(., t) - p, uf(., t) - u) E C 1 n Hl,
o :S p' :S M,
lufl:S M,
where both Mo and M are constants independent of €. THEOREM 5.3. [CH1]. Suppose that the initial data vo(x) = (po(x), po(x )uo(x)) satisfy (5.4). Then there exists a convergent subsequence in the
viscosity approximations vo(x, t) such that
Define u(x, t) = a.e. Then the pair of functions (p(x, t), m(x, t)) is a generalized solution to the Cauchy problem (5.2)-(5.3) satisfying
0), however, there are usually no bounded invariant regions. Nevertheless, we use two difference schemes-the fractional-step Lax-Friedrichs scheme and Godunov scheme (see [DCL2]), which are
generalizations of those of Lax-Friedrichs [LA1] and Godunov [GO]-to construct approximate solutions v f (x, t) (€ = ~x, the space step length). If the inhomogeneous terms satisfy conditions C1°-C3°
(see [DCL2]), which contains cases of (0, a(x, t)p), (0, a(x, t)pu), (a(x, t)p, a(x, t)pu), and (0, a(x, t)puln(lul+ 1)), a(x, t) E C(R X R+), we can overcome the difficulty by analyzing the solution
of a nonlinear ordinary differential equation for the fractional-step Lax-Friedrichs scheme and Godunov scheme.
54 THEOREM 5.4. [DCL2j. Suppose that the inhomogeneous term satisfies conditions C1°-C3° (see [DCL2j) and the initial data (po(x), uo(x)) satisfy (5.4). Then there exists a convergent subsequence in
the approximations v«x, t) such that
(p.. - shock. For unattached wave configurations (i.e., R is not attached to the wedge corner), a>.. -shock - which is a characteristic feature of shock wave/boundary layer interactions - often (but
not always) forms in the corner region in experiments. These solutions differ dramatically from the corresponding Euler solutions, but only locally near the corner; in particular, numerical/
experimental comparisons seem to indicate that the important Mach stem region is not affected.
86 (4) Small scales. No matter how small the viscosity, it will destroy any complicated inviscid structure which exists at small enough length scales. The relevance of this factor is debatable. At
this point, no numerical simulation has been carried out with sufficient resolution to capture viscous length scales for the corresponding experimental test gas. For relatively simple flowfields such
as RR and some SMR cases it is very unlikely that any complex structure exists at scales smaller than those already computed for either equations (1) or (3). However, numerical experimentation with
adaptive mesh refinement has revealed considerable small-scale structure for DMR flowfields at low I and high Ms. One is led to wonder whether or not relevant inviscid length scales might become
arbitrarily small as I -+ 1.0 at high Ms. In summary, the evidence indicates that the zero viscosity limit is singular for equations (1), but the significance of this fact is not clear. Stability of
self-similar flowfields with respect to unsteady perturbations has already been considered for the Kelvin-Helmholtz instability of the main slip surface in DMR flowfield calculations at high enough
resolution. This result alone indicates that a wide variety of self-similar solutions are not stable to unsteady perturbations. A more interesting scenario concerns the RR-MR transition. It is
possible that equations (3) have multiple solutions near the transition curve comprising a hysterisis loop with stable and unstable branches; a possible mechanism for jumping between branches in an
experiment or CFD simulation might be a judiciously chosen unsteady perturbation. Finally, very small scale structures in complex situations such as low I DMR flow might well be created or destroyed
by small perturbations in an experiment or calculation. To distinguish such events from true self-similar structure one would have to continue the experiment/calculation long enough so that the
events were no longer small scale. 4. Conclusions. From the discussion of the preceeding section it is seen that the questions of existence and uniqueness for self-similar oblique shock wave
reflection have no obvious answers. However, certain regions of parameter space can be selected where the situation is more clear; we refer to RR cases away from the transition line and to weak SMR
cases. Here, there is no reason to believe other than that solutions exist and are unique. The RR situation is somewhat easier to consider from the PDE point-of-view, primarily because the slip
surface of MR is absent. SMR cases for which the slip surface terminates at the wall boundary stagnation point should not be significantly more difficult. Future CFD studies are indicated in several
areas. For example, the possibility of an 'organizing center' should be carefully looked into. Another area would be to try unsteady perturbations near the RR-MR transition line. Also, the study of
RR-MR transition would be greatly facilitated by a high resolution NavierStokes capability for modelling the boundary layer; it is quite likely that a few such calculations would substantially clear
up some of the ambiguities in the experimental record. Finally, more work is needed in the low I regime.
87 REFERENCES [1]
[2] [3] [4] [5] [6] [7] [8] [9]
[10] [11] [12] [13] [14] [15]
[18] [19]
[23] [24]
S. ANDO, Pseudo-stationary oblique shock-wave reflections in carbon dioxide - domains and boundaries, University of Toronto Institute for Aerospace Studies (UTIAS) Tech. Note No. 231 (1981). T.V.
BAZHENOVA, V.P. FOKEEV AND L.G. GVOZDEVA, Regions of various forms of Mach reflection and its transition to regular reflection, Acta Astronautica, 3 (1976), pp. 131-140. T.V. BAZHENOVA, L.G. GVOZDEVA
AND Yu.V. ZHILIN, Change in the shape of the diffracting shock wave at a convex corner, Acta Astronautica, 6 (1979), pp. 401-412. T.V. BAZHENOVA, L.G. GVOZDEVA AND M.A. NETTLETON, Unsteady
interactions of shock waves, Prog. Aero. Sci., 21 (1984), pp. 249-33l. G. BEN-DoR, Regions and transitions of nonstationary oblique shock-wave diffractions in perfect and imperfect gases, UTIAS Rep.
No. 232, (1978). G. BEN-DoR, Steady, pseudo-steady and unsteady shock wave reflections, Prog. Aero. Sci., 25 (1988), pp. 329-412. G. BEN-DoR AND 1.1. GLASS, Domains and boundaries of non-stationary
oblique shock-wave reflexions. 1. Diatomic gas, J. Fluid Mech., 92 (1979), pp. 459-496. G. BEN-DoR AND 1.1. GLASS, Domains and boundaries of non-stationary oblique shock-wave reflexions. 2. Monatomic
gas, J. Fluid Mech., 96 (1980), pp. 735-756. G. BEN-DoR, K. TAKAYAMA, AND T. KAWAUCHI, The transition from regular to Mach reflexion and from Mach to regular reflexion in truly non-stationary flows,
J. Fluid Mech., 100 (1980), pp. 147-160. G. BEN-DoR AND K. TAKAYAMA, Analytical prediction of the transition from Mach to regular reflection over cylindrical concave wedges, J. Fluid Mech., 158
(1985), pp. 365-380. M. BERGER AND P. COLELLA, Local adaptive mesh refinement for shock hydrodynamics, J. Compo Phys., 82 (1989), pp. 64-84. P. COLELLA, Multidimensional upwind methods for hyperbolic
conservation laws, Lawrence Berkeley Laboratory Rep. LBL-17023 (1984). P. COLELLA AND H.M. GLAZ, Efflcient solution algorithms for the Riemann problem for real gases, J. Compo Phys., 59 (1985), pp.
264-289. P. COLELLA AND L.F. HENDERSON, The von Neumann paradox for the diffraction of weak shock waves, Lawrence Livermore National Laboratory Rep. UCRL-100285 (1988). A.Yu. DEM'YANOV AND A.V.
PANASENKO, Numerical solution to the problem of the diffraction of a plane shock wave by a convex corner, Fluid Dynamics, 16 (1981), pp. 720-725. Translated from the original Russian. R.L.
DESCHAMBAULT AND 1.1. GLASS, An update on non-stationary oblique shock-wave reflections: actual isopycnics and numerical experiments, J. Fluid Mech., 131 (1983), pp. 27-57. J.M. DEWEY AND D.J.
McMILLIN, Observation and analysis of the Mach reflection of weak uniform plane shock waves. Part 1. Observations and Part 2. Analysis, J. Fluid Mech., 152 (1985), pp. 49-8l. 1.1. GLASS, Some aspects
of shock-wave research, AIAA J., 25 (1987), pp. 214-229. See also AIAA Report AIAA - 86 - 0306 with the same title and author. H.M. GLAZ, Numerical computations in gas dynamics with high resolution
schemes, in Shock Tubes and Waves, Proc. Sixteenth Inti. Symp. on Shock Tubes and lVaves, H. Gronig, editor, VCH Publishers (1988), 1988, pp. 75-88. H.M. GLAZ, P. COLELLA, I.I.GLASS AND
R.L.DESCHAMBAULT, A numerical study of oblique shock-wave reflections with experimental comparisons, Proc. R. Soc. Lond., A398 (1985), pp.117-140. H.M. GLAZ, P. COLELLA, I.I.GLASS AND
R.L.DESCHAMBAULT, A detailed numerical, graphical, and experimental study of oblique shock wave reflections, Lawrence Berkeley Laboratory Rep. LBL-20033 (1985). H.M. GLAZ, P.A. WALTER, 1.1. GLASS AND
T.C.J. Hu, Oblique shock wave reflections in SF6: A comparison of calculation and experiment, AIAA J. Prog. in Astr. and Aero., 106 (1986), pp. 359-387. H.M. GLAZ, P. COLELLA, J.P. COLLINS AND R.E.
FERGUSON, Nonequilibrium effects in oblique shock-wave reflection, AIAA J., 26 (1988), pp. 698-705. L.G. GVOZDEVA, T.V. BAZHENOVA, O.A. PREDVODITELEVA AND V.P. FOKEEV, Mach reflection of shock waves
in real gases, Astronautica Acta, 14 (1969), pp. 503-508.
88 [25] [26] [27] [28] [29] [30]
[34] [35] [36] [37] [38] [39] [40]
[41] [42] [43] [44] [45] [46] [47] [48] [49] [50]
L.F. HENDERSON, Regions and boundaries for diffracting shock wave systems, Z. angew. Math. Mech., 67 (1987), pp. 73-86. L.F. HENDERSON AND A. LOZZI, Experiments on transition of Mach reflexion, J.
Fluid Mech., 68 (1975), pp. 139-155. L.F. HENDERSON AND A. LOZZI, Further experiments on transition to Mach reflexion, J. Fluid Mech., 94 (1979), pp. 541-559. R.G. HINDMAN, P. KUTLER, AND D.
ANDERSON, A two-dimensional unsteady Euler-equation solver for flow regions with arbitrary boundaries, AIAA Rep. 79-1465 (1979). H. HORNUNG, Regular and Mach reflection of shock waves, Ann. Rev.
Fluid Mech., 18 (1985), pp.33-58. H.G. HORNUNG AND J.R. TAYLOR, Transition from regular to Mach reflection of shock waves. Part 1. The effect of viscosity in the pseudosteady case, J. Fluid Mech.,
123 (1982), pp. 143-153. T .C.J. Hu AND 1.1. GLASS, Pseudostationary oblique shock-wave reflections in sulphur hexafluoride(SF6}:interferometric and numerical results, Proc. R. Soc. Lond., A408
(1986), pp. 321-344. M. IIMURA, H. MAEKAWA AND H. HONMA, Oblique reflection of weak shock waves in carbon dioxide, in Proceedings of the 1988 National Symposium on Shock Wave Phenomena, K. Takayama,
editor, Tohoku University, Japan, 1989, pp. 1-10. T. IKUI, K. MATSUO, T. AOKI, AND N. KONDOH, Mach reflection of a shock wave from an inclined wall, Memoirs of the Faculty of Engineering, Kyushu
University, 41 (1981), pp. 361-380. V.P. KOROBEINIKOV, ED., Nonstationary interactions of shock and detonation waves in gases, Nauka, Moscow, USSR, 1986. In Russian. P. KUTLER AND V. SHANKAR,
Diffraction of a shock wave by a compression corner:Part I regular reflection, AIAA J., 15 (1977), pp. 197-203. C.K. LAW AND 1.1. GLASS, Diffraction of strong shock waves by a sharp compressive
corner, C.A.S.1. Trans., 4 (1971), pp. 2-12. J.-H. LEE AND 1.1. GLASS, Pseudo-stationary oblique-shock-wave reflections in frozen and equilibrium air, Prog. in Aero. Sciences, 21 (1984), pp. 33-80.
V.N. LYAKHOV, V.V. PODLUBNY, AND V.V. TITARENKO, Influence of Shock Waves and Jets on Elements of Structures, Mashinostroenie, Moscow, 1989. In Russian. A. MAJDA, Compressible Fluid Flow and Systems
of Conservation Laws in Several Space Variables, Springer - Verlag, 1984. K. MATSUO, T. AOKI, N. KONDOH, AND S. NAKANO, An experiment on double Mach reflection of a shock wave using sulfur
hexafluoride, in Proceedings of the 1988 National Symposium on Shock Wave Phenomena, K. Takayama, editor, Tohoku University, Japan (ISSN 0915 - 4884), 1989, pp. 11-20. H. MIRELS, Mach reflection
flowfields associated with strong shocks, AIAA J., 23 (1984), pp. 522-529. D.C. PACK, The reflexion and diffraction of shock waves, J. Fluid Mech., 18 (1964), pp. 549-576. A. SAKURAI, L.F. HENDERSON,
K. TAKAYAMA, AND P. COLELLA, On the von Neumann paradox of weak reflection, Fluid Dynamics Research, 4 (1989), pp. 333-345. G.P. SCHNEYER, Numerical simulation of regular and Mach reflections,
Physics of Fluids, 18 (1975), pp. 1119-1124. A.N. SEMENOV, M.P. SYSHCHIKOVA, AND M.K. BEREZKINA, Experimental investigation of Mach reflection in a shock tube, Soviet Physics - Technical Physics, 15
(1970), pp. 795-803. V. SHANKAR, P. KUTLER, AND D. ANDERSON, Diffraction of a shock wave by a compression corner: Part II - single Mach reflection, AIAA J., 16 (1978), pp. 4-5. M. SHIROUZO AND 1.1.
GLASS, Evaluation of assumptionss and criteria in pseudostationary oblique shock-wave reflections, Proc. R. Soc. Lond., A406 (1986), pp. 75-92. J.T. URBANOWICZ, Pseudo-stationary oblique-shock-wave
reflections in low gamma gasesisobutane and sulphur hexafluoride, UTIAS Tech. Note No. 267,1988. J.T. URBANOWICZ AND 1.1. GLASS, Oblique-shock-wave reflections in low gamma gases sulfurhexafluoride
(SF6) and isobutane [CH(CHa)aj, preprint, 1989. J.M. WHEELER, An interferometric investigation of the regular to Mach reflection transition boundary in pseudostationary flow in air, UTIAS Tech. Note
No. 256,1986.
JAMES GLIMM*H Abstract. The subject of nonlinear hyperbolic waves is surveyed, with an emphasis on the discussion of a number of open problems. Key words. Conservation Laws, Riemann Problems. AMS
(MOS) subject classifications. 76N15, 65M99, 35L65
1. Introduction. The questions which modern applied science asks of the area of nonlinear hyperbolic equations concern an analysis of the equations, a search for effective numerical methods and an
understanding of the solutions. The analysis of equations involves mathematical questions of existence, uniqueness and regularity. It is the special features of nonlinear hyperbolic equations which
give these standard mathematical concerns a broader scientific relevance. The basic conservation laws of physics are hyperbolic, and fit into the discussion here. They do not in general have regular
solutions. An exact classification of the allowed singularities (i.e. the nonlinear waves) is an open problem in the presence of realistic or complex physics, chemistry, etc. and/or higher spatial
dimensions. Existence and uniqueness of solutions is not a consequence of the fact that the equations "come from physics" and thus "must be O.K." The equations come from degenerate simplifications of
physics in which all length scales have been eliminated. Serious mathematical work remains to determine the formulations of these equations which have satisfactory mathematical properties, and thus
provide a suitable starting place for effective numerical computation and for scientific understanding. We next discuss the modification of equations. Simplified versions of complex equations capture
the essential difficulties in a form which can be analyzed conveniently and understood. Equations are also rendered more complex through the inclusion of additional physical phenomena. Such steps are
important for experimental validation and for applications. Often the complication involves additional terms or equations containing a small parameter, and the limit as the parameter tends to zero is
of great interest. This interest in small parameters can be traced back to the physics, where events on very different length and time scales arise in a single problem. Another approach to this
hierarchy of equations, physics, and length and time scales is to analyze asymptotic properties (including intermediate time scales) of the solution as t --> 00. Effective computational measures
address basically the same difficulties and issues, but with different tools and approaches. The presence of widely varying *Department of Applied Mathematics and Statistics, SUNY at Stony Brook,
Stony Brook, NY, 11794-3600. tSupported in part by the Applied Mathematical Sciences Program of the DOE, grant DEFG02-88ER25053 :j:Supported in part by the NSF Grant DMS-8619856
90 length and time scales and in some cases of underspecified physics can degrade the accuracy and the resolution of numerical methods. Three dimensional and especially complex or chaotic solutions
are typically underresolved computationally. The search for effective numerical methods can be broken into two main themes: concerns driven by computer hardware and concerns driven by features of the
solution. Effective numerical methods which are driven by solution features, i.e. by physics, depend on the study of nonlinear waves. Most modern numerical methods for the solution of nonlinear
conservation laws employ Riemann solvers, i.e. the exact or approximate solution of nonlinear wave interactions, as part of the numerical algorithm. Equally important is the use of limiters to avoid
Gibbs phenomena overshoots and oscillations associated with the discrete approximation to discontinuous solutions. Of the many possible issues associated with the analysis of solutions, we focus on
chaos. By chaos, we mean a situation in which the microscopically correct equations are ill posed or ill conditioned (through sensitive dependence on initial conditions) for the time periods of
interest and must be interpreted stochastically. The stochastic interpretation leads to new equations, useful on larger length and time scales. Further background on the topics discussed here can be
found in recent general articles of the author and in references cited there [21-23,26]. 2. Nonlinear Wave Structures. The nonlinear waves which arise in many of the examples of complex physics
(elastic plastic deformation, magneto hydrodynamics, chemically reactive fluids, oil reservoir models, granular material, etc.) are currently being explored. Striking, novel and complex mathematical
phenomena have recently been discovered in these examples, including crossing shocks, bifurcation loci, shock admissibility dependence on viscosity, and inadmissible Lax shocks. The class of solved
Riemann problems continues to increase, as a result of examining these physically motivated examples in detail. The general theories in which these new phenomena are imbedded are to a large extent a
subject for future research. Perhaps the most pressing question in this circumstance is, having found the trees, to discover the forest. Wave interactions are technically related to the subject of
ordinary differential equations in the large. This fact suggests an approach to the construction of general theories for Riemann solutions. We also ask whether these novel wave structures have
counterparts in experimental science. We believe the answer will be positive, and to the extent that this is the case, mathematical theory is ahead of experiment, in making predictions about nature.
The motivating example of three phase flow in oil reservoirs is not a promising place to resolve this question. Three phase flow experiments are difficult, inconclusive and seldom performed. The
equations themselves are not known definitively, and for this reason, the topological argument of Shearer [46] that an umbilic point must occur on topological grounds for any plausible three phase
flow equation is significant. Ting has argued that umbilic points also occur in elastic plastic flow [47]. Gariazar has examined common constitutive laws for common
metals [15] and found that the umbilic point will occur in uniaxial compression, at compressions within the plastic region [16]. It remains to be determined whether these umbilic points are an
artifact of a consititutive law or whether they reflect a true property of nature. In view of the considerable progress which has been made with non strictly hyperbolic conservation laws at the level
of wave interactions and Riemann solutions, it is a very interesting question of examine these same equations from the point of view of the general theory. This means considering general Cauchy data,
not just Riemann (scale invariant) data. We mention two recent results of this type. A general existence theorem for one of the conservation law systems with quadratic flux and an isolated umbilic
point was proved [34] using the method of compensated compactness. At the umbilic point, the entropy functions required by this method have singularities. It was shown that for a restricted subclass
of entropies, the singularity was missing, and that the proof could be completed using this restricted class of entropies. For another system in this class, the stability to perturbations of finite
amplitude of a viscous shock wave was demonstrated [38]. On the basis of these examples and the related work of others, it appears that the general theory of conservation laws will admit extensions
to allow a loss of e.g. strict hyperbolicity or genuine nonlinearity. Since many conservation laws arising in science appear to have such features, such extensions would be of considerable interest.
On the basis of the above discussion, answers to the following questions might help to define the overall structure of nonlinear hyperbolic wave interactions: 1. A complete study of bifurcations. A
classification of generic unfoldings of
the underresolved physics of Riemann problems would be both interesting and useful. A set of bifurcation loci for Riemann problem left states was proved to be full (for left and mid states in the
complementary set, there is no bifurcation resulting from variation of the left state) in [13,14]. Remaining problems concern bifurcation as the conservation law or its viscous terms (used even in
the case of zero viscosity to define admissibility) are varied. Moreover, bifurcation for wave curves passing through the resonant (umbilic) set has yet to be addressed. A classification of the
bifurcation unfoldings which result from left states (or conservation laws, etc.) located on the bifurcation loci has not been given. 2. The local theory of multiple resonant eigenvalues (higher
order umbilic points). For a higher dimensional state space, the resonant (umbilic) set is a manifold with singularities. The local behavior of Riemann solutions near the resonant set will depend on
the dimension and the codimension of the resonant set, and on its local singularity structure as a subset of the state space. Beyond this, there will be some number of "bifurcation parameters" which
partition the Riemann solutions into invariance classes. 3. A topological characterization of removable vs. nonremovable resonance. For what class of problems is a resonance required? Is there an
example where it can be observed experimentally? 4. Asymptotics, large systems with small parameters, rate limiting subsystems
and "stiff" Riemann problems. 5. A resolution of entropy conditions. The physical entropy principle is not only that entropy will increase across a shock, but that the admissible solution will be the
one which maximize3 the rate of entropy production. Entropy is defined in many physical circumstances. For example the equations of two fluid Buckley-Leverett flow in porous media are described by a
single nonconvex conservation law. Entropy, in the sense of thermodynamics can be understood in this context [1], and yields the well known entropy condition of Oleinik in this example. However, to
obtain the Oleinik entropy condition, the above strong form of the physical entropy condition is needed. The equations for three phase flow in porous media inspired much of the recent work on Riemann
solutions, in which it was realized that a number of mathematically motivated entropy conditions were inadequate. Thus it would be worthwhile to return to the first principles of physics and to
formulate a physically based entropy condition for three phase flow and for the quadratic flux Riemann problems. 6. Nonuniqueness and nonexistence of Riemann solutions. Symmetry breaking (non scale
invariant or higher dimensional) solutions for Riemann data. There is enough evidence that this phenomena will occur, but we have neither enough examples nor enough theory to predict under what
circumstances it should be expected. 7. Discretized flux functions. Both the qualitative (wave structure) and the quantitative (convergence rates) aspects of convergence are of interest. How should
flux discretization be performed in order to preserve some specific aspect of the Riemann solution wave structure? 8. Special classes of subsystems containing important examples. E.g. mechanical
systems, with a state space given as a tensor product of configuration space and a momentum space, or more generally as a cotangent bundle over a configuration space manifold. 9. The use of known
Riemann solutions as a test for numerical methods. 3. Relaxation Phenomena. The internal structure of a discontinuity refers to any modification of the equation and the underlying physics which
replaces a discontinuous solution by a continuous one (having a large gradient). The internal structure is of interest partly as a test of admissibility of the discontinuity and partly because of the
more refined level of resolution and physics which is described from this approach. The conservation law is scale invariant, and thus has no length scales in it, while the internal structure
necessarily has at least one length scale (its width) and may have more. For example consider chemically reactive fluid dymanics. With even moderately complex chemistry, there will be multiple
reactions, and reaction zones, each with individual length scales (the width of an individual reaction). In the conservation law, the relative speeds of the interactions are lost, as all times and
lengths have been set to zero. It is in this way that the physics described by the conservation law becomes underspecified. There are two approaches to the internal structure of a discontinuity.
93 new equations are added to enlarge the system or new terms are added to the original system, without a change in the number of dependent variables. There are other and more complex possibilities,
such as the fluid equations giving way to the Boltzmann equation, in which an infinite number of new variables are used and the old variables are not a subsystem of the new, but are only recovered
through an asymptotic limit. Such situations, while they do occur, are outside the scope of the present discussion. Common examples of approximate discontinuities with internal structure are shock
waves, chemical reaction fronts, phase transitions, and plastic shear bands. The internal structure involves concepts from nonequilibrium thermodynamics. The use of higher order terms in the
equations is the simplest and most familiar way to introduce internal structure into a discontinuity. The coefficients (viscosity, heat conduction, reaction rates etc.) in these terms necessarily
have a dimension. The coefficients are known as transport coefficients; they are defined in principle from nonequilibrium thermodynamics. Once the coefficients are known, equilibrium thermodynamics
is used exclusively. The other approach, which in many examples is more fundamental, is to enlarge the system. We regard the nonequilibrium variables and reactions as divided into fast and slow. This
division is relative to the region internal to the discontinuity; even the slow variables could be fast relative to typical fluid processes. Then the fast variables are set to their instantaneous
equilibrium, relative to the specified values of the slow variables. This describes an approximation in which the ratio of the fast to slow time scales becomes infinite. Another description would be
to say that the fast variables are at thermodynamic equilibrium, relative to constraints set by the values taken on by the slow variables. The slow variables are governed by differential equations
derived from nonequilibrium thermodynamics applied to this limiting situation. A typical equation for the slow variable is an ordinary differential equation, i.e. a Lagrangian time derivative set
equal to a reaction or relaxation rate source term. The lower order source terms have a dimension and introduce the length scale which characterizes the internal structure of the discontinuity. The
equations for chemically reacting fluids have exactly this form, and can be regarded as a completely worked out example of the point of view proposed here. A comparison of these two approaches has
been worked out by T.-P. Liu [37], and is summarized in his lectures in this volume. Liu considers the lowest order nonequilibrium contribution to the internal energy of a (diatomic) gas, namely the
vibrational energy in the lowest energy state of the molecule. Thus there are now two contributions to the internal energy, this one vibrational mode and all remaining internal energy contributions.
The vibrational energy has a preferred value as a function of the other thermodynamic variables, namely its equilibrium value. There is also a relaxation rate, defined in principle from statistical
physics, but in practice determined by mearurement, for return of the vibrational energy to this preferred value. The result is an enlarged system, with vibrational energy as the new dependent
variable. Liu's asymptotic analysis, as the relaxation rate goes to infinity, leads to the smaller system, augmented with a higher order viscosity
term, and a computation of the viscosity coefficient in terms of the nonequilibrium relaxation process. For a quantitatively correct description of rarefied gas dynamics, this model is too simple,
and the full chemistry of N 2 , O2 , CO2 , H 2 0, etc., including free radicals, dissociation and partially ionized atoms, is needed. Realistic models of chemistry for rarefied gas dynamics and
internal shock wave structure can involve up to 100 variables. Such systems are typically very stiff, are still approximate, and depend on rate laws which are not known precisely. Liu's analysis
assumes that the original system of conservation laws is strictly hyperbolic and genuinely nonlinear. In a neighborhood of a phase transition and especially along a phase boundary, genuine
nonlinearity typically fails for the conservation laws describing gas dynamics [40J. Presumably dissociation and ionization have similar effects on the convexity of the rarefaction and shock Hugoniot
wave curves, and hence on the structure of the Riemann solutions. The metastable treatment of dissociation assumes that species concentrations are dependent variables, and that their evolution is
governed by rate laws. In the equilibrium description, all reactions have been driven to completion and all concentrations are at equilibrium values. Thus these two descriptions differ in the number
of dependent variables employed. It would be of interest to extend Liu's analysis to a wider range of cases, and to remove the restrictive hypotheses in it. Caginalp [5, 7J considers phase
transitions on the level of the heat equation alone. The simple system describes the Stefan problem, and the augmented system includes a Landau-Ginsberg equation. Caginalp discusses a number of
asymptotic limits of the augmented system [8, 9J, and gives physical interpretations ofthe assumptions on which these limits are based. Anisotropy is important in this context, as it provides the
symmetry breaking which initiates the dendritic growth of fingers [6]. Rabie, Fowles and Fickett [43J replace compressible fluid dyamics by Berger's equation and their augmented system then has two
equations. They examine the wave structure, and compare it to detonation waves, a point of view carried further by Menikoff [41J. Efforts to describe metastable phase transitions by the addition of
higher order terms in the compressible (equilibrium thermodynamics) fluid equations have led to solutions in qualitative disagreement with experiment, as well as with physical principles. It should
be recalled that for common materials and for most of the phase transition parameter space (excepting a region near critical points), the influence of a phase boundary is felt for a distance of only
a few atoms from the phase boundary location. On the length scale of these few atoms, the continuum description of matter does not make a lot of sense. Thus the view, sometimes expressed, that on
philosophical grounds, there should be a gradual transition between the phases, is valid, if at all, only within the context of quantum mechanics and statistical physics. In this case the continuous
variable is the fraction of intermolecular bonds in the lattice, or the quantum mechanical probability for the location of the bonding electrons, etc. Correlation functions for particle density are
studied in this approach. The mathematical structure associated with metastability is further clouded by the occurrence of elliptic regions in some formulations of the equations. According
95 to a linearized analysis, the equations are then unstable, and presumably unphysical. They are at least ill posed in the sense of Hadamard. Detailed mathematical analyses of a Riemann problem with
an elliptic region [31,32,35] did not reveal pathology which would disqualify these equations for use in physical models. The theory in these examples is not complete, and especially the questions of
admissibility of shock waves and the stability of wave curves should be addressed. Examples of computational solutions for equations with elliptic regions are known to have solutions without obvious
pathology as well [4, 18]. There are examples, such as Stone's model for three phase flow in porous media, where the elliptic region appears to have a very small influence on the overall solution. In
most cases, the elliptic regions result from the elimination of some variable in a larger system. In this sense they are not fundamentally correct. Whether they are acceptable as an approximation in
a specific case seems to depend on the details of the situation. A correct theory should predict a number which can be verified by experiment. For conservation laws, the wave speed is a basic
quantity to be predicted. In the case of metastable phase transitions, this task is complicated, for some parameter values, by the occurrence of interface instabilities, which lead to fingers
(dendrites) and which produce a mixed phase mushy zone. The propagation speed of this dynamic mushy zone is not contained in a one dimensional analysis using microscopically correct thermodynamics
and rate laws. In other words, there are no physcially admissible Riemann solutions to the one dimensional conservation laws in such cases. The equations of chemically reactive fluids may also fail
to have physically admissible one dimensional Riemann solutions. For some parameter ranges, the wave front may lose its planar symmetry and become krinkled or become fully three dimensional, through
the interaction with chaotically distributed hot spot reaction centers. Recent progress on this issue has been obtained [39]; older literature can be traced from this reference as well. The question
of complex, or chaotic internal interface structure suggests the following point of view. In such cases, the question of physical admissibility is a modeling question, i.e. a judgement to be made on
the basis of the level of detail desired in the model equations. The admissible solutions for microscopic physics and for macroscopic physics need not be the same. A change in admissibility rules is
really a change in the meaning of the equations, i.e. a change in the equations themselves. This point of view can be taken further, and of course we realize that there is no need for the equations
of microscopic and macroscopic physics to coincide, even when they are both continuum theories. The relation between these two solutions (or equations) is the topic of the next section. Specific
questions posed by the above discussion include: 1. In various physical examples of relaxation phenomena, it would be desirable to determine correct equations, mathematical properties of the
solutions, including the structure of the nonlinear waves, and asymptotic limits giving relations between various distinct descriptions of the phenomena. 2. Which properties of a larger system lead
to elliptic regions in an asymptotically limiting subsystem? 3. Is there a principle, similar to the Maxwell construction, which will replace
96 a system of conservation laws having an elliptic region with a system having an umbilic line, or surface of codimension one in state space? Does this construction depend on additional physical
information, such as the specification of the pairs of states on the opposite sides of the elliptic region joined by tie lines, as in the case of a phase transition? Is there a physical basis for
introducing tie lines in the case of the elliptic region which arises in Stone's model? 4. What is the proper test for dimensional symmetry breaking of a one dimensional Riemann solution? Symmetry
breaking should be added to the admissibility criteria, and when the criteria fails, there would be no admissible (one dimensional) Riemann solution. The same comments apply to the breaking of scale
invariance symmetry. 5. The mathematical theory of elliptic regions needs to be examined more fully, especially to determine the importance of viscous profiles and conditions for uniqueness of
solutions. 4. Surface Instabilities. The nonlinear waves considered in the two previous sections are one dimensional, and in three dimensions, they define surfaces. Sometimes the surfaces are
unstable, and when this occurs, a spatially organized chaos results. Examples are the vortices which result from the roll up (Kelvin-Helmholtz instability) of a slip surface, and the fingers which
result from a number of contexts: the Taylor-Saffman instability in the displacement of fluids of different viscosity in porous media, the Rayleigh-Taylor and Richtmyer- Meshkov instabilities
resulting from the acceleration of an interface between fluids of different densities, the evolution of a metastable phase boundary giving rise to the formation of dendrites and a multiphase
transitional mushy zone between the two pure phases. Instabilities in chemically reactive fronts were referred to in the previous section. Surface instabilities give rise to a chaotic mixing region,
which can be thought of as an internal layer between two distinct phases, fluids, or states of the conservation law. In the case of vortices, the mixing occurs first of all in the momentum equation,
and for this reason is modeled at the simplest level by a diffusion term in this equation. The coefficient of the diffusion term is viscosity, and the required viscosity to model the turbulent mixing
layer is larger than the microscopically defined viscosity; it is called eddy viscosity to distinguish it from the latter. Similarly the simplest model of fingering induced mixing is a diffusion term
in the conservation of mass equation. Again it has a much larger coefficient than the mass diffusion terms of microscopic physics. We call these simple mixing theories the effective diffusion
approximation. In the language of physics, they provide a renormalization, in which bare, or microscopically meaningful parameters are replaced by effective or macroscopically meaningful ones. For
many purposes the effective diffusion approximation does not give a sufficiently accurate description of the mixing layer. The effective diffusion approximation gives a smeared out boundary in
contrast to the often observed sharp boundary to the mixing region. The theories which set the effective diffusion parameter (the eddy viscosity, etc.) are phenomenological and tend to be very
context dependent.
For this reason, the key parameter in this theory is known with assurance only if it has been experimentally determined. Of even greater importance, the effective diffusion approximation contains no
length scales beyond the total width of the mixing region. It represents an approximation in which all mixing occurs at a microscopic scale. The internal structure of the mixing layer is more
complicated. It is less well mixed and somewhat lumpy, as we now explain. The initial distribution of unstable modes (vortices or fingers) on the unstable interface is governed by the theory of the
most unstable mode. The pure conservation laws are unstable on all length scales, with the shortest length scales having the most rapid growth. For this reason, these equations must be modified by
the inclusion of length dependent terms (interface width, surface tension, (microscopic) viscosity, curvature dependent melting points, etc.) which stabilize all but a finite number of wave lengths.
Of the remaining unstable modes, the one with the fastest growth rate is called the most unstable. That mode (or that range of wave lengths) is presumed to provide the initial disturbance to the
interface, in the absence of some explicit initialization containing other length scales. However initialized, the modes grow and interact. There is a significant tendency for merger and growth of
wave lengths. Presumably this is due to our picture of the mixing region as a thin layer or thickened interface, and to the well known tendency in two dimensional turbulence for length scales to
increase. In any case the merger of modes and the growth of length scales produces a dynamic renormalization in the dimensionality of the equation, and a change in the algebraic growth rate of the
interface thickness. The distribution of length scales in the mixing layer can be thought of as a random variable. It is time dependent, and ranges from the minimum size of the most unstable wave
length (or what is typically nearly the same, the smallest unstable wave length) up to a possible maximum value of the current interface thickness. The distribution of length scales is also typically
spatially dependent, and is a function of the distance through the mixing layer. Thus the mixing layer need not be homogeneous, but may contain distinct sublayers, with different statistical
distributions of length scales within each layer. This statistical distribution of spatially and temporally dependent length scales is completely missing in the effective diffusion approximation. We
now specialize the discussion to the Rayleigh-Taylor instability and we consider only one aspect of the spatially dependent length scale distribution, namely the interface width as a function of
time. According to experiment [44J, the interface thickness, or height, h(t), has the form h = agAt2, where 9 is the accelerational force on the interface, t is time, and A = ~+is the Atwood number
characterizing PI P2 the density contrast at the interface, with Pi, i = 1,2 denoting the densities in the two fluids. The first computations of the unstable interface which show quantitative
agreement with the experimental value of a for a time regime up to and beyond bubble merger are reported in [25J. Control of numerical diffusion through a front tracking algorithm appears to be the
essential requirement in obtaining this quantitative agreement with experiment. See also the paper of Zhang in this volume, where the computations are discussed in more detail and also see the
related com-
putations of Zufiria [49], who also obtains agreement with the experimental value of Q!, for a more limited time and parameter range. Because of the sensitivity of the unstable interfaces to
modifications of the physics, the computations are no doubt also sensitive to details in the numerical methods. For this reason it is very desirable to present not only carefully controlled analyses
of each method, but also of carefully controlled comparisons between the methods. The outer edge of the Rayleigh-Taylor mixing region adjacent to the undisturbed heavy fluid is dominated by bubbles;
for this reason we refer to it as the bubble envelope. Now we adapt the language of multi phase flow and consider the transition from bubbly to slug flow. These two bubbly and slug flow regimes have
distinct equations, or constitutive laws, but are both derived in principle from the same underlying physics, namely the Euler or Navier-Stokes equations. In this sense the regimes can be thought of
as phases in the statistical description of the flow in terms of bubbles and droplets. Taking this point of view, the transition between the regimes is a phase transition. The order paramater of this
transition is the void fraction; for small void fraction, bubbly flow is stable and for large void fraction, slug flow is stable. The metastable process for the bubble to slug flow transition to take
place is bubble merger, which is exactly the dominant process at the Rayleigh-Taylor bubble envelope. From this point of view, the role and importance of statistical models for the bubble merger
process [17,24,25,45,50] becomes clear. These models have as their goal to yield rate laws and constitutive relations for the metastable transition regions. In particular they should yield the
internal structure of the Rayleigh-Taylor bubble envelope. Major questions concerning the theory of unstable interfaces and chaotic mixing layers are open. 1. The importance of microscopic length
scales, viscosity, surface tension, in-
terfacial thickness, (mass) diffusion or compressibility, has been an area of active research. However, the exact role of these features in regularizing the equations to the extent that the solutions
are well defined for all time has not been established. Without regularization, the solutions are known or suspected of containing essential singularities in the form of cusps, which appear to
preclude existence beyond a limited time. 2. Does the mixing zone have a constant width or grow as some power ot t? Usually the power of t is known with some level of assurance, but the coefficient
in front of the power and the dimensionless groups of variables it depends upon may not be known, depending on the specific situation. 3. Mode splitting, coupling, merging and stretching are the
important ingredients of the dynamics of mixing layer chaos. Theories for the rates governing these events are needed. 4. Distributions of length scales within the mixing zone are needed. Stable
statistical measures of quantities which are reproducible, both experimentally and computationally are needed. Point measures of solution variables are not useful in the description of chaos, while
statistical correlation functions have proven useful in the study of turbulence, for example.
99 5. Does the idea of fractal dimension, or of a renormalization group fixed point have a value in this context? 6. Chaotic mixing layers are very sensitive to numerical error and difficult to
compute. An analysis of the accuracy of numerical methods for these problems would be very useful. For the same reason, comparison to experiment is important. 5. Stochastic Phenomena. The issues to
be discussed here are similar to those raised in §4. The main difference is that stochastic phenomena do not always concern mixing and whether or not it is concerned with mixing, they do not have to
have to be concentrated in or caused by instabilities of thin layers. To illustrate this point, as well as to introduce the next section, we refer to the problem of determining constitutive relations
and properties of real materials. It is well known that the atomic contribution to material strength will give properties of pure crystals, which are very different from (normal) real materials.
Common materials are not pure crystals, but have defects in their crystal lattice structure, impurities, voids, microfractures and domain walls, each of which can be modeled on a statistical basis,
in terms of a density. Similarly, the heterogeneities in a petroleum reservoir occur on many length scales. Some heterogeneities are not accessible for measurement, and can be infered on a
statistical basis. Others, such as the vertical behavior in the vicinity of a well bore, can be measured at a very fine scale, but it is not practical to use this detail in a computation, so again a
statistical treatment is called for. Weather forecasting data also illustrates the point that the available data may be too fine grained to be usable in a practical sense, and averaged data,
including the statistical variability of averaged data may be a more useful level of description of the problem. 6. Equations of State. The equation of state problem extends beyond the fluid
equilibrium thermodynamic equations of state, to elastic moduli, constitutive relations, yield surfaces, rate laws, reaction rates and other material dependent descriptions of matter needed to
complete the definition of conservation laws. It is the portion of the conservation law which is not specified from the first principles of physics on the basis of conservation of mass, momentum,
etc. The comments of this section apply as well to the transport coefficients, which are the coefficients of the higher order terms which are added to the conservation laws, such as the coefficients
of viscosity, diffusion, thermal conductivity, etc. There are two aspects to this problem. The first is: given the equation of state, to determine its consequences for the solution of the
conservation laws, the nonlinear wave structure, and the numerical algorithms. This problem is the topic of §2. The second problem is to determine the equation of state itself. With the increasing
accuracy of continuum computations, we may be reaching a point where errors in the equation of state could be the dominant factor in limiting the overall validity of a computation. Equations of state
originate in the microphysics of subcontinuum length scales, and their specification draws on subjects such as statistical physics, many body theory and quantum mechanics at a fundamentalleve!.
100 Thus an explanation is needed to justify the inclusion of this question in an article oriented towards a continuum mathematics audience. Although the equations of state originate in the
subcontinuum length scales, for many purposes the problems do not stay there. In many cases, there are important intermediate structures which have a profound influence on the equation of state, and
which are totally continuum in nature. This is exactly the point of the two previous sections, which we are now repeating using different language. Thus, for example, one could use a continuum theory
to study cracks in an elastic body, and then, in the spirit of statistical physics, combine the theories of individual cracks or groups of them in interaction, to give an effective theory for the
strength of a material with a given state of micro-crack formation. In other words, important aspects, and in a number of cases, the most important aspects, of the determination of the equation of
state are problems of continuum science. To further illustrate the point being made, consider the example of petroleum reservoir simulation. Here the relative permeability !'Ind the porosity are
basic material response functions, in the sense of the equation of state as discussed above. Measurements can be made on well core samples, typically about six inches long. This defines the length
scale of the microscopic physics for this problem. (We do not enter into the program of predicting core sample response functions from physics and rock properties at the scale of rock pores, i.e. the
truly microscopic physics of the problem.) The next measurable length scale is the inter-well spacing, about one quarter mile. However, information is needed on intermediate scales by the
computations. On the basis of statistics and geology, one reconstructs plausible patterns of heterogeneity for the intermediate scales. This is then used to correct the measured relative permeability
functions. The modified relative permeability functions are known as pseudo-functions, and they are supposed to contain composite information concerning both the intermediate heterogeneities and the
permeabilities as measured from core samples. This range of questions concerning the scale up of predictions and measurements from the microscopic to the macroscopic levels is of basic importance to
petroleum reservoir engineering and is an area of considerable current activity. 7. Two Dimensional Wave Structures. The wave interaction problem is a scattering problem [20]. The data for a Riemann
problem is by definition scale invariant, and thus defines the origin as a scattering center. At positive times, elementary waves (defined by the intersection of two or more one dimensional wave
fronts) propagate away from the scattering center. The elementary waves are joined by the one dimensional wave fronts which, through their intersections, define these elementary waves. At large
distance from the scattering center, the solution is determined from the solution of one dimensional Riemann problems. Going to reduced variables, 1] = 7' ( = 7, the time derivatives are eliminated
from the equations, and in the new variables, the system is hyperbolic at least for large radii, with the radially inward direction being timelike. It has known Cauchy data (at large radii). However,
in general there are elliptic regions at smaller radii, when the solution is considered in the reduced variables, or partially elliptic regions, where
101 some but not all of the characteristics are complex. Analysis of any but the simplest problems of this type in two dimensions will require the type of functional analysis estimates and
convergence studies which are needed for the analysis of general data in one space dimension. The study of a single elementary wave uses ideas similar to those found in the study of one dimensional
Riemann problems, with the distinction that here the nonlinearities and state space complications tend to be more severe. In this context, the wave curves are known as shock polars, and the analysis
of a single elementary wave involves the intersection of various shock polars, one for each one dimensional wave front belong to the elementary wave. The intersections may be nonunique, or may fail
to exist, indicating that given wave configurations may exist only over limited regions of parameter space, and that the possibility of non-uniqueness is more of a problem for higher dimensional wave
theory than it is in typical one dimensional wave interactions. As in the earlier sections, non-uniqueness, admissibility, entropy conditions and internal structure are closely related topics. Not
very much is known about the internal structure of higher dimensional elementary waves, and so we indicate two approaches which might be fruitful. Characteristic energy methods were developed by Liu
[36] and extended by Chern and Liu [12] to study large time asymptotic limits and to develop the theory of diffusion waves in one space dimension. The proof of convergence of the Navier-Stokes
equation to the Euler equation [30] also uses characteristic energy methods, as well as an analysis of an initial layer, and the evolution of an initial shock wave discontinuity as Navier-Stokes
data. For the purposes of the present discussion, we note that within the initial layer, there are three nonlinear waves, which are geometrically distinct, but still in interaction. The mechanism of
their interaction and time scale for the duration of the initial layer is set by the diffusive (parabolic) transport of information between the distinct waves. These initial layer and characteristic
energy techniques may be useful in two dimensions for the study of internal structure of two dimensional elementary waves. The classic approach to internal structure for a single wave in one
dimension is through the analysis of ODE trajectories which describe the traveling wave in state space. To apply this method to Riemann problems it is necessary to join such curves, each nearly equal
to a trajectory for a single such traveling wave. In the approximation for which each one dimensional wave is exactly a traveling wave or jump discontinuity, the method of intersecting shock polars
gives a geometric construction of the solution. The method of formal matched asymptotic expansions have also proved useful for the study of two dimensional wave interactions. Here the matching is
used to join the distinct one dimensional waves, while the formal asymptotic expansions describe the single waves in the approximation of zero wave strength (the acoustic limit). This method was
recently applied to the study of the kink mode instability in a shear layer discontinuity at high Mach number. The kink mode wave pattern was known from shock polar analysis, see [10,11,33] and from
computations [48]. The expansions showed the instability of the unperturbed shear flow and thus gave a theory of the initiation of this wave configuration from
102 an unperturbed shear flow state [2]. An extension of this analysis concerned the bifurcation diagram of the shock polars [3]. Matched asymptotic expansions have also been used in a rigorous
theoretical analysis for the large time limit in one space dimension [27]. On this basis, we mention asymptotic methods for use in mathematical proofs, in the study of two dimensional wave
interaction problems. The above techniques succeed in joining one dimensional elementary waves in regions where the solution is slowly changing and the waves themselves are widely separated. The
problem we pose has waves meeting at a point, so the juncture occurs where the solution is rapidly changing. For this reason one should not rule out the occurrence of new phenomena. A solved problem
for the interaction of viscous waves is the shock interaction with a viscous boundary layer [42]. This interaction produces a lambda shock, which is a structure which would not be predicted either
from the inviscid theory of a shock interacting with a boundary, or from the viscous theory of a single shock wave. A one dimensional analog problem with similar mathematical difficulties would be to
understand the internal structure (viscous shock layers) associated with the crossing point of two shock waves. The questions discussed in the previous sections will all be important for higher
dimensional wave interactions as well. In addition, we pose a few specific questions. 1. Generalize the classification of [19] for two dimensional elementary waves to
general equations of state, as formulated by [40]. 2. Prove an existence theorem for the oblique reflection of a shock wave by a ramp. The case of regular reflection is easiest and is the proper
starting point. Weak waves can be assumed if this is helpful. A more detailed discussion of this problem, including ideas for the construction of an iteration scheme to prove existence are presented
in [19]. The major interest in this problem derives from unresolved differences between proposed bifurcation criteria or possible nonuniqueness for the overlap region in which both regular reflection
and Mach reflection are possible on the basis of simple shock polar analyses. 3. Determine bifurcation criteria for two dimensional elementary waves. There is a large amount known concerning this
problem. See [29] for background information and a deeper discussion of this area. A recent paper of Grove and Menikoff involves bifurcations in non localized wave interactions arising from
noncentered rarefaction waves [28], an issue which is part of the general bifurcation problem. 4. What is the role of scale invariance symmetry breaking for higher dimensional elementary waves? 5.
The correct function space for a general existence theory for higher dimensional conservation laws depends on the equation of state, because local singularities are allowed, and occur in centered
(cylindrical) waves. The order of the allowed singularity, and the Lp space it belongs to is limited by the equation of state. This relation has not been worked out, and so the existence theory and
large time asymptotics for radial solutions would be of interest.
103 References
1. A. Aavatsmark, To Appear, "Capillary Energy and Entropy Condition for the Buckley-Leverett Equation," Contemporary Mathematics.
2. M. Artola and A. Majda, 1987, "Nonlinear Development of Instabilities in Supersonic Vortex Sheets," Physica D 28, pp. 253-281. 3. M. Artola and A. Majda, 1989, "Nonlinear Kink Modes for Supersonic
Vortex Sheets," Phys. Fluids. 4. J. B. Bell, J. A. Trangenstein, and G. R. Shubin, 1986, "Conservation Laws of Mixed Type Describing Three-Phase Flow in Porous Media," SIAM J. Appl. Math. 46, pp.
1000-1017. 5. G. Caginalp, 1986, "An Analysis of a Phase Field Model of a Free Boundary," Archive for Rational Mechanics and Analysis 92, pp. 205-245. 6. G. Caginalp, 1986, "The Role of Microscopic
Anisotropy in the Macroscopic Behavior of a Phase Field Boundary ," Ann. Phys. 172, pp. 136-146. 7. G. Caginalp, To Appear, Phase Field Models: Some Conjectures on Theorems for their Sharp Interface
8. G. Caginalp, To Appear, Stefan and Hele-Shaw Type Models as Asymptotic Limits of the Phase Field Equations 9. G. Caginalp, To Appear, "The Dynamics of a Conserved Phase Field System: Stephan-like,
Hele-Shawand Cahn-Hilliard Models as Asymptotic Limits," IMA J. Applied Math ..
10. Tung Chang and Ling Hsiao, 1988, The Riemann problem and Interaction of Waves in Gas Dynamics (John Wiley, New York). 11. Guiqiang Chen, 1987, "Overtaking of Shocks of the same kind in the
Isentorpic Steady Supersonic Plane Flow," Acta Math. Sinica 7, pp. 311-327. 12. I-Liang Chern and T.-P. Liu, 1987, "Convergence to Diffusion Waves of Solutions for Viscous Conservation Laws," Comm.
in Math. Phys. 110 , pp. 503-517. 13. F. Furtado, 1989, "Stability of Nonlinear Waves for Conservation Laws," New York University Thesis. 14. F. Furtado, Eli Isaacson, D. Marchesin, and B. Plohr, To
Appear, Stability of Riemann Solutions in the Large 15. X. Garaizar, 1989, "The Small Anisotropy Formulation of Elastic Deformation," Acta Applicandae Mathematica 14, pp. 259-268. 16. X. Garaizar,
1989, Private Communication
104 17. C. Gardner, J. Glimm, O. McBryan, R. Menikoff, D. H. Sharp, and Q. Zhang, 1988, "The Dynamics of Bubble Growth for Rayleigh-Taylor Unstable Interfaces," Phys. of Fluids 31, pp. 447-465. 18.
H. Gilquin, 1989, "Glimm's scheme and conservation laws of mixed type," SIAM Jour. Sci. Stat. Computing 10, pp. 133-153. 19. J. Glimm, C. Klingenberg, O. McBryan, B. Plohr, D. Sharp, and S. Yaniv,
1985, "Front Tracking and Two Dimensional Riemann Problems," Advances in Appl. Math. 6, pp. 259-290. 20. J. Glimm and D. H. Sharp, 1986, "An S Matrix Theory for Classical Nonlinear Physics,"
Foundations of Physics 16, pp. 125-141. 21. J. Glimm and David H. Sharp, 1987, "Numerical Analysis and the Scientific Method," IBM J. Research and Development 31, pp. 169-177. 22. J. Glimm, 1988,
"The Interactions of Nonlinear Hyperbolic Waves," Comm. Pure Appl. Math. 41, pp. 569-590. 23. J. Glimm, Jan 1988, "The Continuous Structure of Discontinuities," in Proceedings of Nice Conference. 24.
J. Glimm and X.L. Li, 1988, "On the Validation of the Sharp- Wheeler Bubble Merger Model from Experimental and Computational Data," Phys. of Fluids 31, pp. 2077-2085. 25. J. Glimm, X. 1. Li, R.
Menikoff, D. H. Sharp, and Q. Zhang, To appear, A Numerical Study of Bubble Interactions in Rayleigh-Taylor Instability for Compressible Fluids 26. J. Glimm, To appear, "Scientific Computing: von
Neumann's vision, today's realities and the promise of the future," in The Legacy of John von Neumann, ed. J. Impagliazzo (Amer. Math. Soc., Providence). 27. J. Goodman and X. Xin, To Appear, Viscous
Limits for Piecewise Smooth Solutions to Systems of Conservation Laws 28. J. W. Grove and R. Menikoff, 1988, "The Anomalous Reflection of a Shock Wave through a Material Interface," in preparation.
29. 1. F. Henderson, 1988, "On the Refraction of Longitudinal Waves in Compressible Media," LLNL Report UCRL-53853. 30. D. Hoff and T.-P. Liu, To Appear, "The Inviscid Limit for the Navier-Stokes
equations of Compressible, Isentropic flow with shock data," Indiana J. Math .. 31. H. Holden, 1987, "On the Riemann Problem for a Prototype of a Mixed Type Conservation Law," Comm. Pure Appl. Math.
40, pp. 229-264.
105 32. H. Holden and 1. Holden, To Appear, "On the Riemann problem for a Prototype of a Mixed Type Conservation Law II," Contemporary Mathematics. 33. Ling Hsiao and Tung Chang, 1980Acta Appl. Math.
Sinica 4, pp. 343-375. 34. P.-T. Kan, 1989, "On the Cauchy Problem of a 2 x 2 System of Nonstrictly Hyperbolic Conservation Laws," NYU Thesis. 35. B. Keyfitz, To Appear, "Criterion for Certain Wave
Structures in Systems that Change Type," Contemporary Mathematics. 36. T.-P. Liu, 1985, "Nonlinear stability of shock waves for viscous conservation laws," Memoir, AMS:328, pp. 1-108. 37. T.-P. Liu,
1987, "Hyperbolic Conservation Laws with Relaxation," Comm Math Phys 108, pp. 153-175. 38. T.-P. Liu and X. Xin, To Appear, Stability of Viscous Shock Wave Asociated with a System of Nonstrictly
Hyperbolic Conservation Laws 39. A. Majda and V. Roytburd, To Appear, "Numerical Study of the Mechanisms for Initiation of Reacting Shock Waves," Siam J. Sci Stat Compo 40. R. Menikoff and B. Plohr,
1989, "Riemann Problem for Fluid Flow of Real Materials," Rev. Mod. Phys. 61, pp. 75-130. 41. R. Menikoff, 1989, Private Communication 42. R. vo~ Mises, 1958, Mathematical Theory of Compressible
Fluid Flow (Academic Press, New York). 43. R. 1. Rabie, G. R. Fowles, and W. Fickett, 1979, "The Polymorphic Detonation," Phys. of Fluids 22, pp. 422-435. 44. K. I. Read, 1984, "Experimental
Investigation of Turbulent Mixing by RayleighTaylor Instability," Physica 12D, pp. 45-48. 45. D. H. Sharp and J. A. Wheeler, 1961, "Late Stage of Rayleigh-Taylor Instability," Institute for Defense
Analyses. 46. M. Shearer, 1987, "Loss of Strict Hyperbolicity in the Buckley-Leverett Equations of Three Phase Flow in a Porous Medium.," in Numerical Simulation in Oil Recovery, ed. M. Wheeler
(Springer Verlag, New York). 47. Z. Tang and T. C. T. Ting, 1987, "Wave Curves for the Riemann Problem of Plane Waves in Simple Isotropic Elastic Solids," Int. J. Eng. Science 25, pp. 1343-1381. 48.
P. Woodward, 1985, "Simulation of the Kelvin-Helmholtz Instability of a Supersonic Slipsuface with a Piecewise Parabolic Method," Proc. INRIA Workshop on Numerical Methods for Euler Equations, p.
106 49. J. A. Zufiria, , "Vortex-in-Cell Simulation of Bubble Competition in RayleighTaylor Instability," Preprint, 1988. 50. J. A. Zufiria, 1988, "Bubble Competition in Rayleigh-Taylor Instability,"
Phys. of Fluids 31, pp. 440-446.
THE GROWTH AND INTERACTION OF BUBBLES IN RAYLEIGH-TAYLOR UNSTABLE INTERFACES JAMES GLIMMa,b,c, XIAO LIN LIc,d, RALPH MENIKOFF e,/, DAVID H. SHARp e,/ AND QIANG ZHANGc,g Abstract. The dynamic behavior
of Rayleigh-Taylor unstable interfaces may be simplified in terms of dynamics of fundamental modes and the interaction between these modes. A dynamic equation is proposed to capture the dominant
behavior of single bubbles and spikes in the linear, free fall and terminal velocity stages. The interaction between bubbles, characterized by the process of bubble merger, is studied by
investigating the motion of the outer envelope of the bubbles. The front tracking method is used for simulating the motion of two compressible fluids of different density under the influence of
gravity. Key words. Bubble, Rayleigh-Taylor Instability, Chaotic Flow. AMS(MOS) subject classifications. 76-04, 76N10, 76T05, 76E30
1. Introduction. The Rayleigh-Taylor instability is a fingering instability between two fluids with different density. Although the system is in equilibrium when the light fluid supports the heavy
fluid by a flat interface with its normal direction parallel to the direction of gravity or external forces, such equilibrium is unstable under the influence of these forces. Any small perturbation
will drive the system out of this unstable equilibrium state. Then an instability develops and bubbles and spikes are formed. A bubble is a portion of the light fluid penetrating into the peavy fluid
and a spike is a portion of the heavy fluid penetrating into the light fluid. At a later stage of the instability, spikes may pinch off to form droplets.
The problem of mixing of two fluids under the influence of gravity was first investigated by Rayleigh [1] and later by Taylor [2]. Since then, various methods have been used to study this classical
problem, such as nonlinear integral equations [3,4], boundary integral techniques [5,6], conformal mapping [7], modeling [8,9], vortex-in-cell methods [10,11], high order Godunov methods [12], front
tracking [13,14,15] etc. Most of this work has been carried out in the limit of incompressible fluids or in the limit of single component systems. (The other component is a vacuum.) For a review of
Rayleigh-Taylor instability and its applications to science and engineering, see reference [16]. We present here the results of our study on the development of single mode Rayleigh-Taylor
instability, i.e. the development of spikes and bubbles, and on the interactions between the bubbles in compressible fluids. 4 Department of Applied Mathematics and Statistics, SUNY at Stony Brook,
Stony Brook, NY, 11794-3600. bSupported in part by the Applied Mathematical Sciences Program of the DOE, grant DEFG02-88ER25053 "Supported in part by the NSF Grant DMS-8619856 dDepartment of Applied
Mathematics, New Jersey Institute of Technology, Newark, NJ 07102 "Theoretical Division, Los Alamos National Laboratory, Los Alamos, NM 87545 'Supported by the U.S. Department of Energy. 9Courant
Institute of Mathematical Sciences, New York University, New York, NY 10012
For two dimensional compressible, inviscid fluids, the motion of the fluids is governed by two dimensional Euler equations,
o(pu 2 + P) ox
opv ot
opuv ox
at +
o[pu(e + PV)] ox
opuv _ 0 oz - ,
o(pv 2 + P) _ 0 oz -,
o[pv(e + PV)] oz = pvg,
where u is the x-component of the velocity, v is the z-component of the velocity, e is the specific total energy, P is pressure, V is specific volume and g is gravity. Here we have assumed that the
gravity points in the positive z direction. Our systems are characterized by two dimensionless quantities, the Atwood number A = a=.i!J.+- and the compressibility M2 = ~ and by the equation of state.
Here ~ ~ ~ Ph is the density of heavy fluid, PI is the density of light fluid, .x is the wavelength of the perturbation and Ch is the speed of sound in heavy fluid. We used the polytropic equation of
state e = (-r-:)P with 'Y = 1.4 in our simulations. A range of density ratios and compressibility were studied. The numerical data on single mode systems were analyzed by using an ODE which models
the entire motion of the bubble or spike. The results on single mode systems provide a basis for the study of the interaction between bubbles of different modes. We observed that, in chaotic flow,
the magnitude of the terminal velocity of a large bubble exceeds the value for the corresponding single mode system due to the interaction between the bubbles. A superposition hypothesis is proposed
to capture the leading order correction to the bubble velocity. Our simulations show agreement between the superposition hypothesis and numerical results. The agreement is better in with high density
ratio and low compressibility than in systems with low density ratio or high compressibility. The cause of such phenomena will be discussed. We use the front tracking method to study the motion of a
periodic array of bubbles and spikes (i.e. single mode system) and to study the interactions between bubbles of different modes. The front tracking method contains a one dimensional moving grid
embedded in the two dimensional computational grid. It preserves sharp discontinuities and provides high resolution around the areas of interest, i.e. nearby and on the interface between two mixing
fluids. 2. Motion of single mode bubbles and spikes. When two fluids are separated by a flat interface with its normal vector parallel to the direction of gravity or external forces, the solution of
the Euler equations is an exponentially stratified distribution of density and pressure along the direction of gravity or external forces. For systems with small deviations from such a flat
interface, the Euler equations can be linearized in terms of the amplitude of the perturbation [14,17]. When Fourier analysis is applied to the perturbation, the Fourier modes do not couple with each
other in the linearized equations. An analytic solution exists for the linearized Euler equations. In our simulation, we use the solution of the linearized Euler equations to initialize our system
and the full Euler equations with front tracking to update the evolution of the system. In this section, we consider the single mode system, which is a periodic array of bubbles and spikes, or
equivalently a single bubble or spike with periodic boundary conditions. The top and bottom of the computational domain are reflecting boundaries. When a bubble or a spike emerges from a small
sinusoidal perturbation on a flat interface, it follows the stages of linear growth, free fall and terminal velocity. For a single component system, the asymptotic behavior of the spike is free fall.
In the linear regime, the dynamics of the system is mainly governed by the linearized Euler equations. The velocity grows exponentially with time. The exponential growth rate 17 is determined by a
transient equation derived from the linearized Euler equations. In the free fall regime, the acceleration reaches a maximum absolute value, which we call the renormalized gravity gR. The velocity
varies linearly with time in the free fall regime. In the terminal velocity regime, the velocity approaches a limiting value (terminal velocity v oo ) with a decay rate b. A comparison of the
numerical results and the asymptotic behavior of a spike in each regime is given in Fig. 1. Here we use a dimensionless acceleration a dimensionless velocity a dimensionless length
and a dimensionless time 1l.!.. Ch
The entire motion of bubble and spike may be described by an ODE dv dt
+ (1 -
I7v(1 - v:')
..lL.) + [~ Voco 9R
+ VIK)2]..lL.(1 _ 1i Voo
..lL.)' VOX)
which has the solution
1 Vt 1 1 1 2 1 1 v00 t-to=-ln(-)+[--(-+-) -](Vt-vo)--ln( 17
Voo -
Each term on the right hand side of the above expression has a clear physical meaning. The first term is the contribution from the linear regime; the second term is that of free fall regime and the
third term is the contribution from the asymptotic regime. Extensive validation of this model has been performed for a range of Atwood numbers A and compressibility M. The A and M dependence of the
parameters l7,gR,b and Voo have been explored in [18]. In Fig. 2 we show an example of the comparison between the results of numerical simulation of the full two dimensional Euler equations and the
results from fitting the solution given above. In Figs. 3 and 4, we show the interface at successive time steps and density and pressure contour plots for systems with A = and M2 = 0.5. For systems
with small Atwood number A, the interface consists of two interpenetrating fluids of similar shape. Secondary instabilities appears along the side of spike. (See Fig. 3.) As A -+ 0, the pattern of
the two fluids will be symmetric with phase difference 1r. For high density ratio systems, the spike is thinner with less roll up shed off the edge of its tip. (See Fig. 4.) For systems of high
compressibility, the velocity of the bubble or spike becomes supersonic relative to the sound speed in the heavy
i, fr
110 VspikelCh
/ / /
I / / / /
/ /
/ / /
O~--------~/--~------------~----L-----~----------~~ gt/Ch o 2.1
Vspike l g
I I I I / / / / / /
\ ~ \ \ \ \ \ \
0 ' : : - - - - - - - ' - - - - - - - - ' - - - - - - - ' - - - - - - 7 " ' gt I Ch
Figure 1. The comparison of the spike velocity and the spike acceleration of the numerical result to its linear and large time asymptotic behavior for D = 2, M2 = .5 and I = 1.4. The solid lines are
the numerical res11lts obtained by using a 80 by 640 grid in a computational domain 1 X 8.
,V~SP~i~ke~/_C~h____. -__________, -__________. -________- .
Numerical result
Fitting result
OL-________~__________~________~________~~ gt/Ch
O~V~b~~b~le~/~Ch~__. -________- .__________. -________- .
Numerical result -.28l,-________--'-__________-'--________- - l ._ _ _ _ _ _ _ _----,~ g tI Ch o 21
Figure 2. Plots of spike velocit.y and bubble belocity versus time are compared with the best three parameter fit to the solution of the ODE superimposed, for the values A = 1/3, M2 = 0.5, 'Y = 1.4.
The numerical results are obtained by using a 80 by 640 grid in a computational domain 1 X 8.
material at the late times, but it remains subsonic relative to the sound speed in the light material. The effects of grid size, the remeshing frequency, the amplitude of perturbation and boundary
effects at the top and bottom of the computational domain have been tested and studied. We refer to reference [14] for the details of these studies.
gt/Ch == 0
gt /Ch == 3.2 gtlC/z == 4.2 gtlCh == 5.3
Figure 3: Plots of the interface position, density and pressure contours for A = 1/5, M2 = 0.5, i = 1.4 in a computation domain 1 x6 with a 40 by 240 grid. Only the upper two thirds of the
computational region is shown in the plot because nothing of interest occures in the remainder of the computation. (a) The interface position for successive time steps. (b) The densi ty contour plot.
(c) The pressure contour plot. From a dimensional argument, the terminal velocity of bubble should be proportional to ..;>:9, i.e. Voo = Cl
is constant of proportionality and it is a function of the dimensionless
parameters A,M and I only. In Fig. 5 we plot Cl for a range of Atwood number A and compressibility M. It shows that Cl has a strong dependence on M and for a given value of M2, the dependence on A is
approximately v'A in systems with low compressibility. Since we used the same value (1.4) for I in all of our simulations, the dependence of cIon I is not explored in this study.
l)\ gtlCh =0
g t 1Ch = 1.1 gtfCh = 1.6 gtlCh =2.6
Figure 4: Plots of the interface position, density and pressure contours for A = 0.01, }o.1 2 = .5, I = 1.4 in a computation domain 1 x 10 with a 20 by 200 grid. Only the upper four fifths of the
computational region is shown in the plot because nothing of interest occurs in the remainder of the computation. (a) The interface position for successive time steps. (b) The density contour plot.
(c) The pressure contour plot.
rC~l______________. -______________. -__--.
M2 =0.5
o M2 =0.2 o M 2 =0
+ +
++ o
D 0.0'>.Jgt/Ch
Figure 7: The plot of bubble velocities vs. time for the two bubble merger simulation. The result shows that the small bubble is accelerated at the beginning and is then decelerated after about gt/Ch
= .4.2. The small bubble is washed downstream after its velocity is reversed. The large bubble is under constant acceleration. The smooth curves represent the bubble motion as predicted by the
superposition hypothesis.
The superposition hypothesis has been compared with the experimental data of Read [19] and with the results of our numerical simulations of the full Euler equations. The relative error between
superposition theory and the results of numerical simulations or experimental data is less than 20% for systems with A > ~ and M2 ::; .1, and about 30% for systems with small density ratio or large
compressibility. In the latter case, the superposition principle is valid only for a finite time interval. This time interval can be understood as resulting from a nonlinearity in the bubble
interaction due to density stratification [15].
Figure 8: The interface evolution of a five bubble simulation. The compressibility in this case is M2 = 0.1 and the Atwood number is A = 1/11. The velocity analysis showed that the superposition
model is applicable to the largest bubble within an error of 15%. In Fig. 6, we show the interface between two fluids at successive times in a two bubble merger process. The comparison of the result
of the superposition hypothesis and the numerical result of Fig. 6 is given in Fig. 7. The behavior at small bubble velocities indicates clearly the contribution from the envelope. Initially, the
mode bubble velocity dominates the total velocity since the envelope has small growth rate due to its long wavelength. When contributions from the single bubble and the envelope have the same
magnitude but opposite signs, the bubble stops accelerating. After that point, the velocity of the envelope dominates the total velocity. Then the small bubble de-accelerates and washed downstream
quickly. Similar behavior shows up in a simulation for a system of five bubbles. (See Figs. 8 and 9.)
ghld .00 r=---=---u----r-->
-.15 -.17
-.20 '--_ _ _--"-_ _ _ _-"--.J(gtl Ch)2 o 1.4 2.8
'--_ _--'-_ _ _.L:...L._ _- - '
gtl Ch
Figure 9: The left plot displays dimensionless bubble heights vs. dimensionless t 2 in a simulation with 5 initial bubbles. The Atwood number in this case is A = 1/11, and the compressibility is M2 =
0.1. The right picture shows the dimensionless velocity vs. dimensionless t in the same case. The superposition model of the bubble velocity is valid up to gt/Ch = 0.9. By dimensional arguments, one
expects that the position of the bubble will be proportional to time t. However, for chaotic flow, the radius of a large bubble will
Figure 10: Plots of interfaces in the random simulation of Rayleigh-Taylor instability. The density ratio is A = 1/3 and the compressibility is M2 = 0.1. The acceleration of the bubble envelope is in
good agreement with the experiment of Read for 1~ generations of bubble Illerger. The acceleration decreases after this time due to the multiphase connectivity, which is different in the exactly two
dimensional computation and the approximately two dimensional experiments.
increase due to interactions between the bubbles. Consequently, the terminal velocity of the large bubble increases. By taking this into account, one can show that the position of the bubble is
proportional to t 2 , i.e. z = aAgt 2 • Read reported a range of values for a in his experiments [19], with a = .06 being a fairly typical value. Values of a in the range of 0.04 ~ 0.05 and 0.05 ~
0.06 were reported respectively
120 by Young [12] and by Zufuria [10] on the basis of their numerical simulations. In our study, we found that 0: is not a constant. 0: fall in the range 0.055 '" 0.065 at early stages of interaction
and in ranges 0.038 '" 0.044 at late stage of the simulations [15]. In Fig. 9, the slope of the large bubble curves corresponds to the value of 0:. We observe that the reduction of 0: from about .06
to about .04 is due to the multi-connectivity of the interface in the deep chaotic regime. In Young's numerical simulations, the interface between two fluids was not tracked [12]. Therefore effective
multi-connectivity occurred during early stages of his simulations. We propose this as a possible explanation for the small values of 0: which be observed. The discrepancy between the value of Q
observed at late times in our numerical simulations and the value observed in experiments [19] results from the difference between exact two dimensional numerical simulations and an approximately two
dimensional experiment. For example, the ratio of thickness to width is 1:6 in Read's experiments [19]. The computationally isolated segments of fluids in the x - Z plane may be connected in the
third dimension (y direction) in experiments. Such discrepancies may be resolved in three dimensional calculations which will provide a more realistic approximation to the experimental conditions.
The interface configuration of a random system at the initial and final times of simulation is shown in Fig. 10. We see that the small structures (bubbles) merge into large structures. Due to the
exponential stratification of the density distribution of the unperturbed fluid, the effective Atwood number decreases as the bubble moves into the heavy fluid. The reduction of Atwood number results
in a non-monotonicity of the bubble velocity. A turnover of bubble velocity is observed in our numerical simulation. Since such turnover phenomena have not been taken into account in the single mode
theory, the superposition theory is not applicable when the effective Atwood number has been reduced substantially. To get a better understanding of the phenomenon of velocity turnover in a single
mode system and the failure of the superposition hypothesis in a umlti-mode system, we use the initial density distribution of light and heavy fluid to approximate the dynamic effective Atwood number
Aeffective' For a flat interface, the density distribution is
Pi(Z) = Pi(O)Cl'p(Y;Z), i = l,h. c·
When a bubble reaches the position z, we approximate the effective Atwood number as
Ph(Z) - PI(Z) Ph(Z)+PI(Z)
+ A)exp(-yM2-Yxf) -
(1- A)
= (I+A)exp(-yM21~AAf)+(I-A)'
For a single mode system, the turnover phenomenon should occur before the effective Atwood number Aeffective vanishes. For a multi-mode system, the superposition theory is applicable as long as Aef
fcclive ~ A = Aef fective(Z = 0). In fig. 11, we plot the approximate effective Atwood number VB. f. Since Aef fective decreases more rapidly in a system of small density ratio or large
compressibility, the superposition theory fails at small value of f in these systems.
Figure 11: The plot of approximate effective Atwood number as bubble reaches position z. Aeffective decreases more rapidly in the system with small initial Atwood number or large compressibility than
in the system with large Atwood number and small compressibility. The decreasing of the effective Atwood number is the source the turnover phenomenon in single mode system and the failure of
superposition theory in multi-mode system. One should not confuse the turnover of the bubble velocity in a single mode system with the turnover of the velocity of the small bubble in the multi-mode
system. The former is due the stratified density distribution and latter is due to the interactions between bubbles, i.e. the contribution of the envelope velocity to the total velocity of the small
4. Acknowledgement. We would like to thank the Institute of Mathematics and its Applications for providing us on a CRAY-2 for portions of our study of the single mode problem.
122 REFERENCES
[3] [4] [5]
[6] [7] [8] [9] [10]
[11] [12] [13] [14] [15] [16] [17] [18] [19]
LORD RAYLEIGH, Investigation of the Character of the Equilibrium of An Incompressible Heavy Fluid of Variable Density, Scientific Papers, Vol. II (Cambridge Univ. Press, Cambridge, England, 1900),
p200. G. I. TAYLOR, The instability of liquid surfaces when accelerated in a direction perpendicular to their plan'es. I, Proc. R. Soc. London Ser. A 201,192 (1950). G. BIRKHOFF AND D. CARTER, Rising
Plane Bubbles, J. Math. Mech. 6, 769 (1957). P. R. GARABEDIAN, On steady-state bubbles generated by Taylor instability, Proc. R. Soc. London A 241, 423 (1957). G.R.BAKER, D.I.MEIRON AND S.A.OIlSZAG,
Vortex simulation of the Rayleigh-Taylor instability, Phys. Fluids 23, 1485 (1980). D.I.MEIRON AND S.A.ORSZAG, Nonlinear Effects of Multifrequency Hydrodynamic Instabilities on Ablatively Accelerated
thin Shel1s, Phys. Fluids 25, 1653 (1982). R.MENIKOFF AND C.ZEMARK, Rayleigh-Taylor Instability and the Use of Conformal Maps for Ideal Fluid Flow, J. Comput. Phys. 51, 28 (1983). D.H.SHARP AND
J.A.WHEELER, Late Stage of Rayleigh-Taylor Instability", Institute for Defense Analyses, (1961). J.GLIMM AND X.L.LI, Validation of the Sharp-Wheeler Bubble Merge Model from Experimental and
Computational Data, Phys. of Fluids 31, 2077 (1988). JUAN ZUFIRIA, Vortex-in-Cel1 simulat,ion of Bubble competition in Rayleigh-Taylor instability, Phys. Fluids 31, 440 (1988). G. TRYGGVASON,
Numerical Simulation of The Rayleigh-Taylor Instability, Journal of Computational Physics 75, 253 (1988). D. L. YOUNGS, Numerical Simulation of Turbulent Mixing by Rayleigh-Taylor Instability,
Physica D 12, 32 (1984). J.GLIMM, O.McBRYAN, R.MENIKOFF AND D.ILSHARP, Front Tracking applied to RayleighTaylor Instability, SIAM J. Sci. Stat. Comput. 7, 177 (1987). C.L.GARDNER, J.GLIMM,
O.McBIlYAN, R.MENIKOFF, D.H.SHARP AND Q.ZHANG, The dynamics of bubble growth for Rayleigh- 'll.ylor unstable interfaces, Phys. Fluids 31,447 (1988). J .GLIMM, R.MENIKOFF, X.L.LI, D.II .SnAIlP AND Q.
ZHANG, A Numerical Study of Bubble Interactions in Rayleigh-Taylor Instability for Compressible Fluids, to appear. D.H.SHARP, An Overview of Rayleigh-Taylor Instability, Physica D 12, 32 (1984).
I.B.BERNSTEIN AND D.L.BoOK, Effect of compressibility on the Rayleigh-Taylor instability, Phys. Fluids 26, 453 (1983). QIANG ZHANG, A Model for the Motion of Spike and Bubble, to appear. K.I.READ,
Experimental Investigation of Turbulent Mixing by Rayleigh-Taylor, Physica D 12,45 (1984).
FRONT TRACKING, OIL RESERVOIRS, ENGINEERING SCALE PROBLEMS AND MASS CONSERVATION JAMES GLIMM,* BRENT LINDQUISTt AND QIANG ZHANGt Abstract. A critical analysis is given of the mechanisms for mass
conservation loss for the front tracking algorithm of the authors and co-workers in the context of two phase incompressible flow in porous media. We describe the resolution to some of the
non-conservative aspects of the method, and suggest methods for dealing with the remainder. Key words. front tracking, mass conservation AMS(MOS) subject classifications. 76T05, 65M99, 35L65
1. Introduction. Two phase, incompressible flow in porous media is described by a set of PDEs consisting of a subsystem of hyperbolic equations, which describe conservation of the fluid components
that thermodynamically combine into the two distinct flowing phases, coupled to a subsystem of equations of elliptic type. The parametric functions in these equations describe the physical properties
of the reservoir (petrophysical data) and the physical/thermodynamic properties of the flowing fluids (petrofluid data). Engineering scale problems involve the use of tabulated petrophysical and
petrofluid data applicable to real reservoir fields. Such data includes discontinuous rock properties in addition to smooth variations.
We adopt an IMPES type solution method for this set of equations; namely the two subsystems are treated as parametrically coupled, and each subsystem is solved in sequence using highly adapted
methods. For the hyperbolic subsystem we use the front tracking algorithm of the authors and co-workers; for the elliptic subsystem we use finite elements. In the original form of the method the
solution conserves mass only in the limit of arbitrarily small numerical discretization. We have performed a critical analysis to understand the mechanisms of conservation loss and present here a
brief discussion of our conclusions as well as corrections that have been or are in the process of being implemented. Our goal is a front tracking method for flow in porous media that is conservative
on all length scales of numerical discretization. *Department of Applied Mathematics and Statistics, SUNY at Stony Brook, Stony Brook, NY, 11794-3600. Supported in part by the Army Research
Organization, grant DAAL03-89-K0017; the National Science Foundation, grant DMS-8619856; and the Applied Mathematical Sciences subprogram, U.S. Department of Energy, contract No. DE-FG02-88ER25053.
tDepartment of Applied Mathematics and Statistics, SUNY at Stony Brook, Stony Brook, NY, 11794-3600. Supported in part by the Applied Mathematical Sciences subprogram, U.S. Department of Energy,
contract No. DE-FG02-88ER25053. tCourant Institute of Mathematical Sciences, New York University, 251 Mercer St., New York, NY, 10012. Supported in part by the Applied Mathematical Sciences
subprogram, U.S. Department of Energy, contract No. DE-FG02-88ER25053.
124 1.1 The system of equations. Consider as example the flow of two immiscible, incompressible fluid phases. The system of equations which describe the flow of these two phases in a porous medium
a(x)(x) ~:
+ \7. a(x)V(x)f(s) = \7. a(x)ii(x)
= 0,
= -A(s,x)K(x)· \7P'
Equation (1.1a) is a single equation representing the conservation of volume of the two incompressible phases, phase 1 occupying fraction s of the available pore volume in the rock and phase 2
occupying 1 - s of the available pore volume. In general a fraction Se of phase 1, and a fraction Sr of phase 2 are inextricably bound to the rock, therefore s varies between Se and 1- Sr. A region
of the reservoir in which s is constant and equal to one of its limiting values is a region of single phase flow. In a region in which s varies smoothly, and lies strictly within its bounding limits,
both phases are flowing. The discontinuity waves that occur in the solution of (1.1a) describe discontinuous transitions in s. The first of equations (LIb) expresses the incompressibility of the
flowing fluids; the second, Darcy's law, relates the total fluid velocity ii to the gradient of the pressure field P in the reservoir. In (1.1a), f(s)ii(x) is the fraction of the total fluid velocity
carried by phase 1. For simplicity of presentation, we neglect gravitational terms in (1.1) though they are included in the analysis. For simplicity, we also neglect point source and sink terms which
describe point injection and production sites (wells) for the fluid phases, and appear, especially in two dimensional calculations, on the left hand sides of (lola) and the first of (LIb). The
effects of other neglected terms such as surface tension, chemical reactions, compressibility, etc. present in more complex flows are not included in our analysis. The other parameters in (1.1)
specify the petrophysical data (PPD) and petrofluid data of the reservoir:
a(x) is a geometrical factor accounting for volume affects not specifically accounted for by the independent spatial variables in (1.1);
1) a( x) is the cross-sectional area of a 1 dimensional reservoir. 2) a( x) is the thickness in the third dimension for a two dimensional reservOIr.
3) a( x) = 1 for a fully three dimensional calculation. ( x) is the porosity (volume fraction of pore space) of the rock medium.
K( x) is the rock permeability tensor describing the connectedness of the geometrical pathways through the rock pores.
A( s, x) is the total relative fluid transmissibility describing how the presence of phase 1 affects the flow of phase 2 and vice-versa. It has explicit x dependence as its functional form may differ
according to the local rock type. In an engineering scale problem, this data is usually specified in tabulated form and may contain information on the location of sharp transitions across faults,
layer structures, and barrier regions.
125 1.2 Conservation form for hyperbolic equations. Consider the system of conservation laws
+ F(s)x = O.
The conservative formulation for a finite difference scheme is based on the integral (weak) formulation of (1.2)
lxi+l/2 xi-l/2
over a numerical grid block centered at
dx(st Xi,
+ F(s)x) =
and can be expressed in the form
where S is the volume integrated mass in the mesh block centered at Xi. The numerical flux G, defined over a stencil of p + q + 2 grid blocks, must satisfy G(S, ... ,S)
= F(S),
and, more trivially, but important for our considerations, (1.5) 1.3 Conservation form for elliptic equations. DEFINITION. A solution if of the elliptic system (l.lb) is conservative with respect to
a grid oflines G, if it satisfies if· ftd£ = 0 for every closed path consisting of lines of G.
n if it
A solution if of the elliptic system (l.lb) is conservative in a region
i if· ftd£ = 0 for every closed path in n.
1.4 The solution method and mass conservation. The front tracking scheme of the authors and co-workers [7,8] for solving (1.1a) consists of a conservative scheme of the form (1.4) defined on a
regular rectangular, two dimensional grid G H, in conjunction with moving one dimensional grids (curves) to track the evolving discontinuity surfaces. The propagation of the tracked curves (the front
solution) is achieved via spatially local Riemann problem solutions in the direction normal to the curve. The method also takes into account flow of fluid tangential to the discontinuity curves. The
solution away from the tracked curves (the interior solution) is obtained using the grid G H with an upwind scheme of the form (1.4). These front and interior schemes are coupled, the tracked curves
providing 'boundary values' for the interior solution and the Riemann problem data taking into account interior solution values.
The elliptic system (LIb) is solved by combining the two equations into a single elliptic equation for the pressure field P, and solving by finite elements [7,8,11]. The
finite element mesh G E is a mixture of rectangles and triangles, whose edges match all discontinuity surfaces in the solution and in the PPD. The finite elements are standard; tensor product
Lagrangian basis functions on the rectangles, and triangle basis elements. The velocity field is obtained by analytic differentiation of the basis function representation of the pressure field. Five
items have been identified as responsible for loss of mass conservation of the method. These items are 1) the discretization of the medium properties, 2) the physical limits for the solution variable
sc:S s :S 1 -
3) the implementation of the conservation form (1.4) near physical discontinuities in the medium properties (faults, layers and barriers), 4) the conservative properties of the velocity field V, 5)
the tracking of moving fluid discontinuities. As these items interconnect, it is difficult to state precisely their relative order of importance; in our test example the first three issues are more
crucial for mass conservation achievement than the last two. In the remainder of this paper we consider each of these five items separately. 2. Discretization of the medium properties. It is
important in the front tracking method, for both mass conservation and resolution of the phase discontinuity behavior, that the PPD be represented in a smooth (CO) fashion away from faults rather
that in a piecewise constant (block centered) manner. Consider the first order Enquist-Osher scheme for (1.4). It has the form (in one space dimension)
The unspecified arguments (.) in (2.1) include the explicit x dependence of F required for (1.1a),
F(S,x) = a(x)v(x)f(S). Using block centered PPD, a logical choice might be
Fi-1(') = F(Si-l,Xi-d, Fi(·) = F(Si,Xi). However, with cell centering, the requirement (1.5) of cell boundary continuity on the numerical fluxes implies that, for intervals (Si-1, Si) over which aF
jaS changes sign, the evaluation of the last term in (2.1) be done as [S(X'_l/') } S'_l
laF(S(X),x)ldS+ [s,
} S(X'_l/')
This in turn requires a map of the interval (Xi-1,Xi) onto (Si-1,Si) due to the discontinuity in the PPD at xi-1/2' While it is possible to devise ways to achieve this, such a choice is equivalent to
an ad hoc smoothing of the data.
127 Further, the use of block centered PPD requires the specification of greater amounts of data in order to perform calculations on refined meshes. While the specification of such additional data
can be automated, this again results in an ad hoc smoothing method that is intimately coupled to the grid used in the solution of (lola). In addition, as not all PPD remains smooth as the numerical
discretization lengths t.x --+ 0; these physical discontinuities are an inherent aspect of the PPD data that must be discretized accurately on all length scales. The use of block centered PPD can
also play a role in introducing spurious instabilities for front tracking algorithms. In Figure 1 we show a tracked curve passing through two mesh blocks. If point PIon the tracked curve is
propagated using the PPD from mesh block B I , and P2 is propagated using the PPD from B 2, a kink will develop in the front, which under physically unstable flow regimes will grow.
Figure 1 A jump discontinuity in the PPD, as in the specification of piecewise constant data, between mesh blocks BI and B2 can result in numerically based discontinuous propagation behavior of the
two points PI and P2 on the tracked curve. It is relatively easy to provide a representation of the PPD that is continuous in the appropriate regions, and resolves the discontinuous structure as
well. We illustrate an automatic discretization method which achieves this. The method has the additional feature that it discretizes the PPD on a grid that is independent of the grids G H and G E.
This allows a representation of the PPD that can be held fixed while mesh refinement studies are done for the solution methods used on (1.1a) and (LIb). While the idea behind this discretization
method is not new [12,14}, we reiterate that smooth representation of the PPD is necessary for use in conjunction with the front tracking method.
This discretization method is illustrated in Figure 2. Figure 2a depicts the two dimensional areal plan of an inclined reservoir bed. The reservoir contains two fault lines F 1 and F 2. The two
dimensional slice follows the local inclination of the middle
Y=Y2 Y=Y2
Y=Y3 (2b)
Figure 2 a) An areal view (x vs y) of a reservoir field having two fault lines F 1 and F 2' b) Three vertical plans (x vs z) through the reservoir field. c) A demonstration of the placement (dark
circles) of given field petrophysical data. d) An enlargement of a region in c) showing a possible choice of points where additional field data is required in order to fit the fault and boundary
~~ ~ ~ ~ ~ ~ \/ ~ /
I \/ ~ II ~V II ~v '-II I\) ~ II -;; r\
V~ ~K/ IIIDk1
II IDk1 II 16k1 16 II k1 1/
~ ~ ~ ~ ~I ~ ~ ~
I II II II II II II II II II II
Figure 2e A tessellation of the geophysical structure of the reservoir to produce Co continuity of the petrophysical data.
thickness (reconstructed)
Figure 3 Comparison of algorithmically smoothed reservoir thickness data with field measured data. The numerical smoothing algorithm used field data defined on a rectangular 9 by 12 grid and
knowledge of the fault locations to produce a smooth approximation to the reservoir thickness.
of the reservoir bed; three vertical plans of the bed are depicted in Figure 2b. PPD was specified from field readings at the corners (black points in Figure 2c) of a rectangular grid GR. To obtain
the required representation of this data, additional PPD is required along each side of the fault lines, and along the boundaries of the computational domain. The unshaded points in Figure 2d (a
close up of a small area of Figure 2c) demonstrate one possible placement for specification of this additional data. A tessellation T of the grid G R into a mesh of rectangles and triangles is
achieved by triangulating those rectangles of G R that are cut by faults, or lie next to the computational boundary, in such a manner that - the faults are coincident with triangle sides, - triangle
nodes lie either at the corners of G R, on the fault lines, or on the boundaries. Such a tessellation is shown in Figure 2e. CO smoothness of the PPD away from the discontinuities is then achieved by
employing, for example, linear (bilinear) interpolation on the triangles (rectangles) of T. The efficacy of such a discretization is demonstrated in Figure 3. Figure 3b shows initially specified
contours of a( x ) for the reservoir under discussion in Figure 2. PPD were specified on the corners of the rectangular 9 by 12 grid G R of Figure 2c. Data on each side of the fault lines and at the
boundaries of the reservoir were obtained by constant extrapolation from the closest point of GR. The resultant piecewise continuous discretization of the data is shown as a contour plot in Figure
3a. In spite of the coarseness of the grid G R, the resultant piecewise continuous discretization of the PPD on T agrees extremely well with the measured data in the large area A3 and in A 2 . The
representation in the small triangular region Al is not as good. However, in this particular calculation, the active region of the reservoir was constrained (by specification of the rock permeability
values) to lie only in A3, so no effort was made to improve the representation of the data in AI. While T is useful for providing interpolation of the PPD, it is inappropriate to compute numerical
derivatives of this data directly from the linear/bilinear representation it provides (which would result in piecewise constant/linear derivatives). This is due to the extreme aspect ratios that may
develop for some triangles. Rather, derivatives such as those required to compute the local gravitational strength (neglected in (1.1)), can be achieved by usual finite differences. Figure 4
illustrates this. The gradient d5/ dx of some petrophysical quantity 5 at the point PO can be obtained by central differencing over a distance of 2h (Figure 4a). The values of 5 at P_I and PI are
obtainable by interpolation on the tessellation T. This finite difference scheme must be modified near faults and boundaries in an appropriate one-sided manner as illustrated in Figures 4b,c and d.
In Figure 4b, the centered difference at Po is based on an irregular stencil of length hI + h 2 . If Po lies exactly on a horizontal fault as in Figure 4c, two derivatives are required, one on each
side of the fault. If the fault kinks, it may be necessary to resort to a difference based on a triangle for one of the sides of the fault, as illustrated in Figure 4d.
3. Physical solution limits. For incompressible flow the fluid volume fraction s, and hence the numerical integrated volume fraction S, are bounded above and
. ....0 - - - - -..... ....O------t.~ • P-I Po PI
(4b) (4a)
• •
• •
Po (4c)
• P-I
Figure 4 A centered finite difference stencil a) used to compute derivatives (here we illustrate d/dx) of the petrophysical data must be modified in appropriate one-sided ways b) c) and d) in the
vicinity of discontinuities and the reservoir boundary. below, Furthermore, functional evaluations may be defined only for s (S) lying within this bounded domain. The numerical scheme (1.4), or its
two dimensional extension to irregularly shaped domains, while conservative, provides no guarantee that the numerical solution will remain within these bounds. In fact (1.4) only guarantees
conservation if the numerical flux function G is definable for every numerical value of S generated. In practice only a finite extension of the domain [Se, 1 - Srl is required. Given petrophysical
and petrofluid data based on tabulated experimental data, even limited extension is usually impractical. One is then forced to truncate the solution whenever it reaches its limiting values. In order
to maintain mass conservation, this truncated mass must be reintroduced into the solution in a physically realistic manner. We discuss a fully two-dimensional, unsplit version of (1.4) which includes
the reintroduction of truncated mass. (Directionally split schemes are less preferred as truncated mass must be stored each directional sweep, and the excess masses reintroduced after the final
.. ;
.•.. f , . . - -
..- ....•
.............. .
. ..............
Figure 5 Schematic apportioning of truncated mass stream (indicated by flux direction F) mesh blocks.
into down-
Consider a rectangular mesh block ij removed from faults or moving fronts. Let Sij+l denote the solution obtained for ij by applying (1.4) in an unsplit, two dimensional form,
sn+l = .)
s-n+l ij < SC
s-n+l ij > 1 - Sr
S c,
1 - S T>
S!,+l , )
otherwise .
Let tij == Sij+l - Sij+l represent the clipped mass for block ij that must be restored to the solution. For tij > 0 « 0), we apportion the clipped mass into appropriate downstream (upstream) blocks
in proportion to the carrying capacities of these blocks. This is indicated schematically in Figure 5. If no appropriate downstream (upstream) blocks are available, or they have insufficient carrying
capacity, any unallocated clipped mass is accumulated until it can be distributed. For mesh blocks cut by faults, a clipped mass is calculated for each polygonal area into which the mesh block is cut
by the fault. Distribution of the clipped mass again takes place, but now into the appropriate polygon areas. The algorithm consists of two passes over the mesh. On the first pass the tij are stored;
on the second the clipped mass is distributed. If the number of mesh blocks having t of 0 is dense in the mesh, (i.e. on the average, any given mesh block lies downstream from several mesh block
containing excess mass) the apportioning of the tij becomes a constrained optimization problem amongst coupled mesh blocks. If
134 however, the number of mesh blocks in which Eij # 0 is sparse, such that downstream mesh blocks receiving mass are in one-to-one correspondence with mesh blocks having excess mass, the
distribution of this mass can be done by a direct sweep through the mesh treating one mesh block at a time. Based on our early experience we expect the number of mesh blocks containing truncated mass
to be relatively sparse; therefore we have implemented the latter, simpler scheme for distributing the truncated mass. 4. The conservative form in the vicinity of faults. With the PPD smoothly
discretized on a fixed grid, we now turn to the solution of (l.la) in the vicinity of faults. A common implementation of (3.1) in the vicinity of faults is to stair-case the faults to conform to the
grid G H, the stair-casing becoming finer as GH is refined. However this choice leads to spurious bending of the tracked discontinuity surfaces and the potential for spurious fingering in unstable
flow regimes. This is illustrated in Figure 6 where a tracked interface traveling obliquely to a stair-cased fault encounters a series of corners. The front movement around these corners results in
Fault line
Figure 6 Bending of a tracked propagating discontinuity wave D(t), (ti > tj for i > j), traveling obliquely to a fault line represented in a stair-case fashion. One is then constrained to exact
representation of the faults (as in Figure 7) which is achieved in the front tracking scheme by representing them as tracked, unmoving waves. The conservative scheme (3.1) must be modified to handle
those mesh blocks of G H cut by such faults. An appropriately volume averaged solution value must be stored for each of the separate regions produced (Figure 7). (3.1) is modified in the obvious way
as a sum of fluxes flowing normally through each of the sides of the irregularly shaped polygons thus formed. The problem one is now forced to solve is the restriction due to the CFL condition which
reduces the allowed maximum timestep by the ratio of the smallest area of all such polygons
Figure 7 Modification of grid and solution representation required for exact representation of fault lines. formed to that of the regular rectangles on G H, min {Apolgon} Aregular rectangle Several
authors (Leveque [9,10], Chern and Colella [5], Berger and Leveque [1]) have treated this problem for the Euler equations of compressible fluid dynamics in order to overcome the CFL restrictions. It
is a common feature of such conservative interface methods to allow excess entropy production, resulting in shock wall heating, slip line heating, or fluid mixing and entrainment in compressible
fluid flow. The approach we take here is similar to that of Chern and Colella, in that conservation is restored by placement of mass in adjacent cells when CFL limits are encountered. See §3. 5.
Conservative properties of v. As the subsystem (l.la) contains the non-hyperbolic variable V, it is necessary that this velocity field be conservative with respect to the grid G H to ensure that an
algorithm of the form (3.1) remains conservative when applied to (l.la). However, in order to avoid an undesirable coupling between the grids G Hand G E, it is then desirable that be conservative in
the complete computational domain, i.e. must be divergence free everywhere.
The finite element method currently in use for front tracking calculations [7,8,11] does not have this conservative property. The velocity field it produces is, in general, not conservative in the
region of computation, and can have spurious source/sink regions especially near corners and boundaries. Raviart and Thomas [13] have developed a mixed finite element method for solving
v = j,
Figure 8 Schematic illustration of issues to be dealt with in deriving a conservative formulation for propagating tracked waves. Solutions for v and u are developed in two separate spaces, Vi. and
Uh, of polynomial elements. Through judicious choice of the properties of the basis functions in these two spaces the numerical solution v solves (6.1a) exactly. Chavant, Jaffre et. al. [2,3,4] and
Douglas, Ewing and Wheeler [6] have adapted this mixed finite element approach to two phase incompressible flow in two dimensions. This body of work is characterized by also solving (l.la) by a
Galerkin procedure. We are in the process of implementing this mixed finite element method for the solution of (l.lb) in combination with front tracking for the solution of (l.la).
6. Tracking of moving fronts. The tracking curves are propagated [7,8] in a non-conservative manner. As mentioned in §1.4, these one dimensional grids are composed of piecewise linear bonds. The
movement of the tracked curve is achieved by propagating the end points of each bond via information from Riemann solutions. Figure 8a illustrates the propagation of a bond of length ~£. The movement
of the bond's two end points results in the movement of an amount of mass along the
137 entire bond. This method of propagating the front will conserve mass only in the limit M --,t O. We are in the process of investigating different approaches for achieving a conservative front
propagation algorithm. The most likely approach would be one consistent with the integral formulation (1.3). One such bond oriented version is indicated by the integration path (dashed line) shown in
Figure 8b. However, the front propagation is not usually as straightforward as suggested by Figure 8b. One possible complication is depicted in Figure 8c. Further complications exist at points where
two or more tracked fronts join (Figure 8d), or when separate tracked curves interact. In addition the details of the coupling of a cO,nservative approach for the front to the method (3.1) used in
obtaining the solution away from the front remain to be worked out. Preferably, any proposed conservative scheme for the fronts should be extendible to systems and compressible flow.
7. Example calculations. Figure 9 shows the results of a calculation for the areal plan of the reservoir field shown in Figure 2. The PPD (top of formation, Q, 1/>, rock permeability K) were
discretized according to §2, based upon a 9 by 12 rectangular grid GR. Local gravitational strengths were calculated using finite differences as discussed in §2. The hyperbolic equation (lola) was
solved using front tracking. The faults F 1 and F 2 were represented as tracked, unmoving waves. Water was injected at constant rate into well 11, and fluid pumped at constant rates from wells PI and
P2. The interface between the resultant two phase, water swept region and the single phase, undisturbed oil region was tracked. The solution in the region away from the tracked discontinuities was
calculated using the first order Enquist-Osher scheme on a 9 by 12 regular rectangular grid. Since the Raviart-Thomas based mixed finite element method has not as yet been completely installed, (LIb)
was solved with the original finite element method described in §1.4. Linear/bilinear basis functions were used on the mesh G E . This mesh adapts to the moving interfaces, hence it changes each
timestep. There is no correlation between the grids G E and G H. Figure 9 shows the tracked phase discontinuity interface at selected times during the first 33 years of the calculation. Figure 10
shows the percentage mass balance error for the water component as a function of time. The mass balance error is defined as (7.1 )
EM == Mpresent - Minitial - Minjected + Mproduced Minjected
where M represents water mass. Note that EM is a 'forgiving' dimensionless measure of mass balance error since, as .Mil1jected typically increases in time, EM can decrease in spite of an increase in
the absolute magnitude of the numerator of (7.1). The calculation was performed using the mass conservation corrections discussed in §§2, 3 and 4. The velocity field obtained was not conservative
(§4) over the entire domain of the calculation, and no correction was applied for the front movement (§6).
Figure 9 Calculation for the reservoir field described in figures 2d and 3. The phase discontinuity delineating the two phase region swept by the injected water from the single phase, undisplaced oil
region is shown at selected times.
tilDe (yrs)
Figure 10 Water mass balance error EM for the calculation of Figure 9. The resultant mass balance error after 33 years of simulation time is ~ -6%. The mass balance error is indeterminate at t = 0 as
the denominator of (7.1) goes
to zero. The initial mass balance errors are dominated by three things: 1) the nonconservative front propagation scheme (§6), 2) the lack of correction of mass balance errors for the scheme (1.4) in
mesh blocks cut by the initial phase discontinuity interface (which encloses an area smaller than the size of a grid block for a range of initial times), 3) the inability of the finite element method
implementation to resolve the velocity field around the point source. This last cause is the most critical and has long been of problematic concern in reservoir simulation (see for example the
treatments in [4] and [6]). An analytic treatment of the velocity divergence in the vicinity of wells has been included in this calculation, but match-up with the finite element solution is
problematic. The initial error is also amplified by the smallness of the denominator. At late times, the error is primarily due to the nonconservative use of the velocity field.
Acknowledgements. The authors wish to thank Statoil, Norway for supplying the realistic petrophysical and petrofluid field data used in this study, and for their support of the development of front
tracking for reservoir calculations. We also gratefully acknowledge the continuing support of the Institute for Energy Technology, Norway. REFERENCES [1]
[2] [3]
[5] [6]
[7] [8]
[9] [10] [11] [12] [13]
M. BERGER AND R. J. LEVEQUE, An adaptive cartesian mesh algorithm for the Euler equations in arbitrary geometry., AlA A 89-1930, 9th Computational Fluid Dynamics Conference, Buffalo, NY, June 1989.
G. CHAVENT AND J. JAFFRE, Mathematical Models and Finite Elements for Reservoir Simulation, North Holland, Amsterdam, 1986. G. CIIAVENT, G. COHEN, J. JAFFRE, M. DuPUY, AND 1. RIBERA, Simulation of
two dimensional waterflooding by using mixed finite elements, Soc. Pet. Eng. J., 24 (1984), pp. 382-389. G. CHAVENT, J. JAFFRE, R. EYMARD, D. GUERILLOT, AND L. WEILL, Discontinuous and mixed finite
elements for two-phase incompressible flow, SPE 16018, 9th SPE Symposium on Reservoir Simulation, San Antonio. I-L. CHERN AND P. COLELLA, A conservative front tracking method for hyperbolic
conservation laws, J. Computational Physics (to appear). J. DOUGLAS, JR., R. E. EWING, AND M. F. WHEELER, The approximation of the pressure by a mixed method in the simulation of miscible
displacement, R.A.1.R.O. Analyse numerique, 17 (1983), pp. 17-33. J. GLIMM, E. ISAACSON, D. MARCHESIN, AND O. McBRYAN, Front tracking for hyperbolic systems, Adv. Appl. Math., 2 (1981), pp. 91-119.
J. GLIMM, W. B. LINDQUIST, O. McBRYAN, AND L PADMANABHAN, A front tracking reservoir simulator, five-spot validation studies and the water coning problem, SIAM Frontiers in Appl. Math., 1 (1983), pp.
107-135. R. J. LEVEQUE, Large time step shock-capturing techniques for scalar conservation laws, SIAM J. Numer. Anal., 19 (1982), pp. 1091-1109. , A large time step generalization of Godunov's method
for systems of conservation laws, SIAM J. Numer. Anal., 22 (1985), pp. 1051-1073. O. McBRYAN, Elliptic and hyperbolic interface refinement, in Boundary Layers and Interior Layers - Computational and
Asymptotic Methods, J. Miller (ed.), Boole Press, Dublin, 1980. L PADMANABHAN. Chevron Oil Field Research, private communication. P. A. RAVIART AND J. M. THOMAS, A mixed finite element method for
second order elliptic problems, in Mathematical Aspects of Finite Element Methods, Lecture Notes in Mathematics 606, Springer-Verlag, New York, 1977, pp. 292-315. Y. SHARMA. Cray Research Inc.,
private communication.
J. M. GREENBERG*
Introduction. In this note we shall examine special collisionless solutions to the four velocity Broadwell equations. These solutions are new and seem to have gone unnoticed by other investigators
who have worked in this areal. These solutions are apparently stable; that is in numerical simulations they appear as the asymptotic state of the evolving system. The basic quantities of interest are
particle densities
r, i, u,
r(x, y, t) represents the number of particles per unit area at (i:, iJ) at time l travelling
with velocity weI. The densities i, u, and d have a silnilar interpretation except that the particles travel with velocities -weI, we2, and -we2 respectively. The evolution equations for the
densities are
~i+ w: x = e } Ii - wl x = e ul +wu y =-e di-wdy =-e
where the collision term
e is given by e = 1«ud - i'l).
Dimensional consistency implies that
d· (1 a and rt
+ rx = 0,
a < x < (a
+ t) and t > 0
and thus the interaction region was the strip -a < x < a and t > O. The principal result of [2] was that in the interval -a < x < a:
lim (r(x,t),I(x,t),(ud)(x,t»
= (0,0,0),
= max( u(x, 0) -
d(x, 0), 0),
t .... =
lim u(x, t)
t ....
142 and (1.14)
lim d(x, t)
= max(O, d(x, 0) -
u(x, 0».
Additional exponential decay estimates were obtained for the more customary form of the Broadwell equations
where now
p = r
+ 2c + I.
The equations (1.8) and (1.9) reduce to this system on the manifold u == d ~f c. In this note we shall confine our attention to special collisionless solutions of the full two-dimensional system
(1.7) and (1.8). That such solutions were possible was suggested by computations we performed on (1.7) and (1.8) for a varietyor' initial and boundary conditions. These computations suggested that
the long time behavior of the system was characterized by such solutions and motivated our trying to establish that the system did in fact support such solutions. The collisionless solutions are
nonconstant, positive solutions to (1.7) and (1.8) which satisfy the additional identity that ud - rl = O. In section 2 we demonstrate that (1.7) and (1.8) does indeed support such solutions and in
section 3 we shall demonstrate how these solutions emerge and characterize the long time behavior of the system. It should be noted that these collisionless solutions are valid for both the collision
term considered in our simulations, namely e = (ud - rl)j p, and the more customary collision term e = (ud - rl). 2. Collisionless Solutions. In this section we shall exhibit a class of nonconstant,
positive solutions to (1. 7) and (1.8) which satisfy the additional constraint:
ud - rl == O.
(2.1) Such solutions must be of the form
rex, y, t) = R (x - t, y)} 1
= Ll(X +t,y)
= U1 (x,y -
d(x,y,t) = D 1 (x,y+t)
or equivalently
r(x,y,t) = R2(x+ y -t,x_ y _t)} l(x,y,t)=L2(x+y+t,x-y+t) .
u(x, y, t) = U2(X + y - t, x - y + t) d(x,y,t) = D2(x + y + t,x - Y - t)
The fact that (2.1) must hold implies that R 2,L 2,U2, and D2 must satisfy
+ y + t, x - Y + t) Y + t)D 2(x + y + t, x -
R2(X + Y - t, x - y - t)L 2(x = U2(x
+y -
t, x -
Y - t).
If we let
R3 = InR2} L3 = InL 2
U3 =lnU2
= InD2
then (2.4) is equivalent to
R3(X + Y - t, x - Y - t) = U3(x
+y -
t,x -
+ L3(x + y + t, x - Y + t) Y + t) + D3(X + y + t,x -
Y - t).
Moreover, if we let
B1 = x
+y -
t, B2 = X - Y - t, B3 = X + y + t,
and B4 = x - y + t,
and insist that R 3, L 3, U3, and D3 satisfy
for all B1, B2 , B3, and (J4, then the functions R 3, L 3, U3, and D3 will also satisfy (2.6). The last relation implies that if R 3, L 3, U3, and D3 are C2, then
or equivalently that
R3 = h(Od + !2(02)} L3 = h(03) + 14(04) U3 = 15(Od + 16(04) .
D3 In order that the functions 11 -
h, 16
= h(03) + IS(02) Is
satisfy (2.8), we must also have
= 14
,h =!J
If we now let
Fl = exp(h), F2 = exp(!2), F3 = exp(!J), and F4 = exp(h) ,
then the collisionless solution, (2.3), reduces to
= Fl(X + Y - t)F2(x - Y - t)} = F3(X + y + t)F4(x - Y + t) u(x, y, t) = Fl(x + Y - t)F4(x - Y + t) rex, y, t) lex, y, t)
d(x, y, t) = F3(X and the density, p = r
+ y + t)F2(x - Y - t)
+ 1+ u + d, is given by
+Y -
+ F3(X + y + t))(F2(x -
Y - t) + F4(X - Y + t))
where Fl - F4 are arbitrary positive functions. The solutions given by (2.13) satisfy no obvious boundary conditions. Of somewhat more interest are collisionless solutions which satisfy
reO, y, t) = 1(0, y, t) and r(l, y, t) = 1(1, y, t) ,0 < y < 1,
= d(x,O,t)
and u(x,l,t)
= d(x,l,t)
,0. = 3 while in simulation 2 we take Ul = 5, U2 = 0.2, and>' = 3. For simulation 2 the density is initially constant and equal to 7.2. For each simulation we show the initial stages of the motion at
times t from to 1.75 in increments of .25 and the latter stages at times t from 46 to 49.75 in increments of .25. We also show for each simulation four summary diagnostics graphs which demonstrate
that the solutions being computed converge to collisionless solutions of the type described in Section 2 (see (2.20)).
Each summary graph has the following layout. In the upper left hand corner is a graph of r(O, 0, t) versus time over the interval 46 ::; t ::; 50; in the upper right hand corner is a graph of p(O, 0,
t)/4 = (r(O, 0, t) + 1(0, 0, t) + u(O, 0, t) + d(O, 0, t))/4 versus time over the interval 46 ::; t ::; 50; in the bottom left hand corner we show error(t)
~f max[lr(O, 0, t) -
1(0,0, t)l, Ir(O, 0, t)l, -u(O, 0, t)l, Ir(O, 0, t) - d(O, 0, t)1l
versus time over 46 ::; t ::; 50, and finally in the bottom right hand corner we show maxcollision(t) = max I( ud - rl)(x, y, t)1 versus time over 46 ::; t ::; 50. It is the (x,y)
structure of the last graph which demonstrates that our solutions have converged to the collisionless waves described in (2.20). REFERENCES
[1] [2]
T. PLATKOWSKI AND R. ILLNER, Discrete Velocity Models of the Boltzmann Equation: A Survey of the Mathematical Aspects of the Theory, SIAM Review, 30, (1988), pp. 213-255. J. M. GREENBERG AND L. L.
AIST, Decay Theorems for the Four Velocity Broadwell Equations, submitted to Arch. Rat. Mech. and Anal.
148 SIMULATION I
t== 0.00 0
t== 0.250
== 0.500
== 1.000
t ==
== 1.500
== 1.750
149 SIMULATION I
t = 46.250
t= 46.500
t= 46.750
t = 47.000
t = 47.250
t= 47.750
150 SIMULATION I
48.0 00
= 48.500
== 49.000
== 49.500
t== 48.250
= 48.750
== 49.250
t== 49.750
151 SIMULATION I
0.5 '-------'------'-----'------' 46 47 48 49 50 1 xlO-4
0.5 '-------'----'----'------' 47 48 49 50 46
152 SIMULATION II
t ==
t ==
t ==
t ==
t ==
t == 1.250
== 1.500
== 1.750
153 SIMULATION II
t::: 46.250
== 46.750
== 47.250
== 47.750
50 t ~ 48.2
== 48.500
== 48.75
t == 49.2
0 == 49.75
155 SIMULATION II
6 xlO-4
0.Q15 .--_--,-----=:m;:;:ax::.:;c::..:o;=:lli="s"-,io",,n'--T_ _-,
ANOMALOUS REFLECTION OF A SHOCK WAVE AT A FLUID INTERFACE* JOHN W. GROVEt AND RALPH MENIKOFF:j:
Abstract. Several wave patterns can be produced by the interaction of a shock wave with a fluid interface. We focus on the case when the shock passes from a medium of high to low acoustic impedance.
Curvature of either the shock front or contact causes the flow to bifurcate from a locally self-similar quasi-stationary shock diffraction, to an unsteady anomalous reflection. This process is
analogous to the transition from a regular to a Mach reflection when the reflected wave is a rarefaction instead of a shock. These bifurcations have been incorporated into a front tracking code that
provides an accurate description of wave interactions. Numerical results for two illustrative cases are described; a planar shock passing over a bubble, and an expanding shock impacting a planar
contact. Key words. anomalous reflection, front tracking AMS(MOS) subject classifications. 76-06 76L05
1. Introduction.
The collision of a shock wave with a fluid interface produces a variety of complicated wave diffractions [1,2,12). In the simpliest case these consist of pseudostationary self-similar waves that can
be described by solutions to Riemann Problems for the supersonic steady-state Euler equations. In more complicated cases and in particular when one or both of the colliding waves is curved, these
regular diffraction patterns can bifurcate into complex composites of individual wave interactions between the scattered waves. The purpose of this analysis is to understand the particular
bifurcation behavior of the collision of a shock in a dense fluid with an interface between the dense fluid and a much lighter fluid. Two basic cases will be considered. The collision of a shock in
water with a bubble of air, and the diffraction of a cylindrically expanding underwater shock wave with the water's surface. It will be seen that initially these interactions produce regular shock
diffractions with reflected Prandtl-Meyer waves. Subsequently these regular waves bifurcated to form anomalous waves that are analogous to non-centered Mach reflections whose reflected waves are
rarefactions. We will describe a method to include this analysis into a front tracking numerical method that allows enhanced resolution computations of these interactions. *This article is a
condensed version of reference [9] which will appear elsewhere. tDepartment of Applied Mathematics and Statistics, State University of New York at Stony Brook, Stony Brook, NY 11794. Supported in
part by the U. S. Army Research Office, grant no. DAAL03-89-K-0017. tTheoretical Division, Los Alamos National Laboratory, Los Alamos, NM 87545. Supported by the U. S. Department of Energy.
157 2. The Equations of Motion.
In the absence of heat conduction and viscosity, the fluid flow is governed by the Euler equations that describe the laws of conservation of mass, momentum and energy respectively.
OtP + V' . (pq) = 0,
Ot(pq) + V' . (pq @ q) + V' P
= pg,
Ot(pf) + V'. pq(f + VP) = pq. g.
Here, p is the mass density, q is the particle velocity, g is the gravitational acceleration, f = Iql2 + E is the total specific energy, E is the specific internal energy, and P is the pressure.
Gravity will be neglected since the interactions considered here occur on short time-scales. The equilibrium thermodynamic pressure P(V, E), where V = II p is the specific volume, is referred to as
the equation of state and describes the fluid properties.
It is well known that system (2.1) is hyperbolic, and the characteristic modes correspond to the propagation of sound waves and fluid particles through the medium. The sound waves propagate in all
directions from their source with a velocity c with respect to the fluid, where the sound speed c satisfies c2 = oP I op at constant entropy. Another important measure of sound propagation is the
Lagrangian sound speed or acoustic impedance given by pc.
3. Elementary Wave Nodes and the Supersonic Steady State Riemann Problem. An elementary wave node is a point of interaction between two waves that is both stationary and self-similar [7]. It can be
shown [6, 13 pp. 405-409] that there are four basic elementary nodes. These are the crossing of two shocks moving in opposite directions (cross node), the overtaking of one shock by another moving in
the same direction (overtake node), the splitting of a shock wave due to interaction with other waves or boundaries to produce a Mach reflection (Mach node), and the collision of a shock with a fluid
interface (diffraction node). All of these waves are characterized by the solution of a Riemann problem for a steady state flow, where the data is provided by the states behind the interacting waves.
We will primarily be concerned with the last of these interactions, but bifurcations in this node will lead to the production of all of the other elementary nodes. For a stationary planar flow,
system (2.1) reduces to a 4 x 4 system that is hyperbolic in the restricted variables provided the Mach number M = Iql/c is greater than one, i.e., the flow is supersonic. The streamlines or particle
trajectories define the time-like direction. The hyperbolic modes in this case are associated with two families of sound waves, and a doubly linearly degenerate characteristic family. If () and q are
the polar coordinates of the particle velocity q, then the sonic waves have characteristic directions with polar angles () ± A, where A is the Mach angle, sin A = M- 1 • Waves of these families are
either stationary shock waves or steady state centered rarefaction waves also called Prandtl-Meyer waves. Waves of the degenerate family are a combination of a contact discontinuity and a vortex
158 sheet across which the pressure and flow direction (J are continuous while the other variables may experience jumps. Following the general analysis of systems of hyperbolic conservation laws
[14], we see that the wave curve for a sonic wave family consists of two branches corresponding to either a shock or a simple wave. The shock branch is commonly called a shock polar [4, pp. 294-317]
and actually forms a closed and bounded loop where the two sonic families meet at the point where the stationary shock is normal to the incoming flow. If we let the state ahead of the wave be denoted
by the subscript 0, a straightforward derivation of the Rankine-Hugoniot equations for the system (2.1) shows that the thermodynamics of the states on either side of the shock are related by the
Hugoniot equation (3.1)
= Eo + -2-(Vo - V).
A similar derivation applied to the steady state Euler equations shows that the flow velocities on either side of a stationary oblique shock satisfy
(3.2) where H by (3.3)
= E + PV is the specific enthalpy. tan( (J - (Jo)
The jump in the flow direction is given
= ± [ poqo2 P- -(PPo-
D) cot f3 .
Here f3 is the angle between the incoming streamline and the shock wave, and is given by sinf3 = u/qO, where u = Vom is the wave speed of the shock wave with respect to the fluid ahead and m is the
mass flux across the shock, m 2 = -!::'P /!::. V. The difference between the flow direction on either side of the shock is called the turning angle of the wave. The same analysis when applied to the
simple wave curves shows that the entropy is constant inside a Prandtl-Meyer wave. The flow speed and flow direction are related by (3.2) where H = H(P, So) and (3.4)
= (Jo 1=
COSAI dP pcq
In analogy to the shock polar defined by (3.1)-(3.3) we will call this locus of states a rarefaction polar. It is easily checked that the two branchs of (3.4) are respectively associated with the (J
± A characteristic directions in the sense of Lax. Similarly it can be shown [8] that for most equations of state, the two branches of (3.3) are also associated with the (} ± A characteristics in the
sense of Lax provided the state downstream from the shock is supersonic. Since (} and P are constant across waves of the degenerate middle family, the Riemann problem for a stationary two-dimensional
flow can be
159 solved by finding the intersection of the projections of the wave curves in the () - p phase plane. The are two major differences between the solution to the Riemann problem for a stationary flow
and that of a one-dimensional unsteady flow. The Mach number behind the shock wave is given by 2
m = -(1 + -P cot 2 (3) pc P5
For most equations of state [15] m < pc and is a monotone function of the pressure along the shock Hugoniot. Thus if f3 is sufficiently close to ~ the flow behind the shock will be subsonic and the
steady Euler equations ceases to be hyperbolic. The second reason is that for an normal angle of incidence, the turning angle through the shock is zero. This means that the two branches of the shock
polar meet at this point forming a closed and bounded loop. These two issues together imply a loss of existence and uniqueness for the solution to the two dimensional stationary Riemann problem. The
resolution is that a bifurcation occurs from a stationary solution to a time dependent solution of the full two dimensional Euler equations. The actual shape and properties of the shock and
rarefaction polars depends on the equation of state. We will make no use of a specific choice of equation of state in our analysis, but we will need to assume that the equation of state satisfies
appropriate conditions to guarantee that the shock polar has a unique point at which the state behind the shock becomes sonic, and a unique local extremum in the turning angle. These conditions are
satisfied by most ordinary equations of state, and in particular by the polytropic and stiffened polytropic equations of state used in the numerical examples. 4. Anomalous Reflection.
As was mentioned in the introduction, the simpliest case of shock diffraction is that in which the flow near a point of diffraction is scale invariant and pseudostationary. This will be the case
provided the flow is sufficiently supersonic when measured in a frame that moves with the point [8]. Then the data behind the incoming waves define Riemann data for the downstream scattering of the
interacting waves. A representative shock polar diagram for a regular shock diffraction producing a reflected Prandtl-Meyer wave is shown in Fig. l. Diffractions of these types have been studied
experimentally by several investigators [1,2,11,12]' as well as numerically [3,8]. Longer time simulations of the resulting surface instabilities in the fluid interface (called the RiclItmyer-Meshkov
instability [16,19]) are found in [8,17,20]. One of the interferogrames, Fig. 14 of [12] shows an irregular wave pattern that corresponds to what we call an anomalous reflection. In this wave the
angle between the incident shock and the material interface is such that the state behind the shock has become subsonic. We consider the perturbation of a regular shock diffraction that produces a
reflected Prandtl-Meyer wave. Suppose that initially the state behind the incident shock is close to but slightly below the sonic point on the incident shock polar.
160 (b)
Incident Wave Shock Polar
1.:1 Q'.
fJ:) fJ:)
!:l -
Stream Direction -
Transmil1cd Wave
Reflected Wave Rarefaction Polar
Shock Pol
FIG. 1. A sketch of the wave pattern and polar diagrams for a regular shock-contact diffraction that produces a reflected rarefaction wave.
We allow the incident angle to increase while keeping the other variables constant so that the state behind the incident shock passes above the sonic point. Such a situation might occur as a shock
diffracts through a bubble as illustrated in Fig. 2. When this happens, the solution can no longer be self-similar since a PrandtlMeyer wave can only occur in a supersonic flow. Instead the reflected
wave begins to overtake and interact with the incident shock, Fig. 2c. This interaction dampens and curves the incident shock near its base on the fluid interface allowing the flow immediately behind
the node to return to a supersonic condition. The single point of interaction bifurcates into a degenerate overtake node where the leading edge of the reflected rarefaction overt ales the incident
shock, and a sonic diffraction node at the fluid interface. This interaction is a two-dimensional version of the onedimensional overtaking of a shock by a rarefaction. The composite configuration is
in many ways analogous to a regular Mach reflection. In this case the reflected wave is a Prandlt-Meyer wave and instead of a single point of Mach reflection the interaction is spread over the region
where the rarefaction interacts with the incident shock. The "Mach" stem an be regarded as the entire region from the point where the incident shock is overtaken by the rarefaction to its base on the
fluid interface. If we allow the incident angle to increase further we will eventually see a second bifurcation in the solution, Fig. 2d. As the material interface continues to diverge from the
incident shock, the Mach number near the trailing edge of the reflected rarefaction continues to decrease. The characteristics behind the incident shock are almost parallel to the shock interface
near the base of the anomalous reflection. The flow there becomes nearly one-dimension and the rarefaction wave eventually overtakes the incident shock. If there is a great difference in the acoustic
impedance between the two materials as in the numerical cases studied here, this second bifurcation will occur as the strength of the incident shock at the fluid interface reduces to zero. The now
non-centered rarefaction breaks loose from the fluid interface and begins to propagate away. This second configuration is also analogous to a Mach
161 (a) time 0.0 ~ec
(b) time O. 15 ~sec
incident shock wave
regular diffraction (c) time 0.6 ~ec
(d) time 1.0 ~ec
anomalous reflection
10 L\x = 10 fly FIG. 2. The collision of a shock wave in water with an air bubble. The fluids ahead of the shock are at normal conditions of 1 atm. pressure, with the density of water 1 glcc and air
0.0012 g/cc. The pressure behind the incident shock is 10 Kbar with a shocked water density of 1.195 g/ec. The grid is 60x 60.
reflection. Here the Mach node corresponds to the interaction region between the rarefaction and incident shock, while the Mach stem is the degenerate wave portion from the trailing edge of the
rarefaction to the fluid interface.
5. The Tracking of the Anomalous Reflection Wave. The qualitative discussion of the anomalous reflection in the previous section can be incorporated into a front tracking code to give an enhanced
resolution of the interaction. The tracking of a regular shock diffraction was described in [8]. The first step in the propagation is the computation of the velocity of the diffraction node with
respect to the computational (lab) reference frame. Suppose at time t the node is located at point Poo. The node position at time t + dt is found by computing the intersection between the two
propagated segments of the incident waves. If this new node position is Po, then the node velocity is given by (Po - Poo)/dt. This velocity defines the Galilean transformation into a frame where the
node is at rest. When the state behind the incident shock is supersonic in this frame, it together with the state on the opposite side of the fluid interface provide data for a supersonic steady
state Riemann problem whose solution determines the outgoing waves. The
outgoing tracked waves are then modified to incorporate this solution. A bifurcation will occur if the calculated node velocity is such that the state behind the incident shock is subsonic in the
frame of the node. If the reflected wave is a Prandtl-Meyer wave this will result in an anomalous reflection. The front tracking implementation of this bifurcation is a straightforward application of
the analysis described in the previous section. First the leading edge of the reflected rarefaction is allowed to break loose from the diffraction node. The intersection PI between the propagated
rarefaction leading edge and the incident shock is computed and a new overtake node is installed at PI by disconnecting the rarefaction leading edge from the diffraction node and connecting it to Pl.
If this reflected rarefaction edge is untracked, then PI is found by calculating characteristic through the old node position corresponding to the state behind incident shock and computing the
intersection of its propagated position with propagated incident shock. This characteristic makes the Mach angle A with streamline through the node. Since the bifurcation occurs between times t and t
+ dt, M ~ 1 at time t and A is real. This wave moves with sound speed in its normal direction. In this case no new overtake node is tracked. the the the the
We are now ready to compute the states and position of the point of shock diffraction after the bifurcation. As was mention previously, the rarefaction expands onto the incident shock causing it
weaken. This in turn slows down the node causing the incident shock to curve into the fluid interface. The diffraction node will slow down to the point where the state immediately behind the node
becomes sonic. After this the configuration near the node can be computed using the regular case analysis. The adjusted propagated node position is computed as follows, see Fig. 3. For each number s
sufficiently small, let p( s) be the point on the propagated material interface that is located a distance s from Po when measured along the curve, the positive direction being oriented away from the
node into the region ahead of the incident shock. Let f3( s) be the angle between the tangent vector to the material interface at p( s) and the directed line segment between the points p( s) and Pl.
Let v( s) be the node velocity found by moving the diffraction node to position p( s), and let q( s) be the velocity of the flow ahead of the incident shock in the frame that moves with velocity v(
s) with respect to the lab frame. The mass flux across this shock is given by
= polq(s)lsinf3(s).
Given m( s) and the state ahead of the incident shock, the state behind the shock and hence its Mach number M(s) can be found. The new node position is given by pesO), where s* is the root of the
equation M(s*) = 1. Finally, the state behind the incident shock with mass flux m( SO) together with the state on the opposite side of the contact are used as data for a steady state Riemann problem
whose solution supplies the states and angles of the transmitted shock, the trailing edge of the reflected rarefaction, and the downstream material interface.
163 (a)
incident sImek
water air
FIG. 3. A diffraction node initially at POD bifurcates into an anomalous reflection. The predicted new node position at po yields a Mach number of 0.984 behind the incident shock. The leading edge of
the reflected PrandtlMeyer wave breaks away from the diffraction node to form an overtake node at Pl. The propagated position of the diffraction node is adjusted to return the flow to sonic behind
the node.
The subsequent propagation of the anomalous reflection node is performed in the same way. The bifurcation repeats itself as more of the reflected rarefaction propagates up the incident shock. The
leading edge of the reflected rarefaction wave that connects to the diffraction node is not tracked after the first bifurcation. The secondary bifurcations that occur when the trailing edge of the
rarefaction overtakes the incident shock are detected in a couple of ways. If the incident shock is sufficiently weak, i.e., the normal shock Mach number is close to 1, then it is possible for the
numerically calculated upstream Mach number to be less than one. This is a purely numerical effect since physically the upstream state is always supersonic. However in nearly sonic cases such
numerical undershoot can occur. If such a situation is detected the trailing edge of the reflected rarefaction wave is disengaged from the anomalous reflection node and installed at a new overtake
node on the incident shock. The residual shock strength for the portion of the incident shock behind the rarefaction wave is small and the diffraction node at the material interface reduces to the
degenerate case of a sonic signal diffracting through a material interface.
The second way in which the secondary bifurcation is detected occurs when the trailing edge of the rarefaction overtakes the shock. Here a new intersection between the incident shock and the trailing
edge characteristic is produced. As before the tracked characteristic is disengaged from the diffraction node and a new overtake node is installed at the point of intersection. The residual shock
strength at the node is non-zero so the diffraction at the material interface produces an additional expansion wave behind the original one. This new expansion wave is not tracked. It is possible to
make a few remarks about the amount of tracking required for these problems. Since the front tracking method is coupled to a finite difference method for the solution away from the tracked interface
(the interior solver), there is always an option between tracking a wave or allowing it to be captured. Of course capturing can result in a considerable loss in resolution in the waves as compared to
tracking [5], but it will also simplify the resolution of the interactions. The secondary bifurcations described above are only tracked when the trailing edge of the reflected Prandtl-Meyer wave is
tracked. The current algorithm is structured so that at a minimum the two interacting incoming waves are tracked. At this extreme none of the outgoing waves are tracked and no explicit bifurcations
in the tracked interface occur. More commonly, the material interface separates different fluids and so must be tracked on both sides of the interaction.
Also, instabilities in the finite difference approximation can affect the accuracy of the solution near the node, especially for stiff materials such as water. Tracking the additional waves seems to
considerably reduce these problems. Tracking also allows the use of a much coarser grid, which is important when the diffraction occurs in a small but important zone of a larger simulation. It allows
the entire region of diffraction to extend only over only a fraction of a grid block. These remarks show that the amount of tracking is problem dependent, and a compromise can be made between the
increased accuracy and stability of front tracking, and the simplicity of a capturing algorithm.
6. Numerical Examples. Fig. 4 shows a series of frames documenting the collision of a 10 Kbar shock wave with a bubble of air in water. Note in this case the trailing edge of the reflected
Prandtl-Meyer wave is not tracked. The states ahead of the incident shock are at one atmosphere pressure and standard temperature. Under these conditions, water is about a thousand times as dense as
air. During the initial stage of the interaction regular diffraction patterns are produced. In less than half of a microsecond an anomalous reflection has formed, and by one microsecond the trailing
edge of the rarefaction has also overtaken the incident shock. It is interesting to note that this interaction causes the bubble to collapse into itself. Long time simulations are expected to show
the initial bubble split, and the resulting bubbles going into oscillation as they are overcompressed and then expand. This process is important in the transfer of energy as a shock passes through a
bubbly fluid. The first diffraction considerably dampens the shock, and much of this energy will eventually be returned to the shock wave in the form of compression waves generated by the expanding
(a) time 0.0
(b) time 0.15 ).Lsec
incident shock wave
regular diffraction
anomalous reflection
FIG. 4. Log(l + pressure) contours for the collision of a shock wave in water with an air bubble. The fluids ahead of the shock are at normal conditions of 1 atm. pressure, with the density of water
1 glce and air 0.0012 g/cc. The pressure behind the incident shock is 10 Kbar with a shocked water density of 1.195 glee. The tracked interface is shown in a dark line. The grid is 60 x 60.
Fig. 5 shows the diffraction of an expanding underwater shock wave through the water's surface. Initially a ten Kbar cylindrically expanding shock wave with a radius of one meter is placed two meters
below the water's surface. The interior of the shock wave contains a bubble of hot dense gas. The states exterior to the shock are ambient at one atmosphere pressure and normal temperature. A
gravitational acceleration of one 9 has been added in this case, but due to the rapid time scale on which the diffractions occur the effect of gravity is negligible. Here the entire reflected
Prandtl-Meyer wave is captured rather than tracked. The pressure contour plots show that by six milliseconds an anomalous reflection has developed as indicated in the blowup of Fig. 5b shown in Fig.
6. Another interesting feature of this problem is the acceleration of the bubble inside the shock wave by the reflected rarefaction wave. This causes the bubble to rise much faster than it would
under just gravity. When the bubble reaches the surface it expands into the atmosphere leading to the formation of a kink in the transmitted shock wave between the region ahead of the surfacing
bubble, and the rest of the wave. This kink is an untracked example of the elementary wave called the cross node where two oblique shocks collide.
166 (a) Lime 0.0 msec
(b) Lime 6.0 mse 0 under more restrictive hypotheses on Wo and the state functions p and T. We have: THEOREM 2. Assume tbat p and T satisfy tbe conditions of a "near ideal gas"; tbat is, in addition
to conditions (8), Pv sbould be negative and lTv I sbould be sufficiently small tbat, for values (v,e) E J{, tbe quantity Q appearing in (6) is
n,gat1~. Let W ~ [~l
Th= 1), =d 1,t W,
b, a
,o~t=t v~tor w;,h (B, ') E K' (K' ;,
~ [ : 1h, Cauchy data ,atmyillg'
(a) (vo(x),eo(x)) E J{' a.e.; (b) Vo - iJ E L2 n Ll n BV , Uo - U E L2
n BV,
eo -
e E L2 n Ll ;
(c) tbe Ll, L2, and BV norms indicated in (b) are sufficiently small.
Tben tbe Caucby problem (1)-(2) bas a weak solution defined for all oft> O. Theorem 2 is proved by deriving time-independent estimates for the local solution Wand its various derivatives, starting
from the entropy equality
j J(;~ + ~~)
Here S is the physical entropy defined by Sv = directly from (1) and from the jump condition (3)
, Se
[~x ] = O.
(9) then follows
Time-independent L2
177 bounds for v-v and e-e are then obtained from (9) by expanding S about its value at (v, e) and controlling the first order terms via the hypothesis Vo - v, eo - e ELI. These estimates, together
with the smallness conditions, then enable us to bound various higher derivatives, so as to obtain time-independent pointwise bounds for v and e. We remark that the condition W o( -00) = W o( +00) in
Theorem 2b is an essential one. Indeed, any global analysis of solutions of (1 )-(2) is likely to include information about the asymptotic behavior of the solution, and this behavior can be quite
complicated when W o( -00) =I W o( +00). One result in this direction is that of Hoff and Liu [3], in which we obtain both the asymptotic behavior (t --+ 00) as well as the strong inviscid limit (E
--+ 0) of solutions of the isentropic/isothermal version of (1) with Riemann shock data. The proof of Theorem 2 is given in [2], which also includes the following result concerning continuous
dependence on initial data: THEOREM 3. In addition to the hypotheses of Theorem 2, assume that eo E BV and that p and T satisfy the conditions of an ideal gas, pv = const. T and T = T( e). Then the
solutions constructed in Theorem 2 depend continuously on their initial values in the sense that, given a time to, there is a constant C such
that, ;[ W,
~ [:: 1,i ~ 1,2, = wi" tion, of (1) d~crib'" in Th,=m 2, th,~ &
for t E [0, to],
IIW2(·, t) - WI (', t)IIL2 (10)
s C(IIW2("
sup Var[v2(" t) b-a=I
0) - WI (', 0)IIL2
vI (',
sup Var[v2(', 0) b-a=I
C depends on t o ,]{, and on upper bounds for the norms in Theorem 2b of the solutions WI and W2 •
We remark that the local variation of V2
is included in the norm in (10) in
order to deal with terms arising from the differencing of
(Uvx )xand (~x ) x.
the other hand, given that VI and V2 are discontinuous variables, it would no doubt be useful to prove continuous dependence in the L2 norm alone. Finally, we point out that the existence,
regularity, and continuous dependence results of Theorems 1-3 can be effectively employed in the design and rigorous analysis of algorithms for the numerical computation of solutions of (1 )-(2).
Indeed, Roger Zarnowski [5] has applied the present analysis to prove convergence of certain finite difference approximations to discontinuous solutions of the isentropic/isothermal version of (1).
His scheme can be implemented under mesh conditions essentially equivalent to the usual CFL conditions for the corresponding hyperbolic equations (E = 0 in (1)); and he proves that, for piecewise
smooth initial data, the error is bounded by i3.x I / 6 in the norm of (10). Observe that, while the
convergence rate is somewhat low, the topology is quite strong, dominating the sup norm of the discontinuous variable v. REFERENCES [1)
DAVID HOFF, Discontinuous solutions of the Navier-Stokes equations for compressible flow, (to appear in Arch. Rational Mech. Ana).
DAVID HOFF, Global existence and stability of viscous, nonisentropic flows, (to appear).
DAVID HOFF AND TAl-PING Lw, The in viscid limit for the Navier-Stokes equations of compressible, isentropic flow with shock data, (to appear in Indiana Univ. Math. J).
DAVID HOFF AND JOEL SMOLLER, Solutions in the large for certain nonlinear parabolic systems, Ann. Inst. Henri Poincare, Analyse Non lineare 2 (1985), 213-235.
ROGER ZARNOWSKI AND DAVID HOFF, A finite difference scheme for the Navier-Stokes equations of one-dimensional, isentropic, compressible flow, (to appear).
NONLINEAR GEOMETRICAL OPTICS JOHN K. HUNTER* Abstract. Using asymptotic methods, one can reduce complicated systems of equations to simpler model equations. The model equation for a single, genuinely
nonlinear, hyperbolic wave is Burgers equation. Reducing the gas dynamics equations to a Burgers equation, leads to a theory of nonlinear geometrical acoustics. When diffractive effects are included,
the model equation is the Z[( or unsteady transonic small disturbance equation. We describe some properties of this equation, and use it to formulate asymptotic equations that describe the transition
from regular to Mach reflection for weak shocks. Interacting hyperbolic waves are described by a system of Burgers or Z[( equations coupled by integral terms. We use these equations to study the
transverse stability of interacting sound waves in gas dynamics.
O. Introduction. Geometrical Optics is the name of an asymptotic theory for wave motions. It is based on the assumption that the wavelength of the wave is much smaller than any other characteristic
lengthscales in the problem. These lengthscales include: the radius of curvature of nonplanar wavefronts; the lengthscale of variations in the wave medium; and the propagation distances over which
dissipation, dispersion, diffraction, or nonlinearity have a significant effect on the wave. When this assumption is satisfied, we say that the wave is a short, or high frequency, wave. For short
waves, the wave energy propagates along a set of curves in space-time called rays. This is one reason why geometrical optics is such a powerful method: it reduces a problem in several space
dimensions to a one dimensional problem. For a single weakly nonlinear hyperbolic wave, this one dimensional problem is the inviscid Burgers equation (1.6), as we explain in section 1. When
diffraction effects are important in some part of the wave field, one must modify the straightforward theory of geometrical optics. For linear waves, this modified theory is called the geometrical
theory of diffraction. In section 2, we analyze the diffraction of weakly nonlinear waves. One obtains the ZK equation (2.2), which is a two dimensional Burgers equation. Unfortunately, little is
known about the ZK equation, and this makes it difficult to develop a nonlinear geometrical theory of diffraction. As an example, we use the Z K equation to formulate asymptotic equations which
describe the transition from regular to Mach reflection for weak shocks. Unlike linear waves, nonlinear waves interact and produce new waves. For multiple waves, nonlinear geometrical optics leads to
a coupled system of Burgers equations (3.3). In section 3, we formulate asymptotic equations (3.6) which describe the diffraction of interacting waves. We use these equations to study the transverse
stability of interacting sound waves in gas dynamics. Keller [18] reviews linear geometrical optics. Other reviews of geometrical optics for weakly nonlinear hyperbolic waves are given by Nayfeh
[27], Majda [24], and Hunter [13]. *Department of Mathematics, Colorado State University, Fort Collins, CO 80523. Present Address: Department of Mathematics, University of California, Davis, CA
180 1. Single Waves. 1.1 The eikonal and transport equations. We consider a hyperbolic system of conservation laws in N + 1 space-time dimensions. N
L:/(x,u)",. = O. i=O
Short wave solutions of (1.1) are solutions which vary rapidly normal to a set of wavefronts t/J( x) = constant. We call t/J the phase of the short wave. We look for small amplitude, short wave
solutions of (1.1), with an asymptotic approximation of the form.
(1.2) The amplitude in (1.2) is of the order of the wavelength. We choose this particular scaling because it allows a balance between weakly nonlinear and nonplanar effects. Multiple scale methods
[14] show that the phase in (1.2) satisfies the eikonal equation associated with the linearized version of (1.1), namely
(1.3) In (1.3), Ai(x) = V' ufi(x, 0). We denote left and right null-vectors of the matrix in (1.3) by f!(x, V't/J) and r(x, V't/J) respectively. Associated with the phase is an N -parameter family of
rays or bicharacteristics. The rays are curves in space-time with equation x = X( s;;3) where dXi f! . Ai r. Ts=
Here, s E R is an arclength parameter along a ray, while ;3 E RN is constant on a ray. We assume that the transformation between space-time coordinates x and ray coordinates (s,;3) is smooth and
invertible. This assumption is not true at caustics, and then the simple ansatz in (1.2) does not provide the correct asymptotic solution. Instead, diffractive effects must be included (see section 2
and [23], [15], [13]). The explicit form of the asymptotic solution (1.2) is
(1.4) where the scalar function a( (J, x) is called the wave amplitude. The dependence of a on (J describes the wave-form. For oscillatory wavetrains, a is a periodic or an almost periodic function
of (J; for pulses, a is compactly supported in (J; for wavefronts, the derivative of a with respect to (J jumps across (J = 0, etc. The dependence of a on x describes modulation effects such as the
increase in amplitude caused by focusing and the nonlinear steepening of the wave-form.
Multiple scale methods also imply that the wave amplitude satisfies a nonlinear transport equation,
a, + M aag + Qa = O.
In (1.5), a/as is a derivative along a ray,
aN. a
- = L:e.A'r-.
The coefficient M measures the strength of the waves quadratically nonlinear selfinteraction, and is given by N
M(s,;3) = L:x;e. v u 2 i(x,O)' (r,r). i=O
M is nonzero for genuinely nonlinear waves and M is zero for linearly degenerate waves. The coefficient Q describes the growth or decay of the amplitude due to focusing of the wave and
nonuniformities in the medium. It is given by
= L:e. aXi (Air). ,=0
Since r depends on v, Q involves second derivatives of . It is therefore unbounded near caustics, where the curvature of the wavefronts is infinite. There is one Burgers equation (1.5) for each ray.
Solving them, together with appropriate initial data obtained from initial, boundary, or matching conditions, gives a(O,s,;3). Finally, evaluating 0 at €-l(x) in the result gives the asymptotic
solution (1.4). The transport equation (1.5) can be reduced to a standard form by the change of variables
= E- 1 (s,;3)a(s,;3,O)
x = 0, [=
M (s',;3)E(s',;3)ds',
E(s,;3) We assume that that M (1.6)
= exp
[-1' Q(sl,;3)ds l] .
O. The result is that u(x,l;;3) satisfies Ut
+ UUi = O.
Thus, (1.6) is the canonical asymptotic equation for a genuinely nonlinear, hyperbolic wave. We remark that if weak viscous effects are included, then, instead of (1.6), one obtains a generalized
Burgers equation, (1.7) The viscosity v is constant only for plane waves in a uniform medium. In that case, (1.7) can be solved explicitly by the Cole-Hopf transformation [32]. If v is not constant,
then (1. 7) cannot be solved explicitly, and numerical or perturbation [28] methods are required.
182 1.2 Nonlinear geometrical acoustics. Sound waves in a compressible fluid are a fundamental physical application of the above ideas. The resulting theory is called nonlinear geometrical acoustics
(NGA). For reviews of NGA, see [7], [8], [9].
The equations of motion of an inviscid, compressible fluid are Pt
+ div (pu) =
div (pu@ u - pI)
= pf,
[pGu.u+e)L + div [puGu.u+e) -pu] =0. Here, p is the fluid density, p is the pressure, e is the specific internal energy, and u is the fluid velocity. We include a given body force f(x, t) and we
neglect any heat sources. For simplicity, we consider a polytropic gas for which
e = - - -. ,-1 p
Here, the constant, > 1 is the ratio of specific heats. Similar results are obtained for general equations of state. Suppose that p = Po(x, t), P = Po(x, t), u = uo(x, t)
is a given smooth solution of (1.8). We denote the corresponding sound speed by c = co(x, t). The NGA solution for a sound wave propagating through this medium IS
(1.9) Here, we define the local frequency w, the wavenumber k, and the Doppler shifted frequency Q by w = - ~ (2.22) hyperbolic, e+ ~ < ~ (2.22) elliptic. U
However, (2.22) - (2.25) does not model all features of the gas dynamics problem. Complex Mach reflection and double Mach reflection are not observed for weak shocks, so (2.22) - (2.25) is unlikely
to describe those phenomena. A simple local analysis shows that regular reflection is impossible for 0 < a < 21 / 2 • We can approximate a regularly reflected solution near the point where the
incident and reflected shocks meet the wedge by a piecewise constant solution,
> ay + Vt,
> 0; = 1, v = -a, -j3y + Vt < x < ay + Vt, = UL, V = 0, x < -j3y + Vt, Y > O.
= v = 0,
y > 0;
The jump conditions (2.19) imply that UL
= 1 +-, j3
where j3 is a solution of (2.26) The reflected shock is admissible if j3 > O. Equation (2.26) has two positive roots for j3 when a > 21/2. The equation has no positive roots when 0 < a < 21/2. One
interesting explicit solution of (2.22) can be obtained from (2.14) with
= ±I x 11/2,
U(x,O) =0,
x < 0, x~O.
Taking the minus sign, the corresponding solution for u and v is (2.27) U
= v = 0,
p > O.
190 Taking the plus sign, the solution is
u=1+(1_p)I/2, u = v =
Here, p = ~ + /4. Equation (2.27) describes an outgoing cylindrical expansion wave; (2.28) describes an outgoing cylindrical shock. Equation (2.22), with different boundary conditions, also arises as
a description of weak shocks at a singular ray [10], [12], [34]. This equation may serve as a model equation for two dimensional Riemann problems in general. 3. Diffraction of Interacting Waves.
3.1 Diffraction of interacting waves. The ZK equation is a generalization of Burgers equation that includes diffraction effects. Interacting hyperbolic waves are described asymptotically by a system
of Burgers equations coupled by integral terms. In this section, we generalize these equations to include diffraction. The result is a coupled system of ZK equations. An asymptotic theory for weakly
nonlinear interacting hyperbolic waves is developed in [16], [25]. We shall briefly describe that theory in the simplest case. We consider a hyperbolic system of conservation laws in one space
dimension, (3.1)
+ f(u)z = o.
Suppose that there are three interacting periodic waves which satisfy the resonance condition WI
+W2 +W3 = 0,
+ k2 + k3
= 0,
= 1,2,3.
Here, Wj and kj are the frequency and wavenumber of the jth wave, and Aj is the linearized wave velocity. The asymptotic solution for the interacting waves is then 3
u = € ~.:>j [€-I(kjx - Wjt),t]
+ 0(€2),
as € -> 0 with x, t = 0(1). In (3.2), rj is a right eigenvector of \7 uf(O) associated with the eigenvalue Aj, and the wave amplitudes aj( 8, t) are 271'-periodic functions ofthe phase variable 8.
The amplitudes solve the following system of integro-differential equations,
191 where (j, p, q) runs through cyclic pennutations of (1,2,3). The coefficients are
= V'uAj(O)· r = Pj · V'u 2 f(O)· (rj,rj),
fj = Pj . V'u 2 f(O) . (rp, rq). Here, Pj is a left eigenvector of V'uf(O) associated with Aj. It is nonnalized so that Pj • rj = l. To analyze the effects of wave diffraction, we consider a two
dimensional version of (3.1), namely (3.4)
+ f(u)x + g(u)y = O.
For simplicity, we assume that (3.4) is isotropic, meaning that is is invariant under -+ 0 (x, y) for all orthogonal transformations O. The rays associated with the phase cPj = kjx - Wjt are then cPj
= constant, y = constant. Thus, the transverse variable tP = y is constant on the rays associated with each phase cPj. Complications arise when the transverse variable is not constant on all sets of
rays. This case may occur for anisotropic waves, and we shall not consider it further here.
(x, y)
The generalization of (3.2) that includes weak diffraction in the y-direction is then
=fL 3
aj [f-I(kjx - Wjt), f- I / 2y, t] rj
+ 0(f3/2).
The amplitudes aj( 8,7], t) satisfy
8e{ ajt(8,7],t) (3.6)
+ f j 2~
+ Mjaj(8,7],t)aje(8,7],t)
ap ( -8 -
~, 7], t)aqe(~, 7], t)d~ } + 2t ajqq(8, 7], t) = O.
For solutions which are independent of 7], (3.6) reduces to (3.2), after an integration with respect to 8. For a single wave (a2 = a3 = 0), it reduces to the ZK equation for al. 3.2 Transverse
stability of interacting waves in gas dynamics. There are three wave-fields in one dimensional gas dynamics. They are the left- and rightmoving sound waves, and a stationary entropy wave. According
to the asymptotic theory described in section 3.1, the entropy wave decouples from the sound waves. Consequently, the system of three equations in (3.3) reduces to a pair of equations for the sound
wave amplitudes. These equations describe the resonant reflection of sound waves off a periodic entropy perturbation. After rescaling to remove inessential coefficients, the equations are Ut
+ UU x + 271"1 Jot" K(x -
+ VVx -
Ov(~, t)d~
= 0,
1 f2" 271" K( -x + Ou(~, t)d~ = O.
192 In (3.7), K is a known kernel, which is proportional to the derivative of the entropy wave amplitude. The dependent variables u(x, t) and v(x, t) are proportionaUo the amplitudes of the
right-moving and the left-moving sound waves. The soundwave amplitudes and the kernel are 21l"-periodic, zero-mean functions of the phase variable, x. These equations are derived in [25], and they
are further analyzed in
[26]. Pego [29] found an explicit smooth travelling wave solution of (3.7), in the special case of a sinusoidal kernel, K(x) = sinx. His solution is (3.8)
u = uo(x - ct) = u[e + bf(x - ct; 0:)]' V
= vo(x -
= 0" [e + bf(x -
ct : 0"0:)].
In (3.8), 0" E {-1,+1},0: E [0,1],
f(8;0:) = [1 + 0: cos 8]1/2 ,
(3.9) and
= -1
= -~ 1271"(1 + o:cose)1/2de.
cose(1 + o:cose)1/2de,
There are two families of travelling waves (3.8), depending on the choice of 0". They exist only up to a finite amplitude. The wave of maximum amplitude, corresponding to 0: = 1, has a corner in its
crest or trough. We shall show that the waves with 0" = +1 are unstable to transverse perturbations when 0: is small and when 0: is close to one. We remark that the stability of these waves to one
dimensional perturbations has not been studied. Our stability analysis is essentially the same as the use of the K P equation to study the transverse stability of K dV solitons [1], [17]. The
generalization of (3.7), with K(x) (3.11)
= sinx, that includes weak diffraction is
Ox {Ut + UU x + Xv} + Uyy = 0, Ox {Vt + vVx + Xu} + Vyy
= 0,
where (3.12)
1 1271" sin(x - ou(e, y, t)de. 21l" 0
Xu(x, y, t) = -
The choice K(x) = sinx simplifies some of the subsequent algebra. However, transverse perturbations of interacting waves for general kernels K can be analyzed in a similar way. Let T denote
translation by 1l" in x. Then TX = XT = -X. The change of variables u -4 -u, V -4 -Tv, x -4 -x, Y -4 y, t -4 t maps the solution in (3.8) with 0" = -1 onto the solution with 0" = +1, and it
transforms (3.11) to (3.13)
+ UU x + Xv} Ox {Vt + vVx + Xu} Ox {Ut
Uyy Vyy
= 0, = 0.
193 We shall consider solutions of (3.11) or (3.13) with u = v. (This assumption does not alter the final result.) It therefore suffices to consider transverse perturbations of u = uo(x - ct), where
+ UU x +Xu} + UU yy = o.
We seek an expansion for long-wavelength transverse perturbations of the travelling wave solution (3.8) in the form (3.15) In (3.15), the multiple scale variables are evaluated at B = x - ct - ,
where j is given in (3.9). these integrals are functions of the amplitude parameter a, and they are related by 1
= -3a2
[(a 2 + 2)H - 2M].
In addition, from (3.10), 2
b = - 2 [(a 2 3a
l)H + M],
= -bM.
All these functions can be expressed in terms of complete elliptic integrals of the first and second kinds,
In particular, H =
~(1 + a)-1/2 K (~) , 7r 1 +a
2 + a) 1/2 E = -(1 7r
(2a - - ). l+a
The solution of (3.27) is
(3.31 )
= J2 + bH -
H J' b(J-b) J2 +bH -HJ
Equations (3.26) and (3.31) are the explicit solution of (3.21). Using (3.26), (3.28), and (3.30) in (3.23) implies that (3.32) For general values of a, (3.32) must be evaluated numerically. Here, we
shall calculate " for small amplitude waves (a -+ 0) and waves close to the limiting wave (a-+1). For small amplitudes, (3.28) and (3.29) imply that (3.33)
196 Then, using (3.29) - (3.33), we find that (3.34)
= -32I7a- 2
+ 0(1).
It follows that small amplitude travelling waves (3.8) with 17 = +1 are unstable, while those with 17 = +1 are linearly stable to long transverse perturbations.
For large amplitudes, standard asymptotic expansions of complete elliptic integrals imply that (3.35) as a
-+ 1-.
= 21 /12 7r log (13_2a ) + 0(1),
After some algebra, (3.29) - (3.32) and (3.35) show that
(3.36) Thus, the wave with 17 with 17 = -1 is stable.
2 (37r 2 7r2 _
3177r = -"""32
= +1
16) + 0(1).
is also unstable at large amplitudes, while the wave
Acknowledgements. This work was supported in part by the Institute for Mathematics and its Applications with funds provided by the NSF, and by the NSF under Grant Number DMS-8810782. REFERENCES [I]
[2] [3] [4] [5]
[6] [7]
[8] [9] [10]
[11] [12] [13]
ABLOWITZ, M.J., AND SEGUR, H., Solitons and the Inverse Scattering Transform, SIAM, Philadelphia (1981). BAMBERGER, A., ENQUIST, B., HALPERN, L., AND JOLY, P., Parabolic waVe equations and
approximations in heterogeneous media, SIAM J. App!. Math., 48 (1988), pp. 99-128. CATES, A., Nonlinear diffractive acoustics, fellowship dissertation, Trinity College, Cambridge, unpublished,
(1988). CHANG, T., AND HSIAO, L., the Riemann Problem and Interaction of Waves in Gas Dynamics, Longman, Avon (1989). COLE, J.D., AND COOK, L.P., Transonic aerodynamics, Elsevier, Amsterdam (1986).
CRAMER, M.S., AND SEEBASS, A.R., Focusing of a weak shock at an arete, J. Fluid Mech, 88 (1978), pp. 209-222. CRIGHTON, D.G., Model equations for nonlinear acoustics, Ann. Rev. Fluid Mech., 11
(1979), pp. 11-13. CRIGHTON, D.G., Basic theoretical nonlinear acoustics, in Frontiers in Physical Acoustics, Proc. Int. School of Physics "Enrico Fermi", Course 93 (1986), North-Holland, Amsterdam.
HAMILTON, M.F., Fundamentals and applications of nonlinear acoustics, in Nonlinear Wave Propagation in Mechanics, ed. T.W. Wright, AMD-77 (1986), pp. 1-28. HARABETIAN, E., Diffraction of a weak shock
by a wedge, Comm. Pure App!. Math., 40 (1987), pp. 849-863. HORNUNG, H., Regular and Mach reflection of shock waves, Ann. Rev. Fluid Mech., 18 (1986), pp. 33-58. HUNTER, J.K., Transverse diffraction
and singular rays, SIAM J. App!. Math., 75 (1986), pp. 187-226. HUNTER, J.K., Hyperbolic waves and nonlinear geometrical acoustics, in Transactions of the Sixth Army Conference on Applied Mathematics
and Computing, Boulder CO (1989), pp. 527-569. HUNTER, J .K., AND KELLER, J .B., Weakly nonlinear high frequency waves, Comm. Pure App!. Math., 36 (1983), pp. 547-569.
197 [15] [16] [17] [18] [19] [20] [21] [22] [23] [24]
[25] [26] [27]
[28] [29] [30] [31] [32] [33] [34] [35]
HUNTER, J.K., AND KELLER, J.B., Caustics of nonlinear waves, Waye motion, 9 (1987), pp. 429-443. HUNTER, J.K., MAJDA, A., AND ROSALES R.R., Resonantly interacting weakly nonlinear hyperbolic waves,
II: several space variables, Stud. Appl. Math., 75 (1986), pp. 187-226. KADOMTSEV, B.B., AND PETVIASHVILI, V.I., On the stability of a solitary wave in a weakly dispersing media, SOY. Phys. Doklady,
15 (1970), pp. 539-541. KELLER, J.B., Rays, waves and asymptotics, Bull. Am. Math. Soc., 84 (1978), pp. 727-750. KODAMA, Y., Exact solutions of hydrodynamic type equations having infinitely many
conserved densities, IMA Preprint # 478 (1989). KODAMA, Y., AND GIBBONS, J., A method for solving the dispersionless KP hierarchy and its exact solutions II, IMA Preprint # 477 (1989). KUZNETSOV,
V.P., Equations of nonlinear acoustics, SOY. Phys. Acoustics 16 (1971), pp. 467-470. LIGHTHILL, M.J., On the diffraction of a blast I, Proc. R. Soc. London Ser. A 198 (1949), pp. 454-470. LUDWIG, D.,
Uniform asymptotic expansions at a caustic, Comm. Pure Appl. Math. 19 (1966), pp. 215-250. MAJDA, A., Nonlinear geometrical optics for hyperbolic systems of conservation laws, in Oscillation Theory,
Computation, and Methods of Compensated Compactness, Springer-Verlag, New York, IMA Volume 2 (1986), pp. 115-165. MAJDA, A. AND ROSALES, R.R., Resonantly interacting hyperbolic waves, I: a single
space variable, Stud. Appl. Math. 71 (1984), pp. 149-179. MAJDA, A., ROSALES, R.R., AND SCHONBEK, M., A canonical system ofintegro-differential equations in resonant nonlinear acoustics, Stud. Appl.
Math., 79 (1988), pp. 205-262. NAYFEH, A., A comparison of perturbation methods for nonlinear hyperbolic waves, in Singular Perturbations and Asymptotics, eds. R. Meyer and S. Parter, Academic Press,
New York (1980), pp. 223-276. NIMMO, J.J .C., AND CRIGHTON, D.C., Nonlinear and diffusive effects in nonlinear acoustic propagation over long ranges, Phil. Trans. Roy. Soc. London Ser. A 384 (1986),
pp. 1-35. PEGO, R., Some explicit resonating waves in weakly nonlinear gas dynamics, Stud. Appl. Math., 79 (1988), pp. 263-270. STURTEVANT, B., AND KULKARNY, V.A., The focusing of weak shock waves,
J. Fluid Mech., 73 (1976), pp. 1086-1118. TIMMAN, R., in Symposium Transsonicum, ed. K. Oswatitsch, Springer-Verlag, Berlin, 394 (1964). WHITHAM, G.B., Linear and Nonlinear Waves, Wiley, New York
(1974). ZABOLOTSKAYA, E.A., AND KHOKHLOV, R.V., Quasi-plane waves in the nonlinear acoustics of confined beams, SOy. Phys.-Acoustics, 15 (1969), pp. 35-40. ZAHALAK, G.I., AND MYERS, M.K., Conical
flow near singular rays, J. Fluid. Mech., 63 (1974), pp. 537-561. JOHNSON, R.S., Water Waves and Korteweg-deVries Equations, J. Fluid. Mech., 97 (1980), pp. 701-719.
GEOMETRIC THEORY OF SHOCK WAVES* TAl-PING LIUt Abstract. Substantial progresses have been made in recent years on shock wave theory. The present article surveys the exact mathematical theory on the
behavior of nonlinear hyperbolic waves and raises open problems. Key words. Conservation laws, nonlinear wave interactions, dissipation and relaxation. AMS(MOS) subject classifications. 76N15, 35L65
1. Introduction. A large class of nonlinear waves in mechanics, gas dynamics, fluid mechanics and the kinetic theory of gases are nonlinear hyperbolic waves in that the behaviors of these waves are
governed in a basic way by certain a-priori determined characteristic values. Of these the most important ones are the shock waves. The strong nonlinear nature of shock waves makes the theory
interesting and rich. Because most physical models carrying hyperbolic waves are not scalar but systems, waves of different characteristic families interact. Understanding this nonlinear coupling of
waves is the essence of the theory for hyperbolic conservation laws, which is described in the next section. With the inclusion of dissipative mechanism, as in the compressible Navier-Stokes
equations, we have viscous conservation laws. The inclusion is due to the importance of dissipative mechanisms in the study of shock layer, initial and boundary layers, and wave interactions. It is
also to check the validity of hyperbolic conservation laws through the zero dissipation limits. These issues are considered in Section 3. The phenomenon of relaxation occurs in many physical
situations: gas dynamics with thermo-non-equilibrium elasticity with fading memory, kinetic theory of gases, etc. Conservation laws with relaxation are in some sense more singular a perturbation of
hyperbolic conservation laws than that of viscous conservation laws. This and the dual nature of hyperbolicity and parabolicity for a relaxation system are explained in Section 4. Conservation laws
with reaction and diffusion may be highly unstable. In Section 5 a class of such systems originated from the study of nozzle flow is described. Nonstrictly hyperbolic conservation laws are important
in the study of MHD, elasticity and multiphase flows. Behavior of waves for such a system with or without the effect of damping or dissipation is discussed in Section 6. Finally several concluding
remarks are made in the last section. 2. Hyperbolic Conservation Laws. Hyperbolic conservation laws
af(u) _ 0
at + ax - ,
*This paper was written while t.he aut.hor was visiting t.he Institute for Mathematics and Its Applications, University of Minnesota, Minneapolis, MN 55455 tCourant Institute, New York University,
251 Mercer Street New York, NY 10012
199 are the basic model for shock wave theory. In this section we assume that it is strictly hyperbolic, i.e. f'(u) has real and distinct eigenvalues >'l(U) < >'2(U) < ... < >'n(u). The compressible
Euler equations have this property. Each characteristic value >'i( u) carries a family of waves. The interactions of these waves are the essence of the theory for (2.1). For system of two
conservation laws the interactions of waves of different characteristic families are weaker because of the existence of the Riemann invariants for two equations. Mainly because of this, behavior of
solutions whose initial data oscillating around a constant has been studied only for two conservation laws, Glimm-Lax [4], which are genuinely nonlinear in the sense of Lax [8]. For general systems
for which each characteristic field is either genuinely nonlinear or linear degenerate, the large-time behavior of solutions with small total variation has been studied satisfactorily, Lin [11]. The
regularity and large-time behavior for general systems not necessarily genuinely nonlinear are studied in Liu [12], though with no rate of convergence to asymptotic states. The above results are
obtained through a principle of nonlinear superposition, Glimm [3], Glimm-Lax [4] and Liu
[10]. There is no satisfactory uniqueness theory for any physical system. The best result so far is that of DiPerna [2], which shows that for genuinely nonlinear two conservation laws a piecewise
smooth solution with no compression wave is unique within the class of solutions of bounded total variation. The problem is important because of the need of the entropy condition for the hyperbolic
shock wave theory. Existence theory for large data has been obtained for certain two conservation laws using the theory of compensated compactness, see articles on the theory in this volume. The
problem makes sense in general only for certain physical systems, since it is easy to construct systems for which Riemann problem with large data is not solvable. It would be interesting to study the
problem for the compressible Euler equations. Study of interaction of rarefaction waves, Lin [9], has led us to conjecture that when the data does not yield vacuum immediately then the solution does
not contain vacuum and is of bounded local variation for any positive time. 3. Viscous Conservation Laws. Consider the viscous conservation laws
(3.1) For physical systems, the viscosity matrix is in general not positive. The system then becomes hyperbolic-parabolic and not uniformly parabolic. This has the effect that discontinuity in the
initial data may not be smoothed out. A more important hyperbolic character of the system comes from the nonlinearity of the flux function f( u), it is there even if B( u) is positive. The behavior
of a nonlinear wave can be detected through the characteristic values. Viscous shock waves are compressive and therefore is stable in a different sense from the expansion waves, Liu [13], LiuXin
[15], and references therein. Diffusion waves are at worst weakly expansive and quite stable, Chern-Liu [1]. It is therefore natural to study these waves through a hyperbolic-parabolic technique, Liu
[13]. The technique needs to be refined to study the stability of a general wave pattern consisting of both compression and expansion waves.
200 The central question for (3.1) is to understand the behavior of a general flow as the strength of the viscosity matrix B( u) varies, in particular when it tends to zero. The interesting case, of
course, is when the corresponding inviscid solution for (2.1) is not smooth. Hopf-Liu [6] solves the problem for a single shock wave. The interaction of initial and shock layers are studied there.
For viscosity matrix not positive, discontinuities in the data propagate into the solution [6] and reference therein. Recently Goodman-Xin [5] uses the technique of matched asymptotic expansion and
characteristic-energy method to show that for a given piecewise smooth inviscid solution there exists a sequence of viscous solutions converging to the given inviscid solution. While the inviscid
solution in [5] is more general than that of [6], [5] does not address the formation and interaction of shock waves, and the initial data for viscous solutions are not fixed. The difference of
results in these works provided interesting problems. One possible way to make further progress in this area is to refine and generalize the techniques in [13]. 4. Relaxation. In many physical
situations effects of nonequilibrium, delay, memory and relaxation are important. The mathematical models usually take the form of either hyperbolic conservation laws with integral or lower-order
terms. Such an effect has a partial smoothing property much like the viscous conservation laws except that it is weaker and does not smooth out strong shock wave. Although equations in various
physical situations, e.g. gas dynamics with thermo-non-equilibrium, elasticity with fading memory, kinetic theory of gases, take different forms, there are common features, see the mathematical
analysis for a simple model in [14] and physical models in the references therein. Mathematical models in phase transition and multi-phases often are ill-posed. It is hoped that the inclusion of the
nonequilibrium would make the equations well-posed. When succeeded, the question is then to study the stability and instability of nonlinear waves. These, however, remain challenges for future
researches. As mentioned above; hyperbolic conservation laws with relaxation have some dissipative behavior not dissimilar to viscous conservation laws. However, the former is less parabolic than the
latter. There is a hierarchy of hyperbolicity and parabolicity. Mathematic!'Ll analysis suitable to study general nonlinear behavior of these various degree of hyperbolicity and parabolicity of these
physical models remains far from being complete. Here we mention a specific problem. Consider a simple model as in [14].
ov ot +f(v,w)=O
Ow og 7ft + ox(v,w) = h(v,w)
where v represents the conserved quantity and w the relaxing quantity and h( v, w) may take the form
h(v,w) '"
w*(v) - w rev) ,
r( v) > 0 the relaxation time and w*( v) the equilibrium value for v. One may view
201 the system as the perturbation of the equilibrium conservation law (4.2)
~~ + :xf(v,w*(v)) = 0
The question is to show that solutions of 14.1) tend to solutions of (4.2) in the limit of the relaxation time rev) ---> 0+. Because the perturbation (4.1) of (4.2) is more singular than that of
(3.1) of (2.1), theory of compensated compactness has not been shown to work. 5. Convective Reaction-Diffusion. When reaction is present in viscous conservation laws, the system becomes convective
reaction-diffusion equations and often takes the form
Such a model occurs in important physical situations such as combustions. Nonlinear waves for the system can be highly unstable, see articles on combustions in this volume. In [7] a model arised from
the theory for gas flow through a nozzle is studied. It turns out that the inviscid theory offers guide for the stability and instability of waves for such viscous model. The theory for nozzle flow,
see (7) and references therein, provides the mathematical basis for the occurance of unstable waves for gas flows. For nozzle flow instability results from the geometric effect of the nozzle. In
combustion the reactions are chemical and have highly unstable effects. Mathematical study of such behavior remains largely a challenge. 6. Nonstrictly Hyperbolicity. Studies of conservation laws
which are nonstrictly hyperbolic are centered mostly on the important Riemann problem, see articles on this volume. The effects of viscosity on the behavior of overcompressive waves have been
studied, see the article of Liu and Xin in the proceeding of the last IMA workshop on equations of mixed types. Recently overcompressive shock waves have shown to occur in MHD and nonlinear
elasticity, see also articles in the aforementioned volume. It would be interesting to study the effects of viscosity for these systems as well as for other systems with crossing shocks. 7.
Concluding Remarks. We have seen several types of equations which carry shock waves; and there are more. The classification of these equations into hyperbolic, parabolic or mixed types tells part of
the story. Coupling of different models of waves and dissipation induced by nonlinearity, relaxation, viscosity etc. are also important elements in the shock wave theory. It is important to recognize
the elementary modes of a general flow, whenever possible. Even though the progresses so far have been very substantial, many more fundamental questions remain to be answered. One hopeful sign is
that several different approaches are available now. The present article emphasizes the geometric approach of shock waves. Undoubtedly new progress will be made based on the basic understanding of
the available techniques illustrated in the articles in this volume.
202 REFERENCES [1]
CHERN, I.-L., AND LIU, T.-P., Convergence to diffusion waves of solutions for viscous conservation laws, Comm. Math. Phys. 110 (1987), 503-517.
DI PERNA, R., Uniqueness of solutions to hyperbolic conservation laws, Indiana U. Math. J. 28 (1979), 244-257.
GLIMM, J., Solutions in the large for nonlinear systems of equations, Comm. Pure Appl. Math., 18 (1965), 95-105.
GLIMM, J. AND LAX, P.D., Decay of solutions of nonlinear hyperbolic conservation laws, Amer. Math. Soc. Memoir, No. 101 (1970).
[5] GOODMAN, J. AND XIN, Z., (preprint). [6]
HOPF, D. AND LIU, T.-P., The in viscid limit for the Navier-Stokes equations of compressible isentropic flow with shock data, Indiana U. J. (1989).
Hsu, S.-B. AND LIU, T.-P., Nonlinear singular Sturm-Liouville problem and application to transonic flow through a nozzle, Comm. Pure Appl. Math. (1989).
LAX, P .D., Hyperbolic systems of conservation laws, II, Comm. Pure Appl. Math., 10 (1957), 537-566.
LIN, L., On the vacuum state for the equation of isentropic gas dynamics, J. Math. Anal. Appl. 120 (1987), 406-425.
LIU, T.-P., Deterministic version of Glimm scheme, Comm. Math. Phys., 57 (1977), 135-148.
LIU, T.-P, Linear and nonlinear large time behavior of general systems of hyperbolic conservation laws, Comm. Pure Appl. Math., 30 (1977), 767-798.
LIU, T .-P., Admissible solutions of hyperbolic conservation laws, Amer. Math. Soc. Memoir, No. 240 (1981).
LIU. T.-P., Nonlinear stability of shock waves for viscous conservation laws, Memoirs, Amer. Math. Soc. No. 328 (1975).
LIU, T.-P., Hypel'bolic conservation laws with relaxation, Math. Phys. 108 (1987), 153-175.
LIU, T .-P, AND XIN, Z., Nonlinear stability of rarefaction waves for compressible Navier-Stokes equations, 118 (1988), 415-466.
Abstract. In fluid flows one can often identify surfaces that correspond to special features of the flow. Examples are boundaries between different phases of a fluid or between two different fluids,
slip surfaces, and shock waves in compressible gas dynamics. These prominent features of fluid dynamics present formidable challenges to numerical simulations of their mathematical models. The
essentially nonlinear nature of these waves calls for nonlinear methods. Here we present one such method which attempts to explicitly follow (track) the dynamic evolution of these waves (fronts).
Most of this exposition will concentrate on one particular implementation of such a front tracking algorithm for two space, where the fronts are one-dimensional curves. This is the code associated
with J. Glimm and many co-workers.
Introduction. In fluid flows one can often identify surfaces of co-dimension one that correspond to prominent features in the flow. Examples are boundaries between different phases of a fluid or
between two different fluids, slip surfaces, shock curves in compressible gas dynamics. All such surfaces are characterized by significant changes in the flow variables over length scales small
compared to the flow scale. For example in oil reservoirs the oil banks have a size of 10 meters compared to an average length scale of 10 kilometers; or in compressible gas dynamics shock waves have
a width of 10- 5 cm compar'ed to a length scale of 10 cm. The dynamics of such waves may be influenced by their internal structures. Whereas for shock waves the speed depends on the asymptotic states
to the left and right, for two dimensional detonation waves the speed depends also on the chemistry and curvature, [B], [J]. There are situations where it is necessary to take these physical aspects
of the flow into account when doing a numerical simulation. A simple model for nonlinear wave propagation is Burger's equation Ut
+ UU x = VU xx
where the state variable U is convected with characteristic speed U and diffused with viscosity v. Because of the dependence of the characteristic speed on the state variable one obtains a focusing
effect that leads to the formation of shock waves. Consider initially a wave of length L (see Fig. 1 y. The monotone decreasing part of the wave will steepen such that in a thin layer the solution
rapidly decreases from a value u/ to u r . The width w of this layer is about I v I' and this layer moves with speed s =
(u/ - u r ). If w ~ L, we may approximate the layer by a jump
from U/ to U r and consider the inviscid limit by neglecting v to obtain the inviscid Burger's equation
*Department of Applied Mathematics, University of Heidelberg, 1m Neuenheimer Feld 294, D-6900 Heidelbert, Germany tDepartment of Applied Mathematics and Statistics, SUNY at Stony Brook, Stony, Brook,
NY 11794
t=- i, ~to
II , 14-~ I
s \ .-~ I
Fig. 1 The evolution of the initial data (left) under Ut
+ UU x =
VU xx
is given on the right. Now data as in Fig. 1 leads to jumps in the solution, where the Rankine-Hugoniot conditions govern the relationship between the speed of the jump and its left and right
asymptotic states. When computing such a flow with very small viscosity v, suppose we represent the state variables associated with points on a fixed underlying grid with spacing ~x. In this
framework we would like to contrast two numerical methodologies: shock capturing and shock tracking. In the shock capturing methods v is replaced by a 1/ numerical viscosity Vnum ~ v. The width of a
shock layer W num = I nUIn I ~ 3~x, UI- U r
so that these waves are most accurate for weak waves. In a shock tracking method an additional moving grid point is introduced which serves as a marker for the shock position. The algorithm has to
update its position and the asymptotic left and right states on the underlying fixed grid. For the moving of the shock point analytic information about it is necessary. Shock tracking corresponds to
replacing v by zero, so it is best for strong waves and gives a high resolution on relatively coarse grids. The front tracking principle, which is not limited to conservation laws or to shocks, is
that a lower dimensional grid gets fit to and follows the significant features in the flow. This is coupled with a high quality interior scheme to capture the waves that are not tracked. In the
following we talk only about front tracking in two space dimensions. First we describe tracking of a single wave and mathematical issues arising from this. Next we discuss tracking wave interactions
and its mathematical issues. Then follows a section describing the data structure of a front tracking code. After a few numerical examples we give a conclusion.
Front tracking applied to a singe wave. Suppose we consider an expanding cylindrical shock wave for a certain time interval. Say this is modeled by the two dimensional Euler equations for polytropic
gas dynamics where the outstanding feature of the flow is a shockwave with smooth flow in front and behind it. If the numerical simulation requires a high level of resolution on a moderate size grid,
front tracking lends itself to this problem. To this end a one dimensional grid is fitted to the shock wave and follows its dynamic evolution. The smooth flow is captured using an underlying two
dimensional grid, where in each time step an initial-boundary value problem is solved in each smooth component of the flow field. The front is represented by a finite number of points along the
curve, which carry with them physical data, in this case the left and right states and the fact that it is a hydrodynamic shock wave. Say the underlying grid is cartesian, which carries the
associated state variables at each grid point. Each timestep consists of a front propagation and an interior update. THE CURVE PROPAGATION is achieved by locally at each curve point rewriting the
equation in a rotated coordinate system, normal and tangential to the front: Ut
+ n((n. V)f(u)) + 8' ((8' V)f(u))
= O.
This then gets solved through dimensional splitting. The normal step reduces to a one dimensional Riemann problem, if one approximates the data to the left and right of the shock by constants.
1--- -- - - ' - - - -
Fig. 2 A second order scheme for the normal propagation of a hydrodynamic shock wave, [CG]. This normal step can be made into a second order scheme in the following way
[CG], see Fig. 2: - first solve Riemann problem to obtain speed and approximate states at
- follow the characteristics from the left and right states at t = tl back to t = to and use the data at the foot of them to obtain updated left and right states at t = tl
206 - finally solve a Riemann problem at t = tl to improve states and speed there. After the normal step has been implemented at all points representing the shock curve, the tangential step, which
propagates surface waves, is done by a one dimensional finite difference scheme on each side of the front. Because points on the front may move too far apart (or too close together) during
propagation, a routine which redistributes the points along the curve is sometimes useful. One has to be cautious though, because this routine stabilizes the curve which may tend to become unstable
due to physical or numerical effects. THE INTERIOR SCHEME. The underlying principle is to solve an initial-boundary value problem on both sides of the front (the front is a moving boundary), and to
never use states on the opposite side of the front. Away from the front this is readily achieved by using any finite difference scheme compatible with the resolution one needs in the interior. Near
the front an algorithm which is consistent with the underlying partial differential equation has yet to be worked out. The following recipe has been implemented successfully (see Fig. 3): suppose the
stencil gets cut off by the front. Use the states at the nearest crossing point (obtained through linear interpolation from the front states) and place them at the missing stencil points.
Fig. 3 A five point centered stencil near the front, where the states on the front are assigned to the two grid points on the opposite side of the front. So far two papers have addressed the
front-interior coupling problem in two space dimensions: [ee] suggest and implement a coupling which is conservative for gas dynamics. [KZ] have formulated a class of front tracking schemes for which
they show stability. Mathematical issues related to this. In the previous section we saw that this approach leads to the study of one dimensional Riemann problems. This is a
207 special Cauchy problem of the type ut+f(u)x=O
0 UL,X
Since the equation and initial data are scale invariant
(x, t)
(ax, at) ,
we may expect scale invariant solutions. These are well understood e.g. for the scalar equation and for gas dynamics. There is a considerable research effort trying to understand the Riemann
solutions of more complicated models. One example are the 2 x 2 systems with quadratic flux functions studied by various authors, e.g. [IMj, [ITj. New interesting mathematical phenomena arise: -
non-classical waves - non-contractible discontinuous waves, i.e. it is not possible to decrease the wave strength to zero while following a connected brach of the wave curve - open existence and
uniqueness questions. Another example are Riemann solvers for equations describing conservation of mass, momentum and energy in real materials. Their effects on the wave structure has been studied,
[MPj. In another approach the equation of state is tabulated (SESAME code at Los Alamos). Scheuermann used this for a Riemann solver by preprocessing the data. Finally we mention certain waves where
the internal structure of the waves play a role. Whereas say for shock waves of isentropic gas dynamics the two jump equations plus the three pieces of information given by the impinging
characteristics determine the four state variables on both sides of the shock with its speed, for transitional shock waves not enough information impinges through the characteristics and one needs
information from the internal structure in order to determine speed and states. The structure depends sensitively on the viscosity used in the parabolic approximation. These waves thus present a
danger for finite difference schemes, which introduce their own brand of viscosity which is different for different schemes. Here a tracking algorithm which mimicks the structure with a Riemann
solver lends itself naturally to this problem. The front tracking method described so far could also be applied to more complex flow patterns than the expanding spherical shock wave by simply
tracking a single front and capturing all other phenomena using a high quality interior scheme. An example are the Euler equations coupled with complex chemistry used to model the flow around a
hypersonic projectile [Zhuj. Here the hydrodynamic bowshock is tracked and the flow with most of the chemistry concentrated right behind this shock is captured. This is an example where a tracking of
the bowshock is necessary.
208 Wave interaction. One can also track interacting waves. To illustrate this consider a planar shock wave impinging on a curved ramp (Fig. 4), giving rise first to a regular and then to a Mach
reflection. This is an example on how new curves may arise. For hydrodynamic shock waves this bifurcation may arise through the intersection of shocks with each other or with other "curves", or
through compressive waves ("blow up" of the smooth solution). If one wants to incorporate these phenomena into a front tracking algorithm it is necessary to understand them mathematically. For
example in the case of the planar shock impinging onto the wedge, one needs a criterion which gives for given shock strength the ramp-angle when a bifurcation from regular to Mach reflection occurs.
If one wants to track all the waves, the algorithm needs to have this criterion built in.
Fig. 4 A planar shock impinges onto a wedge, and, depending on the shock strength and wedge angle, give rise either to a regular reflection (left) or a Mach reflection (right). In the latter the
reflected point has lifted off the wall to become a "triple point" from which a "Mach stem" connects to the wall. This is an example of a two dimensional Riemann problem. In general, at the meeting
point of more than two curves, if one approximates the curves by rays and the states nearby by constant states, these nodes are examples of two dimensional Riemann problems. As in one dimensional
case, this is scale invariant Cauchy data
(x, y, t
a, ay, at , a > 0) giving rise to a self similar solution u
= u (~ ,~).
Thus front tracking may lead to two dimensional Riemann problems. Mathematical issues related to this. There has been some progress on studying the qualitative behavior of two dimensional Riemann
problems. For the equations of compressible inviscid, polytropic gas dynamics, in analogy to the one dimensional Riemann problem which is resolved by elementary waves, one expects that the two
dimensional Riemann problem will evolve into a configuration containing several two dimensional elementary waves. This this end these elementary waves were completely classified [GK], some of them
can already be found in [L]. For the scalar two dimensional conservation law the two dimensional Riemann
209 problem could be solved much further. For Ut
+ f(u)x + g(u)y
= 0
with f = 9 it was solved in [W] (J convex), [Ll], [L2] (J one inflection point), [KO] (J any number of inflection points). For f # 9 [W] (J close to g,J convex) and [KO], [eH) (J convex, 9 one
inflection point) gave solutions. Numerical implementation. This knowledge of two dimensional Riemann problems has been used in front tracking codes to some extent. The classification of elementary
waves for gas dynamics gave a list of the generic node one can expect there, that is all generic meeting points of shock waves, contact discontinuities and centered rarefaction waves. The tracking of
a node is the numerical solution of a subcase of the full Riemann problem, one has to determine the velocity and states associated with one specific elementary wave. for gas dynamics this has been
done [GK), GI], [G2). For the scalar two dimensional conservation law the resolution of the two dimensional Riemann problem caused by the crossing of two shocks has been implemented. Whereas in [K)
the point is to solve the interaction of two scalar waves quite accurately, in [GG) the emphasis is on following scalar wave interaction within a complicated topology of curves in a robust fashion
without an unacceptable proliferation of subcases. An approximate numerical solution to a general two dimensional Riemann problem was implemented by approximating the flux functions by piecewise
linear functions [R). Computer science issues related to front tracking. Here we briefly describe a package of subroutines which provides facilities for defining, building and modifying
decompositions ofthe plane into disjoint components separated by curves. It is worth noting that ideas from conceptual mathematics, symbolic computation and computer science have been utilized,
thereby going beyond the usual numerical analysis framework, see [GM).
Fig. 5 The front tracking representation of a Mach reflection. Taking the Mach reflection example (Fig. 4), we illustrate in Fig. 5 the representation of this particular flow. The front consists of
piecewise linear curves at the endpoints of each linear piece we have associated quantities like states and wave types. Given this interface, the plane is decomposed into disjoint components. An
integer component value is associated with each such component. Given any point x, y in the plane, the component value can be recovered. The underlying grid and possible interpolating grids near the
front allow the definition of associated state variables in the interior. There is a recursive data structure. It consists of
which denotes the position of the grid points on the curve
which denotes the piece of the curve between two adjacent points and previous bond by giving a start and an end point and having a pointer to the next
denoting usually a pice of the interface homotopic to an interval. A curve is a doubly-linked list of bonds given by a start and node (see below). It has a point to the first and last band.
which is the position of a point on the interface where more than two curves meet. Its position is given with a list in and out curves.
is a list of nodes and curves
Then there are routines that operate on the interface structure. There are routines that allocate and delete the above structures, then those which add these to the interface, routines that split and
join bonds and curves, all needed for example when there is a change in topology. Also one can traverse a list of the above structures. The code has purposely been set up in such a way that this
interface data structure can be dressed with the physics of a given problem containing curves. For gas dynamics one would associate with each point a left and right state, with each curve the wave
type and at the node the state in each sector in order to have the set up for the Riemann problem. This whole structure now needs routines which allows the interface to propagate from one timestep to
the next. This is done by first moving the interface. This means moving bonds and nodes. Next the interior is updated. Then one has to handle possible interactions and bifurcations. These have to be
detected, classified (they could be tangIer of curves or two dimensional Riemann problems and th'en resolved. There is also a routine which redistribute points on the interface, in case they become
to close together or too far apart.
Numerical examples. We shall give four examples out of many that have been calculated over the years with the code. Fig. 6 shows regular and Mach reflection, [GK]. Fig. 7 show an underwater explosion
[G2]. Fig. 8 shows RayleighTaylor instability [FG]. Fig. 9 shows an example from oil reservoir modelling [GG].
Fig. 6 On the left the numerical simulation of regular reflection, where the incident shock has Mach number 2.05 and the wedge angle is 63.4 0 • The calculation was performed on a 80 by 20 grid. The
picture shows lines of constant density inside the bubble formed by the reflected shock. On the right the numerical simulation of a Mach reflection, where the incident shock has Mach number 2.03 and
the wedge is 27 0 • Inside the bubble formed by the reflected shock the calculated lines of constant density are shown. The calculations we performed on a 60 by 40 grid. In both cases there is
excellent agreement with experiments,
213 (a) time 0.0 msec
(b) Lime 7.5 msec
(d) time 50.0 msec
(c) time 15.0 msec
20.:U= 206y
Fig. 7 An underwater expanding shock wave diffracting through the water's surface. The internal pressure is 100 kbans and initial radius of 1 meter installed 10 meters below the water's surface. The
tracked front in dark lines is super imposed over lines of constant pressure. The grid is 60 by 120.
Light fluid
lOdx = lOdy
Heavy fluid t = 0
t = 12
t = 18
t = 24
Fig. 8 Two compressible fluids of different densities, with gravitational forces (here positing upward) pushing lighter fluid into heavy one. The interface is initialized by 14 bubbles with different
wave length and initial amplitude of 0.01. The density ratio is 10. The interface between these fluids is unstable and leads to a mixing layer, with bubbles of light fluid rising in the heavy fluid.
- - ---
(a) SICP U
- -
® @
o il
@ 0
® ®
® ® ®
o o
(b) S ICP 40
® (e) . Iep 80
® (d) . Iep 240
Fig. 9 A horizontal cross section of an oil reservoir modeled by the Buckley-Leverett equations_ Water gets injected at 19 injection wells (cross squares), displacing the oil in the porous media, and
oil get extracted at 12 producing wells (open squares). Plots of the fronts between water and oil are shown_ The frontal mobility ratio for water displacing oil is 1.33. Conclusion. It should have
become clear that this numerical approach forces one to think hard about underlying physics and mathematics_ If one is successful
216 at penetrating the problem at hand, front tracking can give the correct simulation with very high resolution. REFERENCES
BUKIET, The effect of curvature on detonation speed, SIAM J. Appl. Math., 49 (1989).
CHANG, HSIAO, The Riemann problem and interaction of waves in gas dynamics, John Wiley, New York, 1989.
CHERN, COLELLA, A conservative front tracking method for hyperbolic conservation laws, Journal Compo Physics, (1989).
CHERN, GLIMM, McBRYAN, PLOHR, YANIV, Front tracking for gas dynamics, J. Compo Phys., 62 (1986).
FURTATO, GLIMM, GROVE, LI, LINDQUIST, MENIKOFF, SHARP, ZHANG, Front tracking and the interaction of nonlinear hyperbolic waves, NYU preprint (1988).
GLIMM, McBRYAN, A computational model for interfaces, Adv. Appl. Math., 6 (1985).
GLIMM, KLINGENBERG, McBRYANT, PLOHR, SHARP, YANIV, Front tracking and two dimensional Riemann problems, Adv. Appl. Math. 6 (1985).
GLIMM, GROVE, LINDQUIST, McBRYAN, TRYGGVASON, The bifurcation of tracked scalar waves, SIAM J. Sci. Stat. Compo 9 (1988).
GROVE, The interaction of shock waves with fluid interfaces, Adv. Appl. Math (1990).
GROVE, Anamolous reflection of a shock wave at fluid interfaces, Los Alamos preprint LA UR (1989) 89-778.
ISAACSON, MARCHES IN , PLOHR, TEMPLE, The classiBcation of solutions of quadratic Riemann problems I, MRC Report (1985).
ISAACSON, TEMPLE, The classiBcation of solutions of quadratic Riemann problems II, III, to appear SIAM J. Appl. Math ..
JONES, Asymptotic analysis of an expanding detonation, NYU DOE report (1987). scalar conservation laws in one and two space dimensions, Probl., ed. BaUmann, Jeltsch, Vieweg Verlag (1989).
KLINGENBERG, OSHER, Nonconvex Proc. 2 nd Int. Conf. Nonlin. Hyp.
KLINGENBERG, ZHU, Stability of difference approximations for initial boundary value problems applied to two dimensional front tracking, Proc. 3rd Int. Conf. on Hyp. Problems, ed. Gustafsson (1990).
LANDAU, LIFSHITZ, Fluid Mechanics, Addison Wesley (1959).
LINDQUIST, The scalar Riemann problem in two space dimensions, SIAM J. Anal. 17 (1986).
LINDQUIST, Construction of solutions for two dimensional Riemann problems, Adv. Hyp. PDE and Math. with Appl. 12A (1986).
MENIKOFF, PLOHR, Riemann problem for fluid flow of real materials, Los Alamos prepint LA UR-8849 (1988).
RiSEBRO, The Riemann problem for a single conservation law in two space dimensions, May 1988, Freiburg, Germany.
[W] WAGNER, The Riemann problem in two space dimensions for a single conservation law, SIAM J. Math. Anal. 14 (1983). [Zhu]
ZHU, CHEN, WARNATZ, Same computed results of nonequilibrium gas flow with a complete model, SFB123 Heidelberg University preprint 530 (July 1989).
Introduction. It is evident from the lectures at this meeting that the subject of systems of hyperbolic conservation laws is flourishing as one of the prototypical examples of the modern mode of
applied mathematics. Research in this area often involves strong and close interdisciplinary interactions among diverse areas of applied mathematics including (1) Large (and small) scale computing
(2) Asymptotic modelling (3) Qualitative modelling (4) Rigorous proofs for suitable prototype problems combined with careful attention to experimental data when possible. In fact, the subject is
developing at such a rapid rate that new predictions of phenomena through a combination of theory and computations can be made in regimes which are not readily accessible to experimentalists.
Pioneering examples of this type of interaction can be found in the papers of Grove, Glaz, and Colella in this volume as well as the recent work of Woodward, Artola, and the author ([1], [2], [3],
[4], [5], [6]). In this last work, novel mechanisms of nonlinear instability in supersonic vortex sheets have been documented and explruned very recently through a sophisticated combination of
numerical experiments and mathematical theory. Here I will discuss my own perspective on several open problems in the field of hyperbolic conservation laws which involve the interaction of ideas in
modern applied mathematics. Since the audience at the meeting consisted largely of specialists in nonlinear P.D.E. and analysis, I will mostly emphasize open problems which represent rigorous proofs
for prototype situations. I will concentrate on open problems in three different areas: 1) Self-similar patterns in shock diffraction; 2) Oscillations for conservation laws; 3) New phenomena in
conservation laws with source terms. In the first section, I will give the compressible Euler equations as the prototypical example of a system of conservation laws in several space variables and
then describe several approximations such as isentropic, potential flows which yield other related hyperbolic conservation laws. I will also discuss the nature of these approximations in
multi-dimensions. This material may not be well-known to the reader and provides background material for some of the open problems discussed in the remruning sections. Each of the remruning three
sections is devoted to my own *Oepartment of Mathematics and Program in Applied and Computational Mathematics, Princeton University, Princeton, NJ 08544 partially supported by grants N.S.F. OMS
8702864, A.R.O. OAAL03-89-K-0013, O.N.R. N00014-89-J-I044
218 perspective on the open problems in the three areas mentioned earlier. It was clear from my lectures during the meeting that I regard the mathematical problems associated with turbulence and
vorticity amplification and concentrations as extremely important for future research but they are not emphasized here. The interested reader can consult some of my other research/expository articles
(see [7], [8]) for my perspective on these topics. Section 1: The Compressible Euler equations and Related Conservation Laws. A general m x m system of conservation laws in N -space variables is
given by
t ~(Fj(il))
ail + at _ ax)-
= S(il) .
The functions Fj(il), 1::; j ::; N, are the nonlinear fluxes and S(il), are the source terms. These are smooth nonlinear mappings from an open subset of R M to R M. For convenience in notation, we
have suppressed any explicit dependence on (x; t) of the coefficients in (1.1). The prototypical example of a system of homogeneous conservation laws is the system of N +2 conservation laws in
N-space variables given by the Compressible Euler Equations expressing conservation of mass, momentum, and total energy and given by
at+ IV aE d. at + IV
In (1.2), p is the density with l/p = r, the specific volume, v = t(Vl, ... ,VN) is the fluid velocity with pv = mthe momentum vector, p is the scalar pressure, and E = ~(m. m)/p + pe(r,p) is the
total energy with e the internal energy, a given function of (r, p) defined through thermodynamics considerations. For an ideal gas, e = (pr)h - 1 with I > 1, the specific heat ratio. The notation 1/
9 b denotes the tensor product of two vectors. It is well-known that for smooth solutions of (1.2), the entropy S(p, E), is conserved along fluid particle trajectories, i.e.
-Dt = 0 where
DaN - =- + Dt at
'"""v--f:r) aXj .
The first simpler system of conservation laws which emerges as an approximation for solutions of the compressible Euler equations is probably well-known to the reader. If the entropy is initially a
uniform constant and the solution remains smooth, then (1.3) implies that the energy equation can be eliminated. Furthermore, under standard assumptions on the equation of state, the pressure can be
regarded as a
219 function of density and entropy, pep, S). Thus, with constant initial entropy the smooth solution of (1.2) satisfies the system of N + 1 conservation laws in N-space variables given by the
equations for Isentropic Compressible Flow
8p at 8m
+ div (m) =
. (mQ9m) at + div - p - + \lp(p, So) = 0 .
For an ideal gas law, pcp) = A(So)p"Y with I> 1. I remind the reader that solutions of the system in (1.4) are a genuine approximation to solutions of the system in (1.2) once shock waves form since
the entropy increases along a shock to third order in wave strength for solutions of the compressible Euler equations while in (1.4), the entropy is constant. Next, I present a conservation law which
involves a further approximation to solutions of (1.2) beyond the isentropic approximation from (1.4); this approximation is well-known in transonic aerodynamics and is called the equation for
time-dependent potential flow. First I consider smooth solutions of (1.4) that are irrotational, i.e. curlv=O.
With = curlv defining the vorticity, then the vorticity in a smooth solution of the 3-D Euler equations from (1.2) satisfies (1.6) where P. ~(p, S). The general formula in (1.6) is readily verified
by taking the curl of the momentum equation and using vector identities. One immediate consequence of the equations in (1.6) and (1.3) is that a smooth solution of compressible 3-D Euler which is
both isentropic and irrotational at time t = 0 remains isentropic and irrotational for all later times as long as this solution stays smooth; thus, the condition in (1.5) is reasonable for smooth
solutions. Next, for smooth irrotational solutions of the equations for isentropic compressible flow, I will integrate the N-momentum equations in (1.4) through Bernoulli's law. With the condition,
curl v = 0, the N-momentum equations in (1.4) assume the form
(1. 7)A) where h(p) satisfies
(1. 7)B)
= ~:(p,So)/p > O.
On a simply connected space region, the condition, curlV' = 0, implies that (1.8)
so that (1.7)A) determines the density from the potential through the formula (1.9) with D = t( t, V'. is one of the m wave speeds of the linearization at Uo. The amplitude solves a nonlinear
transport equation
(3.4) The operator D is a first order operator which is the linear transport operator of geometric optics given by differentiation along the bicharacteristic rays associated with tP and has the form,
(3.5) with a(x, t), b(x, t), C(x, t) determined from tP by explicit formulas. In the case of single propagating waves, provided b f- 0 (which is always true for a genuinely nonlinear wave), there are
elementary changes of variable which reduce (3.4) to the inviscid Burgers equation, (3.6) The advantage of utilizing geometric optics as an asymptotic tool in understanding phenomena in the complex
general multi-D system in (3.1) is now evident. The solutions of (3.6) are known explicitly and provide general quantitative asymptotic approximations for (3.1) through the equations in (3.2)-(3.5).
Obviously, it is an important theoretical problem to justify nonlinear geometric optics for discontinuous solutions of conservation laws. The only rigorous work thus far on this topic is that by
Diperna and the author ([27]) for a class of systems in a single space variable which I will discuss briefly below. An outstanding and very important but accessible open problem is the following
Problem #1: Provide a rigorous justification of the single wave geometric optics expansion in (3.2) for discontinuous initial data.
I believe that this problem is ripe for solution for the following reasons: at this meeting, G. Metevier (see his paper in this volume) has announced the existence
of shock front solutions of (3.1) for a uniform time T independent of € as € 1 0; the existence and structure of the discontinuous approximating solutions of (3.2)-(3.5) is completely understood in
the genuinely nonlinear case because the discontinuous solutions of (3.6) are well known; the errors between the approximate and exact solution can probably be estimated through an appropriate
multi-dimensional generalization of the estimates utilizing geometric measure theory developed by Diperna and the author ([27]) for systems in a single space variable. Next, I turn to important open
problems regarding the new phenomena which occur when one attempts to build multi-wave approximations for geometric optics with the form,
(3.7) In linear geometric optics the wave patterns in (3.7) superimpose and each amplitude solves the corresponding single wave transport equation of geometric optics. When does this happen in the
nonlinear case? The formal asymptotic theory ([28], [29]) predicts that the 8ingle wave pattern8 of nonlinear geometric OptiC8
described in (3.2)-(3.5) 8uperimp08e and are non-resonant provided that the amplitudes {o"p(x,t,8n~=1 have compact 8upport in 8 .
The only systematic rigorou8 jU8tification for discontinuous solutions for geometric OptiC8 has been developed by Diperna and the author ([27]) in this non-reMnant ca8e in a 8ingle 8pace variable.
The main theorem from [27] requires the hypothesis that the initial amplitudes {o- p( x, ~=l at time t = 0 have compact support (thus, are non-resonant) and that all m wave fields are distinct and
genuinely nonlinear. Under these assumptions Diperna and the author prove that
(3.9) where u(x,t) is the solution of the conservation laws with the same initial data constructed by Glimm's method. Here II . IlL' is the LI norm. Thus, even for discontinuous initial data,
geometric optics is valid uniformly for all time in this non-resonant situation - this is a surprising result !! Incidentally, one immediate corollary of this theorem is that for small amplitude
initial data of compact support with size €, the discontinuous solutions of the isentropic flow equations in (1.4) in one-space dimension and the discontinuous solutions (3.10)
of the potential flow equations in (1.10) in one-space dimension with the same initial data agree within in the L 1 norm for all time
230 This result provides a rigorous justification for some of the approximations described in Section 1 in the special case of a single space variable. Since the isentropic flow equations in (1.4)
and the potential flow equations in (1.10) have the same smooth solutions, it is an exercise to check that these two equations have the same single wave expansions for nonlinear geometric optics.
With this fact, the corollary in (3.10) follows immediately from (3.9) and the triangle inequality. Some interesting and accessible open problems generalizing the results stated in (3.9) are
described at the end of the author's survey article ([24]). I return to the general multi-wave expansions of geometric optics and ask whether new phenomena occur when the non-resonance conditions
from (3.8) are no longer satisfied? The answer is yes. Recent research of Hunter, Rosales, and the author ([29], [30]) employing a systematic development of nonlinear geometric optics reveals more
complex effects beyond (3.8); general periodic or almost periodic wave trains do not superimpose but instead interact resonantly. The eikonal equations in (3.3) remain the same but the amplitudes,
{o"P( x, t, B)};;'=l no longer solve simple decoupled transport equations like those in (3.4); in fact the different amplitudes resonantly exchange energy through nonlinear interaction and solve a
coupled system of quasi-linear integro-differential equations provided that m ~ 3. Applications to the equations of compressible fluid flow from (1.2) and (1.4) are developed in detail in the above
papers in both a single and several space dimensions. As regards the 3 x 3 system from (1.2) describing compressible fluid flow in one space variable, the resonant nonlinear interaction of small
amplitude sound waves with small amplitude entropy waves produces additional sound waves which resonantly interact. After some elementary changes of variables, the two sound wave amplitudes oo±(B, t)
satisfy the coupled system of resonant equations
00: + GO"+ ): + (3.11 )
+ GO"-): - [
k(B - y)O"-(y, t)dy = 0 k( -B - y)O"+(y, t)dy
where I assume in (3.11) that 0"+,0"-, and k are periodic with period one. The kernel k is a multiple of a rescaled derivative of the initial entropy perturbation; the asymptotics predicts that the
entropy perturbation does not change to leading order in time. Recent papers ([31J, [32]) which combine small scale numerical computation and several exact solutions reveal surprising new phenomena
in the solutions of (3.11) through resonant wave interaction. Thus, the formal predictions from geometric optics for periodic wave trains at small amplitudes for 3 x 3 compressible fluid flow involve
surprising new phenomena. The open problems which I suggest next are motivated by these new phenomena. Since conservation laws without source terms are scale invariant, I propose some open problems
for the rigorous justification of nonlinear geometric optics for the resonant case by considering solutions of the M x M system of conservation laws in a single space variable
231 with small amplitude periodic initial data
= Uo + fU~(X) .
Here uOfRM is a constant and u~(x) is a function with period one, i.e. u~(x + 1) = u~(x ).
Problem #2: For a general system of conservation laws, let u' denote the weak solution with initial data in (3.12)B) and let u~ denote the corresponding approximation from nonlinear geometric optics
(involving resonant wave interaction for m ~ 3 in general). Show that there is a time, T( f) with fT( f) ---+ 00 as 15 ---+ 0 so that (3.13)
max Ilu' - u:VllL' ::; 0(15)
where L1 denotes the L1- norm of a one periodic function in x. I make several remarks on this problem. For m = 2 where the resonant effects are absent and for a pair of genuinely nonlinear
conservation laws the estimate in (3.13) has been proved in [27] with T(f) = 0(15- 2 ); it would be interesting to know if this is sharp. Furthermore, there is an improved geometric optics formal
approximation for large times due to Cehelsky and Rosales (see [33]) which accounts for accumulating phase shifts from wave interactions and this geometric optics approximation u~ should be used in
Problem #2. In fact, the result of Diperna and the author for periodic waves for pairs of conservation laws does not utilize this more refined geometric optics approximation with phase shift
corrections for long times. An interesting and much more accessible technical problem than Problem #2 is to assess whether through the use of this refined geometric optics approximation for m =
2,.the time of validity T(f) becomes significantly larger than T(f) = 0(15- 2 ). One of the reasons that the work in [27] for the periodic case is restricted to m = 2 is that general existence
theorems for small periodic initial data for conservation laws following Glimm's work are unknown for m ~ 3. A straightforward repeat of Glimm's proof shows that the solution u' of the system of
conservation laws in (3.12)A) with general initial data exists for times of order 0(15- 2 ). I conjecture that for a general system of conservation laws with genuinely nonlinear and linearly
degenerate wave fields, this crude result is sharp; my conjecture is based on the unstable nature of solutions of the resonant asymptotic equations for a particular example system discussed in [30].
It would be very interesting to find out whether this conjecture is correct. One the other hand, I believe that there is global existence for the 3 X 3 system of compressible fluid flow, (1.2), for
small amplitude periodic initial data as given in (3.12)B). This I list as Problem #3: Show that for the specific 3 x 3 system of compressible fluid flow, Glimm's method yields the global existence
of solutions for general small amplitude periodic initial data. I believe that Problem #2 is too difficult to attack in full generality; the special and illlPortant case of 3 x 3 gas dynamics is
already extremely interesting. For emphasis I stat" this as
232 Problem #4: For the 3 x 3 system of compressible fluid flow, let u:" denote the resonant geometric optics approximation for the initial data in 3.12B) given through 3.11 (see [30]) but including
the large time phase shift corrections of CehelskyRosales ([33]). Let u be the weak solution of (3.12)A) with the same initial data that exists for times of order e- 2 • Find a time interval T(e)
with eT(e) / 00 as e -+ 0 so that u:" differs from u' by o( e) on that time interval.
I remind the reader that from my earlier comments the full solution of Problem #3 is not needed to study Problem #4 and any progress on Problem #4 would be very interesting. Large Oscillations
This section involves the study of existence of solutions via the weak topology and the propagation of large amplitude oscillations for systems of conservation laws
Ut+F(u)x=O u(x,O) = Uo
with large amplitude initial data, Uo. The use ofthe weak topology and the method of compensated compactness was introduced by Tartar ([34]) and applied to scalar conservation laws. Diperna ([35],
[36]) carried out Tartar's program for pairs of conservation laws provided that both wave fields are genuinely nonlinear; in this case strong convergence was deduced from the apriori weak convergence
so that no oscillations propagate. Rascle and P. Serre (see [37], [38], and Serre's paper in this volume) have studied pairs of conservation laws which are not genuinely nonlinear; for example, for a
general nonlinear wave equation, they show that oscillations propagate but the nonlinear tenns in the equations still converge and define a weak solution in the limit. Given all of the phenomena
deduced via geometric optics for propagation and interaction of oscillations at small amplitudes for m ~ 3 it is not surprising that the propagation of large amplitude oscillations and the use of the
weak topology provide difficult questions for systems of conservation laws with m ~ 3. The most important and most accessible of these problems regards propagation of oscillations for 3 x 3
compressible fluid flow. In Lagrangian mass co-ordinates, these equations have the fonn Tt Vt
Gv2 where (3.16)
= 0
+ Px = 0
+ e(T,p)) t + (pv)x = 0
is the specific volume; the interval energy e is given by
pT e---,),-1'
for an ideal gas law. The first remark is that large amplitude oscillations do propagate in solutions of (3.15). Consider the rapidly oscillating exact solution sequence
233 defined by contact discontinuities, i.e. (3.17) where VO,PO are fixed constants and TO is a fixed positive I-periodic function. Large amplitude oscillations propagate for this equation because
the weak limit of TO (~) has a non-trivial Young measure but the velocity and presure converge strongly. Nevertheless, the weak limit is a solution ofthe equations in (3.15). The conjectured behavior
is that these examples provide the worst possible situation. I present this as Problem #5: Let t(T',V',p') be a sequence of weak solutions of the compressible fluid equations in (3.15). Assume the
uniform bounds, 0 < L :S; T' :S; T+, 0 < p_ :S; p' :S; p+, Iv'l:S; V ~nd as € -+ 0, t( T', v', p') converges weakly to t( T, V, p). Is it true that (v',p') converges strongly to (v,p)? Both C.S.
Morawetz and D. Serre are currently working on this problem. In fact, Serre has remarked that if the conjecture in Problem #5 is true, then (3.18)
for an ideal gas law, the limit is a weak solution of the equations for compressible flow.
With (3.16), the result in (3.18) is an easy exercise for the reader. Nevertheless, I have some doubts that this conjecture is true; some high quality numerical simulations could generate some
important insight here.
Section 4: New Phenomena in Conservation Laws with Source Terms. In this section, I briefly discuss my perspective on conservation laws with source terms. I focus on solutions of systems in a single
space variable with the form, (4.1)
+ F(u)x
S(u) .
T.P. Liu has been the principal contributor to the study of conservation laws with a special class of source terms with x dependence which model physical problems such as the averaged duct equations
for one dimensional fluid flow. He has discussed the stability and large time asymptotic behavior for a large class of problems with source terms. The interested reader can consult Liu's paper in
this volume for a detailed list of references. Here I will discuss open problems for conservation laws with source terms which do not satisfy the hypotheses of Liu's work - the equations of reacting
gas flow are a prototypical example. I will discuss some of the new phenomena that occur for these systems with source terms which have been discovered recently through numerical, asymptotic, and
qualitative modelling and then I suggest some accessible open problems motivated by these new phenomena. I emphasize the phenomena for the compressible Euler equations of reacting gas flow as an
example of (4.1) although I am confident that similar phenomena are likely to occur for the hyperbolic conservation laws with suitable source terms arising in multi-phase flow, retrograde materials,
and other applications.
234 The compressible Euler equations for a reacting gas with simplified one-step irreversible kinetics are given by
+ (pv)", = 0 (pV)1 + (pV 2 + p)", = 0 (pE)1 + (pvE + pv)", = qoK(T)pZ (pZ)1 + (pvZ)", = -K(T)pZ PI
where E = T +e is the energy density, Z is the mass fraction offuel, e = ~,qO > o is the heat release, and T = ,pip is the temperature. In the discussion below we assume either the Arrhenius form for
the rate function K(T), 2
= K exp( -E+ IT)
or the ignition temperature law
K(T) = { :
, T"2 T j , T 5: T j
where T; is a fixed reference ignition temperature. In (4.3) K is the rate constant while in (4.3)A), E+ is the non-dimensional activation energy. An important practical problem for both safety and
enhanced combustion regarding the system in (4.2) is the initiation of detonation. Detonation waves are travelling wave solutions of (4.2) which have the structure of an ordinary fluid dynamic shock
followed by chemical reaction; these exact solutions are readily determined by quadrature of a single O.D.E. (see Fickett-Davis [39], Majda [40]) and are called Z-N-D waves; the Z-N-D wave moving
with the slowest velocity is called the C-J (Chapman-Jouget) wave. The problem of initiation involves an initial flow field with a small region of hot gas kept at constant volume and velocity. The
main issue in initiation regards whether this perturbation will grow into a fullydeveloped Z-N-D wave in which case there is transition to detonation or whether this perturbation will die out as time
evolves and the chemical reaction will be quenched so that there is failure? Both experimental data and detailed numerical computations display many complex features in examples illustrating both
failure and initiation. The recent paper by V. Roytburd and the author, [41], contains a discussion and documentation of these phenomena as well as a large list of background references. While these
phenomena in initiation are becoming understood through a combination of experiments and numerical computation, the rigorous theory of such phenomena for solutions of the equations in (4.2) seems
beyond reach. In fact, a very interesting preliminary open problem is the following Problem #1: Establish the existence of solutions for the reacting Euler equations in (4.2) for appropriate initial
data by a modification of Glimm's method. At the meeting, D. Wagner (personal communication) has announced some major progress toward solving Problem # 1.
235 Next, I present an interesting qualitative-quantitative model for high Mach number combustion and then I indicate some beautiful prototype problems for the initiation of detonation which are
accessible in this model. It is not surprising given the complexity of the phenomena described in the preceding paragraph that there is an interest in simpler models which qualitatively mimic some of
the features in solutions of (4.2) in various regimes. One such model for high Mach number combustion was proposed and studied by the author in [42] and then derived by Rosales and the author [40],
[43] in a slightly modified form from the equations of reacting gas flow in (4.2) through nonlinear geometric optics as a quantitative asymptotic limiting equation of (4.2). This
qualitative-quantitative model for high Mach number combustion is the following 2 x 2 system Ut
+ ("2u2)x = qoK(u)Z Zx = K(u)Z
where the rate function K( u) has either of the forms in (4.3). The function Z is the mass fraction of reactant and the function u appearing in (4.4) is the amplitude of an acoustic wave moving to
the right; when the reaction terms vanish so that Z K( u) == 0, we get Burgers equation as expected from the theory of geometric optics sketched in (3.2)-(3.6). The coordinate x appearing in (4.4) is
not physical space but instead is a suitable space-time distance to the reaction zone. Thus, the natural data for (4.4) is a signalling problem: uo(x) and Zo(t) are prescribed with (4.5)
u(x, t)lt=o = uo(x) and lim Z(x, t) = Zo(t) . t~oo
For simplicity in exposition, we assume below that Zo(t) == 1. From my discussion above, it is evident that the model equations in (4.4) retain the nonlinear interactions between the right sound wave
and combustion but ignore all other multi-wave interactions that are present in solutions of (4.2). The model equations in (4.4) have a transparent analogue of Z-N-D waves and C-J waves (see [40],
[42]) and also an analogue of the initiation problem. To mimic the initiation problem, I take the ignition temperature form from (4.3)B) for the rate function K(u) in (4.4) and consider a pulse in
the initial data for u given by ,O Ui and < Ui and Ui is the ignition value in the model. The initial data in (4.6) is the analogue in the model of the hot spot mentioned earlier in the initiation
problem. The solution of (4.4), (4.5) with the initial data in (4.6) was studied by Roytburd and the author through numerical computations in the paper, [44], from LM.A. volume 12. Also, numerical
solutions of initiation with (4.4), (4.5), (4.6) were compared with simulations of the full reacting gas system in (4.2) and as expected the solutions of (4.4)-(4.6) have good qualitative agreement
with those in (4.2) provided the initiation process in solutions of (4.2) does not involve complex
236 multi-wave gas dynamic interactions. In [44], Roytburd and the author found that depending on the parameters for the initial data u~, d, and the heat release qo and the rate constant K in (4.3)
B), the solution of (4.2) either was quenched and tended rapidly to zero so that there was failure or the solution grew (sometimes in a highly non-monotone fashion) to a fully-developed C - J wave so
that strong initiation occurs. The equations in (4.4) have both the attenuating effects on u of the spreading of rarefaction waves and the amplifying effects of exothermic heat release which compete
to produce either outcome. A discussion of these competing effects is given in [44]. The main rigorous prototype problem which I propose in this section is the following
Problem #2: For a fixed K, qo, and Ui characterize those initial data Uo given in (4.6) so that either 1) the asymptotic solution of (4.4) as t / 00 is a C-J wave or 2) the solution tends rapidly to
zero as t / 00. I remark that the global existence of solutions for (4.4), (4.5) has been established by V. Roytburd (unpublished) through a constructive proof utilizing finite difference schemes. I
believe that Problem #2 demonstrates v~ry interesting new phenomena and also is extremely accessible to a rigorous analysis. One natural strategy would be to implement a version of the random choice
scheme for the equations in (4.4) together with an appropriate version of Liu's wave tracing ideas to assess the ultimate growth or failure of the wave pattern.
I end this section with some additional comments regarding the equations of reacting gas flow as a source of new phenomena in conservation laws with source terms. Those familiar with homogeneous
hyperbolic conservation laws know that shock waves in these systems are asymptotically stable at large times as t / 00. The analogues of shock wave solutions for (4.2) are the Z-N-D- travelling waves
mentioned earlier. It is both an experimental and numerically documented fact that in appropriate regimes of heat release, overdrive, and reaction rate, the Z-N-D waves lose their stability to
time-dependent wave patterns with either regular or sometimes even chaotic pulsations. These facts and a corresponding asymptotic theory together with numerical calculations are mentioned in the
paper by V. Roytburd in this volume. I would like to mention here that such effects cannot be found in solutions of the equations in (4.4); the full multi-wave structure of the gas dynamic equations
in (4.2) is needed to produce these pulsation instabilities. An asymptotic analysis by Roytburd and the author to appear in a forthcoming publication confirms this. Concluding Remarks: I have
presented several problems in the modern applied mathematics of hyperbolic conservation laws. I have emphasized phenomena for the equations of compressible flow in several space variables. However, I
believe that many of the phenomena and problems which I discuss here also have analogues in other applications such as dynamic nonlinear elasticity, magneto fluid dynamics, and multi-phase flow. I
would like to thank Harland Glaz for the use of two of his graphs and also for interesting conversations regarding Section 2 of this paper.
237 REFERENCES [1] [2]
[3] [4] [5] [6] [7] [8]
[9] [10] [11] [12] [13] [14]
[15] [16] [17] [18] [19] [20] [21]
[22] [23] [24]
[25] [26] [27]
M. ARTOLA AND A. MAJDA, Nonlinear development of instabilities in supersonic vortex sheets I: the basic kink modes, Physica 28D, pp. 253-281, 1988. M. ARTOLA AND A. MAJDA, Nonlinear development of
instabilities in supersonic vortex sheets II: resonant interaction among kink modes, (in press S.I.A.M. J. Appl. Math., to appear in 1989). M. ARTOLA AND A. MAJDA, Nonlinear kink modes for supersonic
vortex sheets, in press, Physics of Fluids A, to appear in 1989. P. WOODWARD, in Numerical Methods for the Euler Equations of Fluid Dynamics, eds. Angrand, Dewieux, Desideri, and Glowinski, S.I.A.M.
1985. P. WOODWARD, in Astrophysical Radiation Hydrodynamics, eds. K.H. Winkler and M. Norman, Reidel, 1986. P. WOODWARD AND K.H. WINKLER, Simulation and visualization offluid flow in a numerical
laboratory, preprint October 1988. A. MAJDA, Vorticity and the mathematical theory of incompressible fluid flow, Comm. Pure Appl. Math 39, (1986), pp. S 187-220. A. MAJDA, Mathematical Fluid
Dynamics: The Interaction of Nonlinear Analysis and Modern Applied Math, Centennial Celebration of A.M.S., Providence, RI, August 1988 (to be published by A.M.S. in 1990). R. COURANT AND K.
FRIEDRICHS, Supersonic Flow and Shock Waves, Springer-Verlag, New York, 1949. C.S. MORAWETZ, The mathematical approach to the sonic barrier, Bull. Amer. Math. Soc., 6, #2 (1982), pp. 127-145. A.
MAJDA, Compressible Fluid Flow and Systems of Conservation Laws in Several Space Variables, Appl. Math. Sciences 53, Springer-Verlag, New York 1984. W. HAYES, The vorticity jump across a gas dynamic
discontinuity, J. Fluid Mech. 2 (1957), pp. 595-600. A. MAJDA AND E. THOMANN, Multi-dimensional shock fronts for second order wave equations, Comm. P.D.E., 12 (1987), pp. 777-828. H. GLAZ, P.
COLELLA, 1.1. GLASS, AND R. DESCHAMBAULT, A detailed numerical, graphical, and experimental study of oblique shock wave reflections, Lawrence Berkeley Report, April 1985. M. VAN DYKE, An Album of
Fluid Motion, Parabolic Press, Stanford, 1982. C.S. MORAWETZ, On the non-existence of continuous transonic flows past profiles, I, II, III, Comm. Pure Appl. Math. 9" pp. 45-68, 1956; 10, pp. 107-132,
1957; 11, pp. 129-144, 1958. J. HUNTER AND J .B. KELLER, Weak shock diffraction, Wave Motion 6 (1984), pp. 79-89. E. HARABETIAN, Diffraction ofa weak shock by a wedge, Comm. Pure Appl. Math. 40
(1987), pp. 849-863. D. JONES, P. MARTIN, AND C. THORNHILL, Proc. Roy. Soc. London A, 209, 1951, pp. 238-247. J.B. KELLER AND A. A. BLANK, Diffraction and reflection of pulses by wedges and corners,
Comm. Pure Appl. Math. 4 (1951), pp. 75-94. J. HUNTER, Hyperbolic waves and nonlinear geometrical acoustics, in Proceedings of 6th Army Conference on Applied Mathematics and Computations, Boulder,
CO, May 1988 (to appear). D. G. CRIGHTO N, Basic theoretical nonlinear acoustics, in Frontiers in Physical Acoustics, Proc. Int. School of Physics Enrico Fermi, Course 93, North-Holland, Amsterdam
(1986). D.G. CRIGHTON, Model equations for nonlinear acoustics, Ann. Rev. Fluid. Mech. 11 (1979), pp.11-33. A. MAJDA, Nonlinear geometric optics for hyperbolic systems of conservation laws, in
Oscillation Theory, Computation, and Methods of Compensated Compactness, IMA Volume 2, 115-165, Springer-Verlag, New York, 1986. A. MAJDA AND R. ROSALES, Nonlinear mean field-high frequency wave
interactions in the induction zone, S.I.A.M. J. Appl. Math., 47 (1987), pp. 1017-1039. R. ALMGREN, A. MAJDA AND R. ROSALES, Rapid initiation through high frequency resonant nonlinear acoustics,
(submitted to Combustion Sci. and Tech., July 1989). R. DIPERNA AND A. MAJDA, The validity of nonlinear geometric optics for weak solutions of conservation laws, Commun. Math. Physics 98 (1985), pp.
238 [28] [29] [30] [31] [32] [33]
[35] [36] [37] [38] [39] [40] [41] [42] [43] [44]
J. K. HUNTER AND J .B. KELLER, Weakly nonliner high frequency waves, Comm. Pure App\. Math. 36 (1983), pp. 543-569. J.K. HUNTER, A. MAmA, AND R.R. ROSALES, Resonantly interacting weakly nonlinear
hyperbolic waves, II: several space variables, Stud. App\. Math. 75 (1986), pp. 187-226. A. MAmA AND R.R. ROSALES, Resonantly interacting weakly nonlinear hyperbolic waves, I: a single space
variable, Stud. Appl. Math. 71 (1984), pp. 149-179. A. MAmA, R. ROSALES, M. SCHONBEK, A canonical system ofintegro- differential equations arising in resonant nonlinear acoustics, Studies App\. Math.
in 1989 (to appear). R. PEGO, Some explicit resonanting waves in weakly nonlinear gas dynamics, Stud. App\. Math. in 1989 (to appear). P. CEHELSKY AND R. ROSALES, Resonantly interacting weakly
nonlinear hyperbolic waves in the presence of shocks: A single space variable in a homogeneous time independent medium, Stud. App\. Math. 74 (1986), pp. 117-138. 1. TARTAR, Compensated compactness
and applications to partial differential equations, in Research Notes in Mathematics, Nonlinear Analysis and Mechanics: Heriot-Watt Symposium, Vol. 4, R. Knops, ed., Pitman, London, 1979. R. DIPERNA,
Convergence of approximate solutions to conservation laws, Arch Rat. Mech. Ana\. 82 (1983), pp. 27-70. R. DIPERNA, Convergence of the viscosity method for isentropic gas dynamics, Comm. Math. Phys.
91 (1983), pp. 1-30. M. RASCLE AND D. SERRE, Comparite par compensation et systemes hyperboliques de lois de conservation, Applications C.R.A.S. 299 (1984), pp. 673-679. D. SERRE, La compucite par
compensation pour les systems hyperboliques nonlineaires de deux equations a une dimension d'espace, J. Maths. Pures et Appl., 65 (1986), pp. 423-468. W. FICKET AND W. DAVIS, Detonation, Univ.
California Press, Berkeley, 1979. A. MAmA, High Mach number combustion, in Reacting Flows: Combustion and Chemical Reactors, AMS Lectures in Applied Mathematics, 24, 1986, pp. 109-184. A. MAmA AND V.
ROYTBURD, Numerical study of the mechanisms for initiation of reacting shock waves, submitted to S.I.A.M. J. of Sci. and Stat. Computing in May 1989. A. MAmA, A qualitative model for dynamic
combustion, S.I.A.M. J. App\. Math., 41 (1981), pp.70-93. R. ROSALES AND A. MAmA, Weakly nonlinear detonation waves, S.I.A.M. J. Appl. Math., 43 (1983), pp. 1086-1118. A. MAmA AND V. RoYTBURD,
Numerical modeling of the initiation of reacting shock waves, in Computational Fluid Mechanics and Reacting Gas Flows, B. Engquist et al eds., I.M.A. Volumes in Mathematics and Applications, Vol. 12"
1988, pp. 195-217. A. MAmA AND R. ROSALES, A theory for spontaneous Mach stem formation in reacting shock fronts, I: the basic perturbation analysis, S.I.A.M. J. App\. Math. 43, (1983), pp.
STABILITY OF MULTI-DIMENSIONAL WEAK SHOCKS GUY METIVIER * Abstract. In this paper we discuss the stability of weak shocks for a class of multi-dimensional systems of conservation laws, containing
Euler's equations of gas dynamics; we study the wellposedness of the linearized problem, and study the behaviour of the L2 estimates when the strength of the shock approaches zero. AMS(MOS) subject
classifications. 35L65 - 76L05 - 35L50.
1. Introduction. In this lecture, we are concerned with the linearized stability of multi-dimensional weak shocks. Let us first recall that A. Majda has defined the notion of "uniform stability" for
shock front solutions to systems of free boundary mixed hyperbolic problems, this stability condition is the natural "uniform Lopatinski condition" for the linearized problem. However, the analysis
in [Ma 1] relies on the fact that the front of the shock is non-characteristic while, for weak shocks, the front is "almost" characteristic, i.e. the boundary matrix has a small eigenvalue; in fact
the estimates given in [Ma 1] blow up when the strength of the shock tends to zero. In this context, our main goal is to make a detailed study of the behaviour of the L2 estimates that are valid for
the linearized equations, when the strength of the shock tends to zero. Another interesting point we get as a by-product of our analysis, is that, in rather general circumstances, any weak shock that
satisfies Lax' shock conditions, is uniformly stable (this was already noted in [Met 1] for 2 x 2 systems). The details of the proofs are given in [Met 3].
2. Equations of shocks. Let us consider a system of conservation laws; (2.1) where the space-time variables are called x = (xo, ... , x n ), and the unknowns U = (Ull ... , UN). The functions /j are
supposed to be Coo on the open set n C RN , and denoting by Ai the jacobian matrix of /j, the quasi linear form of (2.1) is: (2.2)
The typical example we keep in mind all along this paper, is Euler's system of gas dynamics:
atP + div (pv) = 0
at(pv) + div (pv 1/9 v) + grad p = 0 at(pE) + div (pEv
+ pv) =
*IRMAR URA 0305 CNRS, Universite de Rennes I, Campus Beaulieu, 35042 Rennes Cedex, France.
240 with p the density, p the pressure, v the velocity and E = ~ Ivl 2 + e(p, p); the unknowns are u = (p, v, s), s being the entropy; as usual, we assume that p together with the temperature T are
given function of (p, s), which satisfy the second law of thermodynamics: de = Tds + p- 2 dp. Going back to general notations (2.2), we will always assume that the system is symmetric hyperbolic with
respect to the time variable t = Xo, (for instance assuming that it admits a strictly convex entropy) that is: ASSUMPTION 1. There is a matrix S(u), which depends smoothly on u such that all the
matrices SAj are symmetric, with SAo definite positive.
A shock front solution of (2.1) is, to begin with, a piecewise smooth weak solution u which is discontinuous across an hypersurface 2:, say of equation ,p( x) = 0; the restrictions u± of u to each
side of 2: are smooth solutions of (2.1), and asserting that u is a weak solution is equivalent to the Rankine-Hugoniot jump conditions:
(2.4) where [f] denotes the jump of the function f across
Recall from [Lax] the following lemma, which allows the construction of planar shock fronts (solutions where u+ and u- are constant and 2: is the hyperplane of equation at = x.O: LEMMA 1. Let
A( u,
be, for u E RN and ( E Rn\O, a simple eigenvalue of
(j AOl(u)Aj(u)
Then there is a (Rankine-H ugoniot) "curve" of solutions to the jump equations: (2.6)
a{fo(u+) - fo(u-n =
(j{h(u+) -
u+ = U(c:, u-O, a small) such that: (2.7)
= S(c:, u-o, (c: being the parameter on the {
1c:1 remaining
u+ = u- +c:r(u-,O +O(c: 2 ) a
= A(U-,O+ tc:r.duA(u-, 0 + O(c: 2 )
where r( u, 0 denotes a right eigenvector associated to the eigenvalue A. 3. Structure of the problem. The starting point is to consider equations (2.2) for u+ together with the jump condition (2.4),
as a free boundary value problem, and in this context, the boundary matrix (the coefficient of the normal derivative to 2: in (2.1)) plays an important role: (3.1)
L O::;j::;n
241 The first requirement is:
a) the front :L is non-characteristic. That means that both matrices M( u+, d¢J) and M( u- ,d¢J) are invertible. When looking at the example (2.7), this is well known to be equivalent to agenuine non
-linearity assumption, i.e. d u )'( u, O·r(u, 0 i0, in which case we impose the standard normalization: (3.2)
du )'( u, O.r( u, ()
Also recall that, when the eigenvalue is linearly degenerate (d u )'( u, O·r( u, 0 = 0), one falls into the completely different category of contact discontinuities, which in the multi-D case are not
yet understood. The next thing to check is b) the number of boundary conditions. For that purpose one has to look at the number of characteristics impinging the boundary (number of positive and
negative eigenvalues of M(u±,d¢J»; note that, here, we have N boundary conditions for 2N + I unknowns u+, u- and ¢J). In fact, requiring that our problem possesses the right number of boundary
conditions leads to the familiar Lax' shock conditions ([Lax]). In the example (2.7), with the normalization (3.2), that means we need restrict ourselves to the case: (3.3)
£ 0 small enough, the problem (6.1) (6.2) is uniformly stable as defined by A. Majda. Such a fact was already noticed for 2 x 2 systems in the appendix of [Met 1]. 2. Estimate (7.3) just makes
precise the dependence on e in the estimates given by A. Majda ([Ma 1]). In particular, existence of solutions (v,cp) for the problem (6.1) (6.2) with data (f,g) in L2, follows from [Ma 1] as well as
estimates and existence in domains {t < T}. 3. The reader might be worried by the term C I / 2IgI0,"'( in the right hand side of
(7.3). Indeed, in the Rankine-Hugoniot condition (2.4) each term is O(e), so in the forthcoming applications, linearizing (2.4) will yield a term 9 which will contain a factor e in front of it.
8. Several reductions. In the three last sections we shall give a few indications concerning the proof of theorem 1, assuming for simplicity that the eigenvalue >. under consideration is the smallest
one. First, one can perform several reductions:
a) localize estimate (7.3), making use of a partition of unity. b) after a (local) change of variables, one can assume that condition (6.9), rh± = >.(ru±, re#) - reo = =j=£e b±, holds not only on Xn
= 0, but also on both side ±x n ~ o. c) next, one can diagonalize the boundary matrix, getting a problem of the following form: n-l
J±On v ± + "LBfOjV± = f±
J+rv+ = M rrv- + £Xip + G
Thanks to assumption 1, the matrices B j can be assumed to be symmetric, and
Bo definite positive. The next lemma is a consequence of (6.8) and of assumption 3, but it is crucial in the understanding of the structure of the problem:
LEMMA 2. bj = _ ea+b+/ 2 x first column of Bj
+ 0(£), and X
is elliptic.
Remark. It is a good exercise to go to the litnit in the boundary conditions (8.2) (assuming that g = £h). Indeed, in the first row, one can factor out £, and the limit is of the form: (8.3) and the
limit of the N - 1 other equations is simply: (8.4)
(8.3) is nothing but the linearized equation of the eikonal equation corresponding to the limit problem of sound waves mentioned in section 4, while (8.4) are the natural transmission conditions for
the linearized equations of sound waves. In these conditions, weak shocks appear as singular perturbations of sound waves, the perturbation being singular in two aspects: first, the boundary becomes
non characteristic and second, the boundary conditions become elliptic with respect to ip.
d) Denoting by VI [resp v] the first component of V [resp the vector of the N - 1 last components ], theorem 1 is a consequence of the more precise following estimates:
246 THEOREM
2. Under the same circumstances as in theorem 1, one has:
,1/2Ivlo,")' + £1/2IrVlI0,")' + Iri>lo,")' + ,£1/2Icplo,")' + £Icpll,")' ::; C {,-1/21/10,")' + £-1/2Ig110,")' + Ig 1o"),}
e) Because A is the smallest eigenvalue, the problem lying on the side Xn ::; 0 is symmetric hyperbolic and well posed without any boundary condition, so that for , large enough:
f) A direct analysis of the boundary conditions shows that: (8.7)
,£1/2Icplo,")' + c:lcpll,")' ::; C{ £1/2IrVlI0,")' + Iri>lo,")'} + C{C 1/ 2Ig110,")' + Iglo,")'}
g) Therefore, it suffices to provide an estimate for v±, and in fact, because (8.8) it suffices to give an estimate ofrv+. More precisely, forgetting the +'s in (8.1), we consider the following
problem: n-l
+ LBjojv = 1
and it remains to prove an estimate of the form:
£1/2IrWlI0,")' + !fiOlo,")' ::; ::; Cb- 1/ 21/10,")' + Iglo,")' + £-1/2Ig110,")' + ,-1/2Iwlo,")'}+ + C{ £ + ,-I H£I/2Irwdo,")' + IriOlo,")' + ,£1/2Icplo,")' + £Icpit.")'}
Indeed, with (8.6) (8.7) and (8.8) estimate (8.5) follows immediately if, is large enough and £ > 0 is small. 9. Symmetrizors. As usual, theorem 2 is proved by using suitable symmetrizors and
"integrations by part", but, as shown in [Ma 1], the nature ofthe boundary conditions (8.2) or (8.10) forces us to introduce pseudo-differential symmetrizors; however, there is a difficulty due to
the lack of smoothness of the coefficients (u, B, ",), and the classical calculus does not apply. To overcome this, there exists a convenient modification of the pseudo-differential calculus which
was introduced by J .M. Bony ([Bo]), and which he called the "para-differential" calculus. In fact, we need a version "with parameter ," of the calculus, similar to the one which was used in [Met 1].
We will not enter into the details here, referring the reader to [Met 3] for
a precise description of the calculus and also for a complete proof of the theorems. Instead, we would like to explain a little what happens at the symbolic level, and for that purpose, say that (u-,
u+, 8, K) are constant; for instance the reader may think of (6.1) (6.2) or (8.1) (8.2) or (8.9) (8.10), as the linearized equations of (2.2) (2.4) around a weak planar shock. In that case, a natural
way to study (8.9) (8.10) is to perform a partial FourierLaplace transform with respect to the tangential variables y = (t, y') = (t, Xl," Xn-l)' Let us call 'r/ = (r, 'r/') the dual variables; as
usual in this kind of problem, r is complex, with Imr = -, < 0 and 'r/' remains real. So, after this transformation we are led to the following system:
= JDnv + Pv = f
Jfv = £Xcp+ 9
where Dn = -tan and P and X are matrices which depend linearly on Il P= [ a
P and X are real when, (9.3)
P' ;
= [~] + 0(£1'r/1)
= 0, P' is of dimension N
- 1, and:
OrP is definite positive, and in particular Orll > 0
The following fact is a consequence of assumption 3, and implies that X elliptic as stated in lemma 2: there is c > 0 such that:
(9.4) Moreover, the 0(£1'r/1) term in X can be neglected because it only yields error terms in the right hand side of (8.11); so in the sequel we just drop it. The construction of the symmetrizor S
which holds as soon as 5J is hermitian:
= S ('r/) relies on the following formula,
+ j(Im(SP)v,v)dxn = j
where (,) denotes the scalar product on eN and 11.110 the Classically, two facts are needed (see [Kr] or [Ch-Pi]): (9.6)
L2- norm
on [0,+00].
for some constant c > 0, and:
= £Xcp + g. IIlI ~ lal or IIlI :S lal.
whenever w satisfies the boundary condition Jw Now, the choice of S depends on whether Case I: IIlI ~ S = -ld: one has:
In that case, it suffices to take the standard symmetrizor
* because of (9.3) Im(SP) ~ c( -Imr) = Cf for some constant c> 0 * the boundary term is:
clWl12 + Iwl 2 - 219 + curpl2 and because lui S ClJ.lI,
(8 Jw, w) = ClWl12 -lilW = this term is bigger than
ClWl12 + Iwl 2 - 4191 2 - ClcJ.lrpl2 ~ ClWl12 + Iwl 2 - 4191 2 - Cc21wl12 - CIgl1 2 which certainly implies (9.7) if 15 is small enough. Case II: LEMMA
1J.l1 S 817]1, with 8 small. The main ingredient is the following one:
3. There are invertible matrices W(7]) and V(7]) such that:
W JV
of dimension N - 2,
Setting v into:
= Vw
and w
I1 real when T
[ -15 o
0] D 1
is real, and
Dn w" + II"wl
Furthermore, neglecting (9.10)
= (WbW2'W") = (w',w"),
II ~f WPV = [~ ~ ~ 1
= J;
il = J.l + 0(1517]1).
we see that (9.1) decouples
= f"
+ [il(j u] P w , = f'
O(cl7]lrp) terms, the boundary conditions also decouple:
rw" =g" -CrW1 = cilrp + gl rW2 = curp + g2
The study of (9.8) (9.10) is easy, and in fact we can skip it because, as said in section 8, it suffices to provide estimates for the traces and this is trivial from (9.10). So it suffices to study
the 2 X 2 system (9.9) (9.11); the first step is to solve (9.9) with the boundary condition rW2 = g2, and subtracting this solution to the solution of (9.9) (9.11), one reduces oneself to the case
where g2 = O. Eliminating rp in (9.11) leads to a boundary condition of the form: (9.12) Now, it remains to get estimates for the traces of solutions of systems like (9.9) (9.12), and this will be
performed in the next and last section.
249 10. The 2 x 2 problem. Let us stop for a while, to give a typical and very simple example of system (9.9) (9.12), which may help the reader to understand the problem. It is a differential example
in which the normal variable and the spacetime tangential variables are respectively called x, and (t, y) E R2; the equations in the half-space x > 0, are
+ OtWI + OyW2 = Ox W 2 + Ot W 2 + OyWI = h and the boundary condition on x = 0 is: (10.1)
(10.2) In that case the matrix P of (9.1) is simply: (10.3) Let us go back now to a general system (9.9) (9.12); however, for simplicity, we drop the tildes from the notations, and we set W = (WI,
W2). The first idea is to use a new weight function, and more precisely we introduce: (10.4) where a
> 0 is a small parameter to be determined. (9.9) is transformed into:
(10.5) while the boundary condition is unchanged: (10.6) It is important to note that Ilrll S; IIfll (because a 2: 0), but that it is equivalent to estimate the traces of z or those of w. In order to
do so, we introduce the symmetrizor:
(10.7) with q = 17- 1 P (recall that we are working in the domain Ipl S; 8'1171 by (9.4)).
Ipl S; 811)1, and hence that
With this choice, it is clear that SJ is hermitian, and that: (10.8) Therefore, if the parameter 8 is small enough, condition (10.6) implies that: (10.9)
is small, so the boundary
On the other hand:
[-r !]
with C = 0(,) and:
m = ~Im {2~P - 2p(1 + g)lql2 + gp} We now remark that the condition IIlI :S 811J1 implies that, :S 81171; because (T is real when, = 0 and Iql is small when 8 is small, and because Imp = -Imll ~ C/,
we see that if 8 and g are small enough, then Imm ~ cg-l, and therefore: (10.10) With (10.8), we conclude that, iJ a is small enough, then: (10.11) At last, we note the trivial estimate: (10.12)
l(r,z)l:S {IJd +gl/21121 + Iqhl 2}x {IZll + g-I/2Iz21 + g-llz21} :S Cllrllo{lzll2 +g-llz212 +g-llq Z 21 2}1/2
With a formula similar to (9.5), we see that estimates (10.10), (10.11) and (10.12) are exactly what we need in order to conclude and get an estimate for z and its traces. In fact, because the
symmetrizor (10.7) is singular as g -> 0, the actual calculus with operators is slightly more complicated than the calculus on symbols we have sketched above (several terms have a g-1 coefficient).
In particular remainders deserve a great attention, but again, we refer the reader to [Met 3J for complete proofs. REFERENCES [AI] [Bo] [Ch-Pi]
[Co-Me] [Kr] [Lax] [Mal] [Ma2] [Met 1] [Met 2] [Met 3]
S. ALINHAC, Existence d'ondes de rarefaction pour des systemes quasilineaires multidimensionnels, Comm. in Partial Diff. Equ., 14 (1989), pp. 173-230. J .M. BONY, Calcul symbolique et propagation des
singularites pour les equations aux derivees partielles non lineaires, Ann. Sc. E.N.S., 14 (1981), pp. 209-246. J. CHAZARAIN - A. PIRIOU, Introduction a la tMorie des equations aux derivees
partielles, Bordas (Dunod), Paris, 1981, English translation Studies in Math. and its Applications, vol 14, North Holland (1982). R. COIFMAN - Y. MEYER, Au dela des operateurs pseudodifferentiels,
Asterisque 57 (1978). H.O. KREISS, Initial boundary value problems for hyperbolic systems, Comm. Pure and Appl. Math., 23 (1970), pp. 277-298. P. LAX, Hyperbolic systems of conservation laws, Comm.
on Pure and Appl. Math., 10 (1957), pp. 537-566. A. MAJDA, The stability of multidimensional shock fronts, Memoirs of the Amer. Math. Soc., nO 275 (1983). A. MAJDA, The existence of multidimensional
shock fronts, Memoirs of the Amer. Math. Soc., nO 281 (1983). G. METIVIER, Interaction de deux chocs pour un systeme de deux lois de conservation en dimension deux d'espace, Trans. Amer. Math. Soc.,
296 (1986), pp. 431-479. G. METIVIER, Ondes soniques, Seminaire E.D.P. Ecole Poly technique, annee 1987-88, expose nO 17 & preprint a paraitre, J. Math. Pures et Appl.. G. METIVIER, Stability of weak
shocks, preprint.
NONLINEAR STABILITY IN NON-NEWTONIAN FLOWS* J. A. NOHELt:j:, R. L. PEGOt#
AND A. E. TZAVARASt##
1. Introduction. In this paper, we discuss recent results on the nonlinear stability of discontinuous steady states of a model initial-boundary value problem in one space dimension for
incompressible, isothermal shear flow of a non-Newtonian fluid between parallel plates located at x = ±1, and driven by a constant pressure gradient. The non-Newtonian contribution to the shear
stress is assumed to satisfy a simple differential constitutive law. The key feature is a non-monotone relation between the total steady shear stress and steady shear strain rate that results in
steady states having, in general, discontinuities in the strain rate. We explain why every solution tends to a steady state as t -+ 00, and we identify steady states that are stable; more details and
proofs will be presented in [8].
We study the system
S:= T+fx,
Sx ,
(1.2) on [0,1] x [0,00), with tions
S(O, t) =
a fixed positive constant. We impose t.he boundary condi-
v(1, t) = 0,
t ~ 0,
and the initial conditions (1.4)
l7(x, 0)
accordingly, S(x,O) = So(x) := l7o(x) + vox(x) + fx. The function g : R -+ R is assumed to be smooth, odd, and g(O > 0, =I 0. In the context of shear flow, v, the velocity of the fluid in the
channel, and T, the shear stress, are connected through the balance of linear momentum (1.1). The shear stress T is decomposed into a non-Newtonian contribution 17, evolving in accordance with the
simple differential constitutive law (1.2), and a viscous contribution V x • The coefficients of density and Newtonian viscosity are taken as 1, without loss of generality. The flow
*Supported by the U. S. Army Research Office under Grant DAAL03-87-K-0036 and DAAL0388-K-0185, the Air Force Office of Scientific Research under Grant AFOSR-87-0191; the National Science Foundation
under Grants DMS-8712058, DMS-8620303, DMS-8716132, and a NSF Post Doctoral Fellowship (Pego). tCenter for the Mathematical Sciences, University of Wisconsin- Madison, Madison, WI 53705. :j:Also
Department of Mathematics. #Department of Mathematics, University of Michigan. ##Also Department of Mathematics.
252 is assumed to be symmetric about the centerline of the channel. Symmetry dictates the following compatibility restrictions on the initial data:
vo(l) = 0,
they imply that 0"(0, t)
= vx(O, t) = 0,
= 0,
= 0;
and 0"0(0)
and symmetry is preserved for all time.
The system (Ll )-(1.4) admits steady state solutions (v( x), 0'( x)) satisfying
on the interval [0,1]. In case the function w(O := g(O + is not monotone, there may be multiple values ofvx ( x) that satisfy (1.6) for some x's, thus leading to steady velocity profiles with jumps
in the steady velocity gradient v x • Our objective is to study the stability of such steady velocity profiles; we also study well-posedness and the convergence of solutions of (1.1)-(1.4) to steady
states as t -+ 00. w@
Fig. 1:
v (x)
Fig. 2: Velocity profile with a kink; w( -vx(x)) = fx.
For simplicity, the function w( 0 is assumed to have a single loop. The graph of a representative w(O is shown in Fig. Ij in the figure m and M stand for the levels of the bottom and top of the loop,
respectively. Our results and techniques can be easily generalized to cover the case when w( 0 has a finite number of loops. Steady state velocity profiles are constructed as follows: First solve w(u
( x)) = f x for each x E [0,1], where u = -v x. This equation admits a unique solution for ~ fx < m or fx > M, and three solutions for m < fx < Mj let u(x), ~ x ~ 1, be a solution. Setting
= g( -u(x)),
then (v(x),O'(x)) satisfy (1.6) and (1.3) for a.e. x E [0,1] and give rise to a steady state. Clearly, if f < m there is a unique smooth steady state; if m < f < M, there is a unique smooth velocity
profile and a multitude of profiles with kinks; finally, if f > M, all steady state velocity profiles have kinks. An example of a velocity profile with kinks is shown in Fig. 2.
253 Problem (1.1)-(1.4) captures certain key features of a class of viscoelastic models that have been proposed to explain the occurence of "spurt" phenomena in nonNewtonian flows. Specifically, for
a particular choice of the function g in (1.2), the system under study has the same steady states as the more realistic systems studied in [6] and [7]; the latter, derived from a three-dimensional
setting that is restricted to one-dimensional shearing motions, produce non-monotone steady shear stress vs. strain-rate relations of the type shown in Fig. 2. The phenomenon of spurt was apparently
first observed by Vinogradov et al. [13] in the flow of highly elastic and very viscous non-Newtonian fluids through capillaries or slit-dies. It is associated with a sudden increase in the
volumetric flow rate occuring at a critical stress that appears to be independent of the molecular weight. It has been proposed by Hunter and Slemrod [5], using techniques of conservation laws, and
more recently by Malkus, Nohel, and Plohr [6] and [7], using numerical simulation and extensive analysis of suitable approximating dynamic problems (motivating the present work), that spurt phenomena
may be explained by differential constitutive laws that lead to a non-monotone relation of the total steady shear stress versus the steady shear strain-rate. In this framework, the increase of the
volumetric flow rate corresponds to jumps in the strain rate when the driving pressure gradient exceeds a critical value. We conjecture that our stability result discussed in Sec. 3 below can be
extended to these more complex problems.
2. Preliminaries. In this section, we discuss preliminary results that are essential for presenting the stability result; further details and proofs can be found in [8].
A. Well-Posedness. We use abstract techniques of Henry [4] to study global existence of classical solutions for smooth initial data of arbitrary size, and also existence of almost classical, strong
solutions with discontinuities in the initial velocity gradient and in the stress components. The latter result allows one to prescribe discontinuous initial data of the same type a~ the
discontinuous steady states studied in this paper. Existence results of this type are established in [8] for a general class of problems that serve as models for shearing flows of non-Newtonian
fluids; the total stress is decomposed into a Newtonian contribution and a finite number of stress relaxation components, viewed as internal variables that evolve in accordance with differential
constitutive laws frequently used by rheologists (for discussion, formulation and results, see [11], [7], also the Appendix in [8]). Existence of classical solutions may also be obtained by using an
approach based on the Leray - Schauder fixed point theorem (cf. Tzavaras [12] for existence results for a related system). Other existence results were obtained by Guillope and Saut [2], and for
models in more than one space dimension in [3]. As a consequence of the general theory, one obtains two global existence results (see Theorems 3.1, 3.2, 3.5, and Corollary 3.4 in [8]): (i.) the
existence of a unique classical solution (v(x, i), 0"(:1:, I)) of (1.1)- (1.5) on [O,IJ x [0,00) for initial data (vo(x),O"o(x)), not restricted in size, that sat-
254 isfy: So(x) := vox(x) + O"o(x) + Ix E HS[O,l] for some s > 3/2, with So(O) = 0, vo(l) = Sox(l) = 0, and 0"0 E eI[O, 1], where HS denotes the usual interpolation space. (ii.) the existence and
uniqueness of a strong, "semi-classical" solution of (1.1) -(1.5), obtained by a different choice of function spaces, for initial data (vo(x), 0"0 (x » that satisfy: So(x) E HI[O, 1] with So(O) = 0,
vo(l) = Sox(l) = 0, and 0"0 E
Result (ii.) yields solutions in which 0" and Vx may be discontinuous in x, but Sx and Vt are continuous, and 0" is e l as a function of t for every x. Thus all derivatives appearing in the system
may be interpreted in a classical sense as long as the equation is kept in conservation form. A result of this type was obtained by Pego in [10] for a different problem by a similar argument.
B. A Priori Bounds and Invariant Sets. To discuss global boundedness of solutions, let 0", v be a classical solution on an arbitrary time-interval, and note that system (1.1) -(1.5) is endowed with
the differential energy identity
~ {1/211v;dx + 11 [W(V x) + x I vx]dx} + 11 [v; + v;tldx = 0 .
The function W(~) := Joe w«()d( plays the role of a stored energy function; by the assumption on g, W is not convex. This fact is the main obstacle in the analysis of stability. (i.)Boundedne33 of S.
Since ~g(O > 0, it follows that and W(O satisfies the lower bound
Joe g«()d( ~
0 for ~ER,
(2.2) Standard energy estimates based on (2.1) and (2.2) coupled with integration of (1.1) with respect to x yield a global a priori bound for S:
(2.3) where
IS(x, t)1
o~ x
0 ~ t < 00,
e is a constant depending only on data but not t.
(ii.)Invariant Set3 for a Related ODE. Control of S enables us to take advantage of the special structure of Eq. (1.2) and determine suitable invariant regions. For this purpose, it is convenient to
introduce the quantity s := 0" + I x. Then, Eqs. (1.2), (1.3) readily imply that s satisfies
+ S + g(s -
= Ix.
For a fixed x, it is convenient to view Eq. (2.4) as an ODE with forcing term S(x,.). Also, observe that at a steady state (a,v x ), one has If = 0, and consequently,
s = -vx
is an equilibrium solution of (2.4) (with S = 0). If S == 0 in (2.4), the hypothesis concerning 9 implies that the ODE admits positively invariant intervals for each fixed x. We sketch how this
property is preserved in the presence of a priori control of S as provided by (2.3); more delicate bounds are essential in the proof of stability in Sec .3. To fix ideas, let (2.5)
to > 0 be given, and assume that IS(x,t)l::; p,
x::; 1,0::; t::; to,
for some p > O. For x fixed in [0,1]' we use the notation Set) .- sex, t) and conveniently rewrite (2.4) as (2.6)
+ w(s -
x - S(t).
We state the following result on invariant intervals; its proof is obvious. Proposition 2.1. Let S satisfy the uniform bound (2.5) for 0 ::; t ::; to. For x fixed, ::; 1, assume there exist s_, s+
such that s_ < s+ and
o ::; x (2.7)
w(B- - A) < f x - A
w(s+ - A) >
Then the compact interval time interval 0 ::; t ::; to.
IAI::; p IAI::; p
x - A
[L, s+ 1 is positively invariant for the OnE (2.6) on the
Invariant intervals are generated by solution sets of the inequalities (2.7) and (2.8) as functions of p and x. In particular, since lim w(e) = ±oo, given any x and
p, one easily determines So+ large, positive and So_ large, negative such that if L < So- and s+ > so+, then L and s+ satisfy (2.7) and (2.8), respectively, and
the compact interval [L, s+l is positively invariant for the ODE (2.6). More discriminating choices of invariant intervals occur if one restricts attention to small values of p; the analysis becomes
more delicate. For a function w( 0 with a single loop, the most interesting case arises when f x - p, f x and f x + p each intersects the graph of w( 0 at three distinct points. Referring to Fig. 3,
the abscissae of the points of intersection are denoted by (CL, /3-,,-), (0'0, /30, ,0) and (0'+, /3+, 1+), respectively. It turns out that for x fixed and p small enough, there are discriminating
invariant intervals of the type shown in Fig. 3. However, in contrast to the large invariant intervals discussed in the previous paragraph, the more discriminating ones degenerate as we approach the
top or bottom of the loop (when x varies). For the stability of discontinuous steady states in Sec. 3., it is crucial to construct compact invariant intervals that are of uniform length (see
Corollary 2.2 in [8]). The latter is accomplished by taking p sufficiently small and by avoiding the top and bottom of the loop in Fig. 3. Of specific interest is the situation in which s( x) is a
piecewise smooth solution of
w(s(x)) = fx,
w (~)
fx ---------
fx + p
--f----------f- ------------------
oro' fx -
Fig. 3: Invariant Intervals. defined on [0,1] and admitting jump discontinuities at a finite number of points XI, ... ,Xn in [0,1]. Recall that s(x) is a steady solution of the ODE (2.6)
corresponding to the steady state (0', vx ). In addition, suppose that s( x) takes values in the monotone increasing parts of the curve w(O and that it avoids jumping at the top or bottom of the
loop, i.e., (2.10)
w'(s(x)) 2: Co> 0,
for some constant Co. A delicate construction in [8] yields compact, positively invariant intervals of (2.6) of uniform length, centered around s(x) at each x E
[O,I]\{XI, ... ,X n}. (iii.)Boundedness of u and V x . As an easy application of Sec. 2 (ii), choose a compact interval [s_, s+] that is positively invariant for (2.6) and valid for all x E [0,1]. By
virtue of the global bound (2.3) satisfied by S(x, t), we conclude that
Is(x, t)1 :S C,
o:S x :S 1, t 2: 0
which, in turn, using (1.3) and (2.11), implies (2.12)
o :S x :S 1, t 2: 0,
for some constant C depending only on the data. The definition of s also implies that u is uniformly bounded.
257 (iv.) Convergence to ~teady ~tate8. Let (v(x, t), u(x, t)) be a classical solution of (1.1) -(1.5) defined on [0,1] X [0,00]. We discuss the behavior of this solution as t -+ 00. The first result
indicates that S value.
Proposition 2.2. Under the
= (f + Vx + f
x converges to its equilibrium
of the exi8tence
lim S(x,t) = t-=
uniformly for x E [0,1].
The proof is a consequence of Sobolev embedding applied to the following a priori estimates that are derived from the system (1.1)-(1.4) by standard techniques:
S;(x,t)dx:::; C, 0:::; t
< 00,
where C is a positive constant depending only on data. Use of (2.13) enables us to identify the limiting behavior of solutions of (2.4) as t -+ 00. The following result is analogous to Lemma 5.5 in
Pego [10]; its elementary
proof is given in Lemma 4.2 of [8].
°: :;
Proposition 2.3. Let s(x,.) E C 1 [0, 00) be the ~olution of (2.10), where S(x,.) i~ continuow and ~ati~fie8 (2.13), x:::; 1. Then s(x,.) converge~ to s=(x) as t -+ 00 and s=( x) ~ati~fies
(2.17) In view of the shape of w(e) = ~ + g(O, equation (2.17) has one solution for 0:::; f x < m or f x> M and three solutions for m < f x < M. Let (v(x, t), (f(x, t)) be a classical solution of
(1.1)-(1.4) on [0,1] x [0,00). Recalling the definition of s, Proposition 2.3 implies
(2.18) Also, combining (Ll), (2.13) and (2.18) yields
lim vx(x,t)
= t-oo lim(S(x,t)-s(x,t))=-s=(x),
and (2.20) Finally, noting that (2.21)
v(x, t) =
Vx(X, t)dx,
x) is Lipschitz continuous and satisfies
V OO (
(2.22) We conclude that any solution of (1.1)-(1.4) converges to one of the steady states. If 0 ~ f < m, then there is a unique smooth steady state which is the asymptotic limit of any solution.
However, if m < f, then there are multiple steady states and thus a multitude of possible asymptotic limits. In Sec. 3, we identify stable steady states. Also note from (2.20) that in a discontinuous
steady state, the discontinuities in 7f and Vx cancel.
Observe that in case w( 0 is monotone the above arguments yield that every solution converges to the unique steady state. Moreover, the above results can be routinely generalized to the case that the
function w(~) has multiple loops but the graph of w has no horizontal segments. 3. Stability of Steady States.
The purpose is to study the stability of velocity profiles with kinks. To fix ideas, let (v(x),7f(x)) be a steady state of (1.1)-(1.3) such that v(x) has a finite number of kinks located at the
points xl, . .. ,x n in (0, 1); accordingly, v x( x) and 7f( x) have a finite number of jump discontinuities at the same points. Recall that, if we set
= -vx(x),
(3.1) and 7f(x)
XE[0,1],x=jxI, ... ,x n
= g(-u(x)).
Given smooth initial data (vo(x),O"o(x)), there is a unique smooth solution
0"( x, t)) of (1.1 )-(1.4). As t
-+ 00, the solution converges to one of the steady states, not a-priori identifiable. We now restrict attention to initial data that are close to (v( x), 7f( x)), except on the union U of small
subintervals centered around the points Xl, ... ,x n . U can be thought of as the location of transition layers separating the smooth branches of the steady state. Roughly speaking, it turns out that
the steady state is "asymptotically stable" under smooth perturbations that are close in energy, provided (v( x), 7f( x)) takes values in the monotone increasing parts of w( 0; the stable solutions
are local minimizers of an associated energy functional (see (3.8) below). The interesting problem of finding the domain of attraction of a
stable steady solution appears to be a difficult task. Our main result is:
Theorem 3.l. Let (v(x),O'(x)) be a 3teady 3tate 80lution a8 de8cribed above and 3ati3/ying
w'(vx(x)) ~ co> 0, x E [0,1],
xix}"",X n
for 80me p08itive con8tant co. If the mea3ure of U i8 8ufficiently 8mall, there i8 a p08itive con8tant 80 depending on U 3uch that, if 8 < 80 , then for any initial data (vo(x), uo(x)) 3ati3/ying sup
ISo(x)1 < 8,
1 v;(x,O)dx < _8 2 2
and Ivox(x)-vx(x)I Xo. To insure that the construction of ell produces the property desired, this part of the analysis makes crucial use of invariant intervals of the ODE (2.6) that are of uniform
length as discussed in Sec. 2(ii) above.
REFERENCES 1. G. ANDREWS AND J. BALL, " Asymptotic Stability and Changes of Phase in One-Dimensional Nonlinear Viscoelasticity:," J. Diff. Eqns. 44 (1982), pp. 306-341.
2. C. GUILLOPE AND J.-C. SAUT , "Global Existence and One-Dimensional Nonlinear Stability of Shearing Motions of Viscoelastic Fluids of Oldroyd Type ," Math. Mod. Numer. Anal., 1990. To appear. 3. C.
GUILLOPE AND J.-C. SAUT, "Existence Results for Flow of Viscoelastic Fluids with a Differential Constitutive Law," Math. Mod. Numer. Anal., 1990. To appear. 4. D. HENRV, Geometric Theory of Semi
linear Parabolic Equations, Lecl.ure Notes in Mathematics, vol. 840 Springer-Verlag, New York, 1981. 5. J. HUNTERAND M. SLEMROD, "Viscoelastic Fluid Flow Exhibiting Hysteretic Phase Changes," Phys.
Fluids 26 (1983), pp. 2345-2351. 6. D. MALKus, J. NOHEL, AND B. PLOHR, "Dynamics of Shear Flow of a Non-Newtonian Fluid," J. Comput. Phys., 1989. To appear. 7. D. MALKUS, J. NOHEL, AND B. PLOHR,
"Analysis of New Phenomena In Shear Flow of NonNewtonian Fluids," in preparation, 1989. 8. J. NOHEL, R. PEGO, AND A. TZAvARAs, "Stability of Discontinuous Steady States in Shearing Motions of
Non-Newtonian Fluids," Proc. Roy. Soc. Edinburgh, Series A, 1989. submitted.
9. A. NOVICK-COHEN AND R. PEGO, "Stable Patterns in a Viscous Diffusion Equation," preprint, 1989. submitted. 10. R. PEGO, "Phase Transitions in One-Dimensional Nonlinear Viscoelasticity:
Admissibility and Stability," Arch. Rational Mech. and Anal. 97 (1987), pp. 353-394. 11. M. RENARDV, W. HRUSA, AND J. NOHEL, Mathematical Problems in Viscoelasticity, Pitman Monographs and Surveys in
Pure and Applied Mathematics, Vol. 35, Longman Scientific & Technical, Essex, England, 1987. 12. A. TZAVARAS, "Effect of Thermal Softening in Shearing of Strain-Rate Dependent Materials," Arch.
Rational Mech. and Anal. 99 (1987), pp. 349-374. 13. G. VINOGRADOV, A. MALKIN, Yu. YANOVSKII, E. BORISENKOVA, B. YARLVKOV, AND G. BEREZHNAVA, "Viscoelastic Properties and Flow of Narrow Distribution
Polybutadienes and Polyisoprenes," J. Polymer Sci., Part A-2 10 (1972), pp. 1061-1084.
A NUMERICAL STUDY OF SHOCK WAVE REFRACTION AT A CO 2 / CH 4 INTERFACEt ELDRIDGE GERRY PUCKETTt Abstract. This paper describes the numerical computation of a shock wave refracting at a gas interface.
We study a plane shock in carbon dioxide striking a plane gas interface between the carbon dioxide and methane at angle of incidence IT;. The primary focus here is the structure of the wave system as
a function of the angle of incidence for a fixed (weak) incident shock strength. The computational results agree well with the shock polar theory for regular refraction including accurately
predicting the transition between a reflected expansion and a reflected shock. They also yield a detailed picture of the transition from regular to irregular refraction and the development of a
precursor wave system. In particular, the computations indicate that for the specific case studied the precursor shock weakens to become a band of compression waves as the angle of incidence
increases in the irregular regime. Key words. shock wave refraction, conservative finite difference methods, Godunov methods, compressible Euler equations
AMS(MOS) subject classifications. 35L65, 65M50, 76L05
1. The Problem. In this work we consider a plane shock wave striking a plane gas interface at angle of incidence 0° < 0'; < 90°. This is a predominantly two dimensional, inviscid phenomenon which we
model using the two dimensional, compressible Euler equations with the incident shock wave and gas interface initially represented by straight lines. Incident shock ' "
Figure 1 A diagram of the problem t\Vork performed under the auspices of the U.S. Department of Energy at the Lawrence Livermore National Laboratory under contract number \V-7405-ENG-48 and partially
supported by the Applied Mathematical Sciences subprogram of the Office of Energy Research under contract number W-7405-Eng-48 and the Defense Nuclear Agency under IACRO 88-873. :j:Applied
Mathematics Group, Lawrence Livermore National Laboratory, Livermore, CA 94550.
262 A diagram of the problem is shown in figure 1. The shock wave travels from right to left in the incident gas striking the interface from the right. This causes a shock wave to be transmitted into
the transmission gas and a reflected wave to travel back into the incident gas. The reflected wave can either be a shock, an expansion, or a band of compression waves. Depending on the strength of
the incident shock, the angle of incidence, and the densities and sound speeds of the two gases these three waves may appear in a variety of distinct configurations. In the simplest case the
incident, transmitted, and reflected waves all meet at a single point on the interface and travel at the same speed along the interface. This is known as regular refraction. A. ,\ingram depictin,g
regnlnr refraction appears in figure 2.
undisturbed gas interrace
incident shock reflected wave
\ \ \
...~--- disturbed interface \
Figure 2 Regular Refraction 'When the sound speed of the incident gas is less than that of the transmission gas the refraction is called slow-fast. In this case the transmitted wave can break away
from the point of intersection with the incident and reflected waves and move ahead of them, forming what is known as a precursor. The incident shock can also form a stem between its intersection
with the interface and its intersection with the reflected wave, similar to the well known phenomenon of Mach reflection. \Vhen the sound speed of the incident gas is greater than that of the
transmission gas the refraction is called fast-slow. In this case the transmitted shock will lean back toward the interface. In this paper we restrict ourselves to the study of a specific sequence of
slow-fast refractions. See Colella, Henderson, & Puckett [1] for a description of our work with fast-slow refraction. For the purposes of modeling this phenomenon on a computer we assume the two
gases are ideal and that each gas satisfies a ,-law equation of state,
Here p is the pressure, p is the density, I is the ratio of specific heats, and the coefficient A depends on the entropy but not on p and p. Note that ~f is a constant for each gas but different
gases will have different ,. Given these assumptions the problem depends on the following four parameters: the angle of incidence O'i, the ratio of molecular weights for the two gases It;/ p,t,
the ratio of the, for the two gases ,d,t, and the inverse incident shock strength ~i = PO/PI where Po (respectively pd is the pressure on the upstream (respectively downstream) side of the shock. In
this paper we consider the case when the incident gas is CO 2 , the transmission gas is CH 4 , the inverse incident shock strength is ~i = 0.78 and only the angle of incidence ai is allowed to vary.
Thus , i = 1.288, = 1.303, {Ii = 44.01, fit = 16.04, and the incident shock Mach number is 1.1182.
For this choice of parameters we find three distinct wave systems depending on These are: i) regular refraction with a reflected expansion, ii) regular refraction with a reflected shock, and iii)
irregular refraction with a transmitted precursor. These wave systems appear successively, in the order listed, as ai increases monotonically from head on incidence at ai = 0° to glancing incidence
at ai = 90°. In this paper we examine this sequence of wave patterns computationally much as one would design a series of shock tube experiments. This particular case has been extensively studied
both experimentally and theoretically by Abd-el-Fattah & Henderson [2]. This has enabled us to compare our results with their laboratory experiments thereby providing us with a validation of the
numerical method. See Colella, Henderson, & Puckett [1, 3] for a detailed comparison of our numerical results with the experiments of Abcl-el-Fattah & Henderson. Once we have validated the numerical
method in this manner we can use it to study the wave patterns in a detail heretofore impossible due to the limitations of schlieren photography and other experimental flow visualization techniques.
Early work on the theory of regular refraction was done by Taub [4] and Polachek
& Seeger [5]. Subsequently Henderson [6] extended this work to irregular refractions, although a complete theory of irregular refraction still remains to be found. More recently, Henderson [7, 8] has
generalized the definition of shock wave impedance given by Polachek & Seeger for the refraction of normal shocks. Experiments with shock waves refracting in gases have been done by Jahn [9],
Abd-el-Fattah, Henderson & Lozzi [10], and Abd-el-Fattah & Henderson [2, 11]. More recently, Reichenbach [12J has done experiments with shocks refracting at thermal layers and Haas & Sturtevant [13]
have studied refraction by gaseous cylindrical and spherical inhomogeneities. Earlier, Dewey [14J reported on precursor shocks from large scale explosions in the atmosphere. Some multi phase
experiments have also been done: Sommerfeld [15J has studied shocks refracting from pure air into air containing dust particles while Gvozdeava et al. [16] have experimented with shocks passing from
air into a variety of foam plastics. Some recent numerical work on shock wave refractions include Grove & Menikoff [17] who examined anomalous refraction at interfaces between air and water and
Picone et al. [18] who studied the Haas & Sturtevant experiments at Air/He and Air/Freon cylindrical and spherical interfaces. Fry & Book [19] have considered refraction at heated layers while
Glowacki et al. [20] have studied refraction at high speed sound layers and Sugimura, Tokita & Fujiwara [21] have examined refraction in a bubble-liquid system.
264 2. The Shock Polar Theory. 2.1 A Brief Introduction to the Theory. In this section we present a brief introduction to the theory of regular refraction. This theory is a straightforward extension
of von Neumann's theory for regular reflection (von Neumann [22]) and is most easily understood in terms of shock polal's. The theory is predicated on the observation that oblique shocks turn the
flow. Consider a stationary oblique shock. If we call the angle by which the flow is turned 8 (see figure 3), then 8 is completely determined by the upstream state (po, Po, Uo, vo) and the shock
strength pipo where p denotes the post-shock pressure. Shock
turning angle
Figure 3 An oblique shock turns the flow velocity towards the shock For a ,-law gas the equation governing this relation is
tan (8)
where Ms is the freestream Mach number upstream of the shock (e.g. see Courant & Friedrichs [23]). If we now allow the shock strength to vary and plot log (plpo) versus the turning angle 8 we obtain
the graph shown in figure 4, commonly referred to as a shock polar.
log!. Po
Figure 4 A Shock Polar Recall that, by definition, in regular refraction the incident, transmitted, and reflected waves all meet at a single point on the interface. We now assume that these waves are
locally straight lines in a neighborhood of this point and (for the moment) that the reflected wave is a shock. Each of these shocks will turn the flow by some amount, say OJ, Ot, and Or respectively
(figure 5) and each of these angles will satisfy (2.1) with the appropriate choice of M s , I, and p/Po.
incident shock undisturbed gas interface
reflected shock
transmitted shock . disturbed gas interface
Figure 5 The shock polar theory for regular refraction is based on the fact that the flow must be parallel to the gas interface both above and below the intersedion of the shocks. Thus, Ot = OJ + Or.
All shocks are assumed to be locally straight in a neighborhood of this intersection.
266 Furthermore, since the interface is a contact discontinuity we must have (2.2)
Pt =P2
(2.3) where the latter condition follows from the fact that the flow is parallel to the interface both upstream and downstream of the intersection of the incident, transmitted and reflected shocks.
Note that the interface is, in general, deflected forward downstream of this intersection. The problem now is as follows. Given the upstream state on both sides of the interface (po;,po, lto;, VOi)
and (POt, Po, ltOt, VOt), the inverse incident shock strength ~i' and the angle of incidence (Xi determine all other states. Let (PI, PI, lt1, vd denote the state downstream of the incident shock
(upstream of the reflected shock) and let (Pt,Pt, 1tt, Vt) and (p2,P2, lt2, V2) denote the states downstream of the transmitted and reflected shocks respectively. For certain values of the given data
this information is sufficient to completely determine all of the unknown states, although not necessarily uniquely. For example one can derive a 12th degree polynomial in the transmitted shock
strength pt!po from (2.1-3), which for regular reflection has as one root the observed transmitted shock strength (Henderson [8]). The other roots either do not appear in laboratory experiments or
are complex, and hence not physically meaningful. Note that knowledge of the transmitted shock strength pt!Po is sufficient to determine all of the other states.
Mr (reflected)
MI (transmitted)
Figure 6 Each intersection of the transmitted shock polar and the reflected shock polar represents a possible wave configuration for regular refraction. The physically meaningful roots of this
polynomial may also be found by plotting the shock polars for the three waves in a common coordinate system. An example is shown in figure 6. Note that we have scaled the reflected shock strength P2
/ PI by PI! po and translated br by b;. Thus the plot of the reflected shock polar is given by log(pdpo) = log(P2/P1)+log(pI!po) versus br+b;. This causes the base of
the reflected shock polar (P2 = P1) to coincide with the map of the incident shock on the incident shock polar (D;, pI/po), labeled 'i' in the figure. In this shock polar diagram any intersection of
the transmitted and reflected shock polars represents a physically meaningful solution to the problem, i.e. a pair of downstream states (Pt,Pt,Ut,Vt) and (p2,P2,U2,V2) such that all of the states
satisfy the appropriate shock jump conditions and the boundary conditions (2.2-3). Note that more than one such intersection may exist. For example, in figure 6 there are two, labeled A1 and A2. It
is also the case that for some values of the initial data (pOi, Po, UOi, VO;), (POt,PO,UOt,VOt), ei and Q;, the transmitted and reflected shock polars do not intersect. It is interesting to inquire
whether the existence of such an intersection exactly coincides with the occurrence of regular refraction in laboratory experiment. 1,Ve will discuss this point further below.
We can extend the shock polar theory to include reflected waves which are centered expansions by adjoining to the reflected shock polar the appropriate rar2 + v 2 denote the magnitude of efaction
curve for values of P2 < Pl. Let q = flow velocity, c the sound speed, and define the Mach angle Jl by Jl = sin- 1 l/M where M = q/c is the local Mach number of the flow. Then this rarefaction curve
is given by
p2 PI
cos Jl dp qcp
(see Grove [24]). This curve is sometimes referred to as a rarefaction polar. The sign will determine which branch of the shock polar is being extended. In figure 6 the branch corresponding to a
negative turning angle Dr has been plotted with a dotted line and labeled with a c. The intersection of this curve with the transmitted shock polar has been labeled fl. In some cases there may be two
intersections. Each intersection represents a wave system in which the state (PI, P1, U1, V1) is connected to the state (p2, P2, U2, V2) across a centered rarefaction. Such systems are also found to
occur in laboratory experiments (e.g. Abd-el-Fattah & Henderson [2]). 2.2 A Shock Polar Sequence. In this section we present the shock polar diagrams for the COdCH 4 refraction with ei = 0.78. The
data was chosen as specified in Section 1 with only the angle Q; being allowed to vary. In figure 7 we present four shock polar diagrams. These correspond to the two types of regular refraction -
namely regular refraction with a reflected expansion (RRE) and regular refraction with a reflected shock (RRR) - the transition between these two states, and the transition between regular and
irregular refraction. The polars are labeled Mi, M t , and M r , which represent the freest ream Mach numbers upstream of the incident, transmitted, and reflected waves respectively. To the right of
each shock polar diagram is a small diagram of the wave system in which the initial interface is denoted by an m, and the deflected interface by aD.
In each of the shock polar diagrams the tops of the incident and reflected polars have not been plotted in order to allow us to focus on the intersections which are of interest. As stated above the
map of the incident shock on the incident shock
268 polar is labeled i. This point corresponds to the base of the reflected shock polar. The intersection of the incident shock polar with the transmitted shock polar has been labeled AI' log~ Po
Figure 7a
logL Po
IX I = 32.0592 0
Figure 7b In figure 7 a) we plot the polars for Qi = 27°. Here we have only plotted the reflected rarefaction polar c and its intersection with the transmitted shock polar €1, not the reflected shock
polar. There still exist two solutions Al and A2 with a reflected shock but €1 is the solution observed in the laboratory (Henderson [2]). If we now continuously increase the angle Qi the points i
and A 1 move towards each other until they coincide at Qi :::::; 32.0592°. Here there is no need for a reflected
269 shock or expansion since 8i = 81 . The shock polar diagram for this value of O is a gradient we can take the base solution as Po = Po(x) = ¢>, TO = TO(X) = V(¢» and Vo = O. Introducing u = (p -
Po, vf as the set of variables, the hypothesis of Theorem 3.1 are satisfied and the system is isotropic. Actually, strict hyperbolicity fails except when N = 1, but the acoustical modes are simple
with c~ = ±eo. See Remark 3.7 and Example 2.1. 4. The basic expansion ( single wavefront case ). In this section we will consider a small variation of the basic expansion in §3 that allows us to deal
with the propagation of single wavefronts, in particular weak shocks. Basically, the only difference is at the level of: which requirements we enforce on the () dependence of the expansion. Here we
remove the oscillatory condition and replace it by the requirement that definite limits should be achieved as () -+ ±OOlO. Unfortunately, in this case it is generally not possible to impose a
sublinearity condition, as in (3.7), on U2 and the expansion is generally non-uniform, which ( as we will see ) forces the need to use matched asymptotic expansion techniques ( see [KC] ) to complete
the expansion. For a discussion of the meaning of this non-uniformity, see Remark 4.3. We start again with the ansatz (3.1) and up to equation (3.18) everything is the same as in §3. The only
difference is as to the interpretation of equation (3.12). Since the condition 7J = 0 is now meaningless, the splitting into v and 0' is now made unique lOObviously, this is the natural condition on
a situation where a narrow transition layer (shock region) connects two different states.
by requiring ( see Remark 3.4 for notation) r . Ag . v = o. Thus O"rO includes all the component of Ut, not just the oscillatory part as in (3.12). We also introduce
(4.1) so that
+ and -
= 8-+±oo lim 0",
indicate limits ahead (0) 0) and behind (0 < 0) the wave.
Now, clearly, the arguments leading from (3.18) to (3.19) and so on do not apply. We consider three cases:
This case is actually a particular instance of the situation considered in §3 as we can see by redefining 0" and v as follows:
(4.2) Then we are in the situation of §3 when in (3.6) fa = O. In this case the expansion will be uniform ( U2 bounded) provided O"new is integrable, otherwise it will be semiuniform with U2 merely
sublinear in O.
(4.3) This is a rather exceptional situation that can occur, for example, if all the Aj, 0 ~ j ~ N, are constant and 'P = k· x - wt is a plane wave with k and w constants. Then rO is also constant
and (4.3) will apply if, for example, 0"1:>. is constant. Case 1 follows when 0"1:>. = O. In this case the expansion can also be made uniform ( U2 bounded in () ), or semi-uniform ( U2 semilinear in
() ), depending on the rate of convergency of 0" to O"± as 0 -+ ±oo. In any event, it is quite clear that the "solvability" conditions in (3.18) that guarantee a solution U2 with the proper behavior
are now Limit Equations
(4.4) where v± are defined in (4.1) and these two systems are consistent because of (4.3).
296 Layer Equation
(4.5) where r = r(x, t; v) = -Lo . 'L.{i -/x-;(Aj . v) and the rest is as in (3.21). Thus, with J very small obvious modifications, all the considerations in §3, from Remark 3.9 to the end, apply
here. Obviously, as () --+ ±oo, (4.5) yields
d dt
+ t:.0'± = r '
which also follows from (4.4) upon multiplication by LO and use of (4.1). CASE 3.
f- 0'_
and (4.3) cannot be enforced.
Clearly, from (3.18), (4.5) and thus (4.6) must still apply. On the other hand, (4.4) cannot be enforced - at least not on both the (+) and (-) sides. No other condition can be imposed on (3.18) and
we are faced with the fact that IIu211 will grow secularly as I(}I when I(}I --+ 00. Thus the expansion is nonuniform and is formally valid only in a narrow zone If(}1 = 1 O. It becomes elliptic as
the dependent variables enter inside the parabola v 2 + 4u < O. Thus the aforementioned linear manifolds (straight lines in this 2 x 2 case) cover only the domain v 2 + 4u 2: O. An important
consequence of this remark is that the formula (2.8) will not be usable for constructing entropies inside the elliptic zone. We thus shall restrict to the hyperbolic one. The Riemann invariants are
clearly the two (real) roots wand z of the quadratic equation X2 +vX - u = o. Then Ea = (w - a)(a - z), where we assume w 2: z. A similar idea to (2.8) is that (w - a) + (a - z) is again an entropy,
so that the following formulae define an entropy and its flux for any choice of a bounded measure m:
J =J w
(w - a)(a - z)dm(a)
(v - a)(w - a)(a - z)dm(a) .
We shall keep in mind that v = -w - z. Defining functions J,g,h,k(w) as the anti derivatives of aP dm( a), O:S; P :s; 3, we rewrite E and F as
E(w, z) F(w,z)
= -wzJ(w) + (w + z)g(z) - hew) ,
= (w + z)wzJ(w) -
(w 2 + wz
+ z2)g(w) + k(w)
The definition of J,g, hand k allows us to introduce a function T(w) satisfying the following four equalities:
J=T'" 9
= wT'" - T"
k = w3 TIII - 3w 2T" + 6wT' - 6T h = w 2TIII - 2wT" + 2T'
So that we get an infinite family of pairs entropy-flux, parametrized by a real function T of one variable:
E = (w - z)T"(w) - 2T'(w) F
= (z -
w)(z + 2w)T"(w) + 6wT'(w) - 6T(w)
The above formula actually does not give all the entropies of our system because z did not play the smale role in our calculations. We need to supplement it
w and
323 by the symmetric formula depending on a real function S( z), so that the general entropy would be
(w - Z)(T"(W) - S"(Z)) - 2T'(w) - 2S'(z) .
It turns out that this formula gives all the entropies of the system, as it can be checked by hand, using the entropy equation
((2w+z)E w)z
= ((2z+w)E z)w'
Conversely, (2.10) does not give any information about entropies in the elliptic zone, except the case where S = T is a polynomial. Then the formula makes sense and defines a smooth function on the
whole plane. For instance, the choice S = T = X 5 /10 gives E = v 4 + 6v 2 u + 6u 2 • REMARK. In the formula (2.8), the special entropy (li.U - Ci)+ appears to be an extremal one in the cone of
convex entropies, so that Ei,(T will be convex if and only if u is a non-negative measure. This fact relies to the scalar example where the entropies Iu - kl or (u - k)+, used by Kruzkhov [27] and
Tartar [4] generate all the convex functions of one variable by means of the integrals
- k)du(k) .
Coming back to the more explicit formula (2.10), we get a convex entropy if and only if T = -S and T 1V is non-negative. The condition T = -S comes from the fact that (w - a)+ (a - z) is not convex,
so that we have to apply formula (2.8). The next subsection is devoted to the compensated compactness theory, applied to this system, and we shall pay attention to the elliptic zone. 8) Compensated
compactness with an elliptic zone. The compensatedcompactness theory is a tool which has been powerful in the study of the convergence of the artificial viscosity method for the Cauchy problem:
+ f(u 0 is fixed, then for sufficiently large N it will be that I:' 0.0
-0.5+---------..,---------, 0.00
Figure 4.8b: The evolution of ypp(p, t) at the same times as in Figure 4.8a. Figures 4.8a and 4.8b show the behavior of xpp(p, t) and ypp(p, t), respectively, from t = 1.4 to t = 1.6 at intervals of
0.025. Again, the difference in behavior between xpp and YPP is observed as t -+ tc -. In particular, xpp(p, t) appears to be diverging at p = Jr, in the form of an infinite jump discontinuity, while
YPp(p, t) is plainly not diverging, though its derivative is seemingly becoming infinite. To make more clear the difference in behavior, Figures 4.9a and b show xppp(p, t) and yppp(p, t) at the same
times. Both (x ppp ( Jr, t))-l and (YPpp( Jr, t))-l appear to be approaching zero simultaneously at some finite time, which through extrapolation, is estimated to be tc ~ 1.615 ± .01. Further, the
behavior of YPpp(p, t) in the neighborhood of p = Jr suggests that ypp(p, t) is approaching a finite jump discontinuity ast-+t;;.
360 5.0
4 .0
-1.0 0.00
3 .14
Figure 4.9a: The evolution of xppp(p,t) at the same times as in Figure 4.8a.
3 .0
- 1.0 0.00
:1 . 14
Figure 4.9b: The evolution of Yppp(p, t) at the same times as in Figure 4.8a.
2 .0
64 .0
12 8 .0
2:>6 .0
Figure 4.10a: The evolution of 13x(k), fit using formula (4.5) from = 1.4 to 1.6 at intervals of 0.05. The range over which the fit is approximately 5/2 (dashed line) increases as time increases.
4 .0 . . . - - - - - - - - - - - - - - - - - . . . ,
2 .0
l.o-l-----.-----,r-----;-----l 0.0
64 .0
128 .0
2:>6 .0
Figure 4.lOb: The evolution of /3v(k), fit using formula (4.5) at the same times as Figure 4.lOa. Dashed lines are at 5/3 and 3. The approach to 3 corresponds to increasing times. This behavior is
consistent with that of the spectrum. Figures 4.10a and 4.lOb
362 show (3x and (3y, respectively, from t = 1.4 to t = 1.6 at intervals of 0.025, using the fit in (4.5). The difference in the behavior of these fits is apparent. (3x is still well fit by a value
of 2.5, which would yield a divergent second derivative, while the fit to (3y shows a transition away from 2.5. Indeed, the last time shown, t = 1.6, suggests that there is a transition from an
algebraic decay of k- 5 / 2 for Yk(t), to a k- 3 decay. Such an algebraic decay at the singularity time would be be consistent with ypp(p, t c ) containing a step discontinuity.
g.. (j
0 .5
-t-=====::;:::=====;==:=.,- -----1256.0 0.0
64 .0
Figure 4.11: The evolution ofax(k), fit using formula (4.5) at the same times as Figure 4.10a. Figure 4.11 shows ax at the same times. It is clear that it is becoming more difficult to resolve the
sheet as evidenced by the noisiness in the fit, but ax still shows the tendency towards zero as the singularity time is approached. While there are differences in the singular behavior of x(p, t) and
yep, t), resp., no such difference in their singularity times is evident; ay is not shown as its values are practically identical to those of ax . Choosing k = 32 as a representative of the k
independent portion of the ax fit , Figure 4.12 shows its behavior as a function of time. The approach to zero is obvious. The dashed line is the extrapolation to zero at t = 1.615.
363 .5 .4
.1 0
Figure 4.12: ax(kh = 32) as a function of time. The dashed portion is an extrapolation to an approximated singularity time of teh 1.615. While there are differences between analytic prediction and
numerical results, the manner in which vortex sheets become singular does not change. At a point along the sheet, there is a very rapid compression of the vorticity through the Lagrangian motion of
the marker particles. It is because the vorticity must remain confined to the sheet, without the additional degrees of freedom of smoother vorticity distributions, that this compression leads to the
appearance of singularities. This is illustrated by the behavior of the true vortex sheet strength w, shown in Figure 4.13 from t = 1.3 to t = 1.6 at intervals of 0.05, as a function of the signed
arclength of the sheet from p = 7r. w gives the jump in tangential velocity across the sheet, and is given initially by w(p, t
= 0) = -p + ~ cos(p)
(dashed). Around
the local extrema at p = 7r (8 = 0), w(p, t) becomes concentrated, and increasing in amplitude. Moore's analysis predicts that at t = t e , w(p, t) is finite, with a squareroot cusp at p = 7r. As the
singularity in y(p, t) remains of the form predicted by Moore, and that in x(p, t) only weakens, this conclusion is unchanged.
364 2.5 2
'3 I
.5 0 -4
0 s
Figure 4.13: The evolution of w(s, t), the true vortex sheet strength, from th = 1.3 to 1.6 at intervals of 0.05. The dashed graph is the initial true vortex sheet strength. Increasing amplitude
corresponds to increasing time. Section 5. Concluding remarks. The intent of this work was to examine the generality of Moore's analysis, presumeably valid only for small amplitude data, in
predicting the form of vortex sheet singularities. By studying in detail the evolution of a large amplitude, entire initial condition, it was found that Moore's analysis was valid in predicting the
behavior of the sheet at times well away from the singularity time, but near the singularity time the sheet behavior underwent a transition leading to a change of form in the nascent singularity. A
more complete study (Shelley 1989), is to be given elsewhere. This work does not address the possible existence and nature of the vortex sheet after the singularity time, but has instead focused on
gaining precise information on the form of the singularity. As has also been observed by Kr, no convergence of the numerical solution was observed after the singularity time. Of course, the spectral
accuracy of the MPVA is lost in the presence of singularities. At these later times, the motion becomes dominated by grid scale interactions, and is apparently chaotic. It appears that mollification
of some sort is necessary to numerically study behavior past the singularity time (Krasny 1986b, Baker & Shelley 1989). Such studies indicate that the solution, if it exists, may have the form of
doubly branched spiral. It is known that measured-valued solutions exist globally for vortex sheet initial data, but the notion of such a solution is so general that it gives little information about
its specific nature. The scaling of vorticity concentrations in the study of thin vortex layers by Baker & Shelley (1989) suggests that the vortex sheet may
actually exist as a classical weak solution after the singularity time (Diperna & Majda, 1987). Explicit singular solutions have been constructed by Caflisch & Orellana (1988), with 'Y(p) = 1, which
have the form z(p, t)
= p + So + r
where So = c:(1- i)
1- e- .2I-.''P
)1+" (1- e-
.2I . 'P
c: IS small, r is a correction term, and v > O. v = 1/2 would give the spatial structure of Moore's singularity at t = O. The singularity found in this work is not of this form, though it is quite
possible that such a singularity could be constructed analytically. Acknowledgements. The author would like to thank G. R. Baker, R. E. Caflisch, A. Majda, D. I. Meiron and S. A. Orszag for useful
discussions. This work was partially supported under contracts ONR/DARPA N00014-86-K-0759 and ONR N00014-82-C-0451. Some of the computations were carried out on the CrayXMP at Argonne National
Laboratory. REFERENCES BAKER, G. R., MCCRORY, R. L., VERDON, C. P. & ORSZAG, S. A., Rayleigh-Taylor instability of fluid layers, J. Fluid Mech. 178 (1987) 16I. BAKER, G. R., MEIRON, D. I., & ORSZAG,
S. A., Vortex simulations of the Rayleigh-Taylor instability, Phys. Fluids 23 (1980), 1485. BAKER, G. R., MEIRON, D. I., & ORSZAG, S. A., Generalized vortex methods for free-surface flow problems, J.
Fluid Mech. 123 (1982), 477. BAKER, G. R. & SHELLEY, M. J., Boundary integral techniques for multi-connected domains, J. Camp. Phys. 64 (1986), 112. BAKER, G. R. & SHELLEY, M. J., On the connection
between thin vortex layers and vortex sheets, To appear in J. Fluid Mech. (1989). CAFLISCH, R. & LOWENGRUB J., Convergence of the Vortex Method for Vortex Sheets, to appear where (1988)? CAFLISCH, R.
& ORELLANA, 0., Long time existence for a slightly perturbed vortex sheet, Comm. Pure Appl. Math. XXXIX (1986), 807. CAFLISCH, R. & ORELLANA, 0., Singular solutions and ill-posedness of the evolution
of vortex sheets, (1988). DUCHON, R. & ROBERT, R., Global vortex sheet solutions to Euler equations in the plane, to appear in Comm. PDE. (1989). DIPERNA, R & MAJDA, A., Concentrations in
regularizations for 2-d incompressible flow, Comm. Pure Appl. Math. 40 (1987), 301. EBIN, D., Ill-posedness of the Rayleigh-Taylor and Helmholtz problems for incompressible fluids, CPAM (1988).
KRASNY, R., A study of singularity formation in a vortex sheet by the point-vortex approximation, J. Fluid Mech. 167 (1986a), 65. KRASNY, R., Desingularization of periodic vortex sheet roll-up, J.
Compo Phys. 65 (1986b), 292. LONGUETT-HIGGINS, M. S. & COKELET, E. D., Proc. R. Soc. Lond. A 350 (1976), 1.
366 MAJDA, A., Vortex dynamics: Numerical analysis, scientific computing, and mathematical theory, In Proc. of the First Intern. Conf. on Industrial and Applied Math., Paris (1987). MCGRATH, F. J.,
Nonstationary plane flow o[ viscous and ideal fluids, Arch. Rat. Mech. Anal., 27 (1967), 329. MEIRON, D. I., BAKER, G. R., & ORSZAG, S. A., Analytic structure o[ vortex sheet dynamics. 1.
Kelvin-Helmholtz instability, J. Fluid Mech. 114 (1982), 283 .. MOORE, D. W., The spontaneous appearance o[ a singularity in the shape o[ an evolving vortex sheet, Proc. R. Soc. Lond. A 365, (1979)
105. MOORE, D. W., Numerical and analytical aspects o[ Helmholtz instability, In Theoretical and Applied Mechanics, Proc. XVI IUTAM, eds. Niodson and Olhoff (1985), 263. PULLIN, D. I. AND PHILLIPS,
W. R. C., On a generalization o[ Kaden's problem. J. Fluid Mech. 104 (1981), 45. ROSENHEAD, L., The [ormation o[ vortices [rom a surface o[ discontinuity, Proc. R. Soc. Lond. A 134 (1931), 170.
SHELLEY, M. J., A study o[ singularity [ormation in vortex sheet motion by a spectral accurate vortex method, To appear in J. Fluid Mech. (1989). SIDI, A. & ISRAELI, M., Quadrature methods [or
periodic singular and weakly singular Fredholm integral equations, J. Sci. Compo 3 (1988), 20l. SULEM, C., SULEM, P. L., BARDOS, C. & FRISCH, U, Finite time analyticity [or the two and three
dimensional Kelvin-Helmholtz instability, Comm. Math. Phys. 80 (1981), 485. SULEM, C., SULEM, P. L. & FRISCH, H., Tracing complex singularities with spectral methods, J. Compo Phys. 50 (1983), 138.
VAN DER VOOREN, A. I., A numerical investigation o[the rolling up o[vortex sheets, Proc. Roy. Soc. A 373 (1980), 67. VAN DYKE, M., Perturbation Methods in Fluid Mechanics, The Parabolic Press (1975).
THE GOURSAT-RIEMANN PROBLEM FOR PLANE WAVES IN ISOTROPIC ELASTIC SOLIDS WITH VELOCITY BOUNDARY CONDITIONS* T. C. T. TINGtAND TANKIN WANGt Abstract. The differential equations for plane waves in
isotropic elastic solids are a 6 x 6 system of hyperbolic conservation laws. For the Goursat-Riemann problem in which the initial conditions are constant and the constant boundary conditions are
prescribed in terms of stress, the wave curves in the stress space are uncoupled from the wave curves in the velocity space and the equations are equivalent to a 3 X 3 system. This is not possible
when the boundary conditions are prescribed in terms of velocity. An additional complication is that, even though the system is linearly degenerate with respect to the C2 wave speed, the C2 wave
curves cannot be decoupled from the Cl and C3 wave curves. Nevertheless, we show that many features and methodology of obtaining the solution remain essentially the same for the velocity boundary
conditions. The Cl and C3 wave curves are again plane polarized in the velocity space although the plane may not contain a coordinate axis of the velocity space. Likewise, the C2 wave curves are
circularly polarized but the center of the circle may not lie on a coordinate axis of the elocity space. Finally, we show that the C2 wave curves can be treated separately of the Cl and C3 wave
curves in constructing the solution to the Goursat-Riemann problem when the boundary conditions are prescribed in terms of velocity. Key words. Goursat-Riemann problem, wave curves, elastic waves AMS
(MOS) subject classifications. 35L65, 73D99
1. Introduction. In a fixed rectangular coordinate system Xl, X2, X3, consider a plane wave propagating in the Xl - direction. Let u, 72, 73 be, respectively, the normal stress and two shear stresses
on the Xl = constant plane. Also, let u, V2, V3 be the particle velocity in the Xl> X2, X3 direction, respectively. The equations of motion and the continuity of displacement can be written as a 6 x
6 system of hyperbolic conservation laws[1,2,3] Ux
F(U)! = 0,
= (pu,PV 2,PV3,E"2"3).
In the above, X = Xl, t is the time, p is the mass density in the undeformed state, are, respectively, the longitudinal strain and the two shear strains. For isotropic elastic solids, the
stress-strain laws have the form [1]
E, ,2,,3
= 1(E,,2),
72 = ' 2
73 = ' 3
l =,? +,~,
*This work has been supported by the U.S. Air Force Office of Scientific Research under contract AFOSR-89-0013. tDepartment of Civil Engineering, Mechanics and Metallurgy, University of Illinois at
Chicago, Box 4348, Chicago, IL 60680.
368 where f and g are functions of £ and 1'2. We see that l' is the total shear strain. If T is the total shear stress on the x = constant plane, we obtain from (1.2h,3,
T = 1'g(£,1'2), T2
= Ti + T;'
In the region of £ and 1'2 where equations (1.2h and (1.3)1 have an inversion, we have
= h(l7, T2),
= T q( 17, T2),
where h and q are functions of 17 and
Equations (1.2h,3 can then be written as
= T, q( 17, T2), 1'3 = T3 q( 17, T2).
We study the Goursat-lliemann problem of (1.1) in which the strains £,1'2,1'3 are assumed to be known functions of 17, T2, T3 as given in (1.4)1 and (1.5). In Section 2 the characteristic wave speeds
and the right eigenvectors of (1.1) are presented. The simple wave curves and shock wave curves in the stress space are presented in Section 3 and that in the velocity space are examined in Section
4. Up to this point, the material is general isotropic Cauchy elastic solids, i.e., the existence of strain energy function is not assumed. In Section 5 we consider hyperelastic solids. In
particular, the simple wave curves for second order hyperelastic materials are presented which are used in Section 6 as an illustration to solve the Goursat-lliemann problem with stress boundary
conditions. In Section 7 the Goursat-Riemann problem is solved in which the boundary conditions are prescribed in terms of velocity. 2. Characteristic wave speeds and right eigenvectors. For the
Riemann problem and the Goursat-lliemann problem, the solution U depends on one parameter xlt only [4-9]. If U is continuous in xlt, we have a simple wave (or rarefaction wave) solution in which xl t
= c, the characteristic wave speed. In this case, (1.1 h is reduced to
(2.1) where I is a unit matrix, 'V is the gradient with respect to the components of U, the superscript T stands for the transpose, and the prime denotes differentiation with c. If we introduce the
equation (2.1) can be written as two equations
s'+pcu'=O, u' +cGs' = 0,
369 where the components of the 3
3 matrix G are
(2.4) Elimination of u' in (2.3) yields
(G -1]1)s' = 0, I] = (pC 2)-I.
(2.5) (2.6)
Thus I] and s' are, respectively, the eigenvalue and eigenvector of G. Assuming that I]i, i= 1, 2, 3, are positive, the wave speed Ci come in three pairs of positive and negative values. We let
ci ~ c~ ~ c~
> O.
'Hence, (2.8) From equations (l.4h, (l.5), (2.2) and (2.4), it is readily shown that the second and third columns of the matrix (G -1]1) are linearly dependent when I] = q. Hence I] = q is an
eigenvalue of G. The other two eigenvalues can be shown to satisfy the quadratic equation
(2.9) in which the subscript ~ and 7 denote differentiation with these variables. We therefore have, using (2.8), 1],
= 2" {(c.,. + Ir) -
1], = q(~,72) = 1/7, 1 1]3 = 2"{(c.,. + Ir) + y},
Y = {(c.,. -Ir)2 +4 / .,.cr}I/2. The second equality for 1]2 follows from (l.4h. By substituting
s' = (0,73,-72),
= 1]2
S'={7(1]-,r), 72/.,., 73/"'} ,
f or
= 1]1, 1]3 ,
in (2.5) and making use of (2.10h for I] = 1]2 and (2.9) for I] = 1]1,1]3, it can be verified that equation (2.5) is satisfied. Equations (2.11) therefore provide the eigenvectors s'.
(a) The stress space
(b) The velocity space (e2 < 0)
(c) The velocity space
> 0)
Fig.l The C2 simple wave( or V2 shock wave) curve for which CT, T and u are constants and C2 = V2 •
371 T3
B T
A T2
(a) The stress space
v v
----------~--------_e_ V2
- -______- L__________
(b) The velocity space
< 0) v
__________- L________- . _
----------~------------- u
(c) The velocity space
Fig.2 The
simple wave curves on
> 0)
= constant
372 3. Simple wave curves and shock wave curves in the stress space. The differential equation for simple wave curves associated with C2 is given in (2.11)1 which can be written as da dT2 dT3 -=-=-o
T3 -T2. Hence, (3.1)
T2 = Ti
a = constant,
+ T;
= constant.
In the stress space (a, T2, T3), (3.1) represents a circle with its center on the a-axis. The C2 simple wave curve is therefore "circularly polarized", Fig.1(a). Moreover, from (3.1) and (2.10)2, 'f/
2 and hence C2 is a constant along the C2 simple wave curve. Thus the system is linearly degenerate with respect to C2 [8] and the C2 simple wave curve is in fact a shock wave curve. The simple wave
curve for C1 or C3 is given in (2.11)2 which is rewritten as (3.2) The last equality yields (3.3)
= constant.
If we let
T2 = TcosB,
T3 = TsinB,
we have B = constant.
(3.5) Equations (3.2) now reduce to
or (3.6)
dT I" 'f/ -E" -=---=--da 'f/
the second equality follows from (2.9). Equation (3.6), when integrated, provides simple wave curves for C1 and C3 on the (a, T) plane. If (a, T2, T3) is regarded as a rectangular coordinate system,
(a, T, B) is a cylindrical coordinate system. Since simple wave curves for C1 and C3 are on a B = constant plane, they are "plane polarized", Fig.2( a).
373 If U as a function of x/t is discontinuous at x/t = V, we have a shock wave with shock velocity V. The Rankine-Hugoniot jump conditions for (1.1h are (3.7)
+ V[F(U)] =
where [U]=U--U+
denotes the difference in the value of U + in front of and U - behind the shock wave. Using the notations of (2.2), (3.7) is equivalent to
[s] + pV[u] = 0, [u] + V[p] = 0.
Equations (3.8) are the Rankie-Hugonint jump conditions for (2.3). Elimination of [u]leads to (3.9) From (1.4h, (1.5) and (2.2), we may write (3.9) in full as
= pV2[ h(O", T2)],
= pV2[ T2 q(O", T2)],
= pV2[ T3 q(O", T2)].
If we eliminate pV 2 between (3.10h.3, it can be shown that [1]
(3.11) There are two possibilities for this equation to hold. We discuss them separately below. One possibility is [q] = O. If the shock wave speed for this case is V2 , we see that (3.10) are
satisfied if
= q+ = q-, k] = 0 = [TJ,
(pVl)-1 (3.12)
This is identical to the circularly polarized simple wave curve discussed earlier. Hence V2 = C2 and the V2 shock wave curve is identical to the C2 simple wave curve, Fig.l(a). The other possibility
for (3.11) to hold is
This is identical to (3.3). It follows from (3.4) that
0+ = 0-,
and (3.10h,3 are reduced to the same equation
This and (3.10h can be written as (3.14) For a fixed (a+, r+), the second equality provides a shock wave curve for (a-, r-) on the (a, r) plane which is a B = constant plane. The shock wave curves
are therefore plane polarized as shown by the double solid lines in Fig.3( a). Since there are two shock wave curves emanating from the point (a+, r+) (only one is shown in Fig.3), the associated
shock wave speeds are denoted by VI and V3 as indicated in (3.14).
A (0"-,7-)
B (O"+, 7+) 0"
(a) The stress space
A (u-,v-)
B (u+,v+) u
(b) The velocity space (Vl , V3 < 0)
v w
A ----------~--------. .
--------~----------~ u
(c) The velocity space (Vb V3 Fig.3 The V l (or V3 ) shock wave curve on
> 0)
e = constant plane.
376 4. Simple wave curves and shock wave curves in the velocity space. The differential equation (2.5) contains the stress s only which enables us to determine simple wave curves in the
three-dimensional stress space without considering the full six-dimensional stress-velocity space. This is not possible for simple wave curves in the velocity space. We write (2.3h in full as
1 = --du, pc
1 pc 1 dV3 = -- dT3. pc
dV2 = --dT2'
For the velocity space we have to distinguish positive wave speed wave speed Cj.
When c = ±C2, equations (3.1) apply and hence u, T and Using (3.4), (4.1) lead to
are all constant.
from negative
= constant,
o T V2 - v 2 = --case, PC2 o T . {) V3 - v3 = --Slnu, PC 2
where and v~ are the integration constants. The determination of the integration constants will be illustrated in Section 7. We see that the Cz simple wave curves in the velocity space are also
circularily polarized, Fig.1(b). The radius ofthe circle is T / p1c21. However, unlike in the stress space, the center of the circle is not necessarily on the u-axis. Moreover, the point on the
simple wave curve assumes a different position depending on whether C2 is a negative (Fig.1(b)) or a positive wave speed (Fig.1(c)). When c = ±CI or ±C3, substitution of (3.4) into (4.1)z,3 and
noticing that a constant in this case, we obtain
= --case dT, pc
= _2. sine dT, pc
+ V3 sine) = -2.dT, pc d( -V2 sine + V3 case) = O. d(V2 case
If we let (4.3)
v = VO + V2 case
+ V3 sine, + V3 case,
w = w O - V2 sine
e is
are the integration constants, (4.1) are equivalent to
1 = --du, pc
1 = --dT, pc
w =0.
The determination of the integration constants VO and wO will also be illustrated in Section 7. From (4.4), the Cl and C3 simple wave curves in the velocity space are also plane polarized on the w =
0 plane, (Fig.2(b), 2( c)). Unlike in the stress space, the plane may not contain the u-axis. Equation (4.4h,2 can be combined to give
dv du
dT du,
and hence the slope of the simple wave curve in the velocity space and in the stress space are identical. This does not mean that the simple wave curves in the two spaces are identical. From (4.4)
1,2, the infinitesimal arclength of the simple wave curve in the velocity space is equal to the corresponding arclength in the stress space divided by the factor pc. For c < 0, therefore, the curves
in the velocity space can be obtained from that in the stress space by dividing every infinitesimal line segment of the curve by the factor pc without changing the orientation of the line segment,
Fig.2(b). For c> 0, the same procedure applies except that the direction of the wave curve is reversed, Fig.2( c). We next present the shock wave curves in the velocity space. Using (3.4), (3.8)1
written in full are
k] + pV[u]
= 0,
+ pV[ V2] = 0, [TsinO] + pV[ V3] = o.
For V = V2 =
C2, U
are constant. Hence
[u] (4.6)
= 0,
= --[ cosO],
= -2-[sinO].
Equations (4.2) which represent the C2 simple wave curves in the velocity space satisfy (4.6). Therefore, the V2 shock wave curves and C2 simple wave curves are also identical in the velocity space.
For V
= VI
or V3 , 0 is a constant and (4.5h,3 can be written as
[T] cosO + pV[ V2]
= 0,
[T] sinO + pV[ V3] =
By linearly combining the two equations and using (4.3), we obtain
+ pV[ v] = 0, w=O.
Therefore, the shock wave curves in the velocity space for V = VI or V3 are also plane polarized on w = 0 plane, Fig.3(b,c). From (4.5)1 and (4.7h, we have
y;r = r;] =~,
the last equality follows from (3.14)z. The first equality implies that the slope of the line connecting (IT+, r+) to (IT- , r-) on the shock wave curve in the stress space is identical to the slope
of the line connecting (u+,v+) to (u-,v-) on the shock wave curve in the velocity space, Fig.3(b,c).
5. Hyperelastic solids. For hyperelastic solids, there exists a complementary strain energy [10] W( IT, r2) whose gradients with respect to IT and r provide the strains c: and /, i.e., (5.1) The
characteristic wave speeds are, from (2.6) and (2.10),
(pcD- 1 = ~{(W.,..,.
= WTlr, (pc~)-1 = ~{(W.,..,. + WTT ) + Y}, (pC~)-1
= {(W.,..,.-WTT )2+4W;T}I/2.
The differential equation (3.6) for the space is (5.3)
+ WTT ) -
dr dlT
2 W.,.T (W.,..,. - W TT ) T Y
simple wave curves in the stress
-(W.,..,. - W TT ) T Y 2 W.,.T
where the upper (or lower) sign is for Cj (or C3) simple wave curves. The simple wave curves for Cj and C3 are now orthogonal to each other[1,2]. The simplest nonlinear hyperelastic solids are the
second order materials for which c: and / are functions of IT and r of the order up to two. This means that W must be a function of IT and r of order up to three. Noticing that W is a function of IT
and r2 and that the constant terms produce no strains while the linear terms would have yielded non-zero strains when the stresses vanish, we write
a 2 d 2 b 3 e 2 W = -IT + -r + -IT + -ITr 2
where a, d, band e are constants. d and a are positive and have the property [1,2]
< 0 = dl(d -
We first study the special case = O. With the initial conditions given in (6.1), it is not difficult to see that V3,T3 and (} vanish for x < 0 and t > O. From (4.3)1, v = V2 for x < 0 and t > O.
Using (4.4), the simple wave curves in the velocity space can be determined from the simple curves in the stress space. The simple wave curves in the velocity space associated with the simple wave
curves BS and BP in Fig. 5(a) are shown in Fig. 7(a). Likewise, the shock wave curves associated with BR and BH in Fig. 5(a) can be obtained from (4.5h and (4.7)1 and are shown in Fig. 7(a). We have
again divided the velocity plane (u,v) into four regions. The method of finding the solution is identical to that for the stress boundary conditions with (}A = O. Thus, depending on whether the
boundary conditions (7.1) with = 0 and = are represented by a point in region 1, 2, 3 or 4, we have the wave pattern 1, 2, 3 or 4 shown in Fig. 6 in which the V2 shock wave is absent.
vt vA
v1 =/= 0, the solution cannot be obtained by a simple superposition of a
V2 shock wave. As shown in [17], the V2 shock wave does not commute with
and C3 simple waves when the wave curves in the velocity space are considered. Nevertheless, we will show that one can make use of Fig. 7(a) to construct the solution when vt f= O. First of all, we
show that the wave curve BMA in Fig. 7(a) corresponds to a family of solutions with vt f= o. When vt = 0, the CI simple wave curve BM and the C3 simple wave curve MA are, in the (vz, V3) plane, the
segments B M+ and M+ A in Fig. 7(b) which are on the vz-axis. When vt f= 0, we may introduce a Vz shock wave curve M+ M- which is a circle with its center at T and radius rM /p/c zM /. This is the
circularly polarized shock wave given in (4.2h,3 in which and v~ are determined by the location of T. If we draw a circle AA' concentric with M+ M-, any point A' on the circle can be the location of
the new boundary conditions (vt, vt). The wave curves for this case will be the CI simple wave curve BM+, the Vz shock wave curve M+ M- and the C3 simple wave curve M- A'. The angle the line T A'
makes with the vz-axis is eA. The new coordinates v,w defined in (4.3) are obtained by rotating the Vz, V3 axes about T an angle eA. The location B' of the origin of (v, w) coordinates determines the
constants (vO, wO) in (4.3). In the (u, v) plane, Fig. 7(a), the wave curve is still BMA. The corresponding wave pattern is the wave pattern 1 in Fig. 6. Thus the wave curve BMA in Fig. 7(a)
corresponds to a family of solutions for which the boundary conditions (vf, vt) are on the circle AA' shown by the dotted line in Fig. 7(b).
With the V2 shock wave considered separately, one can determine the admissible wave curve for the velocity boundary conditions when vt f= 0 by an iteration scheme. However, one should be able to
determine whether the wave pattern belongs to wave pattern 1, 2, 3 or 4 before employing the iteration scheme. This is presented next. When (u A , vt, vt) are given, we draw the vertical line KL in
the (u, v) plane, Fig. 7( a), whose abscissa is u A . This line intersects the ct simple wave curve BP at L and the C3 simple wave curve BS at K. From (4.3),
Since vO and eA are unknowns, vA can be anywhere on the line KL. If A is located above K, between KL or below L, we have wave pattern 2, 1 or 4, respectively. The wave curve BK in Fig. 7(a)
corresponds to the wave curve BK in Fig. 7(b) when vt = O. Following the procedure explained earlier, we can obtain a circle through K shown by the solid line in Fig. 7(b) such that the wave curve BK
in Fig. 7(a) corresponds to a family of solutions with (vf, vt) on this circle. Likewise, one can obtain a circle through L shown by another solid line in Fig. 7(b) such that the wave curve BL in
Fig. 7(a) corresponds to a family of solutions with (vf,vt) on this circle. We then have the result that if (vf, vt) is located within the two circles, the solution belongs to wave pattern 1. If (vf,
vt) is located outside (or inside) the circle passing through K (or L), we have wave pattern 2 (or wave pattern 4). It should be pointed out that the two circles passing through K and L in Fig. 7(b)
are for the fixed u A > 0 shown in Fig. 7(a). For a different value of u A , the circles would be different. For u A < 0, a similar procedure can be employed to
386 determine whether the solution for given (vf, vt) belongs to wave pattern 2, 3 or 4. REFERENCES [1]
[2] [3]
[4] [5] [6] [7] [8] [9] [10] [11]
[12] (13) [14]
(15) [16) [17]
YONGCHI LI AND T. C. T. TING, Plane waves in simple elastic solids and discontinuous dependence of solution on boundary conditions, Int. J. Solids Structures, 19 (1983), pp. 989-1008. ZHIJING TANG
AND T. C. T. TING, Wave curves for the Riemann problem of plane waves in isotropic elastic solids, Int. J. Eng. Sci., 25 (1987), pp. 1343-1381. T. C. T. TING, The Riemann problem with umbilic lines
for wave propagation in isotropic elastic solids, in Notes in Numerical Fluid Mechanics, Nonlinear Hyperbolic Equations Theory, Numerical Methods and Applications, ed. by Josef Ballmann and Rolf
Jeltsch, 24, Vieweg, 1988, pp. 617-629. P. D. LAX, Hyperbolic systems of conservation laws. II, Comm. Pure Appl. Math., 10 (1957), pp. 537-566. A. JEFFREY, Quasilinear Hyperbolic Systems and Waves,
Pitman, 1976. J. A. SMOLLER, On the solution of the Riemann problem with general step data for an extended class of hyperbolic systems, Mich. Math. J., 16 (1969), pp. 201-210. T. -Po LiU, The Riemann
problem for general systems of conservation laws, J. Diff. Eqs., 18 (1975), pp. 218-234. C. M. DAFERMOS, Hyperbolic systems of conservation laws, Brown University Report, LCDS 83-5,(1983). D. G.
SCHAEFFER AND M. SHEARER, Riemann problem for nonstrictly hyperbolic 2 X 2 systems of conservation laws, Trans ArneI'. Math. Soc., 304 (1987), pp. 267-306. C. TRUESDELL AND W. NOLL, The Nonlinear
Field Theories of Mechanics, Handbuch del' Physik, III/3, Springer, Berlin, 1965. D. G. SCHAEFFER AND M. SHEARER, The classification of2 X 2 systems of nonstrictly hyperbolic conservation laws, with
application to oil recovery, Appendix with D. Marchesin and P. J. Paes-Leme, Comm. Pure Appl. Math., 40 (1987), pp. 141-178. B. L. KEYFITZ AND H. C. KRANZER, The Riemann problem for a class of
conservation laws exhibiting a parabolic degeneracy, J. Diff. Eqs., 47 (1983), pp. 35-65. E. ISAACSON AND J. B. TEMPLE, Examples and classification of nonstrictly hyperbolic systems of conservation
laws, Abstracts of Papers Presented to AMS, 6 (1985), pp. 60. M. SHEARER, D. G. SCHAEFFER, D. MARCHESIN AND P. J. PAES-LEME, Solution of the Riemann problem for a prototype 2 X 2 system of
nonstrictly hyperbolic conservation laws, Arch. Rat. Mech. Anal., 97 (1987), pp. 299-320. GUANGSHAN ZHU AND T. C. T. TING, Classification of2 X 2 non-strictly hyperbolic systems for plane waves in
isotropic elastic solids, Int. J. Eng. Science, 27 (1989), pp. 1621-1638. T. C. T. TING, On wave propagation problems in which cJ = c. = C2 occurs, Q. App\. Math., 31 (1973), pp. 275-286. XABIER
GARAIZAR, Solution of a Riemann problem for elasticity, Courant Institute of Mathematical Sciences Report (1989).
E-Book Information
• Series: The IMA Volumes in Mathematics and Its Applications 29
• Year: 1,991
• Edition: 1
• Pages: 386
• Pages In File: 398
• Language: English
• Identifier: 978-1-4613-9123-4,978-1-4613-9121-0
• Doi: 10.1007/978-1-4613-9121-0
• Cleaned: 1
• Orientation: 1
• Paginated: 1
• Org File Size: 12,178,203
• Extension: pdf
• Tags: Analysis
• Toc: Front Matter....Pages i-xiv
Macroscopic Limits of Kinetic Equations....Pages 1-12
The Essence of Particle Simulation of the Boltzmann Equation....Pages 13-22
The Approximation of Weak Solutions to the 2-D Euler Equations by Vortex Elements....Pages 23-37
Limit Behavior of Approximate Solutions to Conservation Laws....Pages 38-57
Modeling Two-Phase Flow of Reactive Granular Materials....Pages 58-67
Shocks Associated with Rotational Modes....Pages 68-69
Self-Similar Shock Reflection in Two Space Dimensions....Pages 70-88
Nonlinear Waves: Overview and Problems....Pages 89-106
The Growth and Interaction of Bubbles in Rayleigh-Taylor Unstable Interfaces....Pages 107-122
Front Tracking, Oil Reservoirs, Engineering Scale Problems and Mass Conservation....Pages 123-139
Collisionless Solutions to the Four Velocity Broadwell Equations....Pages 140-155
Anomalous Reflection of a Shock Wave at a Fluid Interface....Pages 156-168
An Application of Connection Matrix to Magnetohydrodynamic Shock Profiles....Pages 169-172
Convection of Discontinuities in Solutions of the Navier-Stokes Equations for Compressible Flow....Pages 173-178
Nonlinear Geometrical Optics....Pages 179-197
Geometric Theory of Shock Waves....Pages 198-202
An Introduction to front Tracking....Pages 203-216
One Perspective on Open Problems in Multi-Dimensional Conservation Laws....Pages 217-238
Stability of Multi-Dimensional Weak Shocks....Pages 239-250
Nonlinear Stability in Non-Newtonian Flows....Pages 251-260
A Numerical Study of Shock Wave Refraction at a CO 2 /CH 4 Interface....Pages 261-280
An Introduction to Weakly Nonlinear Geometrical Optics....Pages 281-310
Numerical Study of Initiation and Propagation of One-Dimensional Detonations....Pages 311-314
Richness and the Classification of Quasilinear Hyperbolic Systems....Pages 315-333
A Case of Singularity Formation in Vortex Sheet Motion Studied by a Spectrally Accurate Method....Pages 334-366
The Goursat-Riemann Problem for Plane Waves in Isotropic Elastic Solids with Velocity Boundary Conditions....Pages 367-386
|
{"url":"https://vdoc.pub/documents/multidimensional-hyperbolic-problems-and-computations-41hp82dueld0","timestamp":"2024-11-12T06:18:41Z","content_type":"text/html","content_length":"509586","record_id":"<urn:uuid:18c751ba-daaa-4ab5-89af-b17bf1d6a612>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.58/warc/CC-MAIN-20241112045844-20241112075844-00705.warc.gz"}
|
GotoBLAS2: A faster BLAS library
This post is about a little trick I have for speeding up my quantum chemistry calculations. For some reason this software is unknown to most people, and thus I decided do a short advertisement!
The bottleneck in all forms of computational quantum chemistry is the CPU time spent to converge a calculation to a given accuracy. The most expensive part is in most cases the evaluation of
two-electron integrals, but during many calculations, a lot of time is also spent doing linear algebra operations. The computational routines for doing linear algebra has been optimized greatly over
the years and today there are quite a few standard linear algebra libraries for doing this. One standard API of linear algebra routines is called
, short for "Basic Linear Algebra Subprograms". Many equations involved in quantum chemistry are formulated into
matrix notation
which is easily interpreted and turned into fast, parallelized code.
When people want to use a fast BLAS library, often well-known libraries such as the ALTAS or Intel MKL BLAS libraries are used. However, there is a slightly faster BLAS library out there, named
GotoBLAS2. This library was started by a Japanese guy named Kazushige Goto (the Japanese pronunciation is something lik "go-toe"), who, like Albert Einstein, worked in a patent office while he began
developing his own BLAS library. After a while he was 'discovered' by a university in Texas and soon Goto was deported from Japan. Now he is said to be very rich and working for some large
somewhere in the US.
Enough with the history! Tobias Wittwer has made a detailed comparison of common, fast BLAS libraries in this pdf:
You can download the latest GotoBLAS2
. Not only is GotoBLAS2 faster than Intel's MKL library, GotoBLAS2 is also completely free of charge and open source under the
BSD license
GotoBLAS2 will be used in future posts on compiling and installing software, for which I am always using GotoBLAS2. So be sure to use this and make the most of your CPU resources!
2 comments:
1. Interesting! Can GotoBLAS2 be used with GAMESS? And is it parallelized?
"computational time, which is mainly spent doing standard linear algebra"
That's not quite accurate. The most time-consuming part are usually the 2-electron integrals. But it is true that if the 2-electron integrals have been effectively parallelized and/or
approxmiated, the linear algebra can become the bottleneck for large systems.
2. Thank you for correcting me on the two-electron integrals!
In principle GAMESS should be able to use BLAS routines from GotoBLAS2, since all BLAS libraries contain the same standard set of functions, only coded and optimized in different ways.
I just tried and recompile the current GAMESS on my desktop at home, and it seems like you can only specify the BLAS libraries from MKL, ATLAS and ACML (or a crude, generic set of functions)
because you have to go through the ./config script before compiling.
I will ask Casper for a definite answer and let you know! I'm fairly certain that it is possible to circumvent the standard installation procedure and introduce a non-standard BLAS library
somewhere else. Casper may have done this when we tested the AMD Magny Cours machines.
GotoBLAS2 is fully parallelized, so even if your main program is not parallelized, you can call the linear algebra routines and run those in parallel without your main program ever knowing it.
The number of CPUS can be set either fixed when you compile GotoBLAS2 or later on via an environment variable.
|
{"url":"https://combichem.blogspot.com/2010/11/gotoblas2-faster-blas-library.html","timestamp":"2024-11-03T11:55:02Z","content_type":"application/xhtml+xml","content_length":"65420","record_id":"<urn:uuid:215513b4-5964-4d18-aa11-402212b1286d>","cc-path":"CC-MAIN-2024-46/segments/1730477027776.9/warc/CC-MAIN-20241103114942-20241103144942-00261.warc.gz"}
|
I recently received an email from Tom Lake requesting the addition of matrix functions to BBC BASIC. I must confess that I'd never seen a BASIC dialect that offered native matrix functions, but some
digging located the following:
I'm wondering if people had any requirements or suggestions for the
statement. There are some limitations I have to work with; for example, any matrix operation
be prefixed with
, and matrices are not efficiently resizeable as BBC BASIC only stores the
size, not the maximum size (matrices could be shrunk, but not expanded, without leaking memory).
To whet your appetites, here's the start; I've written some parsing code for the simplest
statements -
(set all elements to zero),
(set all elements to one) and
(set all elements to
). Though I've just realised that I'm missing the
between the result array and the expression to the right, whoops.
Re: Matrices
Wow, matrix manipulation? I am very impressed!
Re: Matrices
As well as mathematical uses I'm sure that the ability to quickly copy one array's contents to another or reset all the elements to a particular value would be quite useful.
I'd like to try and be fairly practical with this, but am not sure of the most sensible way to do it. Would multiplying a two dimensional array ("matrix") with a one-dimensional array ("vector")
transform the vector by the matrix? Would a()^b() calculate the cross product and a()*b() the dot product if both were single-dimension arrays, or would you use a()*b() and a().b() respectively (or
maybe even prefix those statements with VEC instead of MAT)?
I'm concerned about doing this sensibly, and not ending up releasing something with a silly/non-standard syntax that I need to change at a later date.
Re: Matrices
benryves wrote:I'm wondering if people had any requirements or suggestions for the MAT statement. There are some limitations I have to work with; for example, any matrix operation must be
prefixed with MAT, and matrices are not efficiently resizeable as BBC BASIC only stores the current size, not the maximum size (matrices could be shrunk, but not expanded, without leaking
Why not just go with one of the implementations you list? Those are very good (especially the Wang) One thing, though. the operation of the determinant function, DET, is slightly different in
different compilers. In some, DET takes no parameters. It only retuns the determinant of the most recently inverted matrix. In order to find the determinant of array A, you'd need to do this
Code: Select all
MAT T = INV(A)
In others, DET takes the name of an array and there's no need to invert the matrix first. You can do this:
Code: Select all
PRINT DET(A)
Some compilers allow both forms. The ANSI/ISO standard requires DET to have an argument. I like having a choice but if you only have room for one, I'd go with the ISO standard, DET(A).
Tom Lake
Re: Matrices
Thanks for the suggestion. Unfortunately, I would not be able to add a DET function in that form - I can only add statements to BBC BASIC, not functions. To that end, the former method would work, as
I could automatically create/modify a DET variable after inverting a matrix - but this would slow things down unnecessarily if you wished to invert a matrix but didn't want to know the determinant.
The following could work: however, you would not be able to use DET() as a regular function, eg in a PRINT statement.
Re: Matrices
benryves wrote:The following could work: however, you would not be able to use DET() as a regular function, eg in a PRINT statement.
But DET is a single number, not a matrix. It's used to determine how "good" an inversion is. The larger DET is, the better the inversion; the less loss of precision. If DET is zero, then either the
matrix is singular (has no inverse) or its inverse is useless due to lost precision.isn't conceptually right at all since "MAT d" should always refer to a matrix named d, not a scalar. Since you have
to calculate the determinant during the inversion of a matrix anyway, there shouldn't be too much overhead to assign that value to the system variable DET (with no parameter). It's done this way in
Wang BASIC and many others.
Tom Lake
Re: Matrices
Point taken. DET method, then.
What is your view on the different dimensions of arrays? For some operations it wouldn't really matter, I suppose (ZER or CON) and with some it would be nonsensical (assigning a one-dimensional array
to a two-dimensional array), but what would you expect to happen if, say, you multiplied a one-dimensional array with a two-dimensional one?
Re: Matrices
benryves wrote:Point taken. DET method, then.
What is your view on the different dimensions of arrays? For some operations it wouldn't really matter, I suppose (ZER or CON) and with some it would be nonsensical (assigning a one-dimensional
array to a two-dimensional array), but what would you expect to happen if, say, you multiplied a one-dimensional array with a two-dimensional one?
The standards and conventions are very clear on this. Here's a quote from a BASIC manual:
If one array operand of the multiplication operator is one-dimensional and the other is two-dimensional, the product will be one-dimensional. If the first array operand is one-dimensional, it is
treated as a row vector (single row with multiple columns) and must match the first dimension of the second array. If the second array operand is onedimensional, it is treated as a column vector
(many rows in one column) and must match the second dimension of the first array.
In other words an array with m elements may be multiplied by an m by n array, or an L by m array may be multiplied by an array with m elements. In the first case the product will be a one-dimensional
array with n elements, while in the second case the product will be a one-dimensional array with L elements.
Re: Matrices
You can download a preliminary version of the matrix additions by following this link.
Currently implemented:
• MAT b()=a() (assignment/copying)
• MAT a()=ZER (set all elements to zero/empty string)
• MAT a()=CON (set all elements to one)
• MAT a()=CON(expr) (set all elements to result of expr)
• MAT a()=IDN (set to identity matrix)
• MAT c()=a()+b() (add all elements)
• MAT c()=a()-b() (subtract all elements)
• MAT c()=a()*b()
• MAT b()=TRN(b()) (transposition)
• MAT PRINT a() (display a matrix's contents)
To do:
• MAT b()=INV(a()) (inversion)
• DET (determinant, set by INV())
• MAT c=a()*(scalar)
• MAT READ (read array elements from DATA statements)
• MAT INPUT (input array elements from prompt)
I'm not sure how many features I'll be able to fit in (I'm running out of ROM space) but I'll do my best.
I'm sure there will be bugs in the features I've implemented above, so please let me know when you find them!
Edit: Changed release 749 to 750; this includes a quick hack to allow multiplication of single-dimensional matrices, which hasn't been documented but should work.
Re: Matrices
benryves wrote:I'm sure there will be bugs in the features I've implemented above, so please let me know when you find them!
I don't know if you're planning on implementing this or not but
Code: Select all
MAT PRINT A;
should print matrix A in a packed format while
Code: Select all
MAT PRINT A (or A,)
should print them in the usual format (just like a regular PRINT statement)
Also, are the parens really necessary?
Code: Select all
MAT A=B
should be sufficient to the interpreter.
Tom L
Re: Matrices
The parentheses are optional wherever an array is specified, but I've included them in all of the examples for clarity (the documentation is still incomplete).
Here is build 760. I've added a few bug fixes, including an accuracy fix on matrix multiplication (it used to store the intermediate values in the result, which resulted in very poor results when the
result matrix was an integer array) and a crashing bug fix when array names are zero characters long.
I have also added the compacted form of MAT PRINT a; - is there any accepted way to suppress the line feed between PRINTed matrices?
I have also modified quite a lot of the code to try and gain a little more space; I'm down to 490 bytes to squeeze the inversion/determinant code in, which is more than a little tight!
Re: Matrices
benryves wrote:I have also added the compacted form of MAT PRINT a; - is there any accepted way to suppress the line feed between PRINTed matrices?
Not in standard BASIC. Maybe you can eliminate the check for parens when MAT is used. That might free up some space.
When I tried this:
Code: Select all
10 DIM A(2,2)
20 MAT A=IDN
30 MAT PRINT A;
I get
instead of the correct
Tom Lake
Re: Matrices
toml_12953 wrote:Maybe you can eliminate the check for parens when MAT is used. That might free up some space.
It would free some space, but not a considerable amount. I'll see what I can do.
When I tried this:
Code: Select all
10 DIM A(2,2)
20 MAT A=IDN
30 MAT PRINT A;
I get
instead of the correct
Whoops, sorry. Build 761 fixes that (a rather overzealous optimisation).
Re: Matrices
benryves wrote:I have also modified quite a lot of the code to try and gain a little more space; I'm down to 490 bytes to squeeze the inversion/determinant code in, which is more than a little
Maybe you could eliminate Revision 752
"Added check to ensure that result matrix is neither of the operand matrices when multiplying."
This should be legal (and is in standard BASIC).
Code: Select all
MAT X = A * X
Should be allowed since the right side should be calculated then the result assigned to the matrix on the left of the equal sign (Unless you're multiplying in place to save space!)
Tom Lake
Re: Matrices
Yes, matrix operations are currently done in-place.
I'd like to make the implementation as sensible as possible, don't worry, but these things take time (something I don't have a lot of at the moment!)
I don't suppose you have any recommendations or suggestions for an algorithm that could be used to calculate the inversion of a matrix?
Thanks for all of your help and suggestions!
|
{"url":"https://maxcoderz.org/forum/viewtopic.php?t=2783&sid=1608edba38076f29afaa64f6b02e491d","timestamp":"2024-11-02T18:08:05Z","content_type":"text/html","content_length":"82008","record_id":"<urn:uuid:f63381eb-c81d-4f77-a551-b4a45b870f9b>","cc-path":"CC-MAIN-2024-46/segments/1730477027729.26/warc/CC-MAIN-20241102165015-20241102195015-00617.warc.gz"}
|
[eclipse-clp-users] Managing lists of real variable
From: simone pascucci <cxjepa_at_...6...> Date: Tue, 13 Oct 2009 15:59:39 +0200
Hi all,
I'm trying to understand how constraint programming works, but I have to say
that it's quite a difficult subject.
I have a reasonable easy problem to solve, given a list of real variable
{x_1,...,x_n}, a real constant that I'll call "amount" and a set of
constraint on the values that the variables can take, I have to solve
amount = sum_{i} x_i
In all the example I could get from the tutorials, it seems to me that
somehow all the programs are always bounded to a fixed, already known,
number of variables. If someone could give a small code example about
solving this simple problem that would help a lot.
Thank you,
Received on Tue Oct 13 2009 - 13:59:52 CEST
This archive was generated by hypermail 2.3.0 : Wed Sep 25 2024 - 15:13:20 CEST
|
{"url":"http://www.eclipseclp.org/archive/eclipse-clp-users/1098.html","timestamp":"2024-11-14T21:11:39Z","content_type":"application/xhtml+xml","content_length":"7530","record_id":"<urn:uuid:1de45e75-289a-4b04-b803-0405c6e283c8>","cc-path":"CC-MAIN-2024-46/segments/1730477395538.95/warc/CC-MAIN-20241114194152-20241114224152-00046.warc.gz"}
|
Inconsistent intervals for similar labels in graduated classification
Bug report #22038
Inconsistent intervals for similar labels in graduated classification
Status: Open
Priority: Normal
Assignee: -
Category: Symbology
Affected QGIS version: 3.4.6 Regression?: No
Operating System: Easy fix?: No
Pull Request or Patch supplied: No Resolution:
Crashes QGIS or corrupts data: No Copied to github as #: 29852
There is currently an inconsistent meaning behind a similar appearance when we make a graduated classification.
In the
label ... meaning
we see for QGIS
------ -----------
a - b a ≤ x ≤ b
b - c b < x ≤ c
c - d c < x ≤ d
Note that the first class has left inclusion but not the others.
We could wrongly expect from reading only the label :
a - b equals to a ≤ x < b
b - c equals to b ≤ x < c
c - d equals to c ≤ x < d
... or any other scheme actually.
So the same "class interval label" have different interpretations if they are the first or not.
We need a label notation where the endpoints inclusions are explicit, for example :
common French
style style
------- -------
[a, b] [a, b]
(b, c] ]b, c]
(c, d] ]c, d]
... or at least if we keep the basic "a - b" notation then we must use the same interval scheme for all classes.
Note also that some softwares (R for example) use left-open/right-closed intervals by default :
------ ------ ---------
(a, b] ]a, b] a < x ≤ b
(b, c] ]b, c] b < x ≤ c
(c, d] ]c, d] c < x ≤ d
... and some others (like openJUMP) are left-closed/right-open :
------ ------ ---------
[a, b) [a, b[ a ≤ x < b
[b, c) [b, c[ b ≤ x < c
[c, d) [c, d[ c ≤ x < d
So we need :
• to use a consistent scheme for all classes
• to add an option to be able to choose the left-open or right-open scheme
• to add an option to generate the label with the common notation, the French (Bourbaki) notation or any other notation.
See also "Data class groupings" #16983
Also available in: Atom PDF
|
{"url":"https://issues.qgis.org/issues/22038","timestamp":"2024-11-14T14:31:53Z","content_type":"text/html","content_length":"11767","record_id":"<urn:uuid:67ba3a15-573a-4d0f-b0f1-055ba7d71333>","cc-path":"CC-MAIN-2024-46/segments/1730477028657.76/warc/CC-MAIN-20241114130448-20241114160448-00377.warc.gz"}
|
Quantum computer kills internet banking?
“SCIENTISTS FACTOR THE NUMBER 15.”
Hardly a headline to grab the popular imagination. But when it’s done by a quantum computer – and one that’s scalable – it’s time to take notice.
A paper published today in Science describes a five-atom quantum computer that can factor numbers – that is, start with a number and find numbers that, when multiplied, equal that first number. For
instance, 15 factors into three times five.
It’s also a striking illustration of how quantum computers will smash today’s internet encryption – when they arrive, that is.
Computerised factoring is not new – quantum computers have factored numbers before (and those much bigger than 15). The key point here, though, is the new design can be upscaled to much more powerful
versions simply by adding atoms.
Many of the world’s public key security systems, which encrypt online banking transactions and the like, operate on a simple principle: that it’s easy to multiply two large prime numbers to generate
a gigantic number.
But given the gigantic number, it’s next to impossible to work out its factors, even using a computer.
In March 1991 the encryption company RSA set a challenge – they published a list of very large numbers and announced cash awards for whoever could factor them. The prizes went from $1,000 for
factoring a 100-digit number, up to $200,000 for a 617-digit number.
A quarter of a century later, most of those numbers remain uncracked.
But with a large enough quantum computer, factoring huge numbers – even those 600 digits long – would be child’s play.
In classical computing, numbers are represented by either 0s or 1s called “bits”, which the computer manipulates in a series of linear, plodding logic operations trying every possible combination
until it hits the right one.
Without any prior knowledge of the answers, the system returned the correct factors (15 = 5 x 3), with a confidence of more than 99%.
For example, to factor a 232-digit monster (the largest RSA number broken) took two years with hundreds of classical computers running in parallel – and ended up being solved too late to claim the
$50,000 prize.
In contrast, quantum computing relies on atomic-scale units, or “qubits”, that can be 0, 1 or – weirdly – both, in a state known as a superposition. This allows quantum computers to weigh multiple
solutions at once, making some computations, such as factoring, far more efficient than on a classical computer.
The problem has been building these qubits into a large-enough assembly to make meaningful calculations. The more atoms, the more they jostle together and the harder it is to control each one.
And as superposition is a very delicate state, a small bump will cause an atom to flip to 0 or 1 easily.
The new design, devised by physicists at the Massachusetts Institute of Technology and constructed at the University of Innsbruck in Austria, uses five calcium ions (atoms stripped of an electron)
suspended in mid-air by electric and magnetic fields.
The ions are close enough to one another – about a hundredth the width of a human hair – to still interact. The researchers use laser pulses to flip them between 0, 1 and superposition to perform
faster, more efficient logic operations.
Without any prior knowledge of the answers, the system returned the correct factors (15 = 5 x 3), with a confidence of more than 99%. Previous quantum computers achieved the same result with 12 ions.
And this system is “straightforwardly scalable”, according to Isaac Chuang, a physicist at MIT whose team designed the computer.
A truly practical quantum computer would likely require thousands of atoms manipulated by thousands of laser pulses. Meanwhile, other researchers are working on scalable computer systems using more
conventional technology such as silicon.
“It might still cost an enormous amount of money to build – you won’t be building a quantum computer and putting it on your desktop anytime soon – but now it’s much more an engineering effort, and
not a basic physics question,” says Chuang.
Whatever the cost, the abililty to crack internet security would make a large-scale quantum computer, literally, invaluable.
Read our handy primer on quantum mechanics – Quantum physics for the terminally confused
|
{"url":"https://cosmosmagazine.com/science/physics/will-this-quantum-computer-take-down-internet-banking/","timestamp":"2024-11-12T04:09:22Z","content_type":"text/html","content_length":"89500","record_id":"<urn:uuid:ea4e1be2-e9db-4617-b27e-2301f4003543>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.50/warc/CC-MAIN-20241112014152-20241112044152-00429.warc.gz"}
|
Antonio Canale and David B. Dunson (2016). Multiscale Bernstein polynomials for densities. No. 3, 1175-1195.
Statistica Sinica 26 (2016), 1175-1195
Antonio Canale^1,2 and David B. Dunson^3
^1University of Turin, ^2Collegio Carlo Alberto and ^3Duke University
Abstract: Our focus is on constructing a multiscale nonparametric prior for densities. The Bayes density estimation literature is dominated by single scale methods, with the exception of Polya trees,
which favor overly-spiky densities even when the truth is smooth. We propose a multiscale Bernstein polynomial family of priors, which produce smooth realizations that do not rely on hard
partitioning of the support. At each level in an infinitely-deep binary tree, we place a beta dictionary density; within a scale the densities are equivalent to Bernstein polynomials. Using a
stick-breaking characterization, stochastically decreasing weights are allocated to the finer scale dictionary elements. A slice sampler is used for posterior computation, and properties are
described. The method characterizes densities with locally-varying smoothness, and can produce a sequence of coarse to fine density estimates. An extension for Bayesian testing of group differences
is introduced and applied to DNA methylation array data.
Key words and phrases: Density estimation, multiresolution, multiscale clustering, multiscale testing, nonparametric Bayes, Polya tree, stick-breaking, wavelets.
|
{"url":"https://www3.stat.sinica.edu.tw/statistica/J26N3/J26N314/J26N314.html","timestamp":"2024-11-10T18:59:29Z","content_type":"text/html","content_length":"4570","record_id":"<urn:uuid:5574fa24-4ac1-4e90-9739-808b554bcd65>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.61/warc/CC-MAIN-20241110170046-20241110200046-00473.warc.gz"}
|
[GET it solved] What is the accuracy of the testing data, and why is it not
What is the accuracy of the testing data, and why is it not 100%?
Problem 1. Download the dataset datamining.xlsx from LMS. This dataset contains 2,000 cases. This dataset is to be used to predict whether a person in an MIS program will like a data mining course or
not. The fields for each of the 2000 records are as below:
• GMAT: GMAT score of a student
• Bachelor: Field of BS degree (A: Arts, S: Science, E: Engineering)
• Quant, Stats, HBO, Acct: Course rating of the student for each of the courses from 1 (lowest) to 5 (highest)
• E-comm: Flag that is T if student intends to specialize in e-commerce, F otherwise
• Datamine: Course rating of the student for Data Mining
• LikeDM: Flag that is T if course rating for Data Mining is 4 or 5; F otherwise (note that this attribute is derived from “Datamine” attribute, so you should eliminate “Datamine” from exploration
and modeling).
Using RapidMiner, answer the following questions. A sample process is provided as a starter.
a) [20 points] Use the entire data (datamining.xlsx) and explore the relationship between LikeDM and each individual field. What effect does each field seem to have on LikeDM? You can use
scatterplots and histograms to explore the relationships and show only what seems to be important relations.
b) [25 points] Split the data into 65% for training and 35% for testing using Split Validation operator. Click on the operator and change random seed value to “12345”. Create a Decision Tree
(ModelingPredictiveTreeDecision Tree) and make sure to get 100% accuracy on training data. To do this, set criterion to gini index, set the tree depth to high number (e.g. 2000) and uncheck both
“apply pruning” and “apply prepruning,” then answer the following:
1. What is the depth of the tree?
2. How many leaves (decision nodes) does it have?
3. What is the accuracy of the testing data, and why is it not 100%?
c) [15 points] Now change the settings of the decision tree model as follows, then answer the questions:
• Click on the decision tree and choose criterion to “information_gain” and set maximum depth to 8.
• Check “apply prepruning” and set minimal gain to 0.01, minimal leaf size to 2, and minimal size for split to 4 (leave other options as is).
1. What is the accuracy of training and testing? Do you see an improvement in the model? How?
2. Provide two strongest If-Then rules from this decision tree. Please explain why these rules are chosen.
d) [15 points] Try to further improve the performance of the decision tree model by changing the decision tree parameters (you can change tree depth, type of criterion, or minimal size of leaf or
split). What is the performance of the tree you created (both training and testing) and what have you changed in the tree settings? Produce at least two (2) different models.
e) [10 points] Use the models developed above to compare between their performance by filling the table provided. Which model is the best, and why?
Related Questions
. Introgramming & Unix Fall 2018, CRN 44882, Oakland University Homework Assignment 6 - Using Arrays and Functions in C
DescriptionIn this final assignment, the students will demonstrate their ability to apply two ma
. The standard path finding involves finding the (shortest) path from an origin to a destination, typically on a map. This is an
Path finding involves finding a path from A to B. Typically we want the path to have certain properties,such as being the shortest or to avoid going t
. Develop a program to emulate a purchase transaction at a retail store. This program will have two classes, a LineItem class and a Transaction class. The LineItem class will represent an individual
Develop a program to emulate a purchase transaction at a retail store. Thisprogram will have two classes, a LineItem class and a Transaction class. Th
. SeaPort Project series For this set of projects for the course, we wish to simulate some of the aspects of a number of Sea Ports. Here are the classes and their instance variables we wish to
1 Project 1 Introduction - the SeaPort Project series For this set of projects for the course, we wish to simulate some of the aspects of a number of
. Project 2 Introduction - the SeaPort Project series For this set of projects for the course, we wish to simulate some of the aspects of a number of Sea Ports. Here are the classes and their
instance variables we wish to define:
1 Project 2 Introduction - the SeaPort Project series For this set of projects for the course, we wish to simulate some of the aspects of a number of
|
{"url":"https://www.codeavail.com/What-is-the-accuracy-of-the-testing-data-and-why-is-it-not-","timestamp":"2024-11-14T18:51:57Z","content_type":"text/html","content_length":"60350","record_id":"<urn:uuid:eef6a8e7-fbbc-41a8-80ea-521d9d806162>","cc-path":"CC-MAIN-2024-46/segments/1730477393980.94/warc/CC-MAIN-20241114162350-20241114192350-00545.warc.gz"}
|
Quizzing Network
(Based on a similar article by V. M. Sreejith in Science Reporter, July 2004)
You are going to enter a into a network of questions based on the lives of famous mathematicians and their works. Start with question number 1, choose the option that you think to be correct and then
go to the question number given against that option. Continue the game that way and REMEMBER to give yourself 1 point only for EACH QUESTION you visit. Evaluate your performance at the end at of the
1. Both the mathematicians given below are suitable for the comment “greatest mathematical analyst of his time”. Out of them who lived in the 17th century?
A. Pierre Simon de Fermat. Go to 11
B. Jean Baptiste Joseph Fourier. Go to 5
2. That is right! Just try this now:
He twice failed his entrance examination to I, Ecole Polytechnique. He did not know some basic mathematics and he did mathematics entirely in his head, to the annoyance of the examiner. Legend has it
that he became so enraged at the stupidity of the examiner that he threw an eraser at him. Whom are we talking about?
A. Joseph Lagrange. Goto 12
B. Evariste Galois. Goto 15
3. Perfectly correct! Given below are the names of two mathematical geniuses who were famous for their contribution to number theory. Who among them is also known as “the man who knew infinity”?
A. Paul Erdos. Go to 10
B. Srinivasa Ramanujan. Go to 17
4. Excellent answer!! Go for the next one:
“I never got pass mark in math. The funny thing is I seem to latch on to mathematical theories without realizing what is happening. And just imagine – mathematicians now use my prints to illustrate
their books. I guess they are quite unaware of the fact that I am ignorant about the whole thing.” Who said this?
A. Leonardo Da Vinci. Go to 16
B. M.C.Escher. Go to 9
5. Incorrect! The father of Applied Mathematics who is credited for Fourier integrals, Fourier series and Fourier transforms etc, J.B.J. Fourier lived in the period 1768-1830.
Match the following:
p) Harish Chandra 1) $$e^{ipi}+1=0$$
q) Leonard Euler 2) Greatest Indian mathematician after Ramanujan
r) Henri Poincare 3) Probability theory
s) R.A.Fisher 4) French mathematician
A. p2 q1 r3 s4 Go to 10
B. p2 q1 r4 s3 Go to 3
6. I am afraid, you are wrong. You may try a simpler one: The Italian lady mathematician Maria Gactaua Agnesi first studied the curve whose equation is: $$xy^2=4a^2(2a-x)$$. This curve is known as:
A. Witch of Agnesi Goto 2
B. Agnesian angel Goto 12
7. False! So sad! Actually Friedrich Gauss authored another mathematical classic “Disquitiones Arithmeticae”.
Try your luck at this: The world renowned mathematicians Nilkanta, Somayaji, Madhava, Paramananda & Jyesthadeva belonged to which South Indian state?
A. Tamil Nadu Goto 10
B. Kerela Goto 3
8. This is your final destination in the entire puzzle. From the table given at the end of the puzzle you can access yourself by comparing the total number of points you had visited with the talent
9. That was the right answer. Escher is widely known and appreciated as a graphic artist. His graphics have appeared on postage stamps, bank notes, jigsaw puzzles and covers of dozens of scientific
publications. He was very popular among mathematicians. This was your last question. You may now go to 8.
10. Once again you made a mistake. Return to 1 & repeat the questions !!
11. Correct! The founder of modern number theory, inventor of analytical geometry, & discoverer of principle of least time in optics, Fermat lived in the period 1601-1665.
Your next question: Who wrote the classic book on mathematical formulation of quantum mechanics titled “Dee Mathematische Grundalen der Quantum Mechanik”?
A. John Van Neumann Go to 14
B. Freidrich Gauss Go to 7
12. Wrong!!
You can go back to 13 and try again.
13. You have entered the final session of the quiz network. Here is your first question: She was born in Paris on April 1,1776. Although unable to attend a university because of discrimination
against her sex, she educated herself by reading the works of Newton and Euler & the lecture notes of Lagrange. In 1804, she wrote to Gauss about her work in number theory under the pseudonym
Monsieur La Blanc, fearing that Gauss would not take seriously the efforts of a woman. About whom we are talking?
A. Sophie Germain Go to 4
B. Emmy Noether Go to 6
14. Excellent answer! That was your second consecutive correct answer. The Hungarian mathematical legend & original creator of Game Theory, Van Neumann wrote that masterpiece in German.
You may now go to 13.
15. Very Good!! That was your last question in this quiz. You may now go to 8.
16. Wrong!Do not be disheartened. Try this:
He was born on 5th August 1802 in Norway.He proved that unlike the situation for equations of degree 4 or less, there is no finite formula for the solution of general fifth degree equation.
Commutativity reminds us of this mathematical genius. Who is “he” referred to here?
A. Niels Henrik Abel Go to 2
B. J.J Sylvester Go to 12
17. Exactly right! The renowned mathematician Robert Kanigel has published the biographical sketch of Ramanujan with this title.
You may now go to 13.
│Points (Questions visited) │Performance │
│4 │Excellent │
│5-7 │Very good │
│8-10 │Good │
│11-13 │Satisfactory │
│more than 13 │Very poor! Must improve!│
• Debashish Sharma, JRF, Dept of Mathematics, NIT Silchar.
|
{"url":"https://gonitsora.com/quizzing-network/","timestamp":"2024-11-12T20:15:29Z","content_type":"text/html","content_length":"36430","record_id":"<urn:uuid:6d516052-8ac6-4065-99c5-df812a68dd4b>","cc-path":"CC-MAIN-2024-46/segments/1730477028279.73/warc/CC-MAIN-20241112180608-20241112210608-00325.warc.gz"}
|
What our customers say...
Thousands of users are using our software to conquer their algebra homework. Here are some of their experiences:
It was very helpful.
David Figueroa, NY.
Algebrator was much less expensive then traditional Algebra tutors, and allowed me to work at my pace with each problem. If it was not for Algebrator, I fear that I may have failed my Algebra class.
Youre a lifesaver!
Alex Starke, OR
The Algebrator is the perfect algebra tutor. It covers everything you need to know about algebra in an easy and comprehensive manner.
Rolando Contreras, AZ
Search phrases used on 2007-05-09:
Students struggling with all kinds of algebra problems find out that our software is a life-saver. Here are the search phrases that today's searchers used to find our site. Can you find yours among
• lesson plan laws of exponent
• Greatest Common Factor problems
• 6th grade decimals worksheet
• scale factor word problems
• examples of math trivia mathematics
• foil by radicals algebra problem
• TI 83 plus find fourth root
• solve for x calculator free
• "simplifying algebraic expressions"
• sources of poem in algebra
• Download Algebra de Baldor en PDF
• Solving Quadratic Inequalities with a sign graph
• pizzazz math worksheets for algebra 2
• mcdougal littell 7th teacher's book online
• Grade 10 Mathematics: Trigonometry Questions
• writing linear equations
• revise ks2 transfer test
• square root exponent ti 83
• free step by step algebra solver
• algebra square root
• learn factors and multiples year 8+
• simplify square roots calculator
• ti-84 Plus download programs
• how to solve polynomial functions graph
• function domain and range gcse
• aptitude question & answers
• holt algebra II
• "partial fractions" "x^3 + 3x"
• GCSE math paper
• examples of how to use Excel spreadsheet boolean algebra
• basic physics and math equations
• multiplying square root
• year 11 mathamatics work
• Practise CLEP
• C# Calculate SquareRoot Samples
• word problems add subtract multiply divide integers
• properties of exponents: solver
• worksheets on graphing linear equations
• online monomial quiz
• online factoring assignment grade 10 math
• homework sheets for gr 1 online
• fraction problems with explanations
• maths online exercises
• download vocabulary software for CAT exam
• saxon algebra 1/2 test generator
• Venn diagrams+GCSE
• Fraleigh Abstract Algebra filetype: pdf
• precalculus algebra software
• reverse foil method calculator
• indian 10th standard algebra notes
• write variable expression of system equations
• entering log base 2 into ti-89
• Algebra, weighted averages
• how to solve problems in ratio and proportion
• fun decimal word problems
• mcdougal littell 7th teacher's book online free
• lesson plan in negative exponent
• trigonometry free problems online tutor
• free online year-11 physics assignment
• study skills lesson filetype.ppt
• least common multiple calculator
• TI-83 "convert decimal to fraction"
• printable math sheets ninth grade
• worksheets math free pre-algebra adding subtracting 3 numbers
• printable 8th grade work sheets
• sample statistics math problems
• ti-84 imaginary numbers
• Harcourt Mathematics 12: Advanced Functions Chapter 5 answers
• Plot the graph of this polymonial equation 3a^2 - 10a + 8
• homework help programs
• basic algebra online calculator triangle angle
• free Phone tutors to help me with math
• how to find the greatest possible error algebra
• linear equation in two variable
• negative log, ti-83
• Gr:9 Maths problem online
• free math lessons for advanced sixth graders
• formulaes
• TI 83 Logarithmic
• Elementary and Intermediate Algebra 4th edition
• least common multiple + worksheet"
• grade four algebra worksheet
• free math practice for ks3
• 6th grade finite operational systems worksheet
• TI-82 polynomial root finder
• when would we use algebra in real life
• Free Download MAT Solved Sample Papers
• chemistry addison wesley homework help glossary
• equation solver 4th power
• eqaution math worksheet
• adding and subtracting fractions+ks3 maths+powerpoint
• graphs + yr 8 maths + free worksheets
• 6 grade math help
• lesson plans simplifying radicals
|
{"url":"https://softmath.com/algebra-help/what-are-step-to-take-in-lenea.html","timestamp":"2024-11-09T22:35:53Z","content_type":"text/html","content_length":"35183","record_id":"<urn:uuid:8f121e11-6c72-4d4c-a05a-e6ddf24c3599>","cc-path":"CC-MAIN-2024-46/segments/1730477028164.10/warc/CC-MAIN-20241109214337-20241110004337-00814.warc.gz"}
|
A discontinuous additive map
A function \(f\) defined on \(\mathbb R\) into \(\mathbb R\) is said to be additive if and only if for all \(x, y \in \mathbb R\)
\[f(x+y) = f(x) + f(y).\] If \(f\) is supposed to be continuous at zero, \(f\) must have the form \(f(x)=cx\) where \(c=f(1)\). This can be shown using following steps:
• \(f(0) = 0\) as \(f(0) = f(0+0)= f(0)+f(0)\).
• For \(q \in \mathbb N\) \(f(1)=f(q \cdot \frac{1}{q})=q f(\frac{1}{q})\). Hence \(f(\frac{1}{q}) = \frac{f(1)}{q}\). Then for \(p,q \in \mathbb N\), \(f(\frac{p}{q}) = p f(\frac{1}{q})= f(1) \
• As \(f(-x) = -f(x)\) for all \(x \in\mathbb R\), we get that for all rational number \(\frac{p}{q} \in \mathbb Q\), \(f(\frac{p}{q})=f(1)\frac{p}{q}\).
• The equality \(f(x+y) = f(x) + f(y)\) implies that \(f\) is continuous on \(\mathbb R\) if it is continuous at \(0\).
• We can finally conclude to \(f(x)=cx\) for all real \(x \in \mathbb R\) as the rational numbers are dense in \(\mathbb R\).
We’ll use a Hamel basis to construct a discontinuous linear function. The set \(\mathbb R\) can be endowed with a vector space structure over \(\mathbb Q\) using the standard addition and the
multiplication by a rational for the scalar multiplication.
Using the axiom of choice, one can find a (Hamel) basis \(\mathcal B = (b_i)_{i \in I}\) of \(\mathbb R\) over \(\mathbb Q\). That means that every real number \(x\) is a unique linear combination of
elements of \(\mathcal B\): \[
x= q_1 b_{i_1} + \dots + q_n b_{i_n}\] with rational coefficients \(q_1, \dots, q_n\). The function \(f\) is then defined as \[
f(x) = q_1 + \dots + q_n.\] The linearity of \(f\) follows from its definition. \(f\) is not continuous as it only takes rational values which are not all equal. And one knows that the image of \(\
mathbb R\) under a continuous map is an interval.
One thought on “A discontinuous additive map”
You must be logged in to post a comment.
|
{"url":"https://www.mathcounterexamples.net/a-discontinuous-additive-map/","timestamp":"2024-11-08T15:30:04Z","content_type":"text/html","content_length":"59958","record_id":"<urn:uuid:c16843f8-e6b0-40c8-ac19-38bcd476eeec>","cc-path":"CC-MAIN-2024-46/segments/1730477028067.32/warc/CC-MAIN-20241108133114-20241108163114-00874.warc.gz"}
|
Classification margins for cross-validated classification model
M = kfoldMargin(CVMdl) returns classification margins obtained by the cross-validated classification model CVMdl. For every fold, kfoldMargin computes classification margins for validation-fold
observations using a classifier trained on training-fold observations. CVMdl.X and CVMdl.Y contain both sets of observations.
M = kfoldMargin(CVMdl,'IncludeInteractions',includeInteractions) specifies whether to include interaction terms in computations. This syntax applies only to generalized additive models.
Estimate k-fold Margins of Classifier
Find the k-fold margins for an ensemble that classifies the ionosphere data.
Load the ionosphere data set.
Create a template tree stump.
t = templateTree('MaxNumSplits',1);
Train a classification ensemble of decision trees. Specify t as the weak learner.
Mdl = fitcensemble(X,Y,'Method','AdaBoostM1','Learners',t);
Cross-validate the classifier using 10-fold cross-validation.
Compute the k-fold margins. Display summary statistics for the margins.
m = kfoldMargin(cvens);
marginStats = table(min(m),mean(m),max(m),...
marginStats=1×3 table
Min Mean Max
_______ ______ ______
-11.312 7.3236 23.517
Input Arguments
CVMdl — Cross-validated partitioned classifier
ClassificationPartitionedModel object | ClassificationPartitionedEnsemble object | ClassificationPartitionedGAM object
Cross-validated partitioned classifier, specified as a ClassificationPartitionedModel, ClassificationPartitionedEnsemble, or ClassificationPartitionedGAM object. You can create the object in two
• Pass a trained classification model listed in the following table to its crossval object function.
• Train a classification model using a function listed in the following table and specify one of the cross-validation name-value arguments for the function.
includeInteractions — Flag to include interaction terms
true | false
Flag to include interaction terms of the model, specified as true or false. This argument is valid only for a generalized additive model (GAM). That is, you can specify this argument only when CVMdl
is ClassificationPartitionedGAM.
The default value is true if the models in CVMdl (CVMdl.Trained) contain interaction terms. The value must be false if the models do not contain interaction terms.
Data Types: logical
Output Arguments
M — Classification margins
numeric vector
Classification margins, returned as a numeric vector. M is an n-by-1 vector, where each row is the margin of the corresponding observation and n is the number of observations. (n is size(CVMdl.X,1)
when observations are in rows.)
If you use a holdout validation technique to create CVMdl (that is, if CVMdl.KFold is 1), then M has NaN values for training-fold observations.
More About
Classification Margin
The classification margin for binary classification is, for each observation, the difference between the classification score for the true class and the classification score for the false class. The
classification margin for multiclass classification is the difference between the classification score for the true class and the maximal score for the false classes.
If the margins are on the same scale (that is, the score values are based on the same score transformation), then they serve as a classification confidence measure. Among multiple classifiers, those
that yield greater margins are better.
kfoldMargin computes classification margins as described in the corresponding margin object function. For a model-specific description, see the appropriate margin function reference page in the
following table.
Model Type margin Function
Discriminant analysis classifier margin
Ensemble classifier margin
Generalized additive model classifier margin
k-nearest neighbor classifier margin
Naive Bayes classifier margin
Neural network classifier margin
Support vector machine classifier margin
Binary decision tree for multiclass classification margin
Extended Capabilities
GPU Arrays
Accelerate code by running on a graphics processing unit (GPU) using Parallel Computing Toolbox™.
Usage notes and limitations:
• This function fully supports GPU arrays for the following cross-validated model objects:
For more information, see Run MATLAB Functions on a GPU (Parallel Computing Toolbox).
Version History
Introduced in R2011a
R2024b: Specify GPU arrays for neural network models (requires Parallel Computing Toolbox)
kfoldMargin fully supports GPU arrays for ClassificationPartitionedModel models trained using fitcnet.
R2023b: Observations with missing predictor values are used in resubstitution and cross-validation computations
Starting in R2023b, the following classification model object functions use observations with missing predictor values as part of resubstitution ("resub") and cross-validation ("kfold") computations
for classification edges, losses, margins, and predictions.
Model Type Model Objects Object Functions
Discriminant analysis classification model ClassificationDiscriminant resubEdge, resubLoss, resubMargin, resubPredict
ClassificationPartitionedModel kfoldEdge, kfoldLoss, kfoldMargin, kfoldPredict
Ensemble of discriminant analysis learners for classification ClassificationEnsemble resubEdge, resubLoss, resubMargin, resubPredict
ClassificationPartitionedEnsemble kfoldEdge, kfoldLoss, kfoldMargin, kfoldPredict
Gaussian kernel classification model ClassificationPartitionedKernel kfoldEdge, kfoldLoss, kfoldMargin, kfoldPredict
ClassificationPartitionedKernelECOC kfoldEdge, kfoldLoss, kfoldMargin, kfoldPredict
Linear classification model ClassificationPartitionedLinear kfoldEdge, kfoldLoss, kfoldMargin, kfoldPredict
ClassificationPartitionedLinearECOC kfoldEdge, kfoldLoss, kfoldMargin, kfoldPredict
Neural network classification model ClassificationNeuralNetwork resubEdge, resubLoss, resubMargin, resubPredict
ClassificationPartitionedModel kfoldEdge, kfoldLoss, kfoldMargin, kfoldPredict
Support vector machine (SVM) classification model ClassificationSVM resubEdge, resubLoss, resubMargin, resubPredict
ClassificationPartitionedModel kfoldEdge, kfoldLoss, kfoldMargin, kfoldPredict
In previous releases, the software omitted observations with missing predictor values from the resubstitution and cross-validation computations.
|
{"url":"https://se.mathworks.com/help/stats/classreg.learning.partition.classificationpartitionedmodel.kfoldmargin.html","timestamp":"2024-11-12T19:29:51Z","content_type":"text/html","content_length":"111379","record_id":"<urn:uuid:c56081b0-2d47-4115-adc3-171d2016695e>","cc-path":"CC-MAIN-2024-46/segments/1730477028279.73/warc/CC-MAIN-20241112180608-20241112210608-00653.warc.gz"}
|
Simple moving average
In this article, you’ll learn about simple moving average of price and how it’s calculated in Composer.
What is Simple Moving Average?
Simple moving average calculates the arithmetic mean price of an asset over a given time period. It can be viewed as a “less-noisy” price of an asset.
The formula for simple moving average is calculated as follows:
For a small-scale example, let’s say we want to calculate the 5-day simple moving average of an asset that’s currently trading at $10.00 on a Friday. The closing price was $6.00 on Monday, and it
went up a dollar each day, so the closing price for each day looks like this:
• Monday: $6.00
• Tuesday: $7.00
• Wednesday: $8.00
• Thursday: $9.00
• Friday: $10.00
While the current price is $10, the 5-day simple moving average would be $8.00. This helps smooth out the volatility of an average and is commonly used to indicate uptrends or downtrends.
Where do you see it in Composer?
Simple moving average is used in our Editor tool, most commonly in “If/Else” statements. An example is shown below:
In this example, we’re comparing the 10-day simple moving average of SPY to the current price of SPY. In calculation, Composer gathers the closing price of SPY for the last 10 days, sums them, and
divides by 10, and compares it to SPY’s current price.
If the 10-day simple moving average of SPY is higher than or equal to the current price, the blocks of the symphony listed below the “If” statement will execute. Otherwise, the blocks listed below
the “Else” statement will execute.
What is the step-by-step calculation?
To calculate the simple moving average of an asset:
• Select the asset and time period
• For each day in the time period, log the closing price of the asset. Sum these together and divide the sum by the number of days in the time period
• The output of this will be the simple moving average of the asset
|
{"url":"https://help.composer.trade/article/69-simple-moving-average","timestamp":"2024-11-14T15:20:17Z","content_type":"text/html","content_length":"21047","record_id":"<urn:uuid:078fa7eb-cd55-47e5-bd48-8e4ac3d8b1a9>","cc-path":"CC-MAIN-2024-46/segments/1730477028657.76/warc/CC-MAIN-20241114130448-20241114160448-00260.warc.gz"}
|
Similar Polygons are composed of Similar Triangles
In the words of Euclid:
Similar polygons are divided into similar triangles, and into triangles equal in multitude and in the same ratio as the wholes, and the polygon has to the polygon a ratio duplicate of that which
the corresponding side has to corresponding side.
(The Elements: Book $\text{VI}$: Proposition $20$)
Let $ABCDE$ and $FGHKL$ be similar polygons such that $AB$ corresponds to $FG$.
We need to show that $ABCDE$ and $FGHKL$ are divided into similar triangles, and into triangles equal in multitude and in the same ratio as the wholes.
Also that the area of the polygon $ABCDE$ has to the polygon $FGHKL$ a ratio duplicate of $AB : FG$.
Join up $BE, EC, GL, LH$.
Since $ABCDE$ and $FGHKL$ are similar:
$\angle BAE = \angle GFL$
From Book $\text{VI}$ Definition $1$: Similar Rectilineal Figures:
$BA : AE = GF : FL$
Thus from Triangles with One Equal Angle and Two Sides Proportional are Similar, $\triangle ABE$ is similar to $\triangle FGL$.
So $\angle ABE = \angle FGL$.
But $\angle ABC = \angle FGH$ because $ABCDE$ and $FGHKL$ are similar.
$\angle EBC = \angle LGH$
Because $\triangle ABE$ is similar to $\triangle FGL$:
$EB : BA = LG : GF$
Also, because $ABCDE$ and $FGHKL$ are similar:
$AB : BC = FG : GH$
So from Equality of Ratios Ex Aequali:
$EB : BC = LG : GH$
So from Triangles with One Equal Angle and Two Sides Proportional are Similar, $\triangle EBC$ is similar to $\triangle LGH$.
For the same reason, $\triangle ECD$ is similar to $\triangle LHK$.
So $ABCDE$ and $FGHKL$ have been divided into similar triangles, and into triangles equal in multitude.
Now let $AC, FH$ be joined.
Because $ABCDE$ and $FGHKL$ are similar:
$\angle ABC = \angle FGH$
From Triangles with One Equal Angle and Two Sides Proportional are Similar $\triangle ABC$ is similar to $\triangle FGH$.
Therefore $\angle BAC = \angle GFH$ and $\angle BCA = \angle GHF$.
Also, we have that $\angle BAM = \angle GFN$, and $\angle ABM = \angle FGN$.
So from Sum of Angles of Triangle Equals Two Right Angles $\angle AMB = \angle FGN$ and so $\triangle ABM$ is similar to $\triangle FGN$.
Similarly we can show that $\triangle BMC$ is similar to $\triangle GNH$.
Therefore $AM : MB = FN : NG$ and $BM : MC = FN : NH$.
But from Areas of Triangles and Parallelograms Proportional to Base:
$AM : MC = \triangle ABM : \triangle MBC$.
So from Sum of Components of Equal Ratios:
$\triangle ABM : \triangle MBC = \triangle ABE : \triangle CBE$
$\triangle ABM : \triangle MBC = AM : MC$
$\triangle ABE : \triangle CBE = AM : MC$
For the same reason:
$FN : NH = \triangle FGL : \triangle GLH$
As $AM : MC = FN : NH$, it follows that:
$\triangle ABE : \triangle BEC = \triangle FGL : \triangle GLH$
$\triangle ABE : \triangle FGL = \triangle BEC : \triangle GLH$
We now join $BD$ and $GK$, and by a similar construction show that:
$\triangle BEC : \triangle LGH = \triangle ECD : \triangle LHK$
From Sum of Components of Equal Ratios:
$ABE : FGL = ABCDE : FGHKL$
But from Ratio of Areas of Similar Triangles $\triangle ABE$ has to $\triangle FGL$ a ratio duplicate of $AB : FG$.
Therefore the area of the polygon $ABCDE$ has to the polygon $FGHKL$ a ratio duplicate of $AB : FG$.
In the words of Euclid:
Similarly also it can be proved in the case of quadrilaterals that they are in the duplicate ratio of the corresponding sides. And it was also proved in the case of triangles; therefore also,
generally, similar rectilineal figures are to one another in the duplicate ratio of the corresponding sides.
(The Elements: Book $\text{VI}$: Proposition $20$ : Porism)
Historical Note
This proof is Proposition $20$ of Book $\text{VI}$ of Euclid's The Elements.
|
{"url":"https://proofwiki.org/wiki/Similar_Polygons_are_composed_of_Similar_Triangles","timestamp":"2024-11-02T18:05:02Z","content_type":"text/html","content_length":"52692","record_id":"<urn:uuid:73bd316c-beaa-4e77-bcc7-bf22385a0da9>","cc-path":"CC-MAIN-2024-46/segments/1730477027729.26/warc/CC-MAIN-20241102165015-20241102195015-00633.warc.gz"}
|
Cohomology in six lines - Quantum Calculus
Cohomology in six lines
Here is the code to compute a basis of the cohomology groups of an arbitrary simplicial complex. It takes 6 lines in mathematica without any outside libraries.
The input is a simplicial complex, the out put is the basis for $H^0,H^1,H^2 etc$. The length of the code compares in complexity with computations in basic planimetric computations in a triangle (
Example [mathematica notebook] for Math E320). We just compute the Dirac operator D, then split up the blocks H[k] of D^2 and compute their kernel. These vector spaces are now equivalent to the
cohomology groups by Hodge.The genious move of Hodge is that rather than talking about equivalence classes of cocycles (which requires some mathematical training to appreciate), one can look at the
kernel of concrete matrices (which we do after three weeks in an intro course of linear algebra). In the following self contained code, the first 4 lines generate a random simplicial complex. Then,
in the next 6 lines, the Dirac and Hodge operator is computed. Finally, the basis of the null spaces of the Laplacians are spit out). The cohomology in the discrete has again and again been
reinvented, but it is definitely due to Betti or Poincare, the key idea being the notion of the incidence matrix d, which implements “div, grad, curl etc”.
The earliest reference for discrete Hodge I could find is the survey lecture “The {Euler} characteristic – a few highlights in its long history” by Benno Eckmann. As a graduate student, I have seen
one one of these survey lectures, and it had been the one about Euler characteristic. The talk was brilliant and the lecture hall at the nearby university had been packed. I never took a course from
Eckmann, he got retired, but Eckmann was still seen a lot at the department when I was a student there. He had been the person who told me that I won the fellowship to spend a year in Israel
(1988-1989). The following code shows also that the topic of cohomology is something which could be introduced early on in a linear algebra course as it is just the process of computing the kernel of
a specific matrix. We just had covered that in our linear algebra course Math 21b.
G=R[10,16];n=Length[G]; Dim=Map[Length,G]-1;f=Delete[BinCounts[Dim],1];
Orient[a_,b_]:=Module[{z,c,k=Length[a],l=Length[b]}, If[SubsetQ[a,b] &&
dext=Table[0,{n},{n}]; dext=Table[Orient[G[[i]],G[[j]]],{i,n},{j,n}];
Dirac=dext+Transpose[dext]; H=Dirac.Dirac; f=Prepend[f,0]; m=Length[f]-1;
cohomology=Map[NullSpace,U]; betti=Map[Length,cohomology]
You can see why there is a lot of theory to compute the cohomology more effectively. A computer does not mind to find the kernel of large matrices, but when dealing with simplicial complexes with
thousands of elements, then the computer has to work hard too.
By the way, various Dirac operators have been considered in the discrete. It appears that discrete Dirac operator as a matrix in the discrete had been overlooked for long. The Dirac operator in the
continuum is a silly beast as one has to use a Clifford algebra in order to factor the Laplacian. In the discrete, such gymnastics is not unnecessary. But it is nice. McKean-Singer for example is
quite simple in the discrete, when following the approach of the Cycon-Froese-Kirsch and Simon book (the later book had been one of the key books for me in graduate school).
|
{"url":"https://www.quantumcalculus.org/cohomology-six-lines/","timestamp":"2024-11-02T07:46:56Z","content_type":"text/html","content_length":"58342","record_id":"<urn:uuid:87cecc03-c1a6-4b70-ab51-069d5326cf7c>","cc-path":"CC-MAIN-2024-46/segments/1730477027709.8/warc/CC-MAIN-20241102071948-20241102101948-00259.warc.gz"}
|
MATLAB A Practical Introduction to Programming and Problem Solving 5th Edition Attaway SOLUTION MANUAL
MATLAB A Practical Introduction to Programming and Problem Solving 5th Edition Attaway SOLUTION MANUAL
Solution Manual for MATLAB A Practical Introduction to Programming and Problem Solving, 5th Edition, Stormy Attaway, ISBN: 9780128154793, ISBN: 9780128163450
Table of Contents
Part 1: Introduction to Programming Using MATLAB
1. Introduction to MATLAB
2. Vectors and Matrices
3. Introduction to MATLAB Programming
4. Selection Statements
5. Loop Statements and Vectorizing Code
6. MATLAB Programs
7. String Manipulation
8. Data Structures
Part 2: Advanced Topics for Problem Solving with MATLAB
9. Data Transfer
10. Advanced Functions
11. Introduction to Object-Oriented Programming and Graphics
12. Advanced Plotting Techniques
13. Sights and Sounds
14. Advanced Mathematics
|
{"url":"https://nursingtestbankfor.com/product/matlab-a-practical-introduction-to-programming-and-problem-solving-5th-edition-attaway-solution-manual/","timestamp":"2024-11-03T12:47:21Z","content_type":"text/html","content_length":"71997","record_id":"<urn:uuid:a590566a-5c5f-4113-9fb7-f9be090a3db7>","cc-path":"CC-MAIN-2024-46/segments/1730477027776.9/warc/CC-MAIN-20241103114942-20241103144942-00075.warc.gz"}
|
Course 2019-2020 a.y. - Universita' Bocconi
30063 - MATEMATICA - MODULO 2 (APPLICATA) / MATHEMATICS - MODULE 2 (APPLIED)
Department of Decision Sciences
For the instruction language of the course see class group/s below
Go to class group/s:
Class group/s taught in English
Suggested background knowledge
A refresher of differential calculus is suggested.
Mission & Content Summary
An increasing number of economic activities entails financial and probabilistic features that cannot be any more neglected. Several car manufacturers directly supply leases. The leasing cost is
summarized in a internal interest rate which represent a sizeable source of the company revenues. Nowadays almost all investment opportunities are accompanied by information on the probability
distribution of their yields to maturity. A recent UE legislation states that some accounting items should be determined on the basis of financial and probabilistic principles too. Knowing what a
probability and a financial law are, is by now an essential component of the background of every student in Economics. The course objective is to provide students with the basic notions of
Probability and Financial Calculus that are required in many Economic, Financial and Management fields. The course consists of three parts: (i) integral calculus – instrumental in facing the second
part; (ii) probability calculus – basic notions and their proper use; (iii) financial calculus – basic notions and their application.
• Integral calculus: antiderivative; indefinite integral; integration methods; definite integral; integral function; generalized integrals and convergence criteria.
• Probability Calculus: classical, empirical and subjective approaches. Axiomatic approach: sample space, events algebra, probability measure. Conditional probability.
• Random numbers and vectors: distribution function, probability and probability density functions. Expected value and variance of a random number. Joint and marginal probability function of a
random vector; stochastic independence and linear correlation; covariance; expected value and variance of a linear combination of random numbers.
• Financial calculus: present and final value: financial laws of one and two variables. Decomposability. Annuities and loan amortization. Consumer credit.
• Fixed income bonds. Interest Rate Term Structure. Duration: financial immunization and volatility of the bond price.
• Financial choices: DCF, NPV and IRR. Generalizations: GNPV, APV and GAPV. Financial leverage. Decomposition of global indices.
Intended Learning Outcomes (ILO)
At the end of the course student will be able to...
• Recognize the proper meaning of standard indices of cost/profitability for a financial operation such as NPV, IRR, etc..
• Identify the proper meaning probabilistic statement and terms concerning random quantities such as uncorrelated random yields, default risk and so on.
• Reproduce the correct procedures for computing integrals, probabilities and financial quantities.
At the end of the course student will be able to...
• Apply the learned calculus methods to compute and/or asses the correctness of quantities which are relevant both in theory and in practice such as: the no arbitrage price of a bullet bond, the
internal effective rate of a loan, the expected returns rate of a portfolio, etc..
• Evaluate the profitability of a financial operation by choosing the proper method/model to adopt.
• Compute a probability measure that is coherent with the available information on the stochastic event/number.
Teaching methods
• Face-to-face lectures
• Exercises (exercises, database, software etc.)
Teaching and learning activities for this course are divided into (1) face-to-face-lectures, (2) in class exercises (3) self-assessment on line materials.
1. During the lectures convenient examples and applications allow students to identify the quantitative patterns and their main logical-mathematical properties.
2. The in class exercises allow students to apply the analytical tools illustrated during the course.
3. Besides the exercises proposed in class, further exercises, such as "mock exams" and "past written exam" are uploaded on-line. The on-line exercises allow students to individually practice and
self-assess their own skills.
Assessment methods
Continuous assessment Partial exams General exam
• Written individual exam (traditional/online) x x
The exam modality is written: the final grade depends exclusively on the student performance in the written exam. The written contains both closed-ended and open-ended questions. Their structure is
designed in order to assess:
• The ability to identify the proper tool to be used in the described framework.
• The ability to correctly apply the chosen tool to compute and/or choose the required result.
• The ability to describe the notions and the methods used.
• The ability to justify in a proper manner the achieved conclusions.
Teaching materials
• L. PECCATI, S. SALSA, A. SQUELLATI, Integral Calculus, Extract from Mathematics for Economics and Business, Milano, EGEA, 2008 (Chapter 7).
• E. CASTAGNOLI, M. CIGOLA, L. PECCATI, Probability. A Brief Introduction, Milano, EGEA, 2009, second edition.
• E. CASTAGNOLI, M. CIGOLA, L. PECCATI, Financial Calculus with Applications, Milano, EGEA, 2013.
Last change 27/05/2019 08:55
|
{"url":"https://didattica.unibocconi.eu/ts/tsn_anteprima.php?cod_ins=30063&anno=2020&ric_cdl=TR07&IdPag=6203","timestamp":"2024-11-09T13:04:05Z","content_type":"text/html","content_length":"172679","record_id":"<urn:uuid:f7cb8d0e-7fce-4e5a-afac-e41a4896967c>","cc-path":"CC-MAIN-2024-46/segments/1730477028118.93/warc/CC-MAIN-20241109120425-20241109150425-00733.warc.gz"}
|
Lesson plan: KS4 science – rates of reaction in chemistry | Maths and Science | Teach Secondary
From colour-changing veg to exploding cornflour, Dr Joanna Rhodes has some original ideas for practical activities around rates of reaction…
Why teach this?
Rate of reaction provides a link between the particle model students study in physics at the start of KS4 and how a chemical reaction takes place. Students enjoy practical chemistry and rate
practicals extend students’ dexterity in manipulating laboratory equipment such as gas syringes. They are also adaptable for the less well-stocked department as upturned measuring cylinders are
equally as effective and cheap to provide in class sets. Data generated in rate experiments is typically reliable enough to analyse mathematically and cross-curricular links to GCSE mathematics, in
particular the gradient at different points on a curve, lend themselves well to team teaching between faculties. Living in the “Rhubarb Triangle” I succumb to any opportunity to get this leafy
vegetable (yes, it is considered a vegetable not a fruit) into my lessons. The rate of reaction experiment in this lesson, using rhubarb, is one of my favourites. Rate of reaction is a key concept at
KS4 and requires secure knowledge for students who progress onto A-level. It is also a topic that can be taught very practically and adapted for a range of abilities and is particularly suited to
extending your gifted and talented students both chemically and mathematically. Students begin by investigating the factors that can affect the rate of a chemical reaction using their own bodies and
use their discoveries to suggest ways of speeding up some basic practical reactions. Links with the chemical industry could be discussed in the context of controlling reactions that may be
explosively fast as well as speeding up those reactions that would cost too much because they are too slow. The topic lends itself well to demonstrations of impressive catalysis in the case of the
Genie in a Bottle as well as student led discovery learning included in the sequence of practicals in the main part of the lesson. Most rate practicals can easily be adapted to generate data that can
be plotted as a graph extending student mathematical understanding to include the changing gradient of a curve.
Blindfold molecules
In this activity students imagine they are molecules in a beaker. Explain to students that in order to react they must ‘collide’ with another molecule. If possible clear the furniture in the room to
the sides or use an outdoor space or gym to minimize risk of injury. Blindfold a small group of students and ask them to walk around, keeping a tally on how many times they touch or bump another
student in one minute. In the first variation add more students (double the number e.g. from five to ten) and get them to walk around keeping a tally for another minute. Students should record an
increase in the number of ‘collisions’ and so make the link between number of molecules in a given space (i.e. concentration) and the number of reactions (proportional to rate).
By using this model students can investigate:
• i) The effect of temperature - by asking students to move a bit faster i.e. with greater kinetic energy
• ii) The effect of a catalyst - by introducing a student who is not blindfolded and who can guide two other ‘molecules’ together
• iii) The effect of surface area by comparing the number of collisions for two groups of four students holding hands with all eight students moving independently.
After this engaging starter show the groups some sugar cubes and water and ask them to suggest different methods of increasing the rate of sugar dissolving using the principals they have learned. If
you wanted to split the learning over two lessons you could ask students to complete the sugar practical using one method of their choice in groups. Provide beakers, measuring cylinders,
thermometers, Bunsen burners, beakers, tripods and gauzes and pestles and mortars and watch as your students increase temperature, concentration and surface area to help their sugar cubes to
Main Activities
1 – Rapid rhubarb
In this experiment, rhubarb sticks, which contain oxalic acid, are used to reduce and consequently decolourise potassium manganate(VII) solution. The experiment can be used to show how the rate of
reaction is affected by surface area or concentration and is available from the Nuffield Foundation [Additional Resource 1], which contains health and safety guidance especially cautioning against
the use of rhubarb leaves, which contain too much oxalic acid and are harmful. To investigate the effect of surface area cut three 5cm lengths of rhubarb. Leave one complete and divide the others
into two and four pieces respectively. Place the pieces into a beaker containing 50cm3 of acidified potassium manganate (VII) and start the timer. Once the purple colour disappears stop the timer.
This can be repeated for each set of rhubarb pieces and more able students may be able to identify the number of pieces as the independent variable and the time taken as the dependent variable in
order to plot a graph of the relationship. To investigate the effect of concentration make an extract of rhubarb by boiling in a beaker until the rhubarb falls to pieces. Allow it to cool and strain
and filter the mixture keeping the solution you have extracted. Then conduct a similar reaction to the first experiment, initially adding one drop of the extract to 50cm3 of the potassium manganate
(VII) solution and timing how long it takes to decolorise. Repeat for 2, 3, 4 and 5 drops plotting a graph of the results. The concentration of the potassium manganate (VII) solution is not critical
for these experiments; it can be made by dissolving a few crystals in 1 M sulfuric acid, giving a light purple colour. By carrying out these experiments students should be able to observe that as the
surface area or concentration of the rhubarb increases, so does the rate of the reaction. Higher ability students may observe (or be prompted) that putting in more drops of the rhubarb extract has
increased the total volume. You may then like to discuss the implications of this with the students. If the drop volume is small enough compared to the total volume it should not have a significant
effect on the relationship observed.
2- Genie in a bottle
In this experiment the decomposition of hydrogen peroxide into water and oxygen is catalysed by the addition of manganese (IV) oxide leading to a rapid release of oxygen and steam, which appears
dramatically like a genie out of a bottle if a conical flask is used [AR2]. The demonstration with 100vol hydrogen peroxide is safest carried out by a teacher, however students can investigate the
catalysis with lower concentrations of peroxide using a gas syringe or upturned measuring cylinder to collect the gas evolved as the mass of catalyst is changed as described on a superb worksheet
produced by Leicester Grammar School [AR3]. Other substances will also catalyse the decomposition of peroxide; an interesting alternative is to investigate the best catalyst by measuring how much gas
is evolved for each in one minute. Suggestions, some of which develop links with biology, include liver or blood [AR5], iron (III) oxide and potassium iodide. For other examples consult the helpful
practical procedure from the Royal Society of Chemistry [AR4].
In and out of control
Perform a demonstration of a custard/cornflower explosion [AR6] to the class to illustrate the way that surface area can be large enough to trigger and explosive reaction. As a contrast, model the
formation of lab-grown stalagmites [AR7] using a saturated solution of sodium ethanoate falling drop by drop from a burette onto a white tile (it helps to set this up about 20 minutes before the
lesson so a small ‘stalagmite’ has already begun to form). Ask students to suggest why it might be important to control rates of reaction in the chemical industry. Now play the catchy rate of
reaction song written by Mark Rosengarten [AR8] and ask students to make a list of five factors that an engineer could use to speed up a reaction and five factors that an engineer could use to slow
down a chemical reaction. Challenge them to come up with their own song, poem or mnemonic to remember these factors!
Graphs and Gradients
A rate of reaction practical lends itself well to practising cross-curricular mathematical skills. Introduce students to the concept that rate is amount of product formed divided by time and so the
gradient of a concentration against time graph, or mass lost against time (if the reaction produces a gas) is equal to the rate. Students can then be asked to describe how the rate changes as a
reaction proceeds by looking at the change in gradient (steep initial gradient, leveling out and eventually stopping completely). Help students to make the connection between this and the change in
concentration of reactants as the reaction proceeds. A useful activity to check their understanding could be to interpret a set of concentration/time curves to compare the rates of different chemical
reactions. For your highest ability students you could look for the description in prose and to support other student to access the same material you could scaffold the activity with a multiple
choice exercise asking them to select the reaction with the highest initial rate from three examples or the reaction which takes the longest to stop altogether. A key piece of knowledge that is often
tested at KS4 is whether students recognize that speeding the reaction up with a catalyst or higher temperature does not increase the amount of product formed just the speed the reaction reaches
completion. Superb examples are available on the BBC Bitesize website [AR9].
Collision theory
Underpinning the factors that affect rate of reaction is collision theory. In order to react particles must collide with each other with sufficient energy and in the correct orientation. This is
similar to the model used for the starter activity Blindfold Molecules. Ask students to explain, using collision theory, why each of the factors they have listed during the lesson affect rate of
reaction. It is helpful to provide an example such as “If I increase the concentration, the number of particles increases so the frequency of successful collisions increases and the rate increases”.
From here students can modify their answers for each factor including concentration, temperature, pressure (gases) and surface area. Catalysts speed chemical reactions up in a different way by
providing an alternative route for the reaction to take place with a lower activation energy. By asking students to use this information to sketch an energy profile diagram for a reaction with and
without a catalyst you can revise or introduce energy changes in reactions.
Additional Resources
You may also be interested in...
|
{"url":"https://www.teachsecondary.com/maths-and-science/view/lesson-plan-ks4-science-rates-of-reaction-in-chemistry","timestamp":"2024-11-03T10:25:13Z","content_type":"text/html","content_length":"80732","record_id":"<urn:uuid:ae304798-ded7-4803-b28c-73e7eeb941f3>","cc-path":"CC-MAIN-2024-46/segments/1730477027774.6/warc/CC-MAIN-20241103083929-20241103113929-00842.warc.gz"}
|
01 Data, Statistics, and Statistical Questions
Statistical questions
In mathematics, most of the questions have definite answers. But in real life, even a simple question such as “How much time does it take you to go to school by bus” may not have a definite answer.
Sometimes it takes 7 minutes and sometimes it takes 9 minutes.Even a simple question like this produces answers that have variability, i.e. the answer varies from day to day (varies every time).
If there was no variability and every day it took exactly 9 minutes for a bus ride, then we would not need any additional investigation on this topic. But, because data varies, we need a separate
branch of mathematics called statistics to answer questions about the data. As we go further in, we’ll learn more about this.
In statistics, we start with a question, since that is what results in getting answers or ‘data’. But the question has to be phrased in such a way that gives us a desired answer, basically that gives
us data we can work with. Such questions are called statistical questions. Confusing? Don’t worry, we’ll slowly go into examples and details to make it clearer about what “desired answer” means.
Let’s start with two questions and decide which ones are ‘statistical’.
A teacher asks the two questions to their class:
Asking every student in class: What grade do you belong to? Asking sixth grade students: What grade do you belong to?
What do you think are possible answers to the two questions?
In the first case, we could get different answers, ranging from 1 to 12. In the second case, we obviously only get 6 as the answer since we are only asking 6th graders.
Do keep in mind who the question is being asked to. That greatly affects the answer you get.
How are the two answers we get different? In the second case, we only get one answer, there is no change in them. In the first case however, there is a wide range of answers we get, meaning there is
some sort of variability in the answers. This type of question is a statistical question, which is what we use. The one where we get the same answer is not that helpful to us since we cannot do
anything more with the single answer we get.
So, a statistical question is the one where you expect variability in answers like above. Answers to such questions give us the data we require so we can further look into it.
Here are some other examples of statistical questions: How many hours do you sleep every day? How many minutes do students in your class spend on homework? What is the favorite food of your class? In
a presidential election, do potential voters support Joe Biden? How do the annual salaries for men and women in similar occupations compare?
And here are some examples of questions that are not statistical. Where in town does our math teacher live? How many minutes of recess do sixth-grade students have each day? How much water can a 1 L
bottle hold at the most?
These questions are not statistical because the answers to these questions do not vary/change. The math teachers live in a particular location and each day the recess is the same, let’s say 20
minutes. The 1 L bottle will always hold 1 L of water at max.
Variables - numerical and categorical
The data we collect from statistical questions consists of observation or measurements on a variable. Similar to algebra, in statistics, when we say variable, it is a characteristic that may be
different from one individual to another or from one instance to another.
For example, in the statistical question “How many minutes do students in your class spend on homework?”, the number of minutes students spend on homework is called a variable because its measurement
varies from one individual to another. In the case of the statistical question “How many hours do you sleep every day?”, the number of hours is a variable because its measurement will change from one
day to another.
Data consist of observations or measurements on a variable. In statistics, a variable is a characteristic that may be different from one individual to another or from one instance to another.
For example, in the statistical question “How many minutes do students in your class spend on homework?”, the number of minutes students spend on homework is called a variable because its measurement
varies from one individual to another. In the case of the statistical question “How many hours do you sleep every day?”, the number of hours is a variable because its measurement will change from one
day to another.
What if the question was “What is the favorite food of your class?”
The answers would be any type of food, like Pizza, Burger, etc.
Can you see a difference between the two different cases we just mentioned?
The answer to both of these questions is “How many minutes do students in your class spend on homework?” and “How many hours do you sleep every day?” are in terms of numbers. The number of minutes
spent on homework is a quantity such as 20 mins, 60 mins, and so on. Similarly, the number of hours you sleep is also a quantity. How do you know they are numerical quantities? Well, if you add any
two of these quantities, you get a third quantity. If you sleep 8 hours today and 7 hours tomorrow, you would sleep a total of 15 hours in two days. So when we have numerical quantities as our
variable, adding them makes sense (which means you can also apply other operations on them). Such variables are called numerical variables.
However, we see that the answer to the other question “What is the favorite food of your class?” is not numerical (since it could be Pizza, Burger, Sandwich or any other food). We cannot possibly add
Pizza and Burger to get a meaningful answer. Similarly, the answers to the question “In a presidential election, do potential voters support Joe Biden?“ are Yes, No, or Maybe. We cannot add them
either. Would Yes and No together mean maybe? Probably not. Such variables are called categorical variables because rather than quantities, they have specific categories such as ‘Pizza’, ‘Burger’,
‘Yes’, ‘No’ and so on as an answer.
((Sometimes categorical variables have a certain order we need to follow. Let’s look at the question: “How would you rate a movie based on the scale: “High”, “Medium”, “Poor”?” The answer to this is
the three categories provided, and we know that one has more value than the other (A “High” is better than “Poor”). Such categorical variables are called ordinal variables, since they have a specific
‘order’. When categorical variables do not have any order, like the one with the favorite food, they are call nominal variables.))
Now that you have started to understand the difference between numerical and categorical variables, let’s look at a tricky question.
You are running a survey and you ask each of the people what their home zip code is. You get answers like 6547, 2356, 9871, 8714, etc.
Is zipcode a numerical or categorical variable?
[Hint: Check whether it makes sense to add two measurements.]
|
{"url":"https://edukimath.com/grade-6/statistics-and-probability/statistical_questions/","timestamp":"2024-11-03T05:49:01Z","content_type":"text/html","content_length":"44743","record_id":"<urn:uuid:6889b62f-b79f-44f1-9ba9-1f13f3a44060>","cc-path":"CC-MAIN-2024-46/segments/1730477027772.24/warc/CC-MAIN-20241103053019-20241103083019-00397.warc.gz"}
|
Discrete and Continuous DataDiscrete and Continuous DataDiscrete and Continuous Data
Quantitative data can be divided into two types, discrete and continuous data.In this video you will learn the differences between discrete and continuous data.
Discrete and Continuous
Discrete data is data that has distinguishable spaces between values.
Often it is data that is counted.
On a graph the ordered pairs are at specific locations, so the graph is not connected, but is only points.
Some examples of discrete data include,
Number of dogs
Number of people
Tickets you have for a show.
Continuous data is data that falls on a continuum.
On a graph it is all of the points, and the points in between.
It is data that can be measured as finely as possible.
It is often a physical measure.
Discrete data will often will contain fractions or decimals.
Some examples of continuous data include,
Let's look at two examples and see the differences between these types of data.
Family Members
Discrete 5 family members
Continuous: Range between 5 feet and 6 feet tall
Books, books on your bookshelf.
The discrete data is that you have 6 books on your bookshelf.
Continuous data is that they range in height from 15 to 30 cm in length
Discrete or not?
Number of cows on a farm yes discrete
Age of the cows on the farm continuous
Number of points scored in a game. discrete
Amount of oxygen in the atmosphere continuous
Temperature is continuous data
Number text messages sent today discrete
MooMooMath and Science uploads a new Math and Science video everyday
Please visit our channel
0 comments:
|
{"url":"http://www.moomoomathblog.com/2016/11/discrete-and-continuous-data.html","timestamp":"2024-11-06T07:13:01Z","content_type":"application/xhtml+xml","content_length":"83204","record_id":"<urn:uuid:4ca4b096-76e1-49c9-b0ee-29ca80631565>","cc-path":"CC-MAIN-2024-46/segments/1730477027910.12/warc/CC-MAIN-20241106065928-20241106095928-00302.warc.gz"}
|
AdsPower and Betting - what is arbitrage in betting and how to use multi-accounting here?
Hello! Another industry in which multi-accounting can be used effectively is betting. Betting is gambling on sports or any other events. In today's article we will explain betting arbitrage and why
multiaccounting is needed.
Sports bookmaking is a big business, which has become very popular and legalized in recent years. If you look on a large scale, a bookmaker's office is like a casino, from which only the organizer of
the entire operation can emerge as winner. The win rate and odds are calculated always prevents the players from steadily taking actions in their favor. However, some part of the money can be
successfully earned and withdrawn. There are three main ways.
Serious Analytics
Theoretically, in many disciplines and events in them, especially in cybersports, it is very difficult to calculate the odds of many matches accurately. If you are an expert in any cybersport
discipline, it is quite realistic to find matches where the actual odds are different from what the bookmakers are counting on. It takes a lot of effort and experience, and you don't have to think
about reliability.
The betting business has an unfair side. Obviously, who would not want to make a lot of money in a couple of hours by simply agreeing with the people you need? Contractions happen from time to time.
But if you read this article, you shouldn't try to do it for many reasons.
First, it's illegal. Second, it's almost impossible to find people you can negotiate with without being cheated. Thirdly, even if you succeed, such actions are too obvious and the bookmaker will not
let you withdraw funds.
Arbitrage betting
This is the most realistic way to make money on bets. It comes down to finding favorable odds at different bookmakers and making money on it. Of course, sooner or later you will get banned, even if
you have not broken anything. Because it's unacceptable for a bookmaker to lose money. That's what multi-accounting will help.
Some arbitrage theory and lots of numbers!
A simple formula you will need for arbitrage in betting: 1 / odds = chance of winning. Bookmakers always calculate odds in their favor, without exception. However, odds are different among
bookmakers, which is the essence of arbitrage.
Imagine an event that has two outcomes. For example, in a basketball game, the number of goals scored is either an even or an odd number. Normally, the odds here are 1.9, meaning that the chance of
this event occurring should be 1/1.9 = 53%. Obviously, that is not right. The chance of occurring is 50%, which means that there is a 3% chance in favor of the bookmaker.
Now let's look at an event with three outcomes. For example, the soccer match Everton - Chelsea. The odds for Everton win, draw, Chelsea win are 5.0, 3.9, 1.67 respectively.
According to our formula, let's calculate: 1/5 + 1/3.9 + 1/1.67 = 20% + 25.6% + 60% = 105.6%. This means that the probability of happening of all events is more than 100%, and this is impossible.
5.6% goes in favor of the bookmaker.
So how to make money?
Now imagine that you did the following trick: you opened several sites of different bookmakers' offices and found high odds at 3 different sites. The first one has an Everton win odds of 5.5, the
second one has a draw odds of 4.0, and the third one has a Chelsea win odds of 1.8. Let's convert it all to percentages according to our formula and get 18.2% + 25% + 55.6% = 98.8%. Bingo! One of the
three events will happen with 100% probability, and you have found space for arbitrage.
The next step is to calculate how much to bet on each outcome. For ease of calculation, you can use sites with arbitrage calculators, and there are many such sites. Here is one of them - https://
arbitragecalc.com/. Here, everything is intuitive. You only need to enter the odds of two or three events (possibly more) and the total amount you are willing to spend. In the second row, under the
word Stake, it will be written how much to bet on each odds.
Here is an example with two events, this is a real tennis match.
If you only have 50,000 units, you can bet 8992 on the first outcome and 41007 on the second. Then you are sure to be in the black, earning 1258 units in the currency in which you bet!
And here's an example of the soccer game I wrote above.
As you can see, in both cases you are 100% likely to make money. As a rule, the profit from a single such turnover will be about 1-3% if you choose the earn for sure strategy.
You can distribute the amount of bets differently as well. Then you can either return the bet amount or earn somewhat more than in the earn for sure option. That is, you won't lose for sure and
probably will earn.
In most cases, the events on which you can arbitrage can be found in live betting. The odds change quickly there, so it is better to choose events with two outcomes, such as tennis games. With
sufficient capital and the ability to follow a large flow of information, it is possible to achieve good results.
The main problem
The problem, as I mentioned before, is not only winning money, but also withdrawing it. Even though you do not violate anything, you will still get banned sooner or later, so you need to have a lot
of accounts. Each account will require verification of identity. Since you are not doing anything illegal, you can safely use the documents of people in your closest associates.
Those who have ever been involved in betting know that the support team at betting shops can make a large number of requests for all actions made on the account. In particular, they will ask what
device you used to enter the site, what browser you used, and what IP address. If you have logged into several accounts from the same device you won't be able to withdraw your money.
With a lot of accounts it's very easy to get confused about it all. With AdsPower you'll not only be able to bet as safely as possible by creating an isolated environment for each account, but you'll
also save a lot of time, because all your accounts will be in one place - it's very convenient.
I hope that today's article will be useful for you. If you still have questions - https://linktr.ee/adspower_browser. Your inquiries are always welcome!
किसी भी उद्योग के लिए सर्वश्रेष्ठ मल्टी-लॉगिन ब्राउज़र
|
{"url":"https://www.adspower.com/hi/blog/AdsPower-and-Betting","timestamp":"2024-11-02T12:52:28Z","content_type":"text/html","content_length":"202708","record_id":"<urn:uuid:1897f734-dc1f-480c-ae9e-1fb24a4bcc15>","cc-path":"CC-MAIN-2024-46/segments/1730477027710.33/warc/CC-MAIN-20241102102832-20241102132832-00680.warc.gz"}
|
Good Question - an Astronomy Net God & Science Forum Message
" Hi Souza "
Hi Box!
" If you don't wanna tell me your answer, fine. "
I do feel bad about not replying to a lot of posts here, yours or otherwise. I hope you'll forgive me for having a job.
" According to your careful prognosis, at what level of dexterity with algebra does a person become able to understand the mysteries of physics that a mathematical physicist comprehends? "
I don't really know how to answer that question, especially as so many mathematical physicists don't comprehend physics!
Why is it that some people can understand something while others can't? I really don't know, and I wish I did.
" It's not hard to understand the mathematical relation of distance between two objects, their masses and the pull of gravity, once the equation has been discovered by an advanced mathematician. "
But that's the easy part. And the good thing (or bad thing, depending on how you look at it) is that you don't have to understand the equations in order to use them. That's the wonder of it, but it's
also the source of all confusion.
" But if a mathematician or physicist arrives at an equation that works, are you saying that mathematicians (who can reproduce the lengthy calculation which got them there after "factoring out") are
the only "regular folks" who will be able understand the shape and causation of the universe's phenomena? "
Haa!!! Right here is the whole problem! The relationship between the equations of physics and the "shape and causation of the universe's phenomena" is far from being a well-understood issue. To some
extent we understand it, but a lot of it ends up becoming a matter of personal opinion.
Let me give you a case in point. I used to think the idea of warped space was ridiculous, and I still think it is. The whole problem is that according to physics spacetime (not space!) is warped, and
spacetime is just a imaginary set of coordinates. Now no one can seriously oppose the idea of a warped set of coordinates, but the issue is what does it mean? Does the fact that the laws of physics
can only be cast in a certain way mean that they perfectly describe the universe? I believe that question cannot be answered, so all we're left with are opinions.
What one needs to keep in mind is that physics is not about opinions, and that's why it doesn't concern itself with philosophical issues. Physics is very practical and is only concerned with that
which no one can seriously argue against.
That's what I think, for what it's worth... which is not much.
|
{"url":"http://www.astronomy.net/forums/god/messages/20697.shtml","timestamp":"2024-11-12T02:54:36Z","content_type":"text/html","content_length":"16738","record_id":"<urn:uuid:3d875b58-d8b3-451b-bf22-b2cfef3bbd08>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.50/warc/CC-MAIN-20241112014152-20241112044152-00399.warc.gz"}
|
8.1: Basics and examples
Last updated
Page ID
\( \newcommand{\vecs}[1]{\overset { \scriptstyle \rightharpoonup} {\mathbf{#1}} } \)
\( \newcommand{\vecd}[1]{\overset{-\!-\!\rightharpoonup}{\vphantom{a}\smash {#1}}} \)
\( \newcommand{\id}{\mathrm{id}}\) \( \newcommand{\Span}{\mathrm{span}}\)
( \newcommand{\kernel}{\mathrm{null}\,}\) \( \newcommand{\range}{\mathrm{range}\,}\)
\( \newcommand{\RealPart}{\mathrm{Re}}\) \( \newcommand{\ImaginaryPart}{\mathrm{Im}}\)
\( \newcommand{\Argument}{\mathrm{Arg}}\) \( \newcommand{\norm}[1]{\| #1 \|}\)
\( \newcommand{\inner}[2]{\langle #1, #2 \rangle}\)
\( \newcommand{\Span}{\mathrm{span}}\)
\( \newcommand{\id}{\mathrm{id}}\)
\( \newcommand{\Span}{\mathrm{span}}\)
\( \newcommand{\kernel}{\mathrm{null}\,}\)
\( \newcommand{\range}{\mathrm{range}\,}\)
\( \newcommand{\RealPart}{\mathrm{Re}}\)
\( \newcommand{\ImaginaryPart}{\mathrm{Im}}\)
\( \newcommand{\Argument}{\mathrm{Arg}}\)
\( \newcommand{\norm}[1]{\| #1 \|}\)
\( \newcommand{\inner}[2]{\langle #1, #2 \rangle}\)
\( \newcommand{\Span}{\mathrm{span}}\) \( \newcommand{\AA}{\unicode[.8,0]{x212B}}\)
\( \newcommand{\vectorA}[1]{\vec{#1}} % arrow\)
\( \newcommand{\vectorAt}[1]{\vec{\text{#1}}} % arrow\)
\( \newcommand{\vectorB}[1]{\overset { \scriptstyle \rightharpoonup} {\mathbf{#1}} } \)
\( \newcommand{\vectorC}[1]{\textbf{#1}} \)
\( \newcommand{\vectorD}[1]{\overrightarrow{#1}} \)
\( \newcommand{\vectorDt}[1]{\overrightarrow{\text{#1}}} \)
\( \newcommand{\vectE}[1]{\overset{-\!-\!\rightharpoonup}{\vphantom{a}\smash{\mathbf {#1}}}} \)
\( \newcommand{\vecs}[1]{\overset { \scriptstyle \rightharpoonup} {\mathbf{#1}} } \)
\( \newcommand{\vecd}[1]{\overset{-\!-\!\rightharpoonup}{\vphantom{a}\smash {#1}}} \)
\(\newcommand{\avec}{\mathbf a}\) \(\newcommand{\bvec}{\mathbf b}\) \(\newcommand{\cvec}{\mathbf c}\) \(\newcommand{\dvec}{\mathbf d}\) \(\newcommand{\dtil}{\widetilde{\mathbf d}}\) \(\newcommand{\
evec}{\mathbf e}\) \(\newcommand{\fvec}{\mathbf f}\) \(\newcommand{\nvec}{\mathbf n}\) \(\newcommand{\pvec}{\mathbf p}\) \(\newcommand{\qvec}{\mathbf q}\) \(\newcommand{\svec}{\mathbf s}\) \(\
newcommand{\tvec}{\mathbf t}\) \(\newcommand{\uvec}{\mathbf u}\) \(\newcommand{\vvec}{\mathbf v}\) \(\newcommand{\wvec}{\mathbf w}\) \(\newcommand{\xvec}{\mathbf x}\) \(\newcommand{\yvec}{\mathbf y}
\) \(\newcommand{\zvec}{\mathbf z}\) \(\newcommand{\rvec}{\mathbf r}\) \(\newcommand{\mvec}{\mathbf m}\) \(\newcommand{\zerovec}{\mathbf 0}\) \(\newcommand{\onevec}{\mathbf 1}\) \(\newcommand{\real}
{\mathbb R}\) \(\newcommand{\twovec}[2]{\left[\begin{array}{r}#1 \\ #2 \end{array}\right]}\) \(\newcommand{\ctwovec}[2]{\left[\begin{array}{c}#1 \\ #2 \end{array}\right]}\) \(\newcommand{\threevec}
[3]{\left[\begin{array}{r}#1 \\ #2 \\ #3 \end{array}\right]}\) \(\newcommand{\cthreevec}[3]{\left[\begin{array}{c}#1 \\ #2 \\ #3 \end{array}\right]}\) \(\newcommand{\fourvec}[4]{\left[\begin{array}
{r}#1 \\ #2 \\ #3 \\ #4 \end{array}\right]}\) \(\newcommand{\cfourvec}[4]{\left[\begin{array}{c}#1 \\ #2 \\ #3 \\ #4 \end{array}\right]}\) \(\newcommand{\fivevec}[5]{\left[\begin{array}{r}#1 \\ #2 \\
#3 \\ #4 \\ #5 \\ \end{array}\right]}\) \(\newcommand{\cfivevec}[5]{\left[\begin{array}{c}#1 \\ #2 \\ #3 \\ #4 \\ #5 \\ \end{array}\right]}\) \(\newcommand{\mattwo}[4]{\left[\begin{array}{rr}#1 \amp
#2 \\ #3 \amp #4 \\ \end{array}\right]}\) \(\newcommand{\laspan}[1]{\text{Span}\{#1\}}\) \(\newcommand{\bcal}{\cal B}\) \(\newcommand{\ccal}{\cal C}\) \(\newcommand{\scal}{\cal S}\) \(\newcommand{\
wcal}{\cal W}\) \(\newcommand{\ecal}{\cal E}\) \(\newcommand{\coords}[2]{\left\{#1\right\}_{#2}}\) \(\newcommand{\gray}[1]{\color{gray}{#1}}\) \(\newcommand{\lgray}[1]{\color{lightgray}{#1}}\) \(\
newcommand{\rank}{\operatorname{rank}}\) \(\newcommand{\row}{\text{Row}}\) \(\newcommand{\col}{\text{Col}}\) \(\renewcommand{\row}{\text{Row}}\) \(\newcommand{\nul}{\text{Nul}}\) \(\newcommand{\var}
{\text{Var}}\) \(\newcommand{\corr}{\text{corr}}\) \(\newcommand{\len}[1]{\left|#1\right|}\) \(\newcommand{\bbar}{\overline{\bvec}}\) \(\newcommand{\bhat}{\widehat{\bvec}}\) \(\newcommand{\bperp}{\
bvec^\perp}\) \(\newcommand{\xhat}{\widehat{\xvec}}\) \(\newcommand{\vhat}{\widehat{\vvec}}\) \(\newcommand{\uhat}{\widehat{\uvec}}\) \(\newcommand{\what}{\widehat{\wvec}}\) \(\newcommand{\Sighat}{\
widehat{\Sigma}}\) \(\newcommand{\lt}{<}\) \(\newcommand{\gt}{>}\) \(\newcommand{\amp}{&}\) \(\definecolor{fillinmathshade}{gray}{0.9}\)
Any mathematical system must have a starting point; we cannot create something out of nothing. The starting point of a mathematical system (or any logical system, for that matter) is a collection of
basic terminology accompanied by a collection of assumed facts about the things the terminology describes.
a label for an object or action that is left undefined
a statement (usually involving primitive terms or terms defined in terms of primitive terms) that is held to be true without proof
a collection of primitive terms and axioms
Primitive Terms
• woozle (noun),
• dorple (noun),
• snarf (verb).
1. There exist at least three distinct woozles.
2. A woozle snarfs a dorple if and only if the dorple snarfs the woozle.
3. Each pair of distinct woozles snarfs exactly one dorple in common.
4. There is at least one trio of distinct woozles that snarf no dorple in common.
5. Each dorple is snarfed by at least two distinct woozles.
In the axiomatic system of Axiom \(\PageIndex{1}\), Axiom 1 is redundant as we may infer from Axiom 4 that there exist three distinct woozles. But there is no harm in including this axiom for
clarity. As well, we will later investigate the effect of altering it.
The axiomatic system of Example \(\PageIndex{1}\) seems like nonsense, but we can actually prove things from it.
There exist at least three distinct dorples.
(In this proof, all references to axioms refer to the axioms of \(\PageIndex{1}\).)
By Axiom 4, there exists a trio \(w_1,w_2,w_3\) of distinct woozles that snarf no dorple in common. Breaking this trio into various pairs and applying Axiom 3, we see that there exists a dorple \
(d_1\) that \(w_1\) and \(w_2\) both snarf in common, there also exists a dorple \(d_2\) that \(w_1\) and \(w_3\) both snarf in common, and there also exists a dorple \(d_3\) that \(w_2\) and \
(w_3\) both snarf in common. These snarfing relationships are illustrated in the diagram below.
Figure \(\PageIndex{1}\): A diagram of woozles snarfing dorples.
Now, suppose \(d_1\) and \(d_2\) were actually the same dorple — then all three woozles would snarf it in common.
Figure \(\PageIndex{2}\): A diagram of woozles snarfing dorples, assuming two of the dorples coincide.
As this would contradict our initial assumption, it must be the case that \(d_1\) and \(d_2\) are distinct. Similar arguments allow us to also conclude that \(d_1 \ne d_3\) and \(d_2 \ne d_3\text
It is often useful to give names to important properties of objects.
a label for an object or action that is defined in terms of primitive terms, axioms, and/or other defined terms
an formal explanation of the meaning of a defined term
Here is a definition relative to the axiomatic system of Example \(\PageIndex{1}\).
snarf buddies
two distinct dorples that snarf a common woozle
A definition allows us to more succinctly communicate ideas and facts about the objects of an axiomatic system.
A pair of snarf buddies snarf a unique woozle in common.
Suppose \(d_1,d_2\) are snarf buddies. By contradiction, suppose they snarf more than one woozle in common: let \(w_1,w_2\) be distinct woozles both snarfed by \(d_1\) and \(d_2\text{.}\) By
Axiom 2, each of \(w_1,w_2\) snarfs each of \(d_1,d_2\text{.}\) But this contradicts Axiom 3, as two distinct woozles cannot snarf more than one dorple in common.
Suppose we replace Axiom 1 in the system of Example \(\PageIndex{1}\) with the following.
1. There exist exactly three distinct woozles.
In the new, modified axiomatic system, our previous two theorems (Theorem \(\PageIndex{1}\) and Theorem \(\PageIndex{2}\)) remain true, because it is still true that there exist at least three
distinct woozles. But we can now also prove the following.
In the axiomatic system of Example \(\PageIndex{1}\) with the above modified version of Axiom 1, there exist exactly three distinct dorples.
You are asked to prove this in the exercises.
A nonsense system like the one in Example \(\PageIndex{1}\) is just that — nonsense — and not much use unless there are actual examples to which the developed theory can be applied.
a system obtained by replacing the primitive terms in an axiomatic system with more “concrete” terms in such a way that all the axioms are true statements about the new terms
If we agree that the axiom statements are still all true with the new terms, then any theorems proved under the abstract system are still valid in the new model system.
Again consider the axiomatic system of Example \(\PageIndex{1}\), still using the modified version of Axiom 1. Let the three distinct woozles be the points \((0,0)\text{,}\) \((1,1)\text{,}\) and \
((2,0)\) in the Cartesian plane. Let dorple now mean line in the plane, and let snarf now mean lies on. Convince yourself that the axioms of the system are all true with this interpretation of the
primitive terms.
Theorem \(\PageIndex{3}\) now says that there exist exactly three distinct lines in the plane which fit into our axiomatic system; can you find their equations?
Figure \(\PageIndex{3}\): A diagram of woozles snarfing dorples.
Using nonsense terms like woozle, dorple, and snarf for the primitive terms in an axiomatic system is usually not a good idea, as it takes all intuition out of the process of discovering statements
that can be deduced from the axioms. It would have been much better if we had used the words point instead of woozle, line instead of dorple, and lies on instead of snarfs as our primitive terms, to
be able to use our intuition about how such objects interact. In such a case, the axioms we choose should be a reflection of our idea of the simplest possible properties about the primitive terms,
properties that everyone could reasonably agree are “true” without proof. However, for the theorems deduced from such an axiomatic system to have the widest possible applicability, we should leave
the words point and line as truly primitive, undefined terms — that is, point and line should not be taken to mean point in the plane and line in the plane, as in the example above, but rather just
left as some abstract, intuitive idea of point and line.
|
{"url":"https://math.libretexts.org/Bookshelves/Combinatorics_and_Discrete_Mathematics/Elementary_Foundations%3A_An_Introduction_to_Topics_in_Discrete_Mathematics_(Sylvestre)/08%3A_Axiomatic_systems/8.01%3A_Basics_and_examples","timestamp":"2024-11-12T02:40:37Z","content_type":"text/html","content_length":"142227","record_id":"<urn:uuid:752ae3d1-5734-4378-8323-e22ba6fbc171>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.50/warc/CC-MAIN-20241112014152-20241112044152-00352.warc.gz"}
|
Ms Tayke
Nineteen again?
Happy Birthday Ms Tayke!
Ms Tayke does not want anyone to know how old she is. We think she is 388 but she often subtracts 19 from her age to make her feel younger.
How many times can you subtract 19 from 388?
Time left four minutes.
No calculators allowed.
Your access to the majority of the Transum resources continues to be free but you can help support the continued growth of the website by doing your Amazon shopping using the links on this page.
Below is an Amazon link. As an Amazon Associate I earn a small amount from qualifying purchases which helps pay for the upkeep of this website.
Educational Technology on Amazon
Teacher, do your students have access to computers such as tablets, iPads or Laptops? This page was really designed for projection on a whiteboard but if you really want the students to have access
to it here is a concise URL for a version of this page without the comments:
However it would be better to assign one of the student interactive activities below.
Here is the URL which will take them to a related student activity.
|
{"url":"https://transum.org/Software/SW/Starter_of_the_day/starter_May18.ASP","timestamp":"2024-11-13T18:31:24Z","content_type":"text/html","content_length":"24854","record_id":"<urn:uuid:e37fcce9-e436-4533-87e0-2c6062637605>","cc-path":"CC-MAIN-2024-46/segments/1730477028387.69/warc/CC-MAIN-20241113171551-20241113201551-00363.warc.gz"}
|
Principia Mathematica Explained
The Principia Mathematica (often abbreviated PM) is a three-volume work on the foundations of mathematics written by mathematician–philosophers Alfred North Whitehead and Bertrand Russell and
published in 1910, 1912, and 1913. In 1925–1927, it appeared in a second edition with an important Introduction to the Second Edition, an Appendix A that replaced ✱9 with a new Appendix B and
Appendix C. PM was conceived as a sequel to Russell's 1903 The Principles of Mathematics, but as PM states, this became an unworkable suggestion for practical and philosophical reasons: "The present
work was originally intended by us to be comprised in a second volume of Principles of Mathematics... But as we advanced, it became increasingly evident that the subject is a very much larger one
than we had supposed; moreover on many fundamental questions which had been left obscure and doubtful in the former work, we have now arrived at what we believe to be satisfactory solutions."
PM, according to its introduction, had three aims: (1) to analyze to the greatest possible extent the ideas and methods of mathematical logic and to minimize the number of primitive notions, axioms,
and inference rules; (2) to precisely express mathematical propositions in symbolic logic using the most convenient notation that precise expression allows; (3) to solve the paradoxes that plagued
logic and set theory at the turn of the 20th century, like Russell's paradox.^[1]
This third aim motivated the adoption of the theory of types in PM. The theory of types adopts grammatical restrictions on formulas that rules out the unrestricted comprehension of classes,
properties, and functions. The effect of this is that formulas such as would allow the comprehension of objects like the Russell set turn out to be ill-formed: they violate the grammatical
restrictions of the system of PM.
PM sparked interest in symbolic logic and advanced the subject, popularizing it and demonstrating its power.^[2] The Modern Library placed PM 23rd in their list of the top 100 English-language
nonfiction books of the twentieth century.^[3]
Scope of foundations laid
The Principia covered only set theory, cardinal numbers, ordinal numbers, and real numbers. Deeper theorems from real analysis were not included, but by the end of the third volume it was clear to
experts that a large amount of known mathematics could in principle be developed in the adopted formalism. It was also clear how lengthy such a development would be.
A fourth volume on the foundations of geometry had been planned, but the authors admitted to intellectual exhaustion upon completion of the third.
Theoretical basis
As noted in the criticism of the theory by Kurt Gödel (below), unlike a formalist theory, the "logicistic" theory of PM has no "precise statement of the syntax of the formalism". Furthermore in the
theory, it is almost immediately observable that interpretations (in the sense of model theory) are presented in terms of truth-values for the behaviour of the symbols "⊢" (assertion of truth), "~"
(logical not), and "V" (logical inclusive OR).
Truth-values: PM embeds the notions of "truth" and "falsity" in the notion "primitive proposition". A raw (pure) formalist theory would not provide the meaning of the symbols that form a "primitive
proposition"—the symbols themselves could be absolutely arbitrary and unfamiliar. The theory would specify only how the symbols behave based on the grammar of the theory. Then later, by assignment of
"values", a model would specify an interpretation of what the formulas are saying. Thus in the formal Kleene symbol set below, the "interpretation" of what the symbols commonly mean, and by
implication how they end up being used, is given in parentheses, e.g., "¬ (not)". But this is not a pure Formalist theory.
Contemporary construction of a formal theory
The following formalist theory is offered as contrast to the logicistic theory of PM. A contemporary formal system would be constructed as follows:
1. Symbols used: This set is the starting set, and other symbols can appear but only by definition from these beginning symbols. A starting set might be the following set derived from Kleene 1952:
□ logical symbols:
☆ "→" (implies, IF-THEN, and "⊃"),
☆ "&" (and),
☆ "V" (or),
☆ "¬" (not),
☆ "∀" (for all),
☆ "∃" (there exists);
□ predicate symbol: "=" (equals);
□ function symbols:
☆ "+" (arithmetic addition),
☆ "∙" (arithmetic multiplication),
☆ "'" (successor);
□ individual symbol "0" (zero);
□ variables "a", "b", "c", etc.; and
□ parentheses "(" and ")".^[4]
2. Symbol strings: The theory will build "strings" of these symbols by concatenation (juxtaposition).^[5]
3. Formation rules: The theory specifies the rules of syntax (rules of grammar) usually as a recursive definition that starts with "0" and specifies how to build acceptable strings or "well-formed
formulas" (wffs). This includes a rule for "substitution"^[6] of strings for the symbols called "variables".
4. Transformation rule(s): The axioms that specify the behaviours of the symbols and symbol sequences.
5. Rule of inference, detachment, modus ponens : The rule that allows the theory to "detach" a "conclusion" from the "premises" that led up to it, and thereafter to discard the "premises" (symbols
to the left of the line │, or symbols above the line if horizontal). If this were not the case, then substitution would result in longer and longer strings that have to be carried forward.
Indeed, after the application of modus ponens, nothing is left but the conclusion, the rest disappears forever. Contemporary theories often specify as their first axiom the classical or modus
ponens or "the rule of detachment": The symbol "│" is usually written as a horizontal line, here "⊃" means "implies". The symbols A and B are "stand-ins" for strings; this form of notation is
called an "axiom schema" (i.e., there is a countable number of specific forms the notation could take). This can be read in a manner similar to IF-THEN but with a difference: given symbol string
IF A and A implies B THEN B (and retain only B for further use). But the symbols have no "interpretation" (e.g., no "truth table" or "truth values" or "truth functions") and modus ponens proceeds
mechanistically, by grammar alone.
The theory of PM has both significant similarities, and similar differences, to a contemporary formal theory. Kleene states that "this deduction of mathematics from logic was offered as intuitive
axiomatics. The axioms were intended to be believed, or at least to be accepted as plausible hypotheses concerning the world".^[7] Indeed, unlike a Formalist theory that manipulates symbols according
to rules of grammar, PM introduces the notion of "truth-values", i.e., truth and falsity in the real-world sense, and the "assertion of truth" almost immediately as the fifth and sixth elements in
the structure of the theory (PM 1962:4–36):
1. Variables
2. Uses of various letters
3. The fundamental functions of propositions: "the Contradictory Function" symbolised by "~" and the "Logical Sum or Disjunctive Function" symbolised by "∨" being taken as primitive and logical
implication defined (the following example also used to illustrate 9. Definition below) as
p ⊃ q .=. ~ p ∨ q Df. (PM 1962:11)
and logical product defined as
p . q .=. ~(~p ∨ ~q) Df. (PM 1962:12)
4. Equivalence: Logical equivalence, not arithmetic equivalence: "≡" given as a demonstration of how the symbols are used, i.e., "Thus ' p ≡ q ' stands for '(p ⊃ q) . (q ⊃ p)'." (PM 1962:7). Notice
that to discuss a notation PM identifies a "meta"-notation with "[space] ... [space]":^[8]
Logical equivalence appears again as a definition:
p ≡ q .=. (p ⊃ q) . (q ⊃ p) (PM 1962:12),
Notice the appearance of parentheses. This grammatical usage is not specified and appears sporadically; parentheses do play an important role in symbol strings, however, e.g., the notation "(x)"
for the contemporary "∀x".
5. Truth-values: "The 'Truth-value' of a proposition is truth if it is true, and falsehood if it is false" (this phrase is due to Gottlob Frege) (PM 1962:7).
6. Assertion-sign: "'⊦. p may be read 'it is true that' ... thus '⊦: p .⊃. q ' means 'it is true that p implies q ', whereas '⊦. p .⊃⊦. q ' means ' p is true; therefore q is true'. The first of
these does not necessarily involve the truth either of p or of q, while the second involves the truth of both" (PM 1962:92).
7. Inference: PMs version of modus ponens. "[If] '⊦. p ' and '⊦ (p ⊃ q)' have occurred, then '⊦ . q ' will occur if it is desired to put it on record. The process of the inference cannot be reduced
to symbols. Its sole record is the occurrence of '⊦. q ' [in other words, the symbols on the left disappear or can be erased]" (PM 1962:9).
8. The use of dots
9. Definitions: These use the "=" sign with "Df" at the right end.
10. Summary of preceding statements: brief discussion of the primitive ideas "~ p" and "p ∨ q" and "⊦" prefixed to a proposition.
11. Primitive propositions: the axioms or postulates. This was significantly modified in the second edition.
12. Propositional functions: The notion of "proposition" was significantly modified in the second edition, including the introduction of "atomic" propositions linked by logical signs to form
"molecular" propositions, and the use of substitution of molecular propositions into atomic or molecular propositions to create new expressions.
13. The range of values and total variation
14. Ambiguous assertion and the real variable: This and the next two sections were modified or abandoned in the second edition. In particular, the distinction between the concepts defined in sections
15. Definition and the real variable and 16 Propositions connecting real and apparent variables was abandoned in the second edition.
15. Formal implication and formal equivalence
16. Identity
17. Classes and relations
18. Various descriptive functions of relations
19. Plural descriptive functions
20. Unit classes
Primitive ideas
Cf. PM 1962:90–94, for the first edition:
• (1) Elementary propositions.
• (2) Elementary propositions of functions.
• (3) Assertion: introduces the notions of "truth" and "falsity".
• (4) Assertion of a propositional function.
• (5) Negation: "If p is any proposition, the proposition "not-p", or "p is false," will be represented by "~p" ".
• (6) Disjunction: "If p and q are any propositions, the proposition "p or q, i.e., "either p is true or q is true," where the alternatives are to be not mutually exclusive, will be represented by
"p ∨ q" ".
• (cf. section B)
Primitive propositions
The first edition (see discussion relative to the second edition, below) begins with a definition of the sign "⊃"
✱1.01. p ⊃ q .=. ~ p ∨ q. Df.
✱1.1. Anything implied by a true elementary proposition is true. Pp modus ponens
(✱1.11 was abandoned in the second edition.)
✱1.2. ⊦: p ∨ p .⊃. p. Pp principle of tautology
✱1.3. ⊦: q .⊃. p ∨ q. Pp principle of addition
✱1.4. ⊦: p ∨ q .⊃. q ∨ p. Pp principle of permutation
✱1.5. ⊦: p ∨ (q ∨ r) .⊃. q ∨ (p ∨ r). Pp associative principle
✱1.6. ⊦:. q ⊃ r .⊃: p ∨ q .⊃. p ∨ r. Pp principle of summation
✱1.7. If p is an elementary proposition, ~p is an elementary proposition. Pp
✱1.71. If p and q are elementary propositions, p ∨ q is an elementary proposition. Pp
✱1.72. If φp and ψp are elementary propositional functions which take elementary propositions as arguments, φp ∨ ψp is an elementary proposition. Pp
Together with the "Introduction to the Second Edition", the second edition's Appendix A abandons the entire section ✱9. This includes six primitive propositions ✱9 through ✱9.15 together with the
Axioms of reducibility.
The revised theory is made difficult by the introduction of the Sheffer stroke ("|") to symbolise "incompatibility" (i.e., if both elementary propositions p and q are true, their "stroke" p | q is
false), the contemporary logical NAND (not-AND). In the revised theory, the Introduction presents the notion of "atomic proposition", a "datum" that "belongs to the philosophical part of logic".
These have no parts that are propositions and do not contain the notions "all" or "some". For example: "this is red", or "this is earlier than that". Such things can exist ad finitum, i.e., even an
"infinite enumeration" of them to replace "generality" (i.e., the notion of "for all").^[9] PM then "advance[s] to molecular propositions" that are all linked by "the stroke". Definitions give
equivalences for "~", "∨", "⊃", and ".".
The new introduction defines "elementary propositions" as atomic and molecular positions together. It then replaces all the primitive propositions ✱1.2 to ✱1.72 with a single primitive proposition
framed in terms of the stroke:
"If p, q, r are elementary propositions, given p and p|(q|r), we can infer r. This is a primitive proposition."
The new introduction keeps the notation for "there exists" (now recast as "sometimes true") and "for all" (recast as "always true"). Appendix A strengthens the notion of "matrix" or "predicative
function" (a "primitive idea", PM 1962:164) and presents four new Primitive propositions as ✱8.1–✱8.13.
✱88. Multiplicative axiom
✱120. Axiom of infinity
Ramified types and the axiom of reducibility
In simple type theory objects are elements of various disjoint "types". Types are implicitly built up as follows. If τ[1],...,τ[m] are types then there is a type (τ[1],...,τ[m]) that can be thought
of as the class of propositional functions of τ[1],...,τ[m] (which in set theory is essentially the set of subsets of τ[1]×...×τ[m]). In particular there is a type of propositions, and there may be a
type ι (iota) of "individuals" from which other types are built. Russell and Whitehead's notation for building up types from other types is rather cumbersome, and the notation here is due to Church.
In the ramified type theory of PM all objects are elements of various disjoint ramified types. Ramified types are implicitly built up as follows. If τ[1],...,τ[m],σ[1],...,σ[n] are ramified types
then as in simple type theory there is a type (τ[1],...,τ[m],σ[1],...,σ[n]) of "predicative" propositional functions of τ[1],...,τ[m],σ[1],...,σ[n]. However, there are also ramified types (τ[1],...,τ
[m]|σ[1],...,σ[n]) that can be thought of as the classes of propositional functions of τ[1],...τ[m] obtained from propositional functions of type (τ[1],...,τ[m],σ[1],...,σ[n]) by quantifying over σ
[1],...,σ[n]. When n=0 (so there are no σs) these propositional functions are called predicative functions or matrices. This can be confusing because modern mathematical practice does not distinguish
between predicative and non-predicative functions, and in any case PM never defines exactly what a "predicative function" actually is: this is taken as a primitive notion.
Russell and Whitehead found it impossible to develop mathematics while maintaining the difference between predicative and non-predicative functions, so they introduced the axiom of reducibility,
saying that for every non-predicative function there is a predicative function taking the same values. In practice this axiom essentially means that the elements of type (τ[1],...,τ[m]|σ[1],...,σ[n])
can be identified with the elements of type (τ[1],...,τ[m]), which causes the hierarchy of ramified types to collapse down to simple type theory. (Strictly speaking, PM allows two propositional
functions to be different even if they take the same values on all arguments; this differs from modern mathematical practice where one normally identifies two such functions.)
In Zermelo set theory one can model the ramified type theory of PM as follows. One picks a set ι to be the type of individuals. For example, ι might be the set of natural numbers, or the set of atoms
(in a set theory with atoms) or any other set one is interested in. Then if τ[1],...,τ[m] are types, the type (τ[1],...,τ[m]) is the power set of the product τ[1]×...×τ[m], which can also be thought
of informally as the set of (propositional predicative) functions from this product to a 2-element set . The ramified type (τ[1],...,τ[m]|σ[1],...,σ[n]) can be modeledas the product of the type (τ
[1],...,τ[m],σ[1],...,σ[n]) with the set of sequences of n quantifiers (∀ or ∃) indicating which quantifier should be applied to each variable σ[i]. (One can vary this slightly by allowing the σs to
be quantified in any order, or allowing them to occur before some of the τs, but this makes little difference except to the bookkeeping.)
The introduction to the second edition cautions:
One point in regard to which improvement is obviously desirable is the axiom of reducibility ... . This axiom has a purely pragmatic justification ... but it is clearly not the sort of axiom with
which we can rest content. On this subject, however, it cannot be said that a satisfactory solution is yet obtainable. Dr Leon Chwistek [Theory of Constructive Types] took the heroic course of
dispensing with the axiom without adopting any substitute; from his work it is clear that this course compels us to sacrifice a great deal of ordinary mathematics. There is another course,
recommended by Wittgenstein† (†Tractatus Logico-Philosophicus, *5.54ff) for philosophical reasons. This is to assume that functions of propositions are always truth-functions, and that a function
can only occur in a proposition through its values. (...) [Working through the consequences] ... the theory of inductive cardinals and ordinals survives; but it seems that the theory of infinite
Dedekindian and well-ordered series largely collapses, so that irrationals, and real numbers generally, can no longer be adequately dealt with. Also Cantor's proof that 2n > n breaks down unless
n is finite.^[10]
It might be possible to sacrifice infinite well-ordered series to logical rigour, but the theory of real numbers is an integral part of ordinary mathematics, and can hardly be the subject of
reasonable doubt. We are therefore justified (sic) in supposing that some logical axioms which is true will justify it. The axiom required may be more restricted than the axiom of reducibility,
but if so, it remains to be discovered.^[11]
See main article: Glossary of Principia Mathematica.
One author^[2] observes that "The notation in that work has been superseded by the subsequent development of logic during the 20th century, to the extent that the beginner has trouble reading PM at
all"; while much of the symbolic content can be converted to modern notation, the original notation itself is "a subject of scholarly dispute", and some notation "embodies substantive logical
doctrines so that it cannot simply be replaced by contemporary symbolism".^[12]
Kurt Gödel was harshly critical of the notation: "What is missing, above all, is a precise statement of the syntax of the formalism. Syntactical considerations are omitted even in cases where they
are necessary for the cogency of the proofs." This is reflected in the example below of the symbols "p", "q", "r" and "⊃" that can be formed into the string "p ⊃ q ⊃ r". PM requires a definition of
what this symbol-string means in terms of other symbols; in contemporary treatments the "formation rules" (syntactical rules leading to "well formed formulas") would have prevented the formation of
this string.
Source of the notation: Chapter I "Preliminary Explanations of Ideas and Notations" begins with the source of the elementary parts of the notation (the symbols =⊃≡−ΛVε and the system of dots):
"The notation adopted in the present work is based upon that of Peano, and the following explanations are to some extent modeled on those which he prefixes to his Formulario Mathematico [i.e., Peano
1889]. His use of dots as brackets is adopted, and so are many of his symbols" (PM 1927:4).^[13] PM changed Peano's Ɔ to ⊃, and also adopted a few of Peano's later symbols, such as ℩ and ι, and
Peano's practice of turning letters upside down.
PM adopts the assertion sign "⊦" from Frege's 1879 Begriffsschrift:^[14]
"(I)t may be read 'it is true that'"^[15] Thus to assert a proposition p PM writes:
"⊦. p." (PM 1927:92)(Observe that, as in the original, the left dot is square and of greater size than the full stop on the right.)
Most of the rest of the notation in PM was invented by Whitehead.^[16]
An introduction to the notation of "Section A Mathematical Logic" (formulas ✱1–✱5.71)
PMs dots^[17] are used in a manner similar to parentheses. Each dot (or multiple dot) represents either a left or right parenthesis or the logical symbol ∧. More than one dot indicates the "depth" of
the parentheses, for example, ".", ":" or ":.", "::". However the position of the matching right or left parenthesis is not indicated explicitly in the notation but has to be deduced from some rules
that are complex and at times ambiguous. Moreover, when the dots stand for a logical symbol ∧ its left and right operands have to be deduced using similar rules. First one has to decide based on
context whether the dots stand for a left or right parenthesis or a logical symbol. Then one has to decide how far the other corresponding parenthesis is: here one carries on until one meets either a
larger number of dots, or the same number of dots next that have equal or greater "force", or the end of the line. Dots next to the signs ⊃, ≡,∨, =Df have greater force than dots next to (x), (∃x)
and so on, which have greater force than dots indicating a logical product ∧.
Example 1. The line
✱3.4. ⊢ : p . q . ⊃ . p ⊃ qcorresponds to
⊢ ((p ∧ q) ⊃ (p ⊃ q)).The two dots standing together immediately following the assertion-sign indicate that what is asserted is the entire line: since there are two of them, their scope is greater
than that of any of the single dots to their right. They are replaced by a left parenthesis standing where the dots are and a right parenthesis at the end of the formula, thus:
⊢ (p . q . ⊃ . p ⊃ q).(In practice, these outermost parentheses, which enclose an entire formula, are usually suppressed.) The first of the single dots, standing between two propositional variables,
represents conjunction. It belongs to the third group and has the narrowest scope. Here it is replaced by the modern symbol for conjunction "∧", thus
⊢ (p ∧ q . ⊃ . p ⊃ q).The two remaining single dots pick out the main connective of the whole formula. They illustrate the utility of the dot notation in picking out those connectives which are
relatively more important than the ones which surround them. The one to the left of the "⊃" is replaced by a pair of parentheses, the right one goes where the dot is and the left one goes as far to
the left as it can without crossing a group of dots of greater force, in this case the two dots which follow the assertion-sign, thus
⊢ ((p ∧ q) ⊃ . p ⊃ q)The dot to the right of the "⊃" is replaced by a left parenthesis which goes where the dot is and a right parenthesis which goes as far to the right as it can without going
beyond the scope already established by a group of dots of greater force (in this case the two dots which followed the assertion-sign). So the right parenthesis which replaces the dot to the right of
the "⊃" is placed in front of the right parenthesis which replaced the two dots following the assertion-sign, thus
⊢ ((p ∧ q) ⊃ (p ⊃ q)).
Example 2, with double, triple, and quadruple dots:
✱9.521. ⊢ :: (∃x). φx . ⊃ . q : ⊃ :. (∃x). φx . v . r : ⊃ . q v rstands for
((((∃x)(φx)) ⊃ (q)) ⊃ ((((∃x) (φx)) v (r)) ⊃ (q v r)))
Example 3, with a double dot indicating a logical symbol (from volume 1, page 10):
p⊃q:q⊃r.⊃.p⊃rstands for
(p⊃q) ∧ ((q⊃r)⊃(p⊃r))where the double dot represents the logical symbol ∧ and can be viewed as having the higher priority as a non-logical single dot.
Later in section ✱14, brackets "[]" appear, and in sections ✱20 and following, braces "" appear. Whether these symbols have specific meanings or are just for visual clarification is unclear.
Unfortunately the single dot (but also ":", ":.", "::", etc.) is also used to symbolise "logical product" (contemporary logical AND often symbolised by "&" or "∧").
Logical implication is represented by Peano's "Ɔ" simplified to "⊃", logical negation is symbolised by an elongated tilde, i.e., "~" (contemporary "~" or "¬"), the logical OR by "v". The symbol "="
together with "Df" is used to indicate "is defined as", whereas in sections ✱13 and following, "=" is defined as (mathematically) "identical with", i.e., contemporary mathematical "equality" (cf.
discussion in section ✱13). Logical equivalence is represented by "≡" (contemporary "if and only if"); "elementary" propositional functions are written in the customary way, e.g., "f(p)", but later
the function sign appears directly before the variable without parenthesis e.g., "φx", "χx", etc.
Example, PM introduces the definition of "logical product" as follows:
✱3.01. p . q .=. ~(~p v ~q) Df.
where "p . q" is the logical product of p and q.
✱3.02. p ⊃ q ⊃ r .=. p ⊃ q . q ⊃ r Df.
This definition serves merely to abbreviate proofs.
Translation of the formulas into contemporary symbols: Various authors use alternate symbols, so no definitive translation can be given. However, because of criticisms such as that of Kurt Gödel
below, the best contemporary treatments will be very precise with respect to the "formation rules" (the syntax) of the formulas.
The first formula might be converted into modern symbolism as follows:^[18]
(p & q) =[df] (~(~p v ~q))alternately
(p & q) =[df] (¬(¬p v ¬q))alternately
(p ∧ q) =[df] (¬(¬p v ¬q))etc.
The second formula might be converted as follows:
(p → q → r) =[df] (p → q) & (q → r)But note that this is not (logically) equivalent to (p → (q → r)) nor to ((p → q) → r), and these two are not logically equivalent either.
An introduction to the notation of "Section B Theory of Apparent Variables" (formulas ✱8–✱14.34)
These sections concern what is now known as predicate logic, and predicate logic with identity (equality).
• NB: As a result of criticism and advances, the second edition of PM (1927) replaces ✱9 with a new ✱8 (Appendix A). This new section eliminates the first edition's distinction between real and
apparent variables, and it eliminates "the primitive idea 'assertion of a propositional function'.^[19] To add to the complexity of the treatment, ✱8 introduces the notion of substituting a
"matrix", and the Sheffer stroke:
• Sheffer stroke: Is the contemporary logical NAND (NOT-AND), i.e., "incompatibility", meaning:
"Given two propositions p and q, then ' p | q ' means "proposition p is incompatible with proposition q", i.e., if both propositions p and q evaluate as true, then and only then p | q evaluates as
false." After section ✱8 the Sheffer stroke sees no usage.
Section ✱10: The existential and universal "operators": PM adds "(x)" to represent the contemporary symbolism "for all x " i.e., " ∀x", and it uses a backwards serifed E to represent "there exists an
x", i.e., "(Ǝx)", i.e., the contemporary "∃x". The typical notation would be similar to the following:
"(x) . φx" means "for all values of variable x, function φ evaluates to true"
"(Ǝx) . φx" means "for some value of variable x, function φ evaluates to true"
Sections ✱10, ✱11, ✱12: Properties of a variable extended to all individuals: section ✱10 introduces the notion of "a property" of a "variable". PM gives the example: φ is a function that indicates
"is a Greek", and ψ indicates "is a man", and χ indicates "is a mortal" these functions then apply to a variable x. PM can now write, and evaluate:
(x) . ψxThe notation above means "for all x, x is a man". Given a collection of individuals, one can evaluate the above formula for truth or falsity. For example, given the restricted collection of
individuals the above evaluates to "true" if we allow for Zeus to be a man. But it fails for:
(x) . φxbecause Russell is not Greek. And it fails for
(x) . χxbecause Zeus is not a mortal.
Equipped with this notation PM can create formulas to express the following: "If all Greeks are men and if all men are mortals then all Greeks are mortals". (PM 1962:138)
(x) . φx ⊃ ψx :(x). ψx ⊃ χx :⊃: (x) . φx ⊃ χx
Another example: the formula:
✱10.01. (Ǝx). φx . = . ~(x) . ~φx Df.
means "The symbols representing the assertion 'There exists at least one x that satisfies function φ' is defined by the symbols representing the assertion 'It's not true that, given all values of x,
there are no values of x satisfying φ'".
The symbolisms ⊃[x] and "≡[x]" appear at ✱10.02 and ✱10.03. Both are abbreviations for universality (i.e., for all) that bind the variable x to the logical operator. Contemporary notation would have
simply used parentheses outside of the equality ("=") sign:
✱10.02 φx ⊃[x] ψx .=. (x). φx ⊃ ψx Df
Contemporary notation: ∀x(φ(x) → ψ(x)) (or a variant)
✱10.03 φx ≡[x] ψx .=. (x). φx ≡ ψx Df
Contemporary notation: ∀x(φ(x) ↔︎ ψ(x)) (or a variant)
PM attributes the first symbolism to Peano.
Section ✱11 applies this symbolism to two variables. Thus the following notations: ⊃[x], ⊃[y], ⊃[x, y] could all appear in a single formula.
Section ✱12 reintroduces the notion of "matrix" (contemporary truth table), the notion of logical types, and in particular the notions of first-order and second-order functions and propositions.
New symbolism "φ ! x" represents any value of a first-order function. If a circumflex "^" is placed over a variable, then this is an "individual" value of y, meaning that "ŷ" indicates "individuals"
(e.g., a row in a truth table); this distinction is necessary because of the matrix/extensional nature of propositional functions.
Now equipped with the matrix notion, PM can assert its controversial axiom of reducibility: a function of one or two variables (two being sufficient for PMs use) where all its values are given (i.e.,
in its matrix) is (logically) equivalent ("≡") to some "predicative" function of the same variables. The one-variable definition is given below as an illustration of the notation (PM 1962:166–167):
✱12.1 ⊢: (Ǝ f): φx .≡[x]. f ! x Pp;
Pp is a "Primitive proposition" ("Propositions assumed without proof") (PM 1962:12, i.e., contemporary "axioms"), adding to the 7 defined in section ✱1 (starting with ✱1.1 modus ponens). These are to
be distinguished from the "primitive ideas" that include the assertion sign "⊢", negation "~", logical OR "V", the notions of "elementary proposition" and "elementary propositional function"; these
are as close as PM comes to rules of notational formation, i.e., syntax.
This means: "We assert the truth of the following: There exists a function f with the property that: given all values of x, their evaluations in function φ (i.e., resulting their matrix) is logically
equivalent to some f evaluated at those same values of x. (and vice versa, hence logical equivalence)". In other words: given a matrix determined by property φ applied to variable x, there exists a
function f that, when applied to the x is logically equivalent to the matrix. Or: every matrix φx can be represented by a function f applied to x, and vice versa.
✱13: The identity operator "=" : This is a definition that uses the sign in two different ways, as noted by the quote from PM:
✱13.01. x = y .=: (φ): φ ! x . ⊃ . φ ! y Dfmeans:
"This definition states that x and y are to be called identical when every predicative function satisfied by x is also satisfied by y ... Note that the second sign of equality in the above definition
is combined with "Df", and thus is not really the same symbol as the sign of equality which is defined".The not-equals sign "≠" makes its appearance as a definition at ✱13.02.
✱14: Descriptions:
"A description is a phrase of the form "the term y which satisfies φŷ, where φŷ is some function satisfied by one and only one argument."^[20] From this PM employs two new symbols, a forward "E" and
an inverted iota "℩". Here is an example:
✱14.02. E ! (℩y) (φy) .=: (Ǝb):φy . ≡[y] . y = b Df.This has the meaning:
"The y satisfying φŷ exists," which holds when, and only when φŷ is satisfied by one value of y and by no other value." (PM 1967:173–174)
Introduction to the notation of the theory of classes and relations
The text leaps from section ✱14 directly to the foundational sections ✱20 GENERAL THEORY OF CLASSES and ✱21 GENERAL THEORY OF RELATIONS. "Relations" are what is known in contemporary set theory as
sets of ordered pairs. Sections ✱20 and ✱22 introduce many of the symbols still in contemporary usage. These include the symbols "ε", "⊂", "∩", "∪", "–", "Λ", and "V": "ε" signifies "is an element
of" (PM 1962:188); "⊂" (✱22.01) signifies "is contained in", "is a subset of"; "∩" (✱22.02) signifies the intersection (logical product) of classes (sets); "∪" (✱22.03) signifies the union (logical
sum) of classes (sets); "–" (✱22.03) signifies negation of a class (set); "Λ" signifies the null class; and "V" signifies the universal class or universe of discourse.
Small Greek letters (other than "ε", "ι", "π", "φ", "ψ", "χ", and "θ") represent classes (e.g., "α", "β", "γ", "δ", etc.) (PM 1962:188):
x ε α
"The use of single letter in place of symbols such as ẑ(φz) or ẑ(φ ! z) is practically almost indispensable, since otherwise the notation rapidly becomes intolerably cumbrous. Thus ' x ε α' will mean
' x is a member of the class α'". (PM 1962:188)
α ∪ –α = V
The union of a set and its inverse is the universal (completed) set.^[21]
α ∩ –α = Λ
The intersection of a set and its inverse is the null (empty) set.
When applied to relations in section ✱23 CALCULUS OF RELATIONS, the symbols "⊂", "∩", "∪", and "–" acquire a dot: for example: "⊍", "∸".^[22]
The notion, and notation, of "a class" (set): In the first edition PM asserts that no new primitive ideas are necessary to define what is meant by "a class", and only two new "primitive propositions"
called the axioms of reducibility for classes and relations respectively (PM 1962:25).^[23] But before this notion can be defined, PM feels it necessary to create a peculiar notation "ẑ(φz)" that it
calls a "fictitious object". (PM 1962:188)
⊢: x ε ẑ(φz) .≡. (φx)
"i.e., ' x is a member of the class determined by (φẑ)' is [logically] equivalent to ' x satisfies (φẑ),' or to '(φx) is true.'". (PM 1962:25)
At least PM can tell the reader how these fictitious objects behave, because "A class is wholly determinate when its membership is known, that is, there cannot be two different classes having the
same membership" (PM 1962:26). This is symbolised by the following equality (similar to ✱13.01 above:
ẑ(φz) = ẑ(ψz) . ≡ : (x): φx .≡. ψx
"This last is the distinguishing characteristic of classes, and justifies us in treating ẑ(ψz) as the class determined by [the function] ψẑ." (PM 1962:188)
Perhaps the above can be made clearer by the discussion of classes in Introduction to the Second Edition, which disposes of the Axiom of Reducibility and replaces it with the notion: "All functions
of functions are extensional" (PM 1962:xxxix), i.e.,
φx ≡[x] ψx .⊃. (x): ƒ(φẑ) ≡ ƒ(ψẑ) (PM 1962:xxxix)
This has the reasonable meaning that "IF for all values of x the truth-values of the functions φ and ψ of x are [logically] equivalent, THEN the function ƒ of a given φẑ and ƒ of ψẑ are [logically]
equivalent." PM asserts this is "obvious":
"This is obvious, since φ can only occur in ƒ(φẑ) by the substitution of values of φ for p, q, r, ... in a [logical-] function, and, if φx ≡ ψx, the substitution of φx for p in a [logical-] function
gives the same truth-value to the truth-function as the substitution of ψx. Consequently there is no longer any reason to distinguish between functions classes, for we have, in virtue of the above,
φx ≡[x] ψx .⊃. (x). φẑ = . ψẑ".Observe the change to the equality "=" sign on the right. PM goes on to state that will continue to hang onto the notation "ẑ(φz)", but this is merely equivalent to φẑ,
and this is a class. (all quotes: PM 1962:xxxix).
Consistency and criticisms
According to Carnap's "Logicist Foundations of Mathematics", Russell wanted a theory that could plausibly be said to derive all of mathematics from purely logical axioms. However, Principia
Mathematica required, in addition to the basic axioms of type theory, three further axioms that seemed to not be true as mere matters of logic, namely the axiom of infinity, the axiom of choice, and
the axiom of reducibility. Since the first two were existential axioms, Russell phrased mathematical statements depending on them as conditionals. But reducibility was required to be sure that the
formal statements even properly express statements of real analysis, so that statements depending on it could not be reformulated as conditionals. Frank Ramsey tried to argue that Russell's
ramification of the theory of types was unnecessary, so that reducibility could be removed, but these arguments seemed inconclusive.
Beyond the status of the axioms as logical truths, one can ask the following questions about any system such as PM:
• whether a contradiction could be derived from the axioms (the question of inconsistency), and
• whether there exists a mathematical statement which could neither be proven nor disproven in the system (the question of completeness).
Propositional logic itself was known to be consistent, but the same had not been established for Principias axioms of set theory. (See Hilbert's second problem.) Russell and Whitehead suspected that
the system in PM is incomplete: for example, they pointed out that it does not seem powerful enough to show that the cardinal ℵ[ω] exists. However, one can ask if some recursively axiomatizable
extension of it is complete and consistent.
Gödel 1930, 1931
In 1930, Gödel's completeness theorem showed that first-order predicate logic itself was complete in a much weaker sense—that is, any sentence that is unprovable from a given set of axioms must
actually be false in some model of the axioms. However, this is not the stronger sense of completeness desired for Principia Mathematica, since a given system of axioms (such as those of Principia
Mathematica) may have many models, in some of which a given statement is true and in others of which that statement is false, so that the statement is left undecided by the axioms.
Gödel's incompleteness theorems cast unexpected light on these two related questions.
Gödel's first incompleteness theorem showed that no recursive extension of Principia could be both consistent and complete for arithmetic statements. (As mentioned above, Principia itself was already
known to be incomplete for some non-arithmetic statements.) According to the theorem, within every sufficiently powerful recursive logical system (such as Principia), there exists a statement G that
essentially reads, "The statement G cannot be proved." Such a statement is a sort of Catch-22: if G is provable, then it is false, and the system is therefore inconsistent; and if G is not provable,
then it is true, and the system is therefore incomplete.
Gödel's second incompleteness theorem (1931) shows that no formal system extending basic arithmetic can be used to prove its own consistency. Thus, the statement "there are no contradictions in the
Principia system" cannot be proven in the Principia system unless there are contradictions in the system (in which case it can be proven both true and false).
Wittgenstein 1919, 1939
By the second edition of PM, Russell had removed his axiom of reducibility to a new axiom (although he does not state it as such). Gödel 1944:126 describes it this way:
This new proposal resulted in a dire outcome. An "extensional stance" and restriction to a second-order predicate logic means that a propositional function extended to all individuals such as "All
'x' are blue" now has to list all of the 'x' that satisfy (are true in) the proposition, listing them in a possibly infinite conjunction: e.g. x[1] ∧ x[2] ∧ . . . ∧ x[n] ∧ . . .. Ironically, this
change came about as the result of criticism from Ludwig Wittgenstein in his 1919 Tractatus Logico-Philosophicus. As described by Russell in the Introduction to the Second Edition of PM:In other
words, the fact that an infinite list cannot realistically be specified means that the concept of "number" in the infinite sense (i.e. the continuum) cannot be described by the new theory proposed in
PM Second Edition.
Wittgenstein in his Lectures on the Foundations of Mathematics, Cambridge 1939 criticised Principia on various grounds, such as:
• It purports to reveal the fundamental basis for arithmetic. However, it is our everyday arithmetical practices such as counting which are fundamental; for if a persistent discrepancy arose
between counting and Principia, this would be treated as evidence of an error in Principia (e.g., that Principia did not characterise numbers or addition correctly), not as evidence of an error
in everyday counting.
• The calculating methods in Principia can only be used in practice with very small numbers. To calculate using large numbers (e.g., billions), the formulae would become too long, and some
short-cut method would have to be used, which would no doubt rely on everyday techniques such as counting (or else on non-fundamental and hence questionable methods such as induction). So again
Principia depends on everyday techniques, not vice versa.
Wittgenstein did, however, concede that Principia may nonetheless make some aspects of everyday arithmetic clearer.
Gödel 1944
Gödel offered a "critical but sympathetic discussion of the logicistic order of ideas" in his 1944 article "Russell's Mathematical Logic". He wrote:
Part I Mathematical logic. Volume I ✱1 to ✱43
This section describes the propositional and predicate calculus, and gives the basic properties of classes, relations, and types.
Part II Prolegomena to cardinal arithmetic. Volume I ✱50 to ✱97
This part covers various properties of relations, especially those needed for cardinal arithmetic.
Part III Cardinal arithmetic. Volume II ✱100 to ✱126
This covers the definition and basic properties of cardinals. A cardinal is defined to be an equivalence class of similar classes (as opposed to ZFC, where a cardinal is a special sort of von Neumann
ordinal). Each type has its own collection of cardinals associated with it, and there is a considerable amount of bookkeeping necessary for comparing cardinals of different types. PM define addition,
multiplication and exponentiation of cardinals, and compare different definitions of finite and infinite cardinals. ✱120.03 is the Axiom of infinity.
Part IV Relation-arithmetic. Volume II ✱150 to ✱186
A "relation-number" is an equivalence class of isomorphic relations. PM defines analogues of addition, multiplication, and exponentiation for arbitrary relations. The addition and multiplication is
similar to the usual definition of addition and multiplication of ordinals in ZFC, though the definition of exponentiation of relations in PM is not equivalent to the usual one used in ZFC.
Part V Series. Volume II ✱200 to ✱234 and volume III ✱250 to ✱276
This covers series, which is PM's term for what is now called a totally ordered set. In particular it covers complete series, continuous functions between series with the order topology (though of
course they do not use this terminology), well-ordered series, and series without "gaps" (those with a member strictly between any two given members).
Part VI Quantity. Volume III ✱300 to ✱375
This section constructs the ring of integers, the fields of rational and real numbers, and "vector-families", which are related to what are now called torsors over abelian groups.
Comparison with set theory
This section compares the system in PM with the usual mathematical foundations of ZFC. The system of PM is roughly comparable in strength with Zermelo set theory (or more precisely a version of it
where the axiom of separation has all quantifiers bounded).
• The system of propositional logic and predicate calculus in PM is essentially the same as that used now, except that the notation and terminology has changed.
• The most obvious difference between PM and set theory is that in PM all objects belong to one of a number of disjoint types. This means that everything gets duplicated for each (infinite) type:
for example, each type has its own ordinals, cardinals, real numbers, and so on. This results in a lot of bookkeeping to relate the various types with each other.
• In ZFC functions are normally coded as sets of ordered pairs. In PM functions are treated rather differently. First of all, "function" means "propositional function", something taking values true
or false. Second, functions are not determined by their values: it is possible to have several different functions all taking the same values (for example, one might regard 2x+2 and 2(x+1) as
different functions on grounds that the computer programs for evaluating them are different). The functions in ZFC given by sets of ordered pairs correspond to what PM call "matrices", and the
more general functions in PM are coded by quantifying over some variables. In particular PM distinguishes between functions defined using quantification and functions not defined using
quantification, whereas ZFC does not make this distinction.
• PM has no analogue of the axiom of replacement, though this is of little practical importance as this axiom is used very little in mathematics outside set theory.
• PM emphasizes relations as a fundamental concept, whereas in modern mathematical practice it is functions rather than relations that are treated as more fundamental; for example, category theory
emphasizes morphisms or functions rather than relations. (However, there is an analogue of categories called allegories that models relations rather than functions, and is quite similar to the
type system of PM.)
• In PM, cardinals are defined as classes of similar classes, whereas in ZFC cardinals are special ordinals. In PM there is a different collection of cardinals for each type with some complicated
machinery for moving cardinals between types, whereas in ZFC there is only 1 sort of cardinal. Since PM does not have any equivalent of the axiom of replacement, it is unable to prove the
existence of cardinals greater than ℵ[ω].
• In PM ordinals are treated as equivalence classes of well-ordered sets, and as with cardinals there is a different collection of ordinals for each type. In ZFC there is only one collection of
ordinals, usually defined as von Neumann ordinals. One strange quirk of PM is that they do not have an ordinal corresponding to 1, which causes numerous unnecessary complications in their
theorems. The definition of ordinal exponentiation α^β in PM is not equivalent to the usual definition in ZFC and has some rather undesirable properties: for example, it is not continuous in β
and is not well ordered (so is not even an ordinal).
• The constructions of the integers, rationals and real numbers in ZFC have been streamlined considerably over time since the constructions in PM.
Differences between editions
Apart from corrections of misprints, the main text of PM is unchanged between the first and second editions. The main text in Volumes 1 and 2 was reset, so that it occupies fewer pages in each. In
the second edition, Volume 3 was not reset, being photographically reprinted with the same page numbering; corrections were still made. The total number of pages (excluding the endpapers) in the
first edition is 1,996; in the second, 2,000. Volume 1 has five new additions:
• A 54-page introduction by Russell describing the changes they would have made had they had more time and energy. The main change he suggests is the removal of the controversial axiom of
reducibility, though he admits that he knows no satisfactory substitute for it. He also seems more favorable to the idea that a function should be determined by its values (as is usual in modern
mathematical practice).
• Appendix A, numbered as *8, 15 pages, about the Sheffer stroke.
• Appendix B, numbered as *89, discussing induction without the axiom of reducibility.
• Appendix C, 8 pages, discussing propositional functions.
• An 8-page list of definitions at the end, giving a much-needed index to the 500 or so notations used.
In 1962, Cambridge University Press published a shortened paperback edition containing parts of the second edition of Volume 1: the new introduction (and the old), the main text up to *56, and
Appendices A and C..
The first edition was reprinted in 2009 by Merchant Books,,, .
Andrew D. Irvine says that PM sparked interest in symbolic logic and advanced the subject by popularizing it; it showcased the powers and capacities of symbolic logic; and it showed how advances in
philosophy of mathematics and symbolic logic could go hand-in-hand with tremendous fruitfulness.^[24] PM was in part brought about by an interest in logicism, the view on which all mathematical
truths are logical truths. Though flawed, PM would be influential in several later advances in meta-logic, including Gödel's incompleteness theorems.
The logical notation in PM was not widely adopted, possibly because its foundations are often considered a form of Zermelo–Fraenkel set theory.
Scholarly, historical, and philosophical interest in PM is great and ongoing, and mathematicians continue to work with PM, whether for the historical reason of understanding the text or its authors,
or for furthering insight into the formalizations of math and logic.
The Modern Library placed PM 23rd in their list of the top 100 English-language nonfiction books of the twentieth century.^[25]
See also
• Book: Enderton, Herbert B. . Herbert Enderton . 2001 . 1972 . A Mathematical Introduction to Logic . 2nd . . San Diego, California . 0-12-238452-0.
• Book: Gödel, Kurt . Kurt Gödel . 1944 . Russell's Mathematical Logic . 123–153 . Schilpp . Paul Arthur . Paul Arthur Schilpp . The Philosophy of Bertrand Russell . 1st . . Chicago . The Library
of Living Philosophers . 5 . 2007378 . 44006786 . 6467049M.
• Book: Gödel, Kurt . 1990 . Feferman . Solomon . Solomon Feferman . etal . Collected Works, Volume II, Publications 1938–1974 . . New York . 0-19-503972-6.
• Book: Grattan-Guinness, Ivor . Ivor Grattan-Guinness . 2000 . The Search for Mathematical Roots 1870–1940 . . Princeton, New Jersey . 0-691-05857-1.
• Book: Hardy, G. H. . G. H. Hardy . 2004 . 1940 . A Mathematician's Apology . A Mathematician's Apology . . Cambridge . 978-0-521-42706-7.
• Book: Kleene, Stephen Cole . Stephen Cole Kleene . 1952 . Introduction to Metamathematics . 6th reprint . . Amsterdam, New York . 9296141 . 53001848.
• Book: Littlewood, J. E. . John Edensor Littlewood . 1986 . Bollobás . Béla . Béla Bollobás . Littlewood's Miscellany . A Mathematician's Miscellany . . Cambridge . 0-521-33058-0 . 0872858.
• Book: van Heijenoort, Jean . Jean van Heijenoort . 1967 . From Frege to Gödel: A Source book in Mathematical Logic, 1879–1931 . 3rd printing . . Cambridge, Massachusetts . 0-674-32449-8.
• Book: Weber . Michel . Michel Weber . Desmond . William Jr. . 2008 . Handbook of Whiteheadian Process Thought, Volume 1 . Heusenstamm . . 978-3-938793-92-3 . September 13, 2023 . Academia.edu.
• Book: Wittgenstein, Ludwig . Ludwig Wittgenstein . 2009 . Major Works: Selected Philosophical Writings . . New York . 978-0-06-155024-9.
External links
Notes and References
1. Book: Principia Mathematica. Whitehead. Alfred North . Bertrand . Russell. Cambridge University Press. 1963. Cambridge. 1.
2. Web site: Principia Mathematica (Stanford Encyclopedia of Philosophy). Irvine. Andrew D.. Andrew David Irvine. 1 May 2003. Metaphysics Research Lab, CSLI, Stanford University. 5 August 2009.
3. Web site: The Modern Library's Top 100 Nonfiction Books of the Century. 30 April 1999. The New York Times Company. 5 August 2009.
4. In his section 8.5.4 Groping towards metalogic Grattan-Guinness 2000:454ff discusses the American logicians' critical reception of the second edition of PM. For instance Sheffer "puzzled that '
In order to give an account of logic, we must presuppose and employ logic ' " (p. 452). And Bernstein ended his 1926 review with the comment that "This distinction between the propositional logic
as a mathematical system and as a language must be made, if serious errors are to be avoided; this distinction the Principia does not make" (p. 454).
5. Book: Linsky, Bernard. The Stanford Encyclopedia of Philosophy. Edward N.. Zalta. 2018. Metaphysics Research Lab, Stanford University. 1 May 2018. Stanford Encyclopedia of Philosophy.
6. See the ten postulates of Huntington, in particular postulates IIa and IIb at PM 1962:205 and discussion at p. 206.
7. The "⊂" sign has a dot inside it, and the intersection sign "∩" has a dot above it; these are not available in the "Arial Unicode MS" font.
8. Wiener 1914 "A simplification of the logic of relations" (van Heijenoort 1967:224ff) disposed of the second of these when he showed how to reduce the theory of relations to that of classes
9. Web site: Principia Mathematica (Stanford Encyclopedia of Philosophy). Irvine. Andrew D.. Andrew David Irvine. 1 May 2003. Metaphysics Research Lab, CSLI, Stanford University. 5 August 2009.
10. Web site: The Modern Library's Top 100 Nonfiction Books of the Century. 30 April 1999. The New York Times Company. 5 August 2009.
|
{"url":"http://everything.explained.today/Principia_Mathematica/","timestamp":"2024-11-11T20:32:10Z","content_type":"text/html","content_length":"82789","record_id":"<urn:uuid:966b1097-2a40-476d-9d67-70b2fe614e33>","cc-path":"CC-MAIN-2024-46/segments/1730477028239.20/warc/CC-MAIN-20241111190758-20241111220758-00687.warc.gz"}
|
MyOpenMath Assessment
Try Another Version of This Question
Entering Numerical Answers
This question asks for a number answer. Acceptable answers include whole numbers, integers (negative numbers), and decimal values. (These questions will normally require you to do calculations by
hand or with a computer before you enter a final answer.)
In special cases, you may need to enter DNE for "Does not exist", oo for infinity, or -oo for negative infinity.
If your answer is not an exact value, you'll want to enter at least 3 decimal places unless the problem specifies otherwise.
Try it out:
Enter the number 84.8493 below exactly (no rounding)
Enter the number 84.8493 rounded to the nearest hundredth (two decimal places)
Enter the result of `84.8493 divide 0`
|
{"url":"https://www.myopenmath.com/multiembedq.php?id=57&theme=oea&iframe_resize_id=mom1&graphdisp=0","timestamp":"2024-11-14T10:17:35Z","content_type":"text/html","content_length":"21719","record_id":"<urn:uuid:5602c591-29e0-4993-b1fd-cb7407712c1f>","cc-path":"CC-MAIN-2024-46/segments/1730477028558.0/warc/CC-MAIN-20241114094851-20241114124851-00173.warc.gz"}
|
NPV and ARR doubt in ACCA MA Revision Mock Exam
Viewing 4 posts - 1 through 4 (of 4 total)
• Author
• October 16, 2019 at 10:31 am #549731
Good Day Sir!
I have attempted the Revision Mock Exam and am not able to calculate the NPV correctly in Section B Question 1 (a) of the exam.
I calculated using the amount $90000 ($120000-$30000) each year at 10% discounting factor for 5 years and $20000 scrap value at 10% discounting factor in the 5th year and got the answer $53520
NPV (which after the results I found to be wrong) and it is showing the right answer to be $53610 NPV. Please explain the correct method of solving the above question.
And also please tell me how to calculate ARR in the same question (b).
Thank you
October 16, 2019 at 4:29 pm #549776
For 1(a) it looks like the difference is purely a rounding difference (did you use the discount tables given, or calculate the factors yourself?). This will not be a problem in the exam because
to avoid rounding problems, questions ask for the answer to (for example) the nearest thousand.
For question 1(b) see the following link
October 16, 2019 at 5:48 pm #549793
I used the given discounting tables only but I calculated the PV separately for each year.
Now I understand both parts but I still have one doubt in 1(a) why was annuity table used instead of present value table.
Thank you
October 17, 2019 at 7:57 am #549840
It is quicker to use the annuity factor because it is an equal cash flow each year.
It doesn’t matter if you discount each year separately (it gives a rounding difference but that doesn’t matter in the exam), but it does take longer and one of the biggest problems in the exam is
the time pressure.
• Author
Viewing 4 posts - 1 through 4 (of 4 total)
• You must be logged in to reply to this topic.
|
{"url":"https://opentuition.com/topic/npv-and-arr-doubt-in-acca-ma-revision-mock-exam/","timestamp":"2024-11-08T02:26:48Z","content_type":"text/html","content_length":"85355","record_id":"<urn:uuid:f0902d71-f71c-4659-b4df-cb9547b0386d>","cc-path":"CC-MAIN-2024-46/segments/1730477028019.71/warc/CC-MAIN-20241108003811-20241108033811-00657.warc.gz"}
|
CS Fall 2022 Junior
Computer Science - Junior
COURSE #: COMP 3042
Course Description
This course introduces mathematical modeling of computational problems. It covers the common algorithms, algorithmic paradigms, design of algorithms used to solve these problems. The course
emphasizes the relationship between algorithms and programming and introduces basic performance measures and analysis techniques for these problems. It also covers the time complexity and space
complexity of different algorithms in order to find the best algorithm having less time and space complexity for different problems.
Course Learning Outcomes
By the completion of the course, the students should be able to
• Identify the key characteristics of a problem
• Analyze the suitability of a specific algorithm design technique for a problem.
• Apply different design techniques to design an algorithm
• Explain different time analysis techniques and notations of algorithms
• Analyze the time and space complexity of different algorithms
• Compare different algorithms to select a best solution for a given problem.
Course Assessments and Grading
Item Weight
Attendance & Activities 10%
Assignment (5 assignments) 15%
Quizzes (10 quizzes) 25%
Midterm exam (1 midterm exam) 20%
Final exam (1 final exam) 30%
COURSE #: COMP 3041
Course Description
This course teaches the general theory, concepts, and techniques related to the theory of automata. Practical examples related to programming languages are emphasized. Students will have the
opportunity to utilize theoretical aspects of automata theory by performing a medium-scale design project. Topics include Finite Automata, Transition Graphs, Nondeterminism, Finite Automata with
Output, Context-Free Grammars, Regular Grammars, Chomsky Normal Form, Pushdown Automata, Context-Free Languages, Non-Context-Free Languages, Parsing, and Turing Machines.
Course Learning Outcomes
By the completion of the course, the students should be able to:
• Use regular expressions, recursive definitions, finite automata, and transition graphs to understand the concept of formal languages.
• Apply different mechanisms to convert regular expressions to finite automata
• Use Different rules to construct context-free grammar for regular and non-regular languages.
• Apply Chomsky’s normal technique to remove ambiguity from a context-free grammar
• Construct a pushdown automaton and a Turing machine for a computer language.
Course Assessments and Grading
Item Weight
Attendance & Activities 10%
Assignment (5 assignments) 15%
Quizzes (10 quizzes) 25%
Midterm exam (1 midterm exam) 20%
Final exam (1 final exam) 30%
COURSE #: COMP 3021
Course Description
This course focuses on the basic architecture of computer systems including fundamental concepts such as components of the processor, interfacing with memory and I/O devices, organization of
peripherals, and machine-level operations. The course presents detailed deliberation on various system design considerations along with associated challenges commonly employed in computer
architecture such as pipelining, branch prediction, caching, etc., This course provides the students with an understanding of the various levels of abstraction in computer architecture, with emphasis
on instruction set level and register transfer level through practical examples.
Course Learning Outcomes
Upon the successful completion of this course, students will be able to:
• Describe the key components of the computer system along with their functionalities and limitations
• Explain the internal working of processor underneath the software layer and how decisions made in hardware affect the software/programmer
• Examine Instruction Set Architecture (ISA) designs and associated trade-offs
• Analyze factors effecting CPU performance e.g., pipelining and instruction-level parallelism
• Explain the I/O subsystems and memory modules of the computer
• Evaluate design and optimization decisions across the boundaries of different layers and system components
Course Assessments and Grading
Item Weight
Class participation and attendance 10%
Quiz activities 15%
Assignments 15%
Mid exam 30%
Final exam 30%
COURSE #: COMP 3071
Course Description
Artificial intelligence (AI) is a research field that studies how to realize the intelligent human behaviors on a computer. The ultimate goal of AI is to make a computer that can learn, plan, and
solve problems autonomously. In this course students will learn the basic methodologies for the design of artificial agents in complex environments. This course aims to expose students to the
fundamental concepts and techniques that enable them to build smart applications including search strategies, agents, machine learning, planning, knowledge representation, reasoning, information
retrieval and natural language processing.
Course Learning Outcomes
By completion of the course the students should be able to:
• Build an appropriate agent architecture, for a given problem to be solved by an intelligent agent.
• Understand an uninformed/informed search algorithm to solve a given search/optimization problem.
• Apply forward/backward planning algorithms to solve the planning problem.
• Apply resolution/inference to a set of logic statements available in a knowledge base to answer a query.
• Apply simple machine learning algorithms for classifying a set of data.
Course Assessment and Grading
Item Weight (%)
Mid Term exam 20
Final exam 30
Quizzes 15
Homework Assignments 20
Group Project 15
COURSE #: DMNS 3031
Course Description
This course is an introduction to statistics and probability. It is designed to equip students with understanding of foundations of statistics and probability and focuses on using modern statistical
packages in examining relevant applications. The course is a prerequisite for advanced statistics.
Learning Outcomes
At the end of this course, students should be able to:
• Define data for different types and scales of measurements.
• Identify descriptive statistics from inferential statistics
• Define the role of descriptive statistics and inferential statistics in quantitative analyses.
• Compute descriptive statistics for a dataset
• Create appropriate visualizations for different types of data using a statistical package such as R, Excel etc.
• Describe types of random variables, probability distributions and their properties.
• Identify and apply appropriate statistical tests to make valid generalizations about a population based on sample data.
• Interpret the results of statistical tests and outputs from a statistical programming package (R/Excel) to draw valid conclusions and communicate them orally and verbally.
Course Assessments and Grading
Item Weight
And 20%
Project 15%
Class Participation 5%
Midterm Exam 30%
Final exam 30%
|
{"url":"https://ucentralasia.org/schools/school-of-arts-and-sciences/course-catalogues/computer-science/cs-fall-2022-catalogues/cs-fall-2022-junior","timestamp":"2024-11-05T16:45:43Z","content_type":"text/html","content_length":"42497","record_id":"<urn:uuid:b3452452-1284-476b-abbd-9f4c4ea0c461>","cc-path":"CC-MAIN-2024-46/segments/1730477027884.62/warc/CC-MAIN-20241105145721-20241105175721-00459.warc.gz"}
|
What's Happening to Global Inequality? Maybe Not What You Think
This blog appeared on VoxEU
Rising inequality within major economies of the world has now been well documented. Recent Vox columns include, for example, Dorn and Levell (2022), Lansing and Markiewicz (2018), Ravallion and Chen
(2021), and Kanbur (2020). But what is happening to global inequality, meaning inequality across all citizens of the world?
Leaving to one side the COVID crisis, which we will come to shortly, the broad consensus is that over the last three decades global inequality has fallen, despite the rise in inequality within large
countries such as the US and China. This is discussed, for example, in the work of Anand and Segal (2005), Ravallion (2014), Bourguignon (2015), Lakner and Milanovic (2016), Niño-Zarazúa et al.
(2016), and World Bank (2016). World Bank (2016), based on an update of Lakner-Milanovic, show that the Gini fell from 69.7% in 1988 to 62.5% in 2013. There is also a consensus on the source of this
decline – the fast growth of per capita income in middle income countries such China and India, which has reduced rapidly the between-country component of global inequality, even as within-country
inequality has risen in many countries.
The implications of these contrasting perspectives on inequality are not without importance in the policy domain. Thus, Rogoff (2014) argues: “The same machine that has increased inequality in rich
countries has levelled the global playing field globally for billions.” This is right since global inequality has fallen over the last three decades. However, in our recent work, we argue that the
undoubted decline in global inequality over the last decades has spurred a ‘sunshine’ narrative of falling global inequality that has been rather oversold, in the sense that it is likely to be
temporary. We argue that the decline in global inequality will reverse in the coming years due to a turnaround in the between-country component of inequality. We find there is a potentially startling
global inequality ‘boomerang’, possibly in the mid-to-late 2020s, which would have happened even if there were no pandemic, and that the pandemic is likely to bring forward the global inequality
A new type of Kuznets curve
The famous Kuznets ‘inverse-U’ curve traces inequality as population within a country moves sharply from a low-income rural sector to a high-income urban sector. In this setting it is shown that
overall inequality will first increase and then decrease. The empirical validity of the Kuznets inverse-U has been much debated (Anand and Kanbur 1993). However, in the global setting of the last
three decades the inequality issue is not so much of population movements across countries, but rather of dramatic shifts in relative per capita incomes – which determine the between country
component of global inequality.
Intuitively, think of the world as composed of three countries: low per capita income ‘Africa’, middle per capita income ‘China’, and high per capita income ‘US’. And consider what happens to between
country inequality as China moves, in relative terms, from being close to Africa to becoming close to the US. This move increases the gap between China and Africa while reducing it between China and
the US. These have opposite effects on global inequality. But since the gap between China and the US is much bigger to start with, the effect of reduction in this gap dominates and global inequality
falls. However, at the other end of the process, when China is close to the US, the consequences of further closing this gap are to increase inequality, and with the same reasoning.
There is thus a new type of Kuznets curve, related not to population movements but to relative growths of per capita income. It is a U-curve, not an inverse-U curve. If the middle-income country
grows relative to the low-income and high-income country, global inequality will first decrease and then increase. This theoretical possibility is intuited in Ravallion (2014) and Bourguignon (2015).
Our work formalises the result and derives the location of the turning point for a specific measure of inequality – the mean log deviation (MLD).
So much for the intuition and the theoretical possibility. But what about the empirics? When could the turning point happen? Our work, which takes into account the effects of the pandemic, shows that
this could be as soon as the mid-to-end of this decade.
The ‘ten cents’ database and the past
In order to analyse changes in global income inequality we need household survey data that allows for a global interpersonal comparison of incomes. To this end, our paper exploits what we have termed
as the ‘ten cents database’, which has been built from the World Bank’s tool of harmonised household income and consumption surveys (Arayavechkit et al. 2021). This tool contains household income and
consumption data for between 156 and 162 countries each year over the period 1981–2019, which together cover about 96% of the world’s population. Kanbur et al. (2022) provide further detail, which
also explains the nomenclature of ‘ten cents data base’.
Our first task is to show that our ‘ten cents database’ produces results consistent with available evidence on the past (e.g. Lakner and Milanovic 2016, Milanovic 2016, World Bank 2016). Thus, the
computations from the reconstructed income distributions reveal that global income inequality, as measured by either the Gini coefficient or the mean log deviation (MLD), has been falling markedly
and steadily since the end of the 1990s and up to 2015, with a relative stagnation onwards to 2019 (Figure 1). This gives us confidence in our data set as the basis for projections of inequality, to
which we now turn.
Figure 1. Evolution of global income inequality, 1981–2019
Source: Authors’ calculations based on country-year per capita income or consumption distributions reconstructed from the World Bank’s PovcalNet online tool (March 2021 update).
The future of global inequality to 2040
Projecting forward from 2019 requires us to take account of the pandemic. Following the Lakner et al. (2020) method, we project forward the global distribution of income (or consumption) in 2019 by
applying a pass-through rate of 85% of the country’s GDP per capita growth rate between that year and 2020.
Then, for the period 2021–40, the computation of inequality indices results after each country’s income is extrapolated following the approach of Prydz et al. (2019). That is, each income in the
distribution is multiplied by a factor that represents the corresponding country’s annual growth rate. The analysis accounts for demographic changes by assuming that the population share at each
income level grows yearly at that country’s population growth rate projected by the UN World Population Prospects for the period 2020–40.
We project forward with two ad-hoc growth scenarios. First, an optimistic, return to pre-pandemic long-run growth scenario 1, in the spirit of Pritchett and Summers’ (2014) argument on ‘regression to
the mean’, in which each country’s incomes will grow at the per capita annual average rate observed over 1990–2019. Second, a vaccination-driven post-pandemic growth scenario 2, in which each
country’s incomes will grow at a rate that depends on each country’s share of population fully vaccinated (see discussion in Deb et al. 2021, UNDP 2022a, 2022b, and on the COVID vaccination data, see
Mathieu et al. 2021). The inequality projections are shown in Figure 2.
Figure 2. Evolution of global income inequality under different assumptions, 1981–2040
Note: Vertical lines delimit the change in income inequality between 2019 and 2020. The scenario 1 refers to the return to pre-pandemic long-run growth path in which it is assumed that each country’s
income bins will grow at the per capita annual average rate observed over 1990–2019. The scenario 2 refers to the vaccination-driven post-pandemic growth path in which it is assumed that each
country’s income bins will grow at a rate conditional on each country’s share of fully vaccinated people.
Source: Authors’ calculations based on country-year per capita income or consumption distributions reconstructed from the World Bank’s PovcalNet online tool (March 2021 update).
What do we find? First, between 2019 and 2020, global inequality exhibits a rise. This inequality uptick is consistent with the result reported by Yonzan et al. (2021). It is also consistent with the
finding by Deaton (2021) for the concept of world inequality, i.e. that in which each individual in the world is assigned their corresponding country’s GDP per capita.
Second, the estimates after 2020 show an unambiguous feature: there will be a reversal, or ‘boomerang’, in the recent declining global inequality trend by the early-2030s. Under scenario 1, the
declining trend recorded since 2000 would reach a minimum by the end-2020s, followed by the emergence of a global income inequality boomerang. If, on the other hand, growth is linked to countries’
share of fully vaccinated population (scenario 2), a startling result emerges: the inequality boomerang would occur around 2024 based on the Gini coefficient, while it may be happening immediately
after the first year of the pandemic based on the MLD.
The above boomerang results emerge from an analysis which assumes that inequality within each country remains unchanged. In other words, that growth for each country is ‘distribution-neutral’. In our
detailed work (Kanbur et al. 2022) we also present inequality projections for growth that is not distributionally neutral within each country, by assuming that the change across deciles between the
last two surveys persists into the future. We find that the boomerang in global inequality emerges sooner, and well before the end of this decade.
Our results point towards the potential of a startling global inequality ‘boomerang’ toward the end-2020s or the early-2030s, driven by the path of between-country inequality, as middle-income
countries approach income levels of high-income countries but by the same token pull away from low-income countries. The global inequality boomerang could occur sooner if the access to COVID-19
vaccines across the developing world—which likely prevents a full economic recovery and growth potential—remains unequal. Projections which further extrapolate recent patterns of distributional
non-neutral growth show that the upward turn in global inequality could come even sooner.
The conclusion is that the ‘sunshine narrative’ of declining global inequality needs to be tempered. An inequality boomerang is quite likely.
CGD blog posts reflect the views of the authors, drawing on prior research and experience in their areas of expertise. CGD is a nonpartisan, independent organization and does not take institutional
|
{"url":"https://www.cgdev.org/blog/whats-happening-global-inequality-maybe-not-what-you-think","timestamp":"2024-11-08T05:16:21Z","content_type":"text/html","content_length":"104421","record_id":"<urn:uuid:374d50b0-439b-47ed-b84b-3d7f4907246a>","cc-path":"CC-MAIN-2024-46/segments/1730477028025.14/warc/CC-MAIN-20241108035242-20241108065242-00155.warc.gz"}
|
How to Do "Round Half Up" In Tensorflow?
To perform "round half up" in TensorFlow, you can use the tf.round() function and add 0.5 before rounding. This method ensures that any number greater than or equal to 0.5 will be rounded up to the
nearest integer, while numbers less than 0.5 will be rounded down. By adding 0.5 before using tf.round(), you can achieve the "round half up" behavior in TensorFlow.
What is the syntax for the round_half_up function in TensorFlow?
The syntax for the tf.round_half_up() function in TensorFlow is as follows:
1 tf.math.round_half_up(x, name=None)
• x is the input tensor.
• name is an optional name for the operation.
This function rounds the input tensor x to the nearest integer, rounding halfway cases away from zero.
How to implement rounding in TensorFlow?
To implement rounding in TensorFlow, you can use the tf.round() function. Here is an example of how to use it:
1 import tensorflow as tf
3 # Define a tensor with values that need to be rounded
4 x = tf.constant([1.2, 2.7, 3.5, 4.8])
6 # Round the tensor
7 rounded_x = tf.round(x)
9 # Start a TensorFlow session and run the rounding operation
10 with tf.Session() as sess:
11 result = sess.run(rounded_x)
12 print(result)
This code snippet creates a tensor with some floating-point values, rounds the values using the tf.round() function, and then runs the operation in a TensorFlow session to get the rounded values.
How to handle rounding for large numbers in TensorFlow?
In TensorFlow, you can use the tf.math.round function to round large numbers to a specified number of decimal places. Here's an example of how you can use this function to round a large number to two
decimal places:
1 import tensorflow as tf
3 # Define a large number
4 large_number = tf.constant(1234.56789)
6 # Round the large number to two decimal places
7 rounded_number = tf.math.round(large_number, 2)
9 # Print the rounded number
10 print(rounded_number)
This will output 1234.57, which is the rounded value of the original large number 1234.56789 to two decimal places.
You can adjust the number of decimal places to round to by changing the second argument of the tf.math.round function.
What is the relationship between rounding and error propagation in numerical computations?
Rounding refers to the process of approximating a number to a certain number of decimal places or significant figures. In numerical computations, rounding can introduce errors into the calculations
because the rounded value may not accurately represent the true value of the number being approximated.
Error propagation refers to the way in which errors in the input values of a numerical computation affect the accuracy of the final result. When rounding is involved in numerical computations, errors
can propagate and accumulate throughout the calculation process, leading to potentially significant inaccuracies in the final result.
Therefore, the relationship between rounding and error propagation in numerical computations is that rounding can introduce errors into the calculations, which can then propagate and affect the
accuracy of the final result. It is important for numerical analysts and scientists to carefully consider the effects of rounding on error propagation in their computations in order to minimize
inaccuracies and ensure the reliability of their results.
How to apply rounding to a tensor in TensorFlow?
In TensorFlow, you can apply rounding to a tensor using the tf.round() function. This function takes a tensor as input and returns a new tensor with the elements rounded to the nearest integer.
Here is an example of how to apply rounding to a tensor in TensorFlow:
1 import tensorflow as tf
3 # Create a tensor
4 x = tf.constant([1.1, 2.5, 3.9, 4.6])
6 # Apply rounding to the tensor
7 rounded_tensor = tf.round(x)
9 # Create a TensorFlow session
10 with tf.Session() as sess:
11 # Run the session to evaluate the rounded tensor
12 result = sess.run(rounded_tensor)
13 print(result)
In this example, the tf.constant() function is used to create a tensor with some floating-point values. The tf.round() function is then used to apply rounding to the tensor. Finally, a TensorFlow
session is created to run the computation and evaluate the rounded tensor. The result is printed out, showing the tensor with the elements rounded to the nearest integer.
|
{"url":"https://studentprojectcode.com/blog/how-to-do-round-half-up-in-tensorflow","timestamp":"2024-11-03T13:06:25Z","content_type":"text/html","content_length":"358952","record_id":"<urn:uuid:4f13e68f-f7d1-40f2-bbdc-17bcff680fec>","cc-path":"CC-MAIN-2024-46/segments/1730477027776.9/warc/CC-MAIN-20241103114942-20241103144942-00199.warc.gz"}
|
3. DOCENTE
3.7. Geogebra question type
Dal sito
Question Type GeoGebra
Please be aware that this plugin is in beta state.
The GeoGebra question type plugin allows teachers to set up questions which can be solved and automatically checked using GeoGebra.
The automated check is currently limited to check against one or more boolean variables in GeoGebra.
There is a german introductory video available at:
Github repo -> See Moodle Git for Administrators for install.
Usage Teacher
Preparing a Worksheet
• Create a worksheet where there is at least one boolean variable which indicates whether the students solution is correct
• Upload the question to GeoGebraTube
Adding a Question in Moodle
• As a teacher, create a GeoGebra question in Moodle
• Supply the URL of the GeoGebraTube worksheet or choose the material using the file picker (only works with GeoGebraTube repository installed)
• Load the Applet. Variables which could be randomized or can be used for checking correctness, will be extracted automatically
• Choose the fraction which goes with the boolean variable
• Save the question and use it for your quiz
Example: Find a Point in the Coordinate System
Finding a point in the coordinate system for students who know how to deal with natural numbers
In GeoGebra and GeoGebraTube
• Hide the Algebra View, show the grid, show only the positive direction for the axis
• Zoom in, such that the grid has a distance of 0.5
• Create a slider with name a, min 0, max 7 and increment 0.1
• Create a slider with name b, min 0, max 4 and increment 0.1
• Type A = (a,b) in the Input Bar
• Type B = (0,0) in the Input Bar
• Type solved = Distance[A,B]<0.1 in the Input Bar
• Hide Point A and the two sliders
• Resize the window, so you can see the x-axis from 0 to 7 and the y-axis from 0 to 4
• Choose Share from the file menu
• Make sure the applet size was detected correctly and press next. You do not need any informations for the student, since you fill in this in Moodle.
• Fill in the fields with information for other teachers. You should choose Shared with link in the visibility section.
You can also find this file under https://www.geogebra.org/material/show/id/Tz7PugnG
In Moodle
• Go to the Question Bank of your Moodle course
• Choose Create a new question...
• Double click on GeoGebra
• Type in a question name
• Type: Drag the blue point to the point A=({a}/{b}).
• Click Choose a link... in the GeoGebra Applet section
• Find and choose the file you just uploaded to GeoGebraTube
□ You will get a list of your files (public and shared with link) when you are logged in to GeoGebraTube in the same browser instance.
□ Alternatively you can copy and paste the share link which is shown on GeoGebraTube
• Click (Re)Load and show applet - the applet will be shown
• Choose yes in the Are there any variables which should be randomized? drop-down list
• If you're lucky, we found the correct variables, which can be randomized for you, otherwise type a,b in the Variables to be randomized input
• In the Answers section set Variable 1 to solved and the grade to 100%
• Save the question and preview the question. Use the question in a Moodle Quiz.
|
{"url":"https://www.a049.it/m36/mod/book/view.php?id=28&chapterid=19","timestamp":"2024-11-04T07:18:30Z","content_type":"text/html","content_length":"51626","record_id":"<urn:uuid:3146ce6f-669e-4620-988f-6b7433abe165>","cc-path":"CC-MAIN-2024-46/segments/1730477027819.53/warc/CC-MAIN-20241104065437-20241104095437-00742.warc.gz"}
|
World Journal of Mechanics
Vol.2 No.1(2012), Article ID:17690,9 pages DOI:10.4236/wjm.2012.21006
Hard-to-Soft Transition in Radial Buckling of Multi-Concentric Nanocylinders
^1Department of Urban and Environment Engineering, University of Incheon, Incheon, Korea
^2Division of Engineering and Policy for Sustainable Environment, Faculty of Engineering, Hokkaido University, Sapporo, Japan
^3Division of Engineering and Policy for Sustainable Environment, Graduate School of Engineering, Hokkaido University, Sapporo, Japan
^4Division of Applied Physics, Faculty of Engineering, Hokkaido University, Sapporo, Japan
^5Department of Environmental Sciences, University of Yamanashi, Kofu, Japan
Email: ^*tayu@eng.hokudai.ac.jp
Received November 29, 2011; revised January 5, 2012; accepted January 15, 2012
Keywords: Carbon Nanotube; Buckling; Radial Corrugation; High Pressure Phenomenon; Van der Waals Coupling; Multiple Core-Shell Structure; Thin Shell Theory
We investigate the cross-sectional buckling of multi-concentric tubular nanomaterials, which are called multiwalled carbon nanotubes (MWNTs), using an analysis based on thin-shell theory. MWNTs under
hydrostatic pressure experience radial buckling. As a result of this, different buckling modes are obtained depending on the inter-tube separation d as well as the number of constituent tubes N and
the innermost tube diameter. All of the buckling modes are classified into two deformation phases. In the first phase, which corresponds to an elliptic deformation, the radial stiffness increases
rapidly with increasing N. In contrast, the second phase yields wavy, corrugated structures along the circumference for which the radial stiffness declines with increasing N. The hard-to-soft phase
transition in radial buckling is a direct consequence of the core-shell structure of MWNTs. Special attention is devoted to how the variation in d affects the critical tube number N[c], which
separates the two deformation phases observed in N -walled nanotubes, i.e., the elliptic phase for N < N[c] and the corrugated phase for N > N[c]. We demonstrate that a larger d tends to result in a
smaller N[c], which is attributed to the primary role of the interatomic forces between concentric tubes in the hard-to-soft transition during the radial buckling of MWNTs.
1. Introduction
The term “buckling” refers to a deformation through which a pressurized material undergoes a sudden failure and exhibits a large displacement in a direction transverse to the load [1]. A typical
example of buckling occurs when pressing opposite edges of a long, thin elastic beam toward one another. For small loads, the beam is compressed in the axial direction while keeping its linear shape
and the strain energy is proportional to the square of the axial displacement. Beyond a certain critical load, however, it suddenly bends into an arc shape and the strain energy and displacements are
no longer related by a quadratic expression. Besides axial compression, bending and torsion give rise to buckling of elastic objects, where the buckled patterns depend strongly on the geometric and
material parameters.
An interesting class of elastic buckling can be observed in structural pipe-in-pipe cross sections under hydrostatic pressure [2,3]. Pipe-in-pipe (i.e., a pipe inserted inside another pipe)
applications are commonly used in offshore oil and gas production systems in civil engineering. In subsea pipelines in deep water, for instance, buckling resistance to huge external hydrostatic
pressure is a key structural design requirement. Pipe-inpipe systems may be an efficient design solution that meets this strict requirement, because their concentric structures enable the cross
section to withstand high pressure without collapsing.
The above argument regarding macroscopic objects poses a question as to what buckling behavior may be observed in nanometer-scale (10^–^9 m) counterpart objects. In nanomaterial sciences, the
buckling of carbon-based hollow cylinders with nanometric diameters (called carbon nanotubes) has drawn great attention [4]. Extensive studies on carbon nanotube mechanics have been thus far driven
by their exceptional resilience against deformation; that is, the recovery of the original cylindrical shapes of the carbon nanotubes upon unloading, even when subjected to severe loading conditions.
In addition to the excellent strain-relaxation reversibility, carbon nanotubes exhibit high fatigue resistance; therefore, they are a promising medium for the storage of mechanical energy with an
extremely high energy density [5]. Nevertheless, due to their nanometric scales, the similarities and differences in the buckling patterns compared with those of their macroscopic counterparts are
not trivial. This complexity has motivated tremendous efforts toward the analysis of the buckling of carbon nanotubes under diverse loading conditions: axial compression [6-10], radial compression
[11-22], bending [23-28], torsion [29-32], and combinations of these [33].
In this article, we focus our attention on the radial buckling of carbon nanotubes observed under hydrostatic pressure on the order of several hundreds of megapascal. Thin-shell-theory based analysis
on the cross-sectional deformation of nanotubes leads us to the conclusion that the buckled patterns strongly depend on the inter-tube separation
2. What Are “Carbon Nanotubes”?
Carbon nanotubes are one of the most promising nanomaterials, and they consist of layers of graphene sheets that are each a single atom thick (two-dimensional hexagonal lattices of carbon atoms)
rolled up into concentric cylinders [34]. By convention, they are categorized as single-walled nanotubes (SWNTs) or multi-walled nanotubes (MWNTs): the former is made by wrapping one single layer
into one seamless cylinder, while the latter comprise two or more concentric graphitic tubes. The constituent tubes in MWNTs are coupled to one another via the van der Waals (vdW) interaction,
wherein the separation between adjacent concentric tubes is approximately 0.34 nm in equilibrium conditions.
The excellent mechanical properties of carbon nanotubes are characterized by the remarkably high Young’s modulus, which is on the order of terapascal (i.e., several times stiffer than steel), and the
tensile strength, which is as high as tens of gigapascal [33]. These properties are proof that carbon nanotubes are the stiffest and strongest materials on earth. In addition to the marked stiffness,
carbon nanotubes exhibit astounding flexibility when subjected to external hydrostatic pressure. The radial deformation both of SWNTs and MWNTs is an important consequence of this flexibility;
however, the theoretical understanding of the flexibility of MWNTs is still lacking due to their structural complexity.
Emphasis should be placed on the fact that on application of a mechanical deformation, carbon nanotubes show significant changes in their physical and chemical properties [34,35]. Precise knowledge
of their deformation mechanism and available geometry is, therefore, crucial for understanding their structure-property relations and for developing next generation carbon-nanotube-based
3. Formulation
3.1. Continuum Approximation
The aim of this section is to deduce the stable cross-sectional shape of a MWNT under a hydrostatic pressure
The optimal displacements
3.2. Strain-Displacement Relation
Evaluating the functional form of ^1
the following relationship can be obtained:
Hereafter, we assume that
where the following definitions hold true:
3.3. Deformation Energy
We are now ready to derive the explicit form of the deformation energy
in which the component
From Equations (8) and (12) we obtain the following relationship:
which can also be written as follows:
The constant
For quantitative discussions, the values of
^2 Thus, the values of
3.4. Inter-Tube Coupling Energy
The energy associated with the van der Waals (vdW) interaction between adjacent pairs of tubes, designated by
We derive the coefficients
The vdW pressures on the inner and outer tubes of a concentric two-walled tube with radii
In Equation (18),
In the following, we obtain analytical expressions for
where the derivatives in Equation (20) are defined as follows:
Note that
3.5. Pressure-Induced Energy
We finally derive an explicit form of
By substituting Equations (3) and (4) into Equation (23), and by using the periodicity relation
3.6. Critical Pressure Evaluation
This section presents our method for determining the critical pressure
By applying the variational method to
where ^3 because
Substituting these into Equations (25) and (26) leads to the matrix equation
By solving Equation (27) with respect to
where the value of
4. Result and Discussion
4.1. Critical Pressure Curve
Figures 1(a) and (b) show ^4 The increase in
^4Such a decay is also observed for D = 5.0 nm and larger D, in principle, if a sufficiently large N is considered [but omitted in Figure 1(b)].
We emphasize that in Figure 1(a), the softening region (i.e.,
4.2. Sequential Change in Buckling Modes
Figure 2 provides (a) the index
Figure 1. (Color online) Critical pressure curves showing P[c] required to produce radial deformation of N-walled nanotubes with fixed D: (a) D = 3.0 nm, and (b) D = 5.0 nm.
above Figure 2(a) that the deformation mode observed just above
4.3. Hard-to-Soft Transition
Of further interest is that the critical number of tubes
Figure 2. [Upper panel] (a) Stepwise increase in the index n of radial buckling modes. The index n indicates the circumferential wave number of the deformed cross-section. [Bottom panel]
Cross-sectional views of buckled MWNTs under high hydrostatic pressure: (b)-(d) Elliptic deformation mode (n = 2) for N = 5, 10, 20; (e) Radial corrugation mode with n = 8 for N = 25; (f) n = 9 for N
= 35; (g) n = 11 for N = 50.
yields a cusp in the curve of Figure 1(a)]. In contrast, no singularity is observed in the curve of
Figure 3 explains why the singular cusp in the Figure 3 depicts the N-dependence of
Figure 3. Branches of solutions p(N) for the secular equation det(M) = 0 (refer text). The innermost tube diameter is set to be D = 3.0 nm for all curves. For a fixed N, the minimum value of p among
the branches takes a role of the critical pressure p[c] at that N.
sponds toFigure 3 that the cusps in the curves
5. Summary
A thin-shell-theory based analysis has been employed to detect the mechanical hard-to-soft transition relevant to the radial buckling of MWNTs subject to hydrostatic pressure. Various buckled
patterns are found to be available, and the parameters
6. Acknowledgements
The fruitful discussion with S. Ghosh and M. Arroyo on the vdW-interaction formulas is greatly acknowledged. This work was supported by KAKENHI from MEXT, Japan. HS cordially thanks the Inamori
Foundation and the Suhara Memorial Foundation for financial support.
1. D. O. Brush and B. O. Almroth, “Buckling of Bars, Plates, and Shells,” McGraw-Hill, New York, 1975.
2. M. Sato and M. H. Patel, “Exact and Simplified Estimations for Elastic Buckling Pressures of Structural Pipein-Pipe Cross Sections under External Hydrostatic Pressure,” Journal of Marine Science
and Technology, Vol. 12, No. 4, 2007, pp. 251-262. doi:10.1007/s00773-007-0244-y
3. M. Sato, M. H. Patel and F. Trarieux, “Static Displacement and Elastic Buckling Characteristics of Structural Pipe-in-Pipe Cross-Sections,” Structural Engineering and Mechanics, Vol. 30, 2008,
pp. 263-278.
4. H. Shima, “Buckling of Carbon Nanotubes: A State of the Art Review,” Materials, Vol. 5, No. 1, 2012, pp. 47-84. doi:10.3390/ma5010047
5. R. Zhang, Q. Wen, W. Qian, D. Sheng, Q. Zhang and F. Wei, “Superstrong Ultralong Carbon Nanotubes for Mechanical Energy Storage,” Advanced Materials, Vol. 23, No. 30, 2011, pp. 3387-3391.
6. B. I. Yakobson, C. J. Brabec and J. Bernholc, “Nanomechanics of Carbon Tubes: Instabilities beyond Linear Response,” Physical Review Letters, Vol. 76, No. 14, 1996, pp. 2511-2514. doi:10.1103/
7. C. Q. Ru, “Axially Compressed Buckling of a Doublewalled Carbon Nanotube Embedded in an Elastic Medium,” Journal of the Mechanics and Physics of Solids, Vol. 49, No. 6, 2001, pp. 1265-1279.
8. B. Ni, S. B. Sinnott, P. T. Mikulski and J. A. Harrison, “Compression of Carbon Nanotubes Filled with C[60], CH[4], or Ne: Predictions from Molecular Dynamics Simulations,” Physical Review
Letters, Vol. 88, 2002, pp. 205505: 1-205505:4. doi:10.1103/PhysRevLett.88.205505
9. M. J. Buehler, J. Kong and H. J. Gao, “Deformation Mechanism of Very Long Single-Wall Carbon Nanotubes Subject to Compressive Loading,” Journal of Engineering Materials and Technology, Vol. 126,
No. 3, 2004, pp. 245-249. doi:10.1115/1.1751181
10. A. Pantano, M. C. Boyce and D. M. Parks, “Mechanics of Axial Compression of Singleand Multi-Wall Carbon Nanotubes,” Journal of Engineering Materials and Technology, Vol. 126, No. 3, 2004, pp.
279-284. doi:10.1115/1.1752926
11. J. Tang, J. C. Qin, T. Sasaki, M. Yudasaka, A. Matsushita and S. Iijima, “Compressibility and Polygonization of Single-Walled Carbon Nanotubes under Hydrostatic Pressure,” Physical Review
Letters, Vol. 85, No. 9, 2000, pp. 1887-1889. doi:10.1103/PhysRevLett.85.1887
12. A. Pantano, D. M. Parks and M. C. Boyce, “Mechanics of Deformation of Singleand Multi-Wall Carbon Nanotubes,” Journal of the Mechanics and Physics of Solids, Vol. 52, No. 4, 2004, pp. 789-821.
13. J. A. Elliott, L. K. W. Sandler, A. H. Windle, R. J. Young and M. S. P. Shaffer, “Collapse of Single-Wall Carbon Nanotubes Is Diameter Dependent,” Physical Review Letters, Vol. 92, 2004, pp.
095501:1-095501:4. doi:10.1103/PhysRevLett.92.095501
14. H. Shima and M. Sato, “Multiple Radial Corrugations in Multiwall Carbon Nanotubes under Pressure,” Nanotechnology, Vol. 19, 2008, pp. 495705:1-495705:8. doi:10.1088/0957-4484/19/49/495705
15. J. Peng, J. Wu, K. C. Hwang, J. Song and Y. Huang, “Can a Single-Wall Carbon Nanotube Be Modeled as a Thin Shell?” Journal of the Mechanics and Physics of Solids, Vol. 56, No. 6, 2008, pp.
2213-2224. doi:10.1016/j.jmps.2008.01.004
16. H. Shima and M. Sato, “Pressure-Induced Structural Transitions in Multi-Walled Carbon Nanotubes,” Physica Status Solidi (a), Vol. 206, 2009, pp. 2228-2233. doi:10.1002/pssa.200881706
17. M. Sato and H. Shima, “Buckling Characteristics of Multiwalled Carbon Nanotubes under External Pressure,” Interaction and Multiscale Mechanics: An International Journal, Vol. 2, 2009, pp.
18. A. P. M. Barboza, H. Chacham and B. R. A. Neves, “Universal Response of Single-Wall Carbon Nanotubes to Radial Compression,” Physical Review Letters, Vol. 102, 2009, pp. 025501:1-025501:4.
19. H. Shima, M. Sato, K. Iiboshi, S. Ghosh and M. Arroyo, “Diverse Corrugation Pattern in Radially Shrinking Carbon Nanotubes,” Physical Review B, Vol. 82, 2010, pp. 085401:1-085401:7. doi:10.1103/
20. M. Sato, H. Shima and K. Iiboshi, “Core-Tube Morphology of Multiwall Carbon Nanotubes,” International Journal of Modern Physics B, Vol. 24, No. 1-2, 2010, pp. 288- 294. doi:10.1142/
21. X. Huang, W. Liang and S. Zhang, “Radial Corrugations of Multi-Walled Carbon Nanotubes Driven by Inter-Wall Nonbonding Interactions,” Nanoscale Research Letters, Vol. 6, 2011, pp. 53-58.
22. H. Shima, S. Ghosh, M. Arroyo, K. Iiboshi and M. Sato, “Thin-Shell Theory Based Analysis of Radially Pressurized Multiwall Carbon Nanotubes,” Computational Materials Science, Vol. 52, No. 1,
2012, pp. 90-94. doi:10.1016/j.commatsci.2011.04.005
23. S. Iijima, C. Brabec, A. Maiti and J. Bernholc, “Structural Flexibility of Carbon Nanotubes,” Journal of Chemical Physics, Vol. 104, No. 5, 1996, pp. 2089-2092. doi:10.1063/1.470966
24. M. R. Falvo, G. J. Clary, R. M. Taylor II, V. Chi, F. P. Brooks Jr., S. Washburn and R. Superfine, “Bending and Buckling of Carbon Nanotubes under Large Strain,” Nature, Vol. 389, 1997, pp.
582-584. doi:10.1038/39282
25. P. Poncharal, Z. L. Wang, D. Ugarte and W. A. de Heer, “Electrostatic Deflections and Electromechanical Resonances of Carbon Nanotubes,” Science, Vol. 283, No. 5407, 1999, pp. 1513-1516.
26. Y. Shibutani and S. Ogata, “Mechanical Integrity of Carbon Nanotubes for Bending and Torsion,” Modelling and Simulation in Materials Science and Engineering, Vol. 12, No. 4, 2004, pp. 599-610.
27. A. Kutana and K. P. Giapis, “Transient Deformation Regime in Bending of Single-Walled Carbon Nanotubes,” Physical Review Letters, Vol. 97, 2006, pp. pp.245501: 1-245501:4. doi:10.1103/
28. H. K. Yang and X. Wang, “Bending Stability of Multi-Wall Carbon Nanotubes Embedded in an Elastic Medium,” Modelling and Simulation in Materials Science and Engineering, Vol. 14, No. 1, 2006, pp.
99-116. doi:10.1088/0965-0393/14/1/008
29. I. Arias and M. Arroyo, “Size-Dependent Nonlinear Elastic Scaling of Multiwalled Carbon Nanotubes,” Physical Review Letters, Vol. 100, 2008, pp. 085503:1-085503:4. doi:10.1103/
30. Q. Wang, “Torsional Buckling of Double-Walled Carbon Nanotubes,” Carbon, Vol. 46, No. 8, 2008, pp. 1172- 1174. doi:10.1016/j.carbon.2008.03.025
31. M. Arroyo and I. Arias, “Rippling and a Phase-Transforming Mesoscopic Model for Multiwalled Carbon Nanotubes,” Journal of the Mechanics and Physics of Solids, Vol. 56, No. 4, 2008, pp. 1224-1244.
32. B. W. Jeong and S. B. Sinnott, “Unique Buckling Responses of Multi-Walled Carbon Nanotubes Incorporated as Torsion Springs,” Carbon, Vol. 48, No. 6, 2010, pp. 1697-1701. doi:10.1016/
33. H. Shima and M. Sato, “Elastic and Plastic Deformation of Carbon Nanotoubes,” Pan Stanford Publishing, Singapore, 2012.
34. R. Saito, M. S. Dresselhaus and G. Dresselhaus, “Physical Properties of Carbon Nanotubes,” World Scientific Publishing Company, 1998.
35. A. Loiseau, P. Launois. P. Petit, S. Roche and J. -P. Salvetat, “Understanding Carbon Nanotubes: From Basics to Application,” Springer-Verlag, Berlin, 2006.
36. C. Q. Ru, “Column Buckling of Multiwalled Carbon Nanotubes with Interlayer Radial Displacements,” Physical Review B, Vol. 62, 2000, pp. 16962-16967. doi:10.1103/PhysRevB.62.16962
37. C. Y. Wang, C. Q. Ru and A. Mioduchowski, “Axially Compressed Buckling of Pressured Multiwall Carbon Nanotubes,” International Journal of Solids and Structures, Vol. 40, No. 15, 2003, pp.
3893-3911. doi:10.1016/S0020-7683(03)00213-0
38. H. S. Shen, “Postbuckling Prediction of Double-Walled Carbon Nanotubes under Hydrostatic Pressure,” International Journal of Solids and Structures, Vol. 41, No. 9-10, 2004, pp. 2643-2657.
39. X. Q. He, S. Kitipornchai and K. M. Liew, “Buckling Analysis of Multi-Walled Carbon Nanotubes: A Continuum Model Accounting for van der Waals Interaction,” Journal of the Mechanics and Physics of
Solids, Vol. 53, No. 2, 2005, pp. 303-326. doi:10.1016/j.jmps.2004.08.003
40. N. Silvestre, “Length Dependence of Critical Measures in Single-Walled Carbon Nanotubes,” International Journal of Solids and Structures, Vol. 45, No. 18-19, 2008, pp. 4902-4920. doi:10.1016/
41. N. Silvestre, C. M. Wang, Y. Y. Zhang and Y. Xiang, “Sanders Shell Model for Buckling of Single-Walled Carbon Nanotubes with Small Aspect Ratio,” Composite Structures, Vol. 93, No. 7, 2011, pp.
1683-1691. doi:10.1016/j.compstruct.2011.01.004
42. S. S. Gupta, F. G. Bosco and R. C. Batra, “Wall Thickness and Elastic Moduli of Single-Walled Carbon Nanotubes from Frequencies of Axial, Torsional and Inextensional Modes of Vibration,”
Computational Materials Science, Vol. 47, 2010, pp. 1049-1059. doi:10.1016/j.commatsci.2009.12.007
43. K. N. Kudin, G. E. Scuseria and B. I. Yakobson, “C[2]F, BN, and C Nanoshell Elasticity from ab initio Computations,” Physical Review B, Vol. 64, 2001, pp. 235406: 1-235406:10. doi:10.1103/
44. W. B. Lu, B. Liu, J. Wu, J. Xiao, K. C. Hwang, S. Y. Fu and Y. Huang, “Continuum Modeling of van der Waals Interactions between Carbon Nanotube Walls,” Applied Physics Letters, Vol. 94, 2009, pp.
101917:1-101917:3. doi:10.1063/1.3099023
45. L. A. Girifalco, M. Hodak and R. S. Lee, “Carbon Nanotubes, Buckyballs, Ropes, and a Universal Graphitic Potential,” Physical Review B, Vol. 62, No. 19, 2000, pp. 13104-13110. doi:10.1103/
46. K. Koziol, M. Shaffer and A. Windle, “Three-Dimensional Internal Order in Multiwalled Carbon Nanotubes Grown by Chemical Vapor Deposition,” Advanced Materials, Vol. 17, 2005, pp. 760-763.
47. C. Ducati, K. Koziol, S. Friedrichs, T. J. V. Yates, M. S. Shaffer, P. A. Midgley and A. H. Windle, “Crystallographic Order in Multi-Walled Carbon Nanotubes Synthesized in the Presence of
Nitrogen,” Small, Vol. 2, No. 6, 2006, pp. 774-784. doi:10.1002/smll.200500513
^*Corresponding author.
^1Throughout this subsection, the tilde
^2The tube is made out of a monoatomic graphitic layer, and consequently, the notion of a tube thickness becomes elusive.
^3The fact that the sum equals zero determines the functional form of
|
{"url":"https://file.scirp.org/Html/6-4900092_17690.htm","timestamp":"2024-11-12T15:49:04Z","content_type":"application/xhtml+xml","content_length":"137189","record_id":"<urn:uuid:7981904f-5685-4aee-8e63-4b79cfc0068c>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.63/warc/CC-MAIN-20241112145015-20241112175015-00148.warc.gz"}
|
Multiplication Worksheets: Special Series
Squares and binary progression multiplication worksheets. These are common multiplication facts worth memorizing.
Perfect Squares
Perfect Cubes
Binary Sequences up to 256
Squares, Cubes and Powers of Two
Multiplication plays a special role in developing numbers known as perfect squares and perfect cubes. These are values who have as factors precisely three of exactly the same integer. These
multiplicationworksheets introduce these squares and cubes as simple multiplication facts, laying the foundation for understand how to take square roots and cube roots of these same products later.
Another interesting series of numbers is the binary sequence or the binary progression. Binary numbers play an enormously important role in computer science, and they come into play when encoding or
compressing information digitally. These multiplication worksheets deal strictly with multiplicands that are powers of two, and therefore the resulting products are also powers of two. Familiarity
with these values is a useful for young software developers or anybody with an interest in computers or information theory.
|
{"url":"https://www.dadsworksheets.com/worksheets/multiplication-special-series.html","timestamp":"2024-11-08T21:44:53Z","content_type":"text/html","content_length":"98005","record_id":"<urn:uuid:84b71bad-7205-4151-ac9c-f6f5d3f2063f>","cc-path":"CC-MAIN-2024-46/segments/1730477028079.98/warc/CC-MAIN-20241108200128-20241108230128-00653.warc.gz"}
|
5 cm to km (Convert 5 centimeters to kilometers)
First, note that cm is the same as centimeters and km is the same as kilometers. Thus, when you are asking to convert 5 cm to km, you are asking to convert 5 centimeters to kilometers.
A centimeter is smaller than a kilometer. Simply put, cm is smaller than km. In fact, a centimeter is "10 to the power of -5" smaller than a kilometer.
Since a centimeter is 10^-5 smaller than a kilometer, it means that the conversion factor for cm to km is 10^-5. Therefore, you can multiply 5 cm by 10^-5 to get 5 cm converted to km.
Here is the answer with the math showing you how to convert 5 cm to km by multiplying 5 by the conversion factor of 10^-5.
5 x 10^-5
= 0.00005
5 cm
= 0.00005 km cm to km Converter
Need to convert another cm to km? No problem! Submit another measurement of centimeters (cm) that you want to convert to kilometers (km).
6 cm to km
Go here for the next measurement of centimeters (cm) on our list that we have converted to kilometers (km).
As you may have concluded from learning how to convert 5 cm to km above, "5 centimeters to kilometers", "5 cm to km", "5 cm to kilometers", and "5 centimeters to km" are all the same thing.
Privacy Policy
|
{"url":"https://convertermaniacs.com/centimeter-to-kilometer/convert-5-cm-to-km.html","timestamp":"2024-11-10T15:21:13Z","content_type":"text/html","content_length":"6149","record_id":"<urn:uuid:16a985a7-8429-4686-ab2c-b9215c7f63b0>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.60/warc/CC-MAIN-20241110134821-20241110164821-00660.warc.gz"}
|
Comparison of Pearson vs Spearman Correlation Coefficients
Aspect Pearson Correlation Coefficient Spearman Correlation Coefficient
Purpose Measures linear relationships Measures monotonic relationships
Assumptions Variables are normally distributed, linear relationship Variables have monotonic relationship, no assumptions on distribution
Calculation Method Based on covariance and standard deviations Based on ranked data and rank order
Range of Values -1 to 1 -1 to 1
Interpretation Strength and direction of linear relationship Strength and direction of monotonic relationship
Sensitivity to Outliers Sensitive to outliers Less sensitive to outliers
Data Types Appropriate for interval and ratio data Appropriate for ordinal variables and non-normally distributed data
Sample Size The Pearson correlation coefficient isn’t the most efficient choice for small sample This method works well with smaller samples and doesn’t require normality
sizes. assumptions.
Usage Assessing linear associations, parametric tests Assessing monotonic associations, non-parametric tests
What is Pearson Correlation Coefficient?
The Pearson correlation coefficient also known as linear correlation is a statistical measure that quantifies the strength and direction of a linear relationship between two continuous variables. It
ranges from -1 to 1, with values close to -1 indicating a strong negative linear relationship, values close to 1 indicating a strong positive linear relationship, and 0 indicating no linear
What is Spearman Correlation Coefficient?
The Spearman correlation coefficient is a statistical measure that assesses the strength and direction of a monotonic relationship between two variables. It ranks the data rather than relying on
their actual values, making it suitable for non-normally distributed or ordinal data. It ranges from -1 to 1, where values close to -1 or 1 indicate a strong monotonic relationship, and 0 indicates
no monotonic relationship. Spearman correlation is valuable for detecting and quantifying associations when linear relationships are not assumed or when dealing with ranked or ordinal scale.
Example of Spearman’s Rank Correlation
Spearman’s Rank Correlation:
Let’s say we want to determine the relationship between the study time (in hours) and the exam scores (out of 100) of a group of students. We have the following data for five students:
Student Study Time (hours) Exam Score
A 10 75
B 8 60
C 12 85
D 6 55
E 9 70
First, we rank the study time and exam scores separately:
Student Study Time (hours) Rank (Study Time) Exam Score Rank (Exam Score)
A 10 3 75 3
B 8 4 60 5
C 12 1 85 1
D 6 5 55 6
E 9 2 70 4
Now, we calculate the differences between the ranks for each pair of data points:
• P=Rank of Study Time−Rank of Exam Score, Di=Rank of Study Timei−Rank of Exam Scorei
Next, we square each (Di) value:
The sum of ��2Di2 is 0+1+0+1+4=60+1+0+1+4=6.
So, the Spearman’s Rank Correlation coefficient (ρ) between study time and exam scores is 0.7, indicating a strong positive correlation.
Practical application of correlation using R?
Determining the association between Girth and Height of Black Cherry Trees (Using the existing dataset “trees” which is already present in r and can be accessed by typing the name of the dataset,
list of all the data set can be seen by using the command data() )
Below is the code to compute the correlation:
Loading the Dataset
> data <- trees
> head(data, 3)
Girth Height Volume
1 8.3 70 10.3
2 8.6 65 10.3
3 8.8 63 10.2
Creating a Scatter Plot Using ggplot2 Library
> library(ggplot2)
> ggplot(data, aes(x = Girth, y = Height)) + geom_point() +
+ geom_smooth(method = "lm", se =TRUE, color = 'red')
Test for Assumptions of Correlation
Here two assumptions are checked which need to be fulfilled before performing the correlation (Shapiro test, which is test to check the input variable is following the normal distribution or not, is
used to check whether the variables i.e. Girth and Height are normally distributed or not)
> shapiro.test(data$Girth)
Shapiro-Wilk normality test
data: data$Girth
W = 0.94117, p-value = 0.08893
> shapiro.test(data$Height)
Shapiro-Wilk normality test
data: data$Height
W = 0.96545, p-value = 0.4034
p–value is greater than 0.05, so we can assume the normality
> cor(data$Girth,data$Height, method = "pearson")
[1] 0.5192801
> cor(data$Girth,data$Height, method = "spearman")
[1] 0.4408387
Testing the Significance of the Correlation
For Pearson
> Pear <- cor.test(data$Girth, data$Height, method = 'pearson')
> Pear
Pearson's product-moment correlation
data: data$Girth and data$Height
t = 3.2722, df = 29, p-value = 0.002758
alternative hypothesis: true correlation is not equal to 0
95 percent confidence interval:
0.2021327 0.7378538
sample estimates:
For Spearman
> Spear <- cor.test(data$Girth, data$Height, method = 'spearman')
> Spear
Spearman's rank correlation rho
data: data$Girth and data$Height
S = 2773.4, p-value = 0.01306
alternative hypothesis: true rho is not equal to 0
sample estimates:
Since the p-value is less than 0.05 (For Pearson it is 0.002758 and for Spearman, it is 0.01306, we can conclude that the Girth and Height of the trees are significantly correlated for both the
coefficients with the value of 0.5192801 (Pearson) and 0.4408387 (Spearman).
Pearson vs Spearman Correlation – Final Verdict
As we can see both the correlation coefficients give the positive correlation value for Girth and Height of the trees. Still, the value given by them is slightly different because Pearson correlation
coefficients measure the linear relationship between the variables. In contrast, Spearman correlation coefficients measure only monotonic relationships, relationship in which the variables tend to
move in the same/opposite direction but not necessarily at a constant rate. In contrast, the rate is constant in a linear relationship.
Hope you like the article! Understanding the differences between Pearson vs Spearman correlation methods is essential for data analysis. Pearson measures linear relationships, while Spearman assesses
monotonic relationships. For correlation examples, use Pearson for continuous data and Spearman for ordinal data. The formulas for both correlations vary, influencing when to use each method
effectively. Knowing correlation Pearson vs Spearman helps ensure accurate results in your analyses.
Q1. What is the purpose of Pearson and Spearman correlation?
A. The Pearson and Spearman correlation measures the strength and direction of the relationship between variables. Pearson correlation assesses linear relationships, while Spearman correlation
evaluates monotonic relationships.
Q2. When should I use Spearman correlation?
A. Spearman correlation is useful when the relationship between variables is not strictly linear but can be described by a monotonic function. It is commonly used when dealing with ordinal or
non-normally distributed data.
Q3. Are Spearman correlations more powerful than Pearson correlations?
It is inaccurate to say that Spearman correlations are inherently more powerful than Pearson correlations. The choice between the two depends on the specific characteristics and assumptions of the
data and the research question being addressed.
Q4. When should I use Pearson correlation?
A. Pearson correlation is best for measuring the linear relationship between two quantitative variables that are normally distributed and have no outliers.
Q5. How Spearman different from Kendall?
A. Kendall’s tau and Spearman’s rank are similar correlation coefficients for non-normal data. Here’s the key difference:
Kendall’s tau: More robust to outliers, better for small samples (uses concordant/discordant pairs).
Spearman’s rank: Might give slightly higher values, but more sensitive to outliers (uses rank differences).
Free Courses
Responses From Readers
Thanks a lot. This is really useful.
Flash Card
What is correlation in statistics?
Correlation is a bivariate statistical measure that describes the association between two variables. It indicates how one variable behaves when there is a change in another variable. Positive
correlation occurs when both variables increase or decrease together, while negative correlation occurs when one variable increases as the other decreases. Zero correlation means changes in one
variable have no effect on the other.
What does a correlation of zero indicate in statistics?
Flash Card
Why are correlation coefficients important in data science and machine learning?
Correlation coefficients help uncover hidden relationships between variables, such as factors influencing house prices. They assist in identifying patterns and selecting the most relevant data for
machine learning models, enhancing their efficiency. They aid in feature selection by showing how models interpret data and identifying potential issues.
How do correlation coefficients enhance machine learning models?
Flash Card
What is Spearman's correlation used for?
Spearman's correlation assesses the strength and direction of a monotonic relationship between two variables. It evaluates how one variable tends to increase or decrease as the other changes, without
assuming a linear relationship. It is useful for data that does not form a perfect line, revealing underlying trends.
What type of relationship does Spearman's correlation assess?
Flash Card
How does Pearson correlation differ from Spearman correlation?
Pearson correlation measures linear relationships, while Spearman correlation measures monotonic relationships. Pearson assumes variables are normally distributed and have a linear relationship,
whereas Spearman does not assume normal distribution. Pearson is based on covariance and standard deviations, while Spearman uses ranked data and rank order. Pearson is sensitive to outliers, whereas
Spearman is less sensitive.
Which correlation method is less sensitive to outliers?
Flash Card
What is the Pearson correlation coefficient?
The Pearson correlation coefficient quantifies the strength and direction of a linear relationship between two continuous variables. It ranges from -1 to 1, with values close to -1 indicating a
strong negative linear relationship and values close to 1 indicating a strong positive linear relationship. A value of 0 indicates no linear relationship.
What does a Pearson correlation coefficient of 0 indicate?
Flash Card
What is the Spearman correlation coefficient?
The Spearman correlation coefficient measures the strength and direction of a monotonic relationship between two variables. It ranks data rather than relying on actual values, making it suitable for
non-normally distributed or ordinal data. It ranges from -1 to 1, with values close to -1 or 1 indicating a strong monotonic relationship and 0 indicating no monotonic relationship.
What type of data is the Spearman correlation coefficient particularly suitable for?
Flash Card
Can you provide an example of Spearman's rank correlation?
Consider a study on the relationship between study time and exam scores for five students. By ranking study time and exam scores, calculating differences, and squaring these differences, the
Spearman's rank correlation coefficient is found to be 0.7. This indicates a strong positive correlation between study time and exam scores.
What does a Spearman's rank correlation coefficient of 0.7 indicate?
Flash Card
How is correlation applied practically using R?
In R, the correlation between the Girth and Height of Black Cherry Trees can be determined using the \"trees\" dataset. A scatter plot can be created using the ggplot2 library to visualize the
relationship. The Shapiro test checks for normal distribution of variables, and correlation is calculated using Pearson and Spearman methods. The significance of the correlation is tested, showing
significant correlation for both coefficients.
Which R library is used to create scatter plots for visualizing correlation?
Flash Card
What is the final verdict on Pearson vs Spearman correlation?
Both Pearson and Spearman correlation coefficients indicate a positive correlation between Girth and Height of trees. Pearson measures linear relationships, while Spearman measures monotonic
relationships. The choice between them depends on the data type and the nature of the relationship being assessed.
What is a key difference between Pearson and Spearman correlation?
Congratulations, You Did It!
Well Done on Completing Your Learning Journey. Stay curious and keep exploring!
|
{"url":"https://www.analyticsvidhya.com/blog/2021/03/comparison-of-pearson-and-spearman-correlation-coefficients/?utm_source=reading_list&utm_medium=https://www.analyticsvidhya.com/blog/2021/05/how-to-transform-features-into-normal-gaussian-distribution/","timestamp":"2024-11-08T18:38:47Z","content_type":"text/html","content_length":"435563","record_id":"<urn:uuid:97f3850f-a9e0-4e2c-b1fe-c4537f036ea5>","cc-path":"CC-MAIN-2024-46/segments/1730477028070.17/warc/CC-MAIN-20241108164844-20241108194844-00752.warc.gz"}
|
ThmDex – An index of mathematical definitions, results, and conjectures.
▼ Set of symbols
▼ Alphabet
▼ Deduction system
▼ Theory
▼ Zermelo-Fraenkel set theory
▼ Set
▼ Binary cartesian set product
▼ Binary relation
▼ Map
▼ Operation
▼ N-operation
▼ Binary operation
▼ Enclosed binary operation
▼ Groupoid
▼ Semigroup
▼ Standard N-operation
▼ Mean
▼ Complex mean
▼ Real mean
▶ R3568: Real AM-GM inequality
▶ R4666: Real GM-HM inequality
▶ R4118: Real arithmetic expression for unsigned real geometric mean
▶ R5185: Tight lower bound to a finite product of positive real numbers
▶ R5182: Tight upper bound to a finite product of unsigned real numbers
▶ R5211: Tight upper bound to a product of three unsigned real numbers
▶ R5210: Tight upper bound to a product of two unsigned real numbers
▶ R1557: Weighted real AM-GM inequality
|
{"url":"https://thmdex.org/d/2455","timestamp":"2024-11-15T04:36:20Z","content_type":"text/html","content_length":"11465","record_id":"<urn:uuid:ab8748cd-f197-448f-bc02-af69a4f400e7>","cc-path":"CC-MAIN-2024-46/segments/1730477400050.97/warc/CC-MAIN-20241115021900-20241115051900-00788.warc.gz"}
|
pd64 and externals, deken etc?
Here's a quick primer:
When we say 32-bit or 64-bit architecture, we refer to the "word size". "word" is the natural unit of data processing for a given CPU. The word size is typically the size of the CPU's (integer)
registers. Most importantly, it defines the size of pointers and thus the max. addressable memory within a given (virtual) memory space.
(There are some curious exceptions, such as x32 mode on amd64 platforms, but these can be ignored for practical purposes.)
Note that a 32-bit computer can still work with 64-bit data types, they just won't fit into a single register. Also note that on modern platforms floats/doubles are usually kept in special SIMD
registers that are much wider than their integer counterparts (e.g. 256 bits with AVX on Intel).
The main takeaway is that word size and float size are completely orthogonal. In other words: a 32-bit program can use 64-bit integers or 64-bit doubles.
(On some platforms this comes with a performance cost, though. For example, the ESP32 is a 32-bit processor whose FPU only natively supports 32-bit floats, so 64-bit doubles are emulated in
So, Windows 11, the current version, is only for 64 bit architectures, right? so one cannot install pd 32 bit on it.
No. 64-bit Windows has an emulation layer for 32-bit applications (the WoW subsystem). Just like Apple used to emulate PPC on its 32-bit Intel processors (Rosetta 1) and now emulates amd64 code on
Apple Silicon (Rosetta 2). This is done to make the transition to another CPU architecture easier for the user because they can run their old programs.
On 30.05.2024 03:14, Alexandre Torres Porres wrote:
While we're at it, as I only know about macOS, let me ask about windows and linux...
So, Windows 11, the current version, is only for 64 bit architectures, right? so one cannot install pd 32 bit on it.
As for Linux, is there any most recent version only for 64-bit architecture?
Em qua., 29 de mai. de 2024 às 21:13, Alexandre Torres Porres porres@gmail.com escreveu:
I see, not being well versed, I see I'm again confusing 64-bit
architecture with double precision. And I'm still confused on what
"CPU-architecture" actually means. Let's see if I get things straight.
Older macs with intel chips can run two different architectures:
*i386* (Intel 32bit) and *amd64* (Intel 64bit). This depends on
the OS and the last to allow i386 was Mojave (10.14). Newer arm64
is obviously only 64bit.
I'm positive this is what it is, but it still strikes me as a bit
uncanny that a computer can have more than one architecture, as my
intuition would tell me that the architecture of my chip can only
be of one type. So I guess I don't really get the concept of
Anyway, things like this should be made clearer for dummies like
me in the manual and stuff.
Em qua., 29 de mai. de 2024 às 18:44, IOhannes m zmölnig
<zmoelnig@iem.at> escreveu:
On 5/29/24 21:03, Alexandre Torres Porres wrote:
> Can't .pd_linux, .pd_darwin, .d_fat, .dll be 64 bit? As well
as .m_amd64,
> .d_arm64 and .l_arm64 and stuff? I mean, probably they
"can", but the idea
> was to create new extension possibilities to distinct single
and double
> precision, right?
depends on what you mean by "can".
technically they could.
practically, Pd64 will *not* load an external that ends with
see the mailinglist archives and the github issues for a lengthy
discussion why it is like this.
> While we're at it, can i386 be 64? really? As in
> .darwin-i386-64.so and .windows-i386-64.dll?
sure, why not?
the "double" floattype has been around for some time.
a quick wikipedia check shows that one of the first (C)PU to
IEEE 754 (the floating point standard that defines "double"
floats as we
know them) was the Intel 8087, a 16bit processor (and famous
co-processor for the 8086)
It would be capable of running Pd64 (".cpm-x86_16-64.so").
Pd-list@lists.iem.at mailing list
UNSUBSCRIBE and account-management ->
Pd-list@lists.iem.at mailing list UNSUBSCRIBE and account-management ->https://lists.puredata.info/listinfo/pd-list
|
{"url":"https://lists.iem.at/hyperkitty/list/pd-list@lists.iem.at/thread/NZ6S4UUVV6RRR6JLZOW66PLPA7AD545M/","timestamp":"2024-11-11T03:51:09Z","content_type":"text/html","content_length":"200305","record_id":"<urn:uuid:7564bef7-8508-4cf3-8694-e6370a7262ff>","cc-path":"CC-MAIN-2024-46/segments/1730477028216.19/warc/CC-MAIN-20241111024756-20241111054756-00369.warc.gz"}
|
Selfish behaviour in queues and some open source graphical simulation software
In 1969 Naor, wrote a really nice paper called
'The Regulation of Queue Size by Levying Tolls'
. In this paper Naor considered a system with a single server queue:
• With an arrival rate $\lambda$ (customers per time unit)
• A service rate $\mu$ (customers per time unit)
• and a reward and cost for service that can actually just be considered as a "value for service": $\beta$
Naor then considered two types of customers: Selfish and Optimal.
It is relatively straightforward to see that Selfish customers should join if and only if:
$$\frac{n+1}{\mu}\leq \beta$$
where $n$ is the number of other customers in the system upon arrival.
What is slightly less straightforward is that Optimal customers should join if and only if $n\leq n^*$ where:
\[\frac{n^*(1-\rho)-\rho(1-\rho^{n^*})}{(1-\rho)^2}\leq \beta \mu < \frac{(n^*+1)(1-\rho)-\rho(1-\rho^{n^*+1})}{(1-\rho)^2}\]
(where $\rho=\lambda/\mu$)
It's a really cool result and one that has given rise to a lot more research (including what I mainly enjoy looking at).
I was asked recently be a colleague to give a 15 minute talk about my research to her second year OR class who will have just seen some queueing theory. I decided to talk about Naor's paper and
thought that it would be nice if I could give a graphical representation of the customers arriving at the queue (similar to the DES package:
). So I spent some time writing a simulation engine and using the in built
Turtle library to get some graphics.
A part from some of the optional plotting (matplotlib), this only uses base python libraries.
Here's a gif from an early prototype:
Here's a video discussing Naor's result and showing demonstrating everything with my simulation model:
The code is all up on github and it really could do with some improving so it would be great if anyone wanted to contribute: https://github.com/drvinceknight/Simulating_Queues
|
{"url":"https://drvinceknight.blogspot.com/2013/11/selfish-behaviour-in-queues-and-some.html","timestamp":"2024-11-08T12:39:50Z","content_type":"application/xhtml+xml","content_length":"54484","record_id":"<urn:uuid:2a16776a-fe93-479b-8cef-498bb97ce5dc>","cc-path":"CC-MAIN-2024-46/segments/1730477028059.90/warc/CC-MAIN-20241108101914-20241108131914-00651.warc.gz"}
|
More on the danger revealed by duplicated keys
In my last post, I suggested that the existence duplicate keys generated by a particular device could lead an attacker to reverse engineer that device to learn the details of its entropy generation
weakness. He could then replicate that weak process to generate large numbers of RSA keys and see if they match RSA public keys that haven been published or obtained some other way. One can do even
better, however. Instead of generating RSA keys to test, just generate primes, following the method the device uses to generate the first prime of an RSA pair. There is no need to spend time
generating the second prime and forming the product. Each prime generated would then be tested to see if it divides any of the real RSA keys. As Heninger points out, this test can be performed fairly
quickly on all the keys at once by computing the product of all the real RSA keys.
There is a further speed up possible. In generating primes to create an RSA key, one selects an odd number from some starting point (ideally random) and tests that number for primality, repeating the
process until a prime is found. Getting certainty or a very high probability that a trial number is indeed prime takes considerable computation, involving a series of ever more stringent and
expensive tests, see for example http://www.openssl.org/docs/apps/genrsa.html. For our purposes, however, a lower probability of primality suffices. The risk is that by limiting the testing, we might
select a prime that the real software would have rejected and thus possibly miss a factorable key.That risk should be balanced by the possibility of testing many more keys.
Another possible speed up might be to multiply a large number of test keys together and test their product against the product of the real keys in one big operation using a GCD algorithm. If a
divisor is found, it can then be tested again the real keys to find the one (or more) that is broken.
Testing candidate RSA primes is likely the most expensive computation in this process. However that operation is easily distributed to many processors working in parallel and might even be suitable
for implementation on General Purpose Graphics processors (GPGPUs). Several groups have worked on using GPGPUs for modular arithmetic on large numbers of the type involved in primality tests. For a
nice review, see:
However, the work I've seen attempts to use all the GPU processors for a single large number modular multiplication. It might be faster to test many candidate primes at once, perhaps from different
entropy starting points, each on its own graphics processor, or SIMD unit.
Absent the hard work of implementing these suggestions, their potential to break real RSA keys is speculative. However, the possibility of such an attack is real enough to convince me that all keys
generated by devices that have exhibited inadvertent key duplication are suspect and should be replaced as soon as possible. In my next post, I plan to offer some suggestions for generating enough
strong entropy prior to key generation on systems that allow manual entropy input.
3 comments:
1. really valuable information. Thank you for sharing it from Hire Iphone Developer.
2. It really nice post. Such kind of post will definitely help to create a general purpose Graphic processor.
locksmiths Dublin
3. Hey Arnold, great post - do you have a website by chance? locksmiths dublin
|
{"url":"https://diceware.blogspot.com/2012/02/more-on-danger-revealed-by-duplicated.html","timestamp":"2024-11-12T06:08:11Z","content_type":"text/html","content_length":"66913","record_id":"<urn:uuid:814f48cc-9a5f-408e-b3c5-1f1f0eaf559f>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.58/warc/CC-MAIN-20241112045844-20241112075844-00718.warc.gz"}
|
How Many 20 Oz In A Gallon?
In our increasingly fossil fuel dependent society, figuring out how much gas you have to spend can be tricky! Most people these days have GPS systems in their cars that tell them where they are at
any given time, but what about when they’re going somewhere or coming back home?
Some car brands will release information about your gas mileage for different vehicles, but this only tells you average numbers across all conditions. It would also need to know how fast you were
driving before it could calculate how far you traveled!
There is a way to get more precise information about how many miles per gallon (mpg) you are getting every time you fill up though. This info comes from something called gasoline octane rating.
Octane ratings determine how powerful the fuel is and how efficient your engine burns it. The higher the number, the less efficiently the fuel combusts and the hotter it becomes which means better
performance for the vehicle.
The national standard for determining an octane level of regular unleaded gasoline is 87 octane. Anything above 90 is considered premium grade or high-octane gas. A lot of areas around the country
actually use 93 as the norm instead of 88!
This article will go into detail about why this matters and how to read yours correctly.
Conversions of volume to weight
When you are talking about fluid consumption, it is important to know what liquid we are talking about!
If you are drinking water then your normalizer should be half a gallon per day which is one bottle every two hours. If you drink milk then try to aim for three cups per cup of coffee or tea, or any
other beverage!
For soda, plain water is better than diet drinks as this can sometimes have added sugar content that could contribute to obesity. One popular brand has 4 ounces (or 8 tablespoons) of sugar in each 16
ounce can!
To calculate how many bottles of water you use daily, just subtract your current level from a full bottle and divide by 2. This will give you the amount of time until you reach a new goal!”
This article clearly explained why being aware of our fluid intake is very important.
Converting volume to area
Now that we have done some calculations using diameter, we can move onto another way to determine how many bottles of liquid you have by looking at how much space they take up.
Volume is determined by two things; how much liquid there is and how full the container is. The first one is called total liquid content while the second one is known as vessel or container capacity.
To calculate the amount of space a bottle takes up, you need to know its height, width and diameter. All three of these are found under the ‘volume’ section above!
Height + Width + Diameter = Volume
Knowing this, we can now find the ratio between the length (height, width) and the diameter of the bottle. This will give us the dependent variable which we can then plug into the equation above to
get the final result.
The dependent variable is the volume divided by the surface area so we would multiply both sides of the equation by the same thing. We will use the ratio here!
This gives us our new expression for determining how many bottles you have per square foot. Since we already calculated the radius in our earlier equations, all we need to do now is divide the
height, width and radius together to solve for the depth.
Remember, when solving quadratics, make sure to add and subtract the opposite side before finding the solution.
Exact number of 20 oz in a gallon
The amount of liquid you have in your car depends on how much gas you have, what type of vehicle you have, and whether or not you are fully loaded with drinks and snacks.
The easiest way to determine this is by looking at the fuel gauge. It will tell you exactly how much gas there is in the tank!
But what about when you run out of gas? Or if you need more than one liter of gasoline?
Fortunately, it’s easy to find the answers to these questions in math. And while some people may think that figuring out fluid levels is too complicated, we will break down all the steps here for you
so that you can easily understand them.
So let us begin!
How many liters in 1 US gallon?
A standard U.S. gallon contains 3.78 l of water, so we can use that as our base unit for measuring liquids. This means that one liter equals 1/3.78 = 0.318 of a U.S. gallon.
Converting from gallons to litres takes into account the difference in volume between the two units, and multiplies by 8.93 to get the conversion factor. So 1 US gallon -> 0.318 * 8.93 = 2.848
This also applies to other fluids such as alcohol (like petrol) where they measure their fluid level in “oz.
Approximate number of 20 oz in a gallon
The average person consumes around 2 gallons of water per day, which is about 648 litres or 168 pounds of water every week!
The vast majority of this (around 4-5 gallons) is consumed at work, for taking showers, washing hands, etc. Another 1 to 3 gallons are typically spent during the night when we wake up and need more
The last 0.5–1 gallon is usually spent while sleeping, and some people drink an additional quart of water before bed. This means that most of us spend less than 10% of our daily intake drinking
With all these numbers factoring in, the average person uses about 5 ounces of pure liquid alcohol (i.e. vodka, gin, whiskey, etc.
Tips for measuring liquid properly
The second most important factor when it comes to knowing how many ounces are in a gallon is learning how to measure liquids correctly. You should know that there are two ways to determine this!
The first way is by using a darby gun. A Darby gun contains rods that are sized appropriately depending on what you want to test. For example, if you wanted to find the amount of water in milk then
you would use distilled water which has no dissolved particles. By moving the rod through the liquid, you can determine how much water there is.
For another example, if you wanted to check the density of olive oil then you could use sunflower oil as a carrier fluid. Again, we need to make sure our darby gun isn’t mixed up with water or oil so
those must be dried and pre-weighed. After the oil is poured into the tube, pull out the rod and see how much oil there is!
This method only works for denser liquids than water though so it cannot be used to figure out how much air there is in a container. This article will not go into more detail about measurement types
but I do recommend looking into them as they are very helpful!
The second way to do this is via a gascope. A gascope is similar to a glass meter burette except it does not require any power source.
Know the difference between a gallon and a quart
There are two main ways to determine how many ounces of fluid you have in a given volume. The first is by looking at the liquid’s density, or weight, compared to water. If the ratio is less than
one-to-one (less dense), then you have more liquid than water!
The second way is using the height measurement for the liquid in relation to the neck of the container it is in. For example, if the top of the liquid reaches as high as the rim of the bottle, you
have filled up that amount of space, so there are just enough inches left over to measure half the diameter of the bottle — which is why we use “half full” as our definition of what an empty bottle
Pour liquid into the nearest ounce amounts
There is no exact way to determine how many ounces of a specific product you have, but there are some rules of thumb that work well. First, remember that one pound equals 454 grams!
Converting from pounds to ounces is easy when using dry ingredients such as flour or powdery substances like sugar. Simply divide the number of pounds by 2, then multiply that amount by 16. For
example, if you had two pounds of dried rice, which we would assume for this calculation is a total weight of four cups, then two divided by two is equal to one cup, so add sixteen to get twenty-one
ozs per cup.
For liquids, there is not an easy rule of thumb for determining how much water you have.
Check the expiration date
Recent changes to the way fuel is measured makes comparing gas prices very difficult. When gasoline was first introduced, it was sold in what we call octane levels or “octanes” for your car.
The number before the -es was an indication of how powerful the liquid is and which cars can run it with no problems. A higher number means more power so therefore engines that work better with this
fuel are made longer because they need less frequent re-fueling!
As time went on people noticed that although the price per liter remained the same, the amount you get in your tank changed.
This is due to the fact that there is not just one grade of high performance fuel, there are two! The lower numbers refer to 100% regular unleaded gasoline while the higher ones indicate 95+ percent
So if you pay the exact same amount per gallon at both stations, you will get slightly different amounts depending on which fuel station you go to. This is why it is important to know how many
gallons you get from a full tank!
A lot of sites and sources use the standard international definition of 1 US gallon equals 3.8 L. That does not always make sense though as some countries have different definitions! For example,
France uses the metric system where 1 US gallon equals 0.
|
{"url":"https://cumbernauld-media.com/how-many-20-oz-in-a-gallon/","timestamp":"2024-11-09T10:45:57Z","content_type":"text/html","content_length":"162936","record_id":"<urn:uuid:a88f9339-65d0-43c9-ae67-6d3eabddb8c2>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.75/warc/CC-MAIN-20241109085148-20241109115148-00187.warc.gz"}
|
Excel Formula Python - Search and Subtract
In this tutorial, we will learn how to write an Excel formula in Python that searches for matching numbers in a specific column and subtracts the corresponding values from another column. This can be
useful when working with large datasets and you need to perform calculations based on specific conditions. We will use the COUNTIF, IF, and SUMIF functions in Excel to achieve this. By using Python,
we can automate this process and perform the calculations programmatically.
To implement this formula, we will iterate through the values in column V and check if there are any matching numbers. If a match is found, we will subtract the value in the same row from column W.
This will allow us to perform the desired calculations efficiently and accurately.
Now, let's dive into the step-by-step explanation of the formula and see how it works in practice. We will also provide examples to illustrate the results of the formula for different scenarios. By
the end of this tutorial, you will have a clear understanding of how to write an Excel formula in Python to search for matching numbers and subtract corresponding values.
An Excel formula
=IF(COUNTIF(V:V, V1)>1, W1-SUMIF(V:V, V1, W:W), "")
Formula Explanation
This formula uses the IF function, COUNTIF function, and SUMIF function to search for numbers in column V that match each other. If a match is found, it subtracts the value in the same row as the
matching numbers in column W.
Step-by-step explanation
1. The COUNTIF function is used to count the number of occurrences of the value in cell V1 in column V. If the count is greater than 1, it means there are multiple occurrences of the same number in
column V.
2. The IF function is used to check if the count is greater than 1. If it is, it means there are matching numbers in column V.
3. If there are matching numbers, the SUMIF function is used to calculate the sum of the values in column W that correspond to the matching numbers in column V.
4. The value in cell W1 is subtracted from the sum calculated in step 3 to get the final result.
5. If there are no matching numbers, an empty string ("") is returned.
For example, if we have the following data in columns V and W:
| V | W |
| 1 | 10 |
| 2 | 20 |
| 3 | 30 |
| 2 | 40 |
| 4 | 50 |
| 5 | 60 |
| 5 | 70 |
The formula =IF(COUNTIF(V:V, V1)>1, W1-SUMIF(V:V, V1, W:W), "") would return the following results:
• For the first row, where V1 is 1 and W1 is 10, there are no matching numbers in column V, so an empty string is returned.
• For the second row, where V1 is 2 and W1 is 20, there is one matching number (2) in column V. The sum of the corresponding values in column W is 40. So, the result would be 20 - 40 = -20.
• For the third row, where V1 is 3 and W1 is 30, there are no matching numbers in column V, so an empty string is returned.
• For the fourth row, where V1 is 2 and W1 is 40, there is one matching number (2) in column V. The sum of the corresponding values in column W is 40. So, the result would be 40 - 40 = 0.
• For the fifth row, where V1 is 4 and W1 is 50, there are no matching numbers in column V, so an empty string is returned.
• For the sixth row, where V1 is 5 and W1 is 60, there are two matching numbers (5) in column V. The sum of the corresponding values in column W is 130. So, the result would be 60 - 130 = -70.
• For the seventh row, where V1 is 5 and W1 is 70, there are two matching numbers (5) in column V. The sum of the corresponding values in column W is 130. So, the result would be 70 - 130 = -60.
The formula handles cases where there are multiple occurrences of the same number in column V and subtracts the corresponding values in column W accordingly.
|
{"url":"https://codepal.ai/excel-formula-generator/query/3cu2uefO/excel-formula-python-search-subtract","timestamp":"2024-11-07T20:24:58Z","content_type":"text/html","content_length":"95629","record_id":"<urn:uuid:9cbedc70-ee35-48b1-bbd1-1d18941fa2a2>","cc-path":"CC-MAIN-2024-46/segments/1730477028009.81/warc/CC-MAIN-20241107181317-20241107211317-00399.warc.gz"}
|
How do you use the tangent line approximation to approximate the value of ln(1004) ? | Socratic
How do you use the tangent line approximation to approximate the value of #ln(1004)# ?
1 Answer
A formula for a tangent line approximation of a function f, also called linear approximation , is given by
$f \left(x\right) \approx f \left(a\right) + f ' \left(a\right) \left(x - a\right) ,$
which is a good approximation for $x$ when it is close enough to $a$.
I'm not sure, but I think the question is about approximate the value $\ln \left(1.004\right)$. Could you verify it please? Otherwise we will need to know an approximation to $\ln \left(10\right) .$
In this case, we have $f \left(x\right) = \ln \left(x\right)$, $x = 1.004$ and $a = 1$.
$f ' \left(x\right) = \frac{1}{x} \implies f ' \left(1\right) = 1$ and $\ln \left(1\right) = 0$, we get
$\ln \left(1.004\right) \approx \ln \left(1\right) + 1 \cdot \left(1.004 - 1\right) = 0.004 .$
Impact of this question
5223 views around the world
|
{"url":"https://api-project-1022638073839.appspot.com/questions/how-do-you-use-the-tangent-line-approximation-to-approximate-the-value-of-ln-100-1","timestamp":"2024-11-05T07:32:28Z","content_type":"text/html","content_length":"34157","record_id":"<urn:uuid:c68b242b-e462-44bc-b58c-9f24972c41af>","cc-path":"CC-MAIN-2024-46/segments/1730477027871.46/warc/CC-MAIN-20241105052136-20241105082136-00837.warc.gz"}
|
How Much Does A Tree Weigh?
Trees hold a majesty that nothing else in nature seems to hold. If you have a love or curiosity for trees, you may be wondering how much a tree weighs. We've researched this question thoroughly and
have some important information about the weight of trees for you.
Living trees can weigh anywhere from about 1000 pounds to about 2 million pounds. This is a very broad range because tree weight is influenced by several different factors. Some of these factors
• Diameter
• Height
• Hardwood or Softwood
• Volume
• Density
• Leaves or no leaves
Now you have a range of what trees can weigh and what factors can affect their weight, but you may want to know how to calculate tree weight. We elaborate on that and more in this post! Keep reading
to learn how to determine tree weight, what the heaviest tree is, which tree has the thickest bark, and which tree has the thickest trunk.
How Do You Calculate the Weight of a Tree?
To calculate the weight of a tree, you will need to use a specific formula. However, let's talk about the factors that influence tree weight before discussing the formula to calculate tree weight.
Tree diameter directly correlates with tree weight. This means that when tree diameter increases, tree weight also increases. Therefore, trees with smaller diameters weigh less, and trees with larger
diameters weigh more. For example, a hardwood tree with a 12 inch diameter weighs about 1500 pounds, whereas a hardwood tree with a 26 inch diameter weighs about 8400 pounds.
If you're looking at two trees that are considerably different in size, you can tell which one has the larger diameter. You may even be able to estimate the diameter of a tree by looking at it.
However, if want or need to know the exact diameter of a tree in order to determine weight, there's a way to find it.
To find the diameter of a tree, you must first determine the circumference at breast height. The circumference is the full curve length of the tree, and breast height is about 55 inches about ground.
So, to find the circumference at breast height, you simply measure the curve length of a tree at about 55 inches above the ground. Obviously, to measure the tree, you will need a tape measurer.
Click here to see a tape measurer designed for finding the circumference of objects on Amazon.
After finding the circumference at breast height, you can calculate the diameter at breast height (DBH) by dividing the circumference by pi.
Tree height and tree weight directly affect each other, just like tree diameter and tree weight do. Thus, taller trees generally weigh more, and shorter trees generally weigh less. For example, an 80
foot tall hardwood tree weighs about 20,000 pounds, whereas a 50 foot tall pine tree weighs about 2,000 pounds.
If you want to see how tall a tree is, it may be difficult to actually measure it. Because of this, it may be best to estimate the height of the tree you're analyzing. You can make good estimates by
consulting the internet or a book for the height of that specific tree.
Click here to find a book that includes the heights of trees on Amazon.
Hardwood or Softwood
Whether a tree is hardwood or softwood significantly affects its weight. Hardwood trees weigh more than softwood trees do. For example, beech trees are hardwood trees, and they weigh about 45 pounds
per cubic foot, whereas cypress trees are softwood trees and weigh about 32 pounds per cubic foot.
As the volume of a tree increases, the weight of a tree also increases. Because of this, volume significantly impacts weight. In addition to being a factor that influences tree weight, volume is a
key factor in the formula for calculating tree weight.
Finding the volume of a tree is even more difficult than finding the height. There is virtually no way that you will be able calculate the volume without any mistakes. So, in order to determine a
good volume estimate, it is best to search for the information in a book or contact an expert, such as a forester.
As with diameter, height, and volume, density also directly affects tree weight. Trees with larger densities generally weigh more. Like volume, density is an important component in the formula for
calculating a tree. So, it is necessary to find the density if you want to figure out how much a tree weighs.
You can search the internet or a book for density estimates of specific trees.
Leaves or No Leaves
Whether or not a tree has leaves greatly influences a tree's weight. The presence of leaves contributes to a tree's weight, making it heavier, whereas the absence of leaves makes it lighter. For
example, magnolia trees have a good amount of leaves, so the weight of these leaves will contribute greatly to the overall weight of the tree.
However, there are some trees that have little to no leaves, like tamarisk trees. Tamarisk trees are generally lighter than trees with a lot of leaves because they have a very small amount of leaves,
which means they have less that contributes to their overall weight.
Like volume and density, the weight of leaves is an important part of the formula for calculating tree weight. There are two ways that you can find the weight of leaves.
The first way is to collect a small sample of leaves from the specific tree you are examining. Next, count the number of leaves in this small sample. Then, determine the total number of leaves on the
tree. You can do this by searching the internet or a book for the average number of leaves the specific tree has. Finally, multiply the number of leaves in the sample by the total number of leaves on
the tree, which should give you the weight of the leaves.
The other way that you could find the weight of leaves is to simply search for the answer on the internet or in a book. This may be easier than the first method.
Formula for Calculating Tree Weight
Finally, now that we've talked about all of the factors that affect a tree's weight, let's delve into the formula for calculating tree weight.
The formula for calculating the weight of a tree is fairly simple. It is as follows: (volume x density) + leaf weight. So, in order to calculate a tree's weight, you must multiple its volume by its
density, and then add its leaf weight to this number.
What is the Heaviest Tree?
The world's heaviest tree is the giant sequoia. The heaviest giant sequoia is nicknamed "General Sherman," and it is located in Sequoia National Park. It weighs about 2.7 million pounds, an
astonishing weight for a tree! Giant sequoias grow to extreme heights and widths. On average, sequoias grow to about 250 feet tall and have diameters of 30 feet wide.
What Tree has the Thickest Bark?
In addition to being the heaviest trees, giant sequoias also have the thickest bark of any known tree. On average, at the base of the tree, the outer layer of the bark exceeds two feet in thickness.
Which Tree has a Thick Trunk?
Baobab trees, which are native to Africa, India, and Australia, have very thick trunks that give them a unique look. A baobab tree's trunk can have a diameter of 29 feet and a circumference of 82
However, the baobab tree is not the tree with the thickest trunk; the tree that takes this title is the Mexican cypress. The Mexican cypress has a diameter of about 38 feet, and it has a
circumference of about 119 feet.
In Closing
There is a broad range for what trees can weigh, and a variety of factors impact the weight of a tree. The formula for calculating tree weight involves multiplying tree volume by tree density, and
then adding leaf weight. With this formula, you should be able to determine how heavy any tree is if you ever need to do so.
Before you go, check out our other articles:
19 Gorgeous Large Flowering Trees
11 Beautiful Trees Under 10 Feet
One comment
1. Those are not all the factors, you forgot the trees roots!
|
{"url":"https://gardentabs.com/how-much-does-a-tree-weigh/","timestamp":"2024-11-07T22:12:46Z","content_type":"text/html","content_length":"155854","record_id":"<urn:uuid:9b46c3ee-1223-4247-8c21-83801247d7e7>","cc-path":"CC-MAIN-2024-46/segments/1730477028017.48/warc/CC-MAIN-20241107212632-20241108002632-00452.warc.gz"}
|
Focal length Sentence Examples
• They are placed at a distance apart less than the focal length of a, so that the wires of the micrometer, which must be distinctly seen, are beyond b.
• Foucault, who employed a scale of equal bright and dark alternate parts; it was found to be proportional to the aperture and independent of the focal length.
• If we suppose the diameter of the lens to be given (2R), and its focal length f gradually to increase, the original differences of phase at the image of an infinitely distant luminous point
diminish without limit.
• Throughout the operation of increasing the focal length, the resolving power of the instrument, which depends only upon the aperture, remains unchanged; and we thus arrive at the rather startling
conclusion that a telescope of any degree of resolving power might be constructed without an object-glass, if only there were no limit to the admissible focal length.
• The distance f i, which the actual focal length must exceed, is given by d (f1 2 R2) x; so that f1 = 2 R2/X (1) Thus, if X = p j, R= i ?, we find f1= 800 inches.
• As the minimum focal length increases with the square of the aperture, a quite impracticable distance would be required to rival the resolving power of a modern telescope.
• Calculation shows that, if the aperture be s in., an achromatic lens has no sensible advantage if the focal length be greater than about II in.
• If we suppose the focal length to be 66 ft., a single lens is practically perfect up to an aperture of 1 .
• When parallel rays fall directly upon a spherical mirror the longitudinal aberration is only about one-eighth as great as for the most favourably shaped single lens of equal focal length and
• Owing principally to differences in the length of the inch in various countries this method had great inconveniences, and now the unit is the refractive power of a lens whose focal length is one
• A lens of twice its strength has a refractive power of 2 D, and a focal length of half a metre, and so on.
• The width of each of the portions aghc and acfe cut away from the lens was made slightly greater than the focal length of lens X tangent of sun's greatest diameter.
• Here, in order to fulfil the purposes of the previous models, the distance of the centres of the lenses from each other should only slightly exceed the tangent of sun's diameter X focal length of
• On the other hand it is not necessary to reset the telescope after each reversal of the segments.4 When Bessel ordered the Konigsberg heliometer, he was anxious to have the segments made to move
in cylindrical slides, of which the radius should be equal to the focal length of the object-glass.
• Struve also points out that by attaching a fine scale to the focusing slide of the eye-piece, and knowing the coefficient of expansion of the metal tube, the means would be provided for
determining the absolute change of the focal length of the object-glass at any time by the simple process of focusing on a double star.
• The amount of separation is very small, and depends on the thickness of the glass, the index of refraction and the focal length of the telescope.
• The instrument has a focal length of 54 ft.
• The collimator of a spectroscope should be detached, or moved so as to admit of the introduction of an auxiliary slit at a distance from the collimator lens equal to its focal length.
• The sharpness of image in Kepler's telescope is very inferior to that of the Galilean instrument, so that when a high magnifying power is required it becomes essential to increase the focal
• James Bradley, on 27th December 1722, actually measured the diameter of Venus with a telescope whose objectglass had a focal length of 2124 ft.
• The magnifying power obviously depends on the proportion of the focal length of the object-lens to that of the eye-lens, that is, magnifying power where F is the focal length of the object-lens
and e that of the eye-lens.
• But while an achromatic combination of o 60 and 0.102 alone will yield an objective whose focal length is only 1.28 times the focal length of the negative or extra dense flint lens, the triple
combination will be found to yield an objective whose focal length is 73 times as great as the focal length of the negative light flint lens.
• Hence impossibly deep curvatures would be required for such a triple objective of any normal focal length.
• The magnifying power of the telescope is = Ff /ex, where F and f are respectively the focal lengths of the large and the small mirror, e the focal length of the eye-piece, and x the distance
between the principal foci of the two mirrors (=Ff in the diagram) when the instrument is in adjustment for viewing distant objects.
• Every time, therefore, that a speculum is repolished, the future quality of the instrument is at stake; its focal length will probably be altered, and thus the value of the constants of the
micrometer also have to be redetermined.
• In this case the image is formed without secondary magnification and the focal length is 25 ft.
• In this case the equivalent focal length is 150 ft.
• For example, it is possible, with one thick lens in air, to achromatize the position of a focal plane of the magnitude of the focal length.
• For infinitely distant objects the radius of the chromatic disk of confusion is proportional to the linear aperture, and independent of the focal length (vide supra," Monochromatic Aberration of
the Axis Point "); and since this disk becomes the less harmful with an increasing image of a given object, or with increasing focal length, it follows that the deterioration of the image is
proportional to the ratio of the aperture to the focal length, i.e.
• In the neighbourhood of 550 pu the tangent to the curve is parallel to the axis of wave-lengths; and the focal length varies least over a fairly large range of colour, therefore in this
neighbourhood the colour union is at its best.
• This is effected by the power of accommodation of the eye, which can so alter the focal length of its crystalline lens that images of objects at different distances can be produced rapidly and
distinctly one after another upon the retina.
• The eye is strained in bringing its focal length to the smallest possible amount, and when this strain is long continued it may cause pain.
• Since H' P = F 0, = y, from the focal length of the simple microscope, the visual angle w' is given by tan w'/y=I/f'=V, (I) in which f', = H' F', is the image-side focal length (see Lens).
• Triplets are employed when the focal length of the simple microscope was less than in.
• Let O01=y, O'01' =y', the focal distance of the image F I 'O' =A, and the image-side focal length f l ', then the magnification M =y /y=o/,/1' (3) The distance A is called the " optical tube
• In immersion systems the object-side focal length is greater than the imageside focal length.
• The image viewed through the eyepiece appears then to the observer under the angle w", and as with the single microscope tan w" = I /f 2 ' (4) where f' 2 is the image-side focal length of the
• The lens nearer the eye, which has about the same focal length as the collective lens, is distant from it by about its focal length.
• By the magnification of the objective is meant the ratio of the distance of distinct vision to the focal length of the objective.
• The distance of the concave mirror from the stage plate is about equal to its focal length.
• By a correct choice of the focal length of the illuminating lens in relation to the focal length of the mirror, it is possible to choose the size of the image of the source of light so that the
whole object-field is uniformly lighted.
• The size of these details in the image depends only on the magnification of the objective, M and can by appropriate choice of the focal length of the objective be brought to the right value.
• It There are many methods for determining the focal length of the objective.
• The same method can be used to determine the focal length of the eyepiece.
• The focal length of an objective can be more simply determined by placing an objective micrometer on the stage and reproducing on a screen some yards away by the objective which is to be
• If the size of the image of a known interval of the objective micrometer is determined by an ordinary scale, and the distance of the image from the focal plane of the objective belonging to it is
measured, then the focal length can be calculated from the ratio y/y'=fl', in which y is the size of the object, y' that of the image, and xi' the distance of the image from the focal plane
belonging to it.
• A convex lens has a focal length of 150 mm.
• The larger blocking filter allows you to extend the focal length without vignetting but does not enhance detail or lower the bandpass.
• A 100mm diameter, 500mm focal length refracting telescope was used, equipped with a solar filter.
• He inserted a slip of metal, of variable breadth, at the focus of the telescope, and observed at what part it exactly covered the object under examination; knowing the focal length of the
telescope and the width of the slip at the point observed, he thence deduced the apparent angular breadth of the object.
• The focal length of the objective and the distance between the optical centre of the lens and the webs are so arranged that images of the divisions are formed in the plane of the webs, and the
pitch of the screw is such that one division of the scale corresponds with some whole number of revolutions of the screw.
• The extension of the image away from the axis or size of field available for covering a photographic plate with fair definition is a function in the first place of the ratio between focal length
and aperture, the longer focus having the greater relative or angular covering power, and in the second a function of the curvatures of the lenses, in the sense that the objective must be free
from coma at the foci of oblique pencils or must fulfil the sine condition (see Aberration).
• By compounding two lenses or lens systems separated by a definite interval, a system is obtained having a focal length considerably less than the focal lengths of the separate systems. If f and
f' be the focal lengths of the combination, and f2, f2 the focal lengths of the two components, and A the distance between the inner foci of the components, then f = - f,f2/4, f' =fi f27 0 (see
• If your subject needs time to warm up, then start the session shooting with a longer focal length.
• A rotation of this amount should therefore be easily visible, but the limits of resolving power are being approached; and the conclusion is independent of the focal length of the mirror, and of
the employment of a telescope, provided of course that the reflected image is seen in focus, and that the full width of the mirror is utilized.
|
{"url":"https://sentence.yourdictionary.com/focal-length","timestamp":"2024-11-06T18:50:37Z","content_type":"text/html","content_length":"361987","record_id":"<urn:uuid:55b7dd45-cd2f-4e6e-adb0-fcbfdb7937ce>","cc-path":"CC-MAIN-2024-46/segments/1730477027933.5/warc/CC-MAIN-20241106163535-20241106193535-00594.warc.gz"}
|
confidence interval
Confidence interval interpretation calculator
The confidence interval can be used only if the number of successes np Use the TI, 83+, or 84+ calculator command invNorm(,0,1) to find Z Remember that the area to the right of Z is and the area to
the left of Z is Interpretation. 41 A Confidence Interval for A Population Proportion. During an election year, we see articles in the newspaper that state confidence intervals in terms of
proportions or percentages. For example, a poll for a particular candidate running for president might show that the candidate has 40% of the vote within three percentage points (if the sample is
large enough). May 31, · Example 1 - Confidence Interval for Variance Calculator The mean replacement time for a random sample of 12 microwaves is years with a standard deviation of years. Construct
a 95% confidence interval for the population standard deviation.
If the ratio equals to 1, the 2 groups are equal. It shifts the point estimate from 0. R and GraphPad use a sample size-dependent adjustment when calculating confidence intervals, which makes a
notable difference for small sample sizes. Confidence Interval for a Proportion: Motivation The reason to create a confidence interval for a proportion is to capture this web page uncertainty when
estimating a population proportion.
Your Answer
To find the area between two points we :. Jensen, Tom. Arrow down to and enter The Z-value is a test statistic for Z-tests that measures the difference between an observed statistic and its
hypothesized population parameter in units of the standard deviation.
Scientific control Randomized experiment Randomized controlled trial Random assignment Blocking Interaction Factorial experiment. In the earliest modern controlled clinical trial of a medical
treatment for acute strokepublished by Dyken and White inthe investigators were unable to reject interpdetation null hypothesis of no effect of cortisol on stroke. Your email address will not be
published. What confidence interval interpretation calculator T critical value mean? Skip to main content. Chapter Review Some statistical measures, like many survey questions, measure qualitative
link than confidence interval interpretation calculator data.
Assume that the children in the confidence interval interpretation calculator class are a random sample of the population.
Posted on April 21, December 2, by Zach. Monday night beginning ice-skating class.
This is what is computed by this risk ratio calculator. List two difficulties the company might have in obtaining random results, if this survey were done by email. Main article: Confidence band.
Negative events in exposed group.
The purpose: Confidence interval interpretation calculator
Lisinopril used for migraines 321
Confidence interval interpretation calculator Does nifedipine cause tachycardia
CAN YOU TAKE 2.5 Confidence interval interpretation calculator OF CRESTOR 160
DOES ASHWAGANDHA INTERFERE WITH SLEEP Does doxazosin cause ed
The Overflow Blog.
Suppose randomly selected people are surveyed to determine if they own a tablet. How do you find the critical value? Determine confidence interval interpretation calculator level of confidence used
to construct the interval of the population proportion confidence interval interpretation calculator dogs that compete in professional events. Series A, Mathematical and Physical Sciences,pp. Jensen,
Tom. The shaded area under the Student's t distribution curve is equal to the level of significance.
Bearing in mind, what is the margin of error in a confidence interval?
Cross-sectional study Cohort study Natural experiment Quasi-experiment. Skip to content Menu. Add a comment.
|
{"url":"https://digitales.com.au/blog/wp-content/review/mens-health/confidence-interval-interpretation-calculator.php","timestamp":"2024-11-03T17:05:56Z","content_type":"text/html","content_length":"35454","record_id":"<urn:uuid:0b429090-ef51-4e59-802a-2ad71acbd63f>","cc-path":"CC-MAIN-2024-46/segments/1730477027779.22/warc/CC-MAIN-20241103145859-20241103175859-00845.warc.gz"}
|
Modern Physics - PHY 243
Effective: 2022-05-01
Course Description
Covers principles of modern physics including in-depth coverage of relativity, quantum physics, solid state, and nuclear physics.
Lecture 3 hours. Total 3 hours per week.
3 credits
The course outline below was developed as part of a statewide standardization process.
General Course Purpose
PHY 243 is the third and last semester of calculus-based University Physics. It covers the advances made in physics during the first half of the twentieth century that led to revolutionary paradigm
shifts in the understanding of nature; these advances continue to drive technologies used today.
Course Prerequisites/Corequisites
Prerequisites: PHY 242 with a grade of C or better or departmental approval.
Course Objectives
• The Foundations of Modern Physics
□ State phenomena that cannot be explained by classical physics, thus motivating the need for a new theory.
□ Establish experimental evidence by which the existence of atoms and their properties is known.
• The Special Theory of Relativity
□ Explain and apply the fundamental concepts of event and reference frame.
□ Explain how the principle of relativity leads to the relativity of simultaneity and length and thus to time dilation and length contraction.
□ Use the Lorentz transformations of position and velocity.
□ Define and calculate relativistic energy and momentum.
□ Recognize the significance of Einstein's famous equation E = mc2.
• Photons: Light Waves Behaving as Particles
□ Explain the photoelectric effect experiment and its implications.
□ Explain and apply the photon model of light.
• Wave Properties of Matter
□ State the evidence for matter waves and the de Broglie wavelength.
□ Explain why the de Broglie standing wave of a confined particle requires energy quantization.
□ Explain and apply Bohr's stationary-state model of the atom.
□ Use the Bohr model to explain discrete spectra and the observed differences between absorption and emission spectra.
□ Apply Bohr's model of the hydrogen atom to explain its properties.
• Quantum Mechanics
□ Define the wave function as the descriptor of particles in quantum mechanics.
□ Explain probabilistic interpretation of the wave function.
□ Explain and apply the idea of normalization.
□ Recognize the limitations on knowledge imposed by the Heisenberg uncertainty principle
□ Define the Schrodinger equation as the "law" of quantum mechanics.
□ Recognize that solutions of the Schrodinger equation give the allowed energies and wave functions for a physical situation that is modeled by the potential energy function U(x).
□ Interpret wave functions and energy levels.
□ Explain quantum phenomena such as bonding and tunneling.
• Atomic Structure
□ Interpret the quantum-mechanical solution of the hydrogen atom.
□ Explain the basis for the shell model of atoms.
□ Demonstrate a qualitative understanding of the energy-level structure of multielectron atoms and the periodic table of the elements.
□ Explain the emission and absorption of light.
□ Explain the meaning of the lifetimes of excited states and their exponential decay.
□ Demonstrate qualitative understanding of lasers.
• Nuclear Physics
□ Explain the size and structure of the nucleus.
□ Describe the properties of the strong force.
□ Apply and interpret a simple shell model of the nucleus.
□ Define and apply radioactive decay and half-lives.
□ Interpret radiation dose and biological applications of nuclear physics.
Major Topics to be Included
• The Foundations of Modern Physics
• The Special Theory of Relativity
• Photons: Light Waves Behaving as Particles
• Wave Properties of Matter:
• Quantum Mechanics
• Atomic Structure
• Nuclear Physics
|
{"url":"https://courses.vccs.edu/courses/PHY243-ModernPhysics/detail","timestamp":"2024-11-02T04:50:35Z","content_type":"application/xhtml+xml","content_length":"12544","record_id":"<urn:uuid:803ace74-d020-4016-96de-039a22004a12>","cc-path":"CC-MAIN-2024-46/segments/1730477027677.11/warc/CC-MAIN-20241102040949-20241102070949-00250.warc.gz"}
|
NPO problems: definitions and preliminaries
Next: Terminology for graph problems Up: Introduction Previous: Introduction   Index
NPO problems: definitions and preliminaries
The basic ingredients of an optimization problem are the set of instances or input objects, the set of feasible solutions or output objects associated with any instance, and the measure defined for
any feasible solution. On the analogy of the theory of NP-completeness, we are interested in studying a class of optimization problems whose feasible solutions are short and easy-to-recognize. To
this aim, suitable constraints have to be introduced. We thus give the following definition.
Definition 1
An NP optimization problem A is a fourtuple (I,sol,m,goal) such that
1. I is the set of the instances of A and it is recognizable in polynomial time.
2. Given an instance x of I, sol(x) denotes the set of feasible solutions of x. These solutions are short, that is, a polynomial p exists such that, for any x and for any y such that
3. Given an instance x and a feasible solution y of x, m(x,y) denotes the positive integer measure of y. The function m is computable in polynomial time and is also called the objective function.
The class NPO is the set of all NP optimization problems.
The goal of an NPO problem with respect to an instance x is to find an optimum solution, that is, a feasible solution y such that
In the following, opt will denote the function mapping an instance x to the measure of an optimum solution.
An NPO problem is said to be polynomially bounded if a polynomial q exists such that, for any instance x and for any solution y of x, class NPO PB is the set of polynomially bounded NPO problems.
Next: Terminology for graph problems Up: Introduction Previous: Introduction   Index Viggo Kann
|
{"url":"https://www.csc.kth.se/~viggo/wwwcompendium/node2.html","timestamp":"2024-11-04T10:55:06Z","content_type":"text/html","content_length":"6969","record_id":"<urn:uuid:381d1b99-f628-4d96-a968-a906fc1f4b62>","cc-path":"CC-MAIN-2024-46/segments/1730477027821.39/warc/CC-MAIN-20241104100555-20241104130555-00025.warc.gz"}
|
A=(A∩B)∪(A−B) and A∪(B−A)=(A∪B)
9. Using properties of sets, sh... | Filo
Question asked by Filo student
9. Using properties of sets, show that (i) (ii) . 10. Show that need not imply . 11. Let and be sets. If and for some set , show that . (Hints and use Distributive law )
Not the question you're searching for?
+ Ask your question
Video solutions (1)
Learn from their 1-to-1 discussion with Filo tutors.
1 mins
Uploaded on: 5/11/2023
Was this solution helpful?
Found 5 tutors discussing this question
Discuss this question LIVE for FREE
9 mins ago
One destination to cover all your homework and assignment needs
Learn Practice Revision Succeed
Instant 1:1 help, 24x7
60, 000+ Expert tutors
Textbook solutions
Big idea maths, McGraw-Hill Education etc
Essay review
Get expert feedback on your essay
Schedule classes
High dosage tutoring from Dedicated 3 experts
Practice more questions on Integration
View more
Students who ask this question also asked
View more
Stuck on the question or explanation?
Connect with our Mathematics tutors online and get step by step solution of this question.
231 students are taking LIVE classes
Question Text 9. Using properties of sets, show that (i) (ii) . 10. Show that need not imply . 11. Let and be sets. If and for some set , show that . (Hints and use Distributive law )
Updated On May 11, 2023
Topic Integration
Subject Mathematics
Class Class 11
Answer Type Video solution: 1
Upvotes 59
Avg. Video Duration 1 min
|
{"url":"https://askfilo.com/user-question-answers-mathematics/9-using-properties-of-sets-show-that-i-ii-10-show-that-need-35303634353836","timestamp":"2024-11-09T00:36:11Z","content_type":"text/html","content_length":"309286","record_id":"<urn:uuid:af6af480-344b-4c53-96d7-0884ad3f4d42>","cc-path":"CC-MAIN-2024-46/segments/1730477028106.80/warc/CC-MAIN-20241108231327-20241109021327-00067.warc.gz"}
|
Divisibility Test Calculator - Calculator6.com
Divisibility Test Calculator
The Divisibility Test Calculator quickly and easily checks whether the number you enter is exactly divisible by a given number.
The Divisibility Test Calculator quickly and easily checks whether the number you enter is exactly divisible by a given number. Enter the number you want to test and the number whose divisibility you
want to check and the calculator will show you whether the result is exactly divisible or not. This online tool is the ideal solution to speed up your mathematical analysis and check the divisibility
of numbers.
What is a Divisibility Test?
Divisibility tests are rules used to determine whether a number is exactly divisible by another number. In math, these tests are very useful for quickly and easily determining the divisibility
relationship between numbers. Divisibility tests are usually taught in elementary and middle school math classes and provide practical solutions for calculations with numbers.
Importance of Divisibility Tests:
Divisibility tests make it easy to find factors of numbers, simplify fractions and solve mathematical problems. These rules speed up calculations and make operations more practical.
Divisibility Rules
Divisibility rules are mathematical methods used to quickly and easily determine whether a number is exactly divisible by another number. These rules are especially useful when working with large
numbers and simplify operations.
The most commonly used divisibility rules in mathematics:
Divisibility by 2:
A number is exactly divisible by 2 if its last digit is 0, 2, 4, 6 or 8.
Example: 46 (last digit 6) → 46 is exactly divisible by 2.
Divisibility by 3:
A number is exactly divisible by 3 if the sum of its digits is 3 or a multiple of 3.
Example: 123 (1 + 2 + 3 = 6) → 123 is exactly divisible by 3.
Divisibility by 4:
If the last two digits of a number are 00 or a multiple of 4, that number is exactly divisible by 4.
Example: 312 (last two digits 12) → 312 is exactly divisible by 4.
Divisibility by 5:
A number is exactly divisible by 5 if its last digit is 0 or 5.
Example: 75 (last digit 5) → 75 is exactly divisible by 5.
Divisibility by 6:
If a number is exactly divisible by both 2 and 3, it is also exactly divisible by 6.
Example: 54 (divisible by 2: last digit 4, divisible by 3: 5 + 4 = 9) → 54 is exactly divisible by 6.
Divisibility by 8:
If the last three digits of a number are 000 or a multiple of 8, that number is exactly divisible by 8.
Example: 1,000 (last three digits 000) → 1,000 is exactly divisible by 8.
Divisibility by 9:
If the sum of the digits of a number is 9 or a multiple of 9, that number is exactly divisible by 9.
Example: 243 (2 + 4 + 3 = 9) → 243 is exactly divisible by 9.
Divisibility by 10:
A number is exactly divisible by 10 if its last digit is 0.
Example: 90 (last digit 0) → 90 is exactly divisible by 10.
Divisibility by 11:
A number is exactly divisible by 11 if the alternating sum of its digits (plus or minus one) is 0 or a multiple of 11.
Example: 2728 ((2 – 7 + 2 – 8) = -11) → 2728 is exactly divisible by 11.
These rules help you quickly determine exactly which numbers are divisible by which numbers, making mathematical operations easier. They offer practical solutions, especially for calculations with
large numbers.
The Importance of Divisibility Tests in Mathematics
Divisibility tests are methods used in mathematics to quickly and easily determine whether numbers are exactly divisible by other numbers according to certain rules. These tests play an important
role both in basic math education and in solving advanced mathematical problems.
Benefits of Divisibility Tests:
• Quick and Easy Calculation: Divisibility rules offer quick and practical solutions when working with large numbers. Thanks to these rules, we can determine whether a number can be divided by a
certain number in a short time.
• Convenience in Basic Math Education: In elementary and middle school mathematics, divisibility rules help students understand the structure of numbers and the relationships between them. These
rules form the basis of number theory.
• Number Theory and Factorization: Divisibility tests are one of the basic tools of number theory. These tests play an important role in prime factorization and factorization of numbers. They are
especially used to determine prime numbers.
• Simplification of Fractions: Divisibility rules provide great convenience in simplifying fractions. These rules are used to find the common divisors of the numerator and denominator and to
simplify fractions.
• Problem Solving and Analytical Thinking: Divisibility rules improve analytical thinking and problem solving skills in solving mathematical problems. These rules allow complex problems to be
solved in simple steps.
• Mathematical Proofs and Theorems: Divisibility rules form the basis of many mathematical proofs and theorems. For example, they are used to determine prime numbers or whether a number is a
perfect square.
Application Areas of Divisibility Tests:
• Cryptography: In cryptography, factorization and prime factorization of numbers are important. Divisibility tests are at the heart of encryption algorithms.
• Computer Science: Divisibility rules are used in algorithm design and data structures. It is especially important in large number processing and data security.
• Engineering and Physics: In engineering and physics problems, divisibility rules are used to ensure the accuracy of measurements and calculations. These rules also play a role in error analysis
and optimization processes.
Divisibility tests are used as a fundamental tool in many areas of mathematics, making mathematical operations more understandable and manageable. The convenience of these tests is of great
importance both in education and in practical applications.
|
{"url":"https://www.calculator6.com/divisibility-test-calculator/","timestamp":"2024-11-05T02:46:12Z","content_type":"text/html","content_length":"236542","record_id":"<urn:uuid:85ef0985-c44a-4c3c-914c-c93be6e1f440>","cc-path":"CC-MAIN-2024-46/segments/1730477027870.7/warc/CC-MAIN-20241105021014-20241105051014-00649.warc.gz"}
|
UpStudy (Fomerly CameraMath) - AI Homework Helper
Ratio Calculators
What is ratio in math?
•The definition of ratio:
A ratio says how much of one thing there is compared to another thing. When describing a ratio, the first number is known as the 'antecedent' and the second is the 'consequent'. So, in the ratio 5:1,
the antecedent is 5 and the consequent is 1.
Ratio can be shown in different ways:
1. Use the ":" 5:1.
2. The ratio of A to B: 5 to 1.
3. A fraction with A as numerator and B as denominator that represents the quotient.
•The different types of ratios:
1. Compounded Ratio: The compounded ratio of the two ratios a : b and c : d is the ratio ac : bd, and that of a : b, c : d and e : f is the ratio ace : bdf
2. Duplicate Ratio: The duplicate ratio of the ratio a : b is the ratio a^2: b^2
3. Reciprocal Ratio: The reciprocal ratio of a:b is (1/a):(1/b), where a≠0 and b≠0
4. Ratio of equalities: If the antecedent and consequent are equal then the ratio is called ratio of equality, like 6:6.
5. Ratio of Inequalities: If the antecedent and consequent are not equal then the ratio is called the ratio of inequality, like 4:7.
Proportion vs. Ratio
•Proportion:The proportion is usually 100% as the proportional coefficient, which is used to describe the proportion or distribution of each component in the whole, such as what accounts for a large
proportion of what. It emphasizes the relationship between one thing and other things in terms of quantity, size, etc.
•Ratio:A ratio is a comparison of two numerical values; it is a portion of a number. A ratio may also be expressed as a fraction or a division problem. It emphasizes the ratio of whole to part and
part to part.
How to find the ratio?
Ratios can describe quantity, measurements or scale. Usually, there are three steps to find the ratio:
•Simplify ratios or create an equivalent ratio when one side of the ratio is empty.
•Solve ratios for the one missing value when comparing ratios or proportions.
•Compare ratios and evaluate as true or false to answer whether ratios or fractions are equivalent.
Now, let’s see some examples of finding ratio.
Example 1:
Find the x value of the ratio 10 : x = 12 : 16
10x = 1216 ➡ 5x = 68
x = 5×86 = 406 = 203
Example 2:
In a gym class, 15 students are playing baseball, 11 students are playing basketball, and 4 students are playing football. What is the ratio of the number of students playing football to the number
of baseball in this class?
Solution: This problem gives us all the information we need to express the ratio:
4 students are playing football : 15 students are playing baseball = 4 : 15
Example 3:
Andrew and James have 400 sweets and they need to share them in the ratio 5:3. How many sweets does each of them receive?
5 + 3 = 8
400 divided by 8 = 50
Andrew:50 x 5 = 250, James: 50 x 3 = 150.
|
{"url":"https://cameramath.com/calculators/ratio","timestamp":"2024-11-06T23:14:56Z","content_type":"text/html","content_length":"36566","record_id":"<urn:uuid:9babead7-a3dc-4a21-b74b-e344389f5eea>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.54/warc/CC-MAIN-20241106230027-20241107020027-00298.warc.gz"}
|
PPT - Perspective Drawing PowerPoint Presentation, free download - ID:1708626
1. Perspective Drawing One Point Perspective and Two-Point Perspective
2. Perspective: the science of painting and drawing so that objects represented have apparent depth and distance…The Merriam-Webster Dictionary Robert Yarber Beyond Harm 1987, acrylic on canvas
3. Perspective • Vanishing Point • The single point on the horizon where all the lines on the ground level seem to come together • Horizon Line • The place where the land and the sky meet. •
Orthogonal Line • Lines that connect to the vanishing point • Vantage Point • A broad overall view of a place.
4. The Eye Level is the horizontal level in line with your eyes when you’re looking straight ahead. Eye Level La Montaque Sainte-Victoire, 1898-1900 Paul Cezanne
5. Normal Eye Level Lower Eye Level Higher Eye Level The Eye Level in the picture tells the viewer the vantage point of the artist when they painted the picture.
6. Perspective • Linear Perspective: • Based on the way the human eye sees the world. • Objects that are closer appear larger, more distant objects appear smaller. • To create the illusion of space
the artists creates a vanishing point on the horizon line. • Objects are drawn using orthogonal lines, which lead to the vanishing points.
7. One Point Linear Perspective Vanishing Point Eye Level & Horizon Converging Lines Size and Space Variation
8. Perspective • Can you locate the Horizon Line? • How did you determine this? • Can you find the vanishing point in this picture?
9. Perspective The red line is the Horizon Line.
10. Perspective Can you locate the vanishing point?
12. One Point Perspective Horizon Line Perspective Lines
21. Letters in Perspective • Miss Fawcett
22. Your Assignment Create a Composition (Pleasing arrangement of all Elements of art and Design.) Using Perspective Shapes. 5 Points You may use letters but they must be creatively designed. Have a
clear Horizon Line and Vanishing point 6 Points Be Creative- Make it Interesting and Unique 10 Points Hard work and Effort 10 Points
24. Perspective Objects seen at an angle would be drawn with two-point perspective using two vanishing points. Artwork with two-point perspective often has vanishing points "off the page".
25. Perspective Lines leading to the vanishing points are called orthogonal.
26. Perspective In two-point perspective the front edge of the form is seen as the closest point.
27. Perspective Draw a horizon line towards the top of your paper.
28. Perspective Draw two vanishing points on the horizon line near the page edges.
29. Perspective Now draw a vertical line this is your front edge. Draw it in near the bottom middle of the page, so you have plenty of room to add more forms to your building.
30. Perspective Now connect the both ends of the front edge to both vanishing points. These are orthogonal. Draw lightly so you can erase!
31. Perspective Draw two vertical lines between the orthogonal where you want the back edges of your form to appear.
32. Perspective Now join the back and top corners to the opposite vanishing point to complete the top of the form.
33. Perspective Erase the extra orthogonal. Now you have a form drawn in two-point perspective!
35. Perspective • Your First Assignment: • Create a drawing of boxes in 2-point perspective. • Stack Forms on top of each other • Add Design or Texture to your box forms.
41. Two point perspective box (1) above, (2) on, and (3) below eye level. Example of what you will create.
|
{"url":"https://fr.slideserve.com/arlen/perspective-drawing","timestamp":"2024-11-13T18:22:41Z","content_type":"text/html","content_length":"94388","record_id":"<urn:uuid:c215635e-50b3-45eb-be36-319cf059d8e2>","cc-path":"CC-MAIN-2024-46/segments/1730477028387.69/warc/CC-MAIN-20241113171551-20241113201551-00782.warc.gz"}
|
t-Test, Chi-Square, ANOVA, Regression, Correlation...
t-test for independent samples
Load data set
What is a t-test for independent samples (Unpaired t-test)?
The t-test for independent samples is a statistical test that determines whether there is a difference between two unrelated groups.
The t-test for independent samples is used to make a statement about the population based on two independent samples. To make this statement, the mean value of the two samples is compared. If the
difference in means is large enough, it is assumed that the two groups differ.
Why do you need the unpaired t-Test?
Say you want to test if there is a difference between two groups in the population, for example, if there is a difference in salary between men and women. Of course it is not possible to ask all men
and women for their salary, so we take a sample. We create a survey and send it randomly to people. In order to be able to make a statement about the population based on this sample, we need the
independent t-test.
How does the unpaired t-test work?
The unpaired t-test puts the mean difference in relation to the standard error of the mean. The standard error of the mean indicates how much the mean value scatter, it indicates how far the sample
mean of the data is likely to be from the true population mean. If the fluctuation of the mean value is large, this is an indication that a large difference in the mean values of the two groups is
very likely, even by chance.
Therefore, the larger the mean difference in the two groups is and the smaller the standard error of the mean, the less likely it is that the given mean difference in the two samples is due to
What are independent samples?
Independent samples exist if no case or person from one group can be assigned to a case or person from the other group. This is the case, for example, when comparing the group of women and the group
of men, or the group of psychology students with those of math students.
Paired vs unpaired t-test
The main difference between the paired and the unpaired t-test is the sample.
• If you have one and the same sample that you survey at two points in time, you use an paired t-test.
• If you want to compare two different groups, whether they come from one sample or two samples, you use an unpaired t-test.
Examples for the Unpaired t-test
There are many applications for the independent t-test, it is an important test e.g. in biostatistics or marketing.
Medical example:
For a pharmaceutical company, you want to see if a drug XY helps you lose weight or not. This is done by giving 20 people the medicine and 20 people a placebo.
Social science example:
You want to find out if there is a difference between the health of people with and without university degrees.
Technical example:
For a screw factory you want to find out if two production lines produce screws of the same weight. To test this, you weigh 50 screws from one machine and 50 screws from the other machine and compare
Research question and hypotheses
If you want to know whether two independent groups are different, you have to calculate an independent t-test. Before the t-test can be calculated, however, you first have to formulate a research
question and define the hypotheses.
Research Question for the independent t-test
With the research question you limit your object of investigation. In a t-test for independent samples the general question is: Is there a statistically significant difference between the mean values
of two groups?
For the examples above, the research questions arise:
• Does drug XY help with weight loss?
• Is there a difference in the health of people with and without university degrees?
• Do both production plants produce screws of the same weight?
Hypotheses for the unpaired t-Test
The next step is to derive the hypotheses to be tested from the question. Hypotheses are assumptions about reality whose validity is possible but not yet proven. Two hypotheses are always formulated
that assert exactly the opposite. These two hypotheses are the null hypothesis and the alternative hypothesis.
Null hypothesis H[0] Alternative hypothesis H[1]
There is no mean difference between the two groups in the population. There is a mean difference between the two groups in the population.
Two population means are equal. The two population means are not equal.
The two groups are from the same population. The two groups are not from the same population.
H[0]: μ[1] = μ[2] H[1]: μ[1]≠ μ[2]
Example: There is no difference between the salary of men and women. Example: There is a difference between the salary of men and women.
Two-tailed and one-tailed t-test
There are two main types of t-tests: two-tailed and one-tailed tests. One-tailed tests can be further divided into left-sided and right-sided tests. The choice between using a one-tailed or
two-tailed test, and whether the test should be left- or right-sided, depends on the specific research question or hypothesis.
Two-tailed t-Test
A two-tailed test is used when you want to test if a value significantly differs in either direction from a certain value. It's useful when you're uncertain about the direction in which the change
might occur or when you're interested in finding differences without predicting a specific direction.
One-tailed t-Test
A one-tailed test is used when a specific direction of change or difference is of interest.
• Left-sided t-Test: This test checks if the observed value is significantly lower than expected or consistent with the null hypothesis. A left-tailed test is used when you expect the result to be
smaller or lower than a certain value or average. For example, a researcher might use a left-sided test to examine if a new treatment significantly reduces blood pressure.
• Right-sided t-Test: This test checks if the observed value is significantly higher than expected. It's used when you expect the result to be greater or higher than a certain value or average. For
example, a researcher might use a right-sided test to determine if a new training method leads to significantly higher muscle growth.
In summary, decide if your research aims to test for differences in both directions (two-tailed) or in a specific direction (one-tailed). If you're testing in a specific direction, determine whether
you expect the values to be lower or higher than the average or the null hypothesis (left- or right-sided).
Assumptions unpaired t-Test
To calculate an independent t-test you need one independent variable (e.g. gender) that has two characteristics or groups (e.g. male and female) and one metric dependent variable (e.g. income). These
two groups should be compared in the analysis. The question is, is there a difference between the two groups with regard to the dependent variable (e.g. income). The assumptions are now the
1. There are two dependent groups or samples
As the name of this t-test suggests, the samples must be independent. This means that a value in one sample must not influence a value in the other sample.
• Measuring the weight of people who have been on a diet and people who have not been on a diet.
• Measuring the weight of a person before and after a certain diet.
2. The variables are interval scaled
For the t-test for independent samples, the mean value of the sample must be calculated, this is only meaningful if the variable is metric scaled.
• The weight of a person (in kg)
• The educational level of a person
3. The variables are normally distributed
The t-test for independent samples gives the most accurate results when the data from each group are normally distributed. However, there are exceptions in special cases.
• The weight, age or height of a person.
• The number after throwing a die
4. The variance within the groups should be similar
Since the variance is needed to calculate the t value, the variance within each group should be similar.
• Weight, age or height of a person
• The stock market crisis in "normal" times and in a recession
Assumptions not met?
If the assumptions for the independent t-test are not met, the calculated p-value may be incorrect. However, if the two samples are of equal size, the t-test is quite robust to a slight skewness of
the data. The t-test is not robust if the variances differ significantly.
If the variables are not normally distributed, the Mann-Whitney U test can be used. The Mann-Whitney U-Test is the non-parametric counterpart of the independent t-test.
Calculate t-test for independent samples
Depending on whether the variance between the two groups is assumed to be equal or unequal, a different equation for the test statistic t is obtained. Checking whether the variances are equal or not
is done with the Levene-Test. The null hypothesis in the Levene-Test is that the two variances are not different. If the p-value of the levene-test is less than 5%, it is assumed that there is a
difference in the variances of the two groups.
Equations for equal variance (homogeneous)
If the Levene test yields a p-value of greater than 5%, it is assumed that both groups have equal variance and the test statistics are:
The p-value can then be determined from the table with the t distribution. The number of degrees of freedom is given by
where n[1] and n[2] are again the number of cases in the two samples.
Formula for unequal Variance (heterogeneous)
The test statistic t for a t-test for independent samples with unequal variance is calculated by
The p-value then follows from the table with the t-distribution, where the degrees of freedom are obtained via the following equation:
Confidence interval for the true mean difference
The calculated mean difference in the independent t-test has been calculated using the sample. Now it is of course of interest in which range the true mean difference lies. To determine within which
limits the true difference is likely to lie, the confidence interval is calculated.
The 95% confidence interval for the true mean difference can be calculated by the following formula:
where t^* is the t value obtained at 97.5% and degrees of freedom df.
One-sided and two-sided unpaired t-test
As explained in the article on hypothesis, there are one-sided and two-sided hypotheses (also called directional and non-directional hypotheses). To accommodate this, there is also a one-sided and
two-sided t-test for independent samples. By default, the two-sided unpaired t-test is calculated, which is also output in DATAtab.
To obtain the one-sided t-test for independent samples, the p-value must be divided by two. Now it depends on whether the data tend "in the direction" of the hypothesis or not. If the hypothesis says
that the mean of one group is larger or smaller than the mean of the other group, this must also be seen in the result. If this is not the case, 1 minus the halved p-value must be calculated.
Effect size unpaired t-test
The effect size in an unpaired t-test is usually calculated using the Hedges g, also called d. In the independent t-test calculator on DATAtab you can easily get the effect size.
What do you need the effect size for?
The calculated p-value depends very much on the sample size. For example, if there is a difference in the population, the larger the sample size, the more clearly the p-value will "show" this
difference. If the sample size is chosen very high, even very small differences, which may no longer be relevant, can be "detected" in the population. To standardize this, the effect size is used in
addition to the p-value.
Calculate t-test for independent samples with DATAtab
A lecturer would like to know whether the statistics exam results in the summer semester differ from those in the winter semester. To this end, she creates an overview with the points achieved per
Research question:
Is there a significant difference between the examination results in the summer and winter semester?
Null hypothesis H0:
There is no difference between the two samples. There is no difference between the statistics exam results in the summer semester and in the winter semester
Alternative hypothesis H1:
There is a difference between the two samples. There is a difference between the statistics exam results in the summer semester and in the winter semester
Summer semester Winter semester
After copying the above sample data into the Hypothesis Test Calculator on DATAtab, you can calculate the t-test for independent samples. The results for the t-test example look like this:
Group statistics
n Mean Standard deviation Standard error of the mean
Summer semester 13 52.077 11.026 3.058
Winter semester 11 46.182 16.708 5.038
Unpaired t-test
t df p
Summer semester & Winter semester Equal variance 1.035 22 0.312
Unequal variance 1 16.824 0.331
95% confidence interval
Mean value difference Standard error Lower Upper
of difference
Summer semester & Winter semester Equal variance 5.895 5.893 -6,328 18.118
Unequal variance 5.895 5.893 -6.55 18.34
How to interpret a t-test for independent samples?
To make a statement about whether your hypothesis is significant or not, one of the following two values is used
• p-value (2-tailed)
• lower and upper confidence interval of the difference
In this t-test example, the p-value (2-tailed) is 0.312 or 31%. This means that the probability that you draw a sample where both groups differ more than the groups in the example is 31%. Since the
significance level was set at 5 %, it is thus lower than 31 %. For this reason, no significant difference is assumed between the two samples and they therefore come from the same population.
The second way to determine whether or not there is a significant difference is to use the confidence interval of the difference. If the lower and upper limits runs through zero, there is no
significant difference. If this is not the case, there is a significant difference. In this t-test example, the lower value is -6.328 and the upper value is 18.118. Since zero is between the two
values, there is no significant difference.
It is common practice to first display the two samples in a chart before calculating a t-test for independent samples. For this purpose, a boxplot is suitable which visualizes the Measurement of
Central Tendency and Measurement of Variability of the two independent samples very well.
To calculate an independent t-test online, you can also use the independent t-test calculator.
Report a t-test for independent samples
Reporting a t-test for independent samples in APA (American Psychological Association) style involves presenting key details about your statistical test in a clear, concise manner. Here's a general
guideline on how to report the results of an independent samples t-test according to APA style:
Test Statistic:
Clearly state that you are using an independent samples t-test. Report the degrees of freedom in parentheses after the "t" statistic, then provide the value of t.
Significance Level:
This is typically reported as "p" followed by the exact value or a comparison
Effect Size:
It's good practice to include an effect size (like Cohen's d) alongside the t-test result. This provides an indication of the magnitude of the difference between groups.
Means and Standard Deviations:
Report the means and standard deviations for each group. This gives a context to the t-test result.
Sample Size:
You can also mention the number of participants in each group, especially if this wasn't previously stated.
Here's a template:
An independent samples t-test was conducted to compare [variable] in [group 1] and [group 2]. There was a significant difference in the scores for [group 1] (M = [mean], SD = [standard deviation])
and [group 2] (M = [mean], SD = [standard deviation]); t([degrees of freedom]) = [t value], p = [exact p value] (two tailed). The magnitude of the differences in the means (mean difference = [mean
difference], 95% CI: [lower limit, upper limit]) was [small, medium, large], with a Cohen's d of [d value].
For example, consider you conducted an independent samples t-test comparing test scores between males and females. Assume you found the following results:
• Males: M = 50, SD = 10, n = 30
• Females: M = 55, SD = 9, n = 30
• t(58) = -2.5, p = .015, Cohen's d = 0.5
The results would be reported as:
An independent samples t-test was conducted to compare test scores in males and females. There was a significant difference in the scores for males (M = 50, SD = 10) and females (M = 55, SD = 9); t
(58) = -2.5, p = .015 (two tailed). The magnitude of the differences in the means (mean difference = -5, 95% CI: [provide the confidence interval limits here]) was medium, with a Cohen's d of 0.5.
Statistics made easy
• many illustrative examples
• ideal for exams and theses
• statistics made easy on 412 pages
• 5rd revised edition (April 2024)
• Only 8.99 €
Free sample
"It could not be simpler"
"So many helpful examples"
|
{"url":"https://datatab.net/tutorial/unpaired-t-test","timestamp":"2024-11-05T00:47:27Z","content_type":"text/html","content_length":"91453","record_id":"<urn:uuid:b06e7c83-9a28-48df-8b57-7cdbfb421742>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.84/warc/CC-MAIN-20241104225856-20241105015856-00166.warc.gz"}
|
Nuclear Physics and Radiations
1. Nuclear Physics and Radiations
Nuclear Physics and Radiations
The students will learn about the structure of the nucleus, the properties and mutual interaction between nucleons and the way their organization determines the nuclei properties. Students will get
acquainted with radioactivity as a natural process and its applications. Some aspects of applied nuclear physics will also be discussed as medical applications and material characterization.
In laboratory sessions, the students will become acquainted with practical aspects of radiation detection, involving different kind of detectors, the associated electronics and data acquisition.
General characterization
Responsible teacher
João Duarte Neves Cruz
Weekly - 4
Total - Available soon
Teaching language
Elementary Calculus;
Elementary Quantum Mechanics;
Elementary Electromagnetism;
Elementary Atomic Physics.
Notes from the lecturer.
Introductory Nuclear Physics – Kenneth S. Krane, John Wiley & Sons, New York (1988), ISBN 0-471-85914-1
Nuclear Physics – Principles and Applications, John S. Lilley, John Wiley & Sons, New York (2005), ISBN 0-471-97936-8
Radiation Detection and Measurement, 3rd ed. – Glenn F. Knoll, John Wiley & Sons, New York (2000), ISBN 0-471-07338-5
Física Nuclear – Theo Mayer-Kuckuk, ed. Calouste Gulbenkian, Lisboa (1979), ISBN 972-31-0598-5
Introdução à Física Atómica e Nuclear, Vol. II – L. Salgueiro e J.G. Ferreira, ed. Univ. Lisboa (1975).
Teaching method
Available soon
Evaluation method
Available soon
Subject matter
Fundamental particles and interactions. The weak interaction. The interaction between nucleons.
Angular, magnetic dipolar and electric quadrupolar moments.
Nuclear properties: the nuclear radius, charge and mass distributions. Mass, binding energy, semi-empirical mass formula.
The shell model of the nucleus; predictions and failures. Reference to collective models.
Radioactivity. Types of radioactive decay. Concepts and laws of radioactive decay. Natural radioactivity. Radioactive chains. Radioactive dating.
Alpha decay: energetics and experimental data. The theoretical model. Conservation of angular momentum and parity: selection rules. Alpha spectrometry.
Beta decay: energetics and experimental data. The Fermi model. Selection rules. Beta spectrometry.
Gamma decay: energetics. Classic and quantum models of radiation. Selection rules and internal conversion. Gamma spectrometry.
Nuclear Fission. Properties: prompt and delayed neutrons; instability of fragments.
Radiation detection; detectors; nuclear spectrometries. Nuclear Reactions.
Programs where the course is taught:
|
{"url":"https://guia.unl.pt/en/2023/fct/program/1051/course/12576","timestamp":"2024-11-09T01:23:50Z","content_type":"text/html","content_length":"20336","record_id":"<urn:uuid:a39b1396-b6de-44f4-81b1-45f149df98a3>","cc-path":"CC-MAIN-2024-46/segments/1730477028106.80/warc/CC-MAIN-20241108231327-20241109021327-00326.warc.gz"}
|
Worldline approach to vector and antisymmetric tensor fields II
We extend the worldline description of vector and antisymmetric tensor fields coupled to gravity to the massive case. In particular, we derive a worldline path integral representation for the
one-loop effective action of a massive antisymmetric tensor field of rank p (a massive p-form) whose dynamics is dictated by a standard Proca-like lagrangian coupled to a background metric. This
effective action can be computed in a proper time expansion to obtain the corresponding Seeley-DeWitt coefficients a[0], a[1], a [2]. The worldline approach immediately shows that these coefficients
are derived from the massless ones by the simple shift D→D+1, where D is the spacetime dimension. Also, the worldline representation makes it simple to derive exact duality relations. Finally, we use
such a representation to calculate the one-loop contribution to the graviton self-energy due to both massless and massive antisymmetric tensor fields of arbitrary rank, generalizing results already
known for the massless spin 1 field (the photon).
All Science Journal Classification (ASJC) codes
• Nuclear and High Energy Physics
• Duality in Gauge Field Theories
• Gauge Symmetry
• Sigma Models
Dive into the research topics of 'Worldline approach to vector and antisymmetric tensor fields II'. Together they form a unique fingerprint.
|
{"url":"https://collaborate.princeton.edu/en/publications/worldline-approach-to-vector-and-antisymmetric-tensor-fields-ii","timestamp":"2024-11-05T04:32:17Z","content_type":"text/html","content_length":"50289","record_id":"<urn:uuid:16aaec2a-31f4-4ab9-a425-d4133fde5976>","cc-path":"CC-MAIN-2024-46/segments/1730477027870.7/warc/CC-MAIN-20241105021014-20241105051014-00293.warc.gz"}
|
room H503
The Landau paradigm of phase transitions states that any continuous (second order) phase transition is a symmetry breaking transition. Originally this was formulated for symmetries that form groups,
e.g. the critical Ising model is the transition between the $\mathbb{Z}_2$ symmetric and spontaneously broken phases. In recent years a new class of symmetries, called categorical or non-invertible,
have emerged in quantum systems -- with impact ranging from high energy and condensed matter physics to mathematics, and quantum computing. I will explain how these symmetries generalize the Landau
paradigm and how new phases and phase transitions are predicted, which have potential future experimental implementations in cold atom systems.
|
{"url":"https://triangle.mth.kcl.ac.uk/?week=-1","timestamp":"2024-11-05T13:27:32Z","content_type":"text/html","content_length":"16248","record_id":"<urn:uuid:52e3812d-414a-4baa-9d60-dcb4ae713779>","cc-path":"CC-MAIN-2024-46/segments/1730477027881.88/warc/CC-MAIN-20241105114407-20241105144407-00172.warc.gz"}
|
Homotopic curves
Topic: Homotopic curves
Here is a generalization of Ivan's example, Reflection. Let (X1(x,y),Y1(x,y)) be a continuous mapping of the plane to itself and (f(t),g(t)) a parametric curve. A transformed curve is the curve (X1(f
(t),g(t)),Y1(f(t),g(t))). The two curves are homotopic in the sense that the original can be continuously deformed into the second. (The Reflection example uses X1(x,y)=y and Y1(x,y)=x).) An
interesting transformation is the "polar" function: X1(x,y)=y*cos(x) and Y1(x,y)=y*sin(x). The attached file illustrates this; you can animate on the constant c and thus see the continuous
deformation. Enjoy.
Re: Homotopic curves
Sorry, I just noticed there is no attached file. Here it is.
|
{"url":"https://forum.padowan.dk/viewtopic.php?pid=1927","timestamp":"2024-11-08T05:30:02Z","content_type":"text/html","content_length":"10181","record_id":"<urn:uuid:0dfedaad-8521-47a9-a1dd-a5db7bbbbf3b>","cc-path":"CC-MAIN-2024-46/segments/1730477028025.14/warc/CC-MAIN-20241108035242-20241108065242-00327.warc.gz"}
|
Nonlinear modes disentangle glassy and
SciPost Submission Page
Nonlinear modes disentangle glassy and Goldstone modes in structural glasses
by Luka Gartner, Edan Lerner
This is not the latest submitted version.
This Submission thread is now published as
Submission summary
Authors (as registered SciPost users): Edan Lerner
Submission information
Preprint Link: http://arxiv.org/abs/1610.03410v1 (pdf)
Date submitted: 2016-10-12 02:00
Submitted by: Lerner, Edan
Submitted to: SciPost Physics
Ontological classification
Academic field: Physics
• Condensed Matter Physics - Theory
Specialties: • Statistical and Soft Matter Physics
Approaches: Theoretical, Computational
One outstanding problem in the physics of glassy solids is understanding the statistics and properties of the low-energy excitations that stem from the disorder that characterizes these systems'
microstructure. In this work we introduce a family of algebraic equations whose solutions represent collective displacement directions (modes) in the multi-dimensional configuration space of a
structural glass. We explain why solutions of the algebraic equations, coined nonlinear glassy modes, are quasi-localized low-energy excitations. We present an iterative method to solve the algebraic
equations, and use it to study the energetic and structural properties of a selected subset of their solutions constructed by starting from a normal mode analysis of the potential energy of a model
glass. Our key result is that the structure and energies associated with harmonic glassy vibrational modes and their nonlinear counterparts converge in the limit of very low frequencies. As nonlinear
modes never suffer hybridizations, our result implies that the presented theoretical framework constitutes a robust alternative definition of `soft glassy modes' in the thermodynamic limit, in which
Goldstone modes overwhelm and destroy the identity of low-frequency harmonic glassy modes.
Current status:
Has been resubmitted
Reports on this Submission
Report #1 by Anonymous (Referee 1) on 2016-12-16 (Invited Report)
• Cite as: Anonymous, Report on arXiv:1610.03410v1, delivered 2016-12-16, doi: 10.21468/SciPost.Report.51
1- Studying a hot research topic
2- Proposing an extension of previous method to compute non-linear glassy modes
1- Merit of the extension is not well demonstrated
Low frequency glassy modes originate from disordered structure of glasses thought to be related to thermodynamic, dynamic, and mechanical properties of glassy systems.
However, the glassy modes obtained by standard approaches such as the normal mode analysis are often hybridized with Goldstone modes.
Especially, at the thermodynamic limit, the standard approaches would completely fail to identify the low frequency glassy modes.
Thus, disentangling the glassy modes and Goldstone modes is an important research topic.
In the paper by Gartner and Lerner, the authors propose a numerical method to extract the low frequency non-linear glassy modes from disordered glassy configurations.
The non-linear glassy modes are obtained by minimization of the (n-th order) cost function which is designed so that solutions of the mode satisfy two characteristic features of known harmonic glassy
modes, 1) low energy (small stiffness) and 2) localization (correlated with large expansion coefficients).
Whereas the third order (n=3) cost function introduced in the previous papers (Refs.[11,28]) corresponds to energy barrier between adjacent potential energy minima, there is no clear physical
interpretation for higher order (n>3) cost functions.
Hence, the authors numerically check whether the obtained modes have same structural and energetic properties of the harmonic glassy modes.
As a result, the obtained modes, called non-linear glassy modes, share same spatial decay profiles and energy variations with the harmonic glassy modes, which justifies the proposed method.
By analyzing energetics of the non-linear glassy modes for n>2, the authors advocate that n=4 modes are the best candidates to represent soft glassy excitations in glasses.
The motivation, problems, and proposed strategy, are well explained in the paper.
Also, numerical results are reasonable.
However, I have concerns about statement and technical issues described below.
a) n=4 modes are special?
In the paper, the authors assert that n=4 modes are the best candidates of soft glassy modes mainly because n=4 modes have the lowest stiffness among all non-linear modes with n>2.
However, it seems that this conclusion is not strongly supported by the presented data.
Because, according to FIG. 7(c) and (d), the stiffness as a function of n is nearly flat with taking similar values for n>3.
In addition to this, the authors demonstrated in Refs. [11,28] that n=3 modes also can predict soft region (quadrupole-like region in d=2) from hybridized modes.
From these observations, one would expect that structure of all non-linear modes of any n-th order are similar (not only statistical sense shown in FIG. 5) as long as they are obtained from a same
given configuration.
If this expectation is true, the statement that n=4 modes are special might be weaken.
With this in mind, I would like to know data comparing structures of the mode with different orders from the same input, for example, by using the inner product of the modes as shown in FIG. 8(a).
b) Initial input modes for FIG. 11(a)
FIG. 11(a) shows disentangling the nonliner glassy modes and Goldstone modes.
In this plot, the authors employ the non-affine displacement response to shear deformation as the input for the iterative method.
Naively, one might think that starting from the harmonic modes (hybridized with Goldstone modes) is straightforward, because this way indeed fits the idea of disentangling the two modes.
Thus, it would be better to explain physical or technical reasons why the authors chose the non-affine displacement field.
Requested changes
1- Mistype in the caption of FIG. 1
before: the "horizontal" dash-dotted lines
after: the "vertical" dash-dotted lines
We thank the referee for carefully reading the manuscript, and for helping us improve its clarity. Please find below our responses to the referee’s comments, which are enclosed between horizontal
a) $n\!=\!4$ modes are special?
In the paper, the authors assert that $n\!=\!4$ modes are the best candidates of soft glassy modes mainly because $n\!=\!4$ modes have the lowest stiffness among all non-linear modes with $n\!>\!2$.
However, it seems that this conclusion is not strongly supported by the presented data. Because, according to FIG. 7(c) and (d), the stiffness as a function of $n$ is nearly flat with taking similar
values for $n\!>\!3$.
We thank the referee for raising this interesting issue for discussion. Firstly, we would like to clarify that we have made the measurements and reported in the original manuscript that the
stiffnesses of all $n\!>\!2$ nonlinear modes are, in more than 99% of the instances studied (a few thousands), larger than the 4th order modes’ stiffnesses. We certainly did not conclude that 4th
order modes are the softest amongst all $n\!>\!2$ modes based on the data presented in Fig.7, which shows an analysis of stiffnesses for merely two instances of modes. Furthermore, notice that we did
not overlook the flatness of the stiffness as a function of the order $n$ presented in Fig.7; instead, we explicitly commented on it in the original manuscript.
We believe that in a study focusing on low-frequency vibrational modes, singling out and focusing on the lowest-stiffness modes amongst the entire family of nonlinear modes is the most natural and
relevant choice to be made. We wonder what other criteria aside from mode stiffness would be more relevant to our main goal, which was to find a micromechanical definition of soft glassy modes that
are oblivious to the proximity of Goldstone modes with similar energies. Focusing on higher order nonlinear modes, which typically possess larger stiffnesses, seems sub-optimal with respect to our
Nevertheless, an interesting question, albeit beside the main point of our work, concerns the statistical trends observed upon comparing stiffnesses of higher order modes to the stiffnesses of 4th
order modes, beyond our observation that the 4th order modes are softer. In the revised manuscript we improved the discussion about the dependence of nonlinear modes’ stiffnesses on their order, and
further mention that the tendency of higher order modes’ stiffnesses to be similar to 4th order modes' stiffnesses increases with decreasing frequency of their ancestral harmonic modes. We leave,
however, the detailed statistical study of these trends for future work.
Finally, we elaborate further in what follows on why, in fact, the weak dependence of NGMs stiffness on their order $n$ can be seen as a strength of our approach.
In addition to this, the authors demonstrated in Refs. [11,28] that $n\!=\!3$ modes also can predict soft region (quadrupole-like region in $d\!=\!2$) from hybridized modes. From these observations,
one would expect that structure of all non-linear modes of any $n$-th order are similar (not only statistical sense shown in FIG. 5) as long as they are obtained from a same given configuration.
There seems to be some confusion here: as described explicitly in the text, Fig.5 shows the decay profiles measured for individual NGMs calculated from the same ancestral harmonic mode. The decay
profiles presented are not averaged over many realizations, and therefore the figure does not describe the similarity of different order NGMs in a statistical sense, but precisely in the sense
mentioned by the referee.
If this expectation is true, the statement that $n\!=\!4$ modes are special might be weaken.
That $n\!>\!4$ modes can have similar (but almost always higher) stiffnesses compared to $n\!=\!4$ modes is beside the main point of our work, which is the demonstration that the stiffness and
structure of (4th order) nonlinear modes converge to the stiffness and structure of the globally-minimal stiffness (harmonic) modes at low frequencies. Our observations clearly rule out the
possibility that higher order modes show better convergence properties compared to $n\!=\!4$ modes.
Moreover, we see the similarity that $n\!=\!4$ modes bare to higher order modes as a potential strength of our approach, as discussed in the Summary and Discussion Section: NGMs can be thought of as
the linear response to localized forces (see Section VI.a), with the general trend that higher order NGMs are given by responses to more localized forces. The similarity between high order NGMs to
4th order NGMs suggests that detecting the entire field of NGMs is possible by calculating the linear response to simple, localized forces, and using those linear responses as ancestral modes from
which NGMs can be calculated. The similarity of 4th order NGMs to higher order NGMs increases the likelihood that such linear responses to localized forces serve as very good heuristics for obtaining
useful ancestral modes. We expanded on this point in the Summary and Discussion Section of the revised manuscript.
With this in mind, I would like to know data comparing structures of the mode with different orders from the same input, for example, by using the inner product of the modes as shown in FIG. 8(a).
We reiterate that Fig. 5 precisely shows a comparison between the spatial structure of different order modes obtained from the same ancestor. A larger-scale statistical study of the differences
between the structural and energetic properties of $n\!=\!4$ and $n\!\ne \!4$ modes, beyond the key observation that 4th order NGMs have the smallest stiffnesses amongst all other order NGMs, is
outside of the scope of our work. The further investigation of the statistical differences between higher order NGMs, which are anyway more energetic and therefore of less interest, is left for
future studies.
b) Initial input modes for FIG. 11(a)
FIG. 11(a) shows disentangling the nonlinear glassy modes and Goldstone modes. In this plot, the authors employ the non-affine displacement response to shear deformation as the input for the
iterative method. Naively, one might think that starting from the harmonic modes (hybridized with Goldstone modes) is straightforward, because this way indeed fits the idea of disentangling the two
modes. Thus, it would be better to explain physical or technical reasons why the authors chose the non-affine displacement field.
We thank the referee for pointing out this issue. We chose the non-affine displacement fields as ancestral modes for technical reasons: they are quickly computed, and are often mapped to low-energy
nonlinear modes. There is nothing particular about this or any other choice for ancestral modes, as long as they can be mapped to the desired soft glassy modes. The important point is that NGMs can
and do exist in the frequency regime where localized glassy modes cannot be represented by harmonic modes (due to strong hybridizations). We have clarified this point in the revised manuscript.
|
{"url":"https://www.scipost.org/submissions/1610.03410v1/","timestamp":"2024-11-05T15:59:26Z","content_type":"text/html","content_length":"46323","record_id":"<urn:uuid:c707aebe-90e6-4a38-9015-9406a481e5e9>","cc-path":"CC-MAIN-2024-46/segments/1730477027884.62/warc/CC-MAIN-20241105145721-20241105175721-00759.warc.gz"}
|
Ratio Analysis - Academia Essay Writers
0 comments
1,750–2,000 words (not including title and reference pages) APA
Find a
manufacturing company’s annual report.
Calculate the following ratios for the company that you select:
Return on assets
Return on equity
Gross profit margin
Debt to equity ratio
Debt ratio
Current ratio
Quick ratio
Inventory turnover
Total asset turnover
Price earnings ratio
Using the calculated ratios, analyze the financial performance of
the firm. You will do this by looking at the ratios and comparing them to
ratios from previous periods and in some cases, against their competitors. Keep
in mind that you are trying to determine how the firm is performing under each
of the listed ratios. In a memo to the chief executive officer (CEO), include
the following:
You are to calculate the ratios
for the company and analyze those ratios.
Find a
manufacturing company’s annual report.
MUST BE IN THIS Format:
I.Part 1 –
Give an overview of what is going to be discussed in the memo
II.Part II – Heading – Ratio Analysis Explanation –
Explain the proper way to use ratio analysis and the users of ratio analysis
III.Part III – Heading – Ratio Calculation – List
the ratio, the formula, the amount
IV.Part IV –
Heading – Evaluation of the Ratios –
Evaluate the ratios, find a similar company similar to compare your calculated ratios, discuss
what the company is doing well and how the company can improve
V.Part V – Heading – Other Financial Analysis –
Discuss other ways to analyze Financial statement s other than ratios and with
other ratios not listed.
VI.Part VI – Heading – Recommendations for
{"email":"Email address invalid","url":"Website address invalid","required":"Required field missing"}
|
{"url":"https://academiaessaywriters.com/ratio-analysis-7/","timestamp":"2024-11-10T14:44:50Z","content_type":"text/html","content_length":"68470","record_id":"<urn:uuid:f5026c69-d539-4af1-93b5-b0fde1308092>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.60/warc/CC-MAIN-20241110134821-20241110164821-00291.warc.gz"}
|
__fpc_rs() — Read floating-point control register and change rounding mode
__fpc_rs() — Read floating-point control register and change rounding mode field
Standards / Extensions C or C++ Dependencies
Both OS/390® V2R6
#include <_Ieee754.h>
void __fpc_rs(_FP_fpcreg_t *cur_ptr, _FP_rmode_t rmode);
General description
The __fpc_rs() function stores the current contents of the floating-point control (FPC) register at the location pointed to by
and then sets the rounding mode field of the FPC based on the value specified by
as follows:
Rounding Mode
Round to nearest
Round toward zero
Round toward +Infinity
Round toward -Infinity
1. When processing IEEE floating-point values, the C/C++ runtime library math functions require IEEE rounding mode of round to nearest. The C/C++ runtime library takes care of setting round to
nearest rounding mode while executing math functions and restoring application rounding mode before returning to the caller.
2. This function does not return or update decimal floating-point rounding mode bits.
|
{"url":"https://www.ibm.com/docs/en/zos/3.1.0?topic=ifpf-fpc-rs-read-floating-point-control-register-change-rounding-mode-field","timestamp":"2024-11-03T15:55:08Z","content_type":"text/html","content_length":"7415","record_id":"<urn:uuid:72a7b52f-98e4-4bb4-8dea-208cfca481e0>","cc-path":"CC-MAIN-2024-46/segments/1730477027779.22/warc/CC-MAIN-20241103145859-20241103175859-00400.warc.gz"}
|
SQL UPDATE Using a Join
You may need to do an update on joined tables to get a more conditional update. For instance, I have a Student table as well as an AcademicStatus table. The Student table contains all the students
(profound, I know) and the AcademicStatus table tells if a student is in good standing, at risk, or has dropped out based on a StandingID. The Student table also lists a graduation date and a current
bit to show if the student is currently enrolled. While generating data for these particular tables recently I ran into an issue where some students had dropped out, but mysteriously had graduation
dates, or were listed as being currently enrolled. The easiest way to update this information is by doing a simple SQL UPDATE command on the joined tables. First we will run a query to get all the
students that have dropped out in the AcademicStatus table, while being joined to the Student table pulling back the current and GraduationDate fields. SELECT AcademicStatus.StandingID, Student.
[Current], Student.GraduationDate FROM Student INNER JOIN AcademicStatus ON Student.StudentID = AcademicStatus.StudentID WHERE (AcademicStatus.StandingID = 3) We can then look through that data and
see there are students dropped out that have graduated. That would be a really neat trick. Now you simply need to put everything after "FROM" into your update statement. So now: UPDATE Student SET
GraduationDate = NULL, [Current] = '0' Becomes: UPDATE Student SET GraduationDate = NULL, [Current] = '0' FROM Student INNER JOIN AcademicStatus ON Student.StudentID = AcademicStatus.StudentID WHERE
(AcademicStatus.StandingID = 3) This means the GraduationDate will be set to NULL and the Current bit will be zero for a particular student in the Student table ONLY if the corresponding student has
an StandingID of 3 on the AcademicStatus table. In the first update statement, all students in the Student table would be updated. That is how you update based on a condition in another table.
|
{"url":"https://test.bradleyschacht.com/sql-update-using-a-join","timestamp":"2024-11-14T11:55:23Z","content_type":"text/html","content_length":"92022","record_id":"<urn:uuid:5fd9d0bb-4886-414a-a97f-d9390db61d48>","cc-path":"CC-MAIN-2024-46/segments/1730477028558.0/warc/CC-MAIN-20241114094851-20241114124851-00297.warc.gz"}
|
Ievgen Ivokhin
Dr. Sci., Prof. of Department of System Analysis and Decision Making Theory
WORK EXPERIENCE
2015 and up to now – Professor, System Analysis and Decision Making Theory, Cybernetics Faculty, Taras Shevchenko National University of Kyiv
1989-2015 – Associate Professor, System Analysis and Decision Making Theory, Cybernetics Faculty, Taras Shevchenko National University of Kyiv
Doctor of Phys.-Math. Sciences in System Analysis and Optimal Solutions Theory (Thesis: "Methods of analysis of fuzzy multidimensional dynamical systems"), Institute of Aerospace Research, Ukrainian
National Academy of Science, Kyiv (2012)
Ph.D. degree in Mathematical Cybernetics (Thesis: "The development of optimal system Lyapunov functions and its application in evaluating of quality characteristics of complex systems"), Kyiv Taras
Shevchenko State University. (1986)
1982-1986 - Post-Graduate Student, Cybernetics Faculty, Kyiv Taras Shevchenko State University
1977-1982 – Student of Cybernetics Faculty, Kyiv Taras Shevchenko State University
Hybrid Models of Analysis, Optimization and Simulation Problems, Systems with Fuzzy Parameters
Research Fields:
Computer Science
Previous and Current Research
Development of new mathematical methods of system analysis and the theory of optimal solutions and its applications, 2016-2019
Problems of decision making and system analysis of stochastic networks, 2011-2015
Problems of decision making and its application at system analysis of socio-economic and ecological processes, 2006-2010
Future Projects and Goals
Development of new models of optimization problems such as distribution of limited resource, scheduling theory, logistics, classification, etc. Use the hybrid models for simulation in optimal control
problems. Research the systems with fuzzy parameters. Apply the results for solving modern social, information and communication problems.
Methodological and Technical Expertise
Development of new models of optimization problems such as distribution of limited resource, scheduling theory, logistics, classification, etc. Use the hybrid models for simulation in optimal control
problems. Research the systems with fuzzy parameters. Apply the results for solving modern social, information and communication problems.
Selected Publications
Oletsky, O.V., and Ivohin, E.V. (2021)
Formalizing the Procedure for the Formation of a Dynamic Equilibrium of Alternatives in a Multi-Agent Environment in Decision-Making by Majority of Votes, Cybernetics and Systems Analysis, 57(1),
Ivokhin, E.V., Adzhubey, L.T., and Gavrylenko, E.V. (2019)
On the formalization of dynamics in information processes on the basis of lnhomogeneous one-dimensional diffusion models,
Journal of Automation and Information Sciences, 51(2), 22–29.
Ivohin, E., and Apanasenko D. (2018)
Clustering of composite fuzzy numbers aggregate based on sets of scalar and vector levels,
Problems of Control and Informatics, 5, 136-147.
Ivohin, E., and Naumenko Yu. (2018)
On the formalization of information dissemination processes based on hybrid diffusion models,
Problems of Control and Informatics, 4, 121–128.
Ivohin, E., and Makhno, M. (2017)
On the approach to building structured fuzzy sets and their use for description of the fuzzy time schedule
Journal of Automation and Information Sciences, 3, 57-61.
Ivohin, E., and Vadnyov, D. (2015)
Some properties and estimates for the sequence of prime numbers
Problems of Control and Informatics, 6, 105-118.
Ivohin, E., and Barraq A.S.K., O. (2014)
On the approaches to the solution of the transportation problem with fuzzy resource
Problems of Control and Informatics, 5, 47–62.
Ivohin, E. (2013)
On application of special sets of prime numbers to determine the membership measure of fuzzy sets
Journal of Comp. and Applied Math. 4, 87-94.
Professor of Department of System Analysis and Decision Making Theory.
E-mail: ivohin@univ.kiev.ua
|
{"url":"http://science.knu.ua/en/researchgroups/research.php?ELEMENT_ID=2514","timestamp":"2024-11-12T00:08:31Z","content_type":"application/xhtml+xml","content_length":"18016","record_id":"<urn:uuid:bc1cbfa7-eb5d-4819-9efe-51f8b081d678>","cc-path":"CC-MAIN-2024-46/segments/1730477028240.82/warc/CC-MAIN-20241111222353-20241112012353-00397.warc.gz"}
|