ball mill grinding of iron ore
The ball mill is a rotating cylindrical vessel with grinding media inside, which is responsible for breaking the ore particles. Grinding media play an important role in the comminution of mineral ores in these mills. This work reviews the application of balls in mineral processing as a function of the materials used to manufacture them and the mass loss, as influenced by three basic wear mechanisms.
WhatsApp: +86 18838072829
Iron ore grinding and ... The final product from these large regrinding ball mills using small grinding media is approximately 28 μm and further upgraded by finisher magnetic separators to
produce the final concentrate. The concentrate is thickened in two 45 m diameter thickeners and stored before being pumped to the port ...
DOVE Ball Mills are designed to operate with various types of grinding media (grinding balls), and DOVE supplies various types and sizes of ball mill balls. DOVE supplies steel balls in various sizes and specifications: cast iron steel balls, forged grinding steel balls, glaze balls, and high-chrome cast steel bars, with hardness of 60-68 HRC.
Abstract: An effect of a grinding method, that is ball mill and high pressure grinding rolls (HPGR), on the particle size, specific surface area and particle shape of an iron ore concentrate was
Ore Blend Grinding at HPGR and Ball Mill. The ore blend was ground in a pilot-scale HPGR (1 m diameter × m width) at a maximum feed rate of 50 t/h. The ground product was recirculated to the HPGR feeding hopper five or seven times, and at every recirculation step a sample was gathered for moisture and size distribution measurement.
Cast Iron Grinding Media. Cast iron can be grey or white, but white cast irons are commonly used in abrasive wear applications in the comminution process. Cast iron grinding media are among the oldest media, being the first used in mineral processing, and can be grouped into cast low-chrome and high-chrome white iron [17].
The work index will also have a considerable variation across one ore body or deposit. ... A wet grinding ball mill in closed circuit is to be fed 100 TPH of a material with a work index of 15
and a size distribution of 80% passing ¼ inch (6350 microns). The required product size distribution is to be 80% passing 100 mesh (149 microns).
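The exercise above gives the work index, feed rate, and feed/product sizes, but stops before any power calculation. Assuming the standard Bond "third theory" equation applies (an assumption on my part; the excerpt does not say which method to use), a quick estimate of the grinding energy is:

```python
import math

# Bond's equation (a standard assumption, not stated in the exercise):
#   W = 10 * Wi * (1/sqrt(P80) - 1/sqrt(F80))   [kWh/t]
# with the 80%-passing sizes P80, F80 in microns.
Wi = 15.0           # work index from the exercise, kWh/t
F80 = 6350.0        # feed: 80% passing 1/4 inch = 6350 microns
P80 = 149.0         # product: 80% passing 100 mesh = 149 microns
feed_rate = 100.0   # t/h

W = 10 * Wi * (1 / math.sqrt(P80) - 1 / math.sqrt(F80))  # specific energy, kWh/t
mill_power = W * feed_rate                               # required power, kW
print(f"specific energy: {W:.2f} kWh/t; required mill power: {mill_power:.0f} kW")
```

For these numbers the estimate comes out to roughly 10.4 kWh/t, i.e. on the order of 1 MW of mill power at 100 TPH.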
Based on the batch grinding method and a normalization idea, a conical ball mill is used and a quantitative separation method for the grinding characteristics of multi-component complex ore is proposed. The results show that the feed sizes of polymetallic complex ore have an obvious influence on the particle size distribution of intermediate grinding products in the early stage of grinding.
Cylinder Proportions. Performance. The grinding media of a ball mill are in linear contact with the ore, so the product is usually rough. A ball mill is made up of a hollow cylindrical shell that rotates around its axis. The axis of the cylindrical shell is horizontal or at a slight angle to the horizontal. It is usually half filled with grinding balls.
The tests covered a range of slurry concentrations from 30 to 55 vol.% solid and fractional interstitial bed filling (U) from to, at a fixed ball load (30% of mill volume) and 70% of ...
and iron ore samples were ground using a ball mill in different grinding conditions (dry and wet) and at different fractions of the critical speed (R 45%, R 70% and R 90%) during wet grinding.
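The excerpt above refers to running at 45%, 70% and 90% of critical speed without defining it. A commonly used approximation (my addition, not from the source; the 3 m diameter below is hypothetical) is N_c = 42.3/√D rpm, obtained by equating gravity with centrifugal force at the shell:

```python
import math

def critical_speed_rpm(diameter_m):
    """Critical rotational speed of a tumbling mill, N_c = 42.3 / sqrt(D) rpm.

    Standard approximation with D the inside diameter in metres; it is not
    given in the excerpt above, so treat it as an illustrative assumption.
    """
    return 42.3 / math.sqrt(diameter_m)

nc = critical_speed_rpm(3.0)             # hypothetical 3 m mill
for fraction in (0.45, 0.70, 0.90):      # the R 45% / 70% / 90% settings mentioned
    print(f"run at {fraction:.0%} of critical: {fraction * nc:.1f} rpm")
```

For a 3 m mill this puts the critical speed near 24 rpm, so "R 70%" corresponds to roughly 17 rpm.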
Ball mill is an energy-intensive device for grinding and breaking iron ore particles, which is extensively used in the mineral, cement, chemical, and other industries [14]. In the field of mineral processing, a portion of the energy is converted into heat that raises the milling temperature, and the breakage characteristics of the iron ore change accordingly.
An iron ore concentrate sample was ground separately in a pilot-scale HPGR mill in multiple passes and in a dry open-circuit ball mill to increase the specific surface area of the particles.
Its major purpose is to perform the grinding and blending of rocks and ores to release any free gold that is contained within them. At major mines, the mill was the critical equipment required to process the ores extracted from deep underground. Many early mines used stamp mills, but many operations today find that ball mills ...
A ball mill, also known as a ball grinding machine, is a well-known ore grinding machine widely used in mining, construction, and aggregate applications. JXSC started the ball mill business in 1985, supplying global services including design, manufacturing, installation, and free operation training. 【Type】 According to the discharge type ...
① Ceramic Ball Mill: It prevents iron pollution and is especially suitable for grinding glass and ceramics. Ceramic mill balls are white aluminum balls with a diameter within mm. ② Rod Mill: Same
as ball mills, except for the use of steel rods as grinding media. The maximum feed size is 50 mm. The output size is 435 mesh. The ...
Grinding liberates valuable iron ore minerals from gangue minerals, making it easier to separate and recover the iron content. ... Ball Mill. Ball mill is widely used in iron ore beneficiation
Keywords: Iron ore; Ball mill simulation; Grinding kinetics; Subsieve size measurement. 1. Introduction. Companhia Vale do Rio Doce (CVRD) is a world leader in production of iron ore; sales of iron ore amounted to about 70% of the company's income and included exports of 7 billion. Brazil has one of the largest iron ore reserves in the ...
The grinding setup refers to the so-called Malmberget method used at LKAB, characterized by a subsequent circuit of rod and ball mill grinding. The highest P80 values were obtained by grinding only in the rod mill for 10 min (step A). Ball mill grinding for 25 min (step B) and 35 min (step C) gave a very narrow range of P80 values.
In short, according to the author, many of the cannons (especially those of large calibre, diameter of 100 mm and above) are actually ball mills for grinding iron ore. All external and structural alterations of the mill drum were introduced already in the early 19th century and presented to the public as artillery pieces (cannons, mortars, etc.) of ...
These features distinguish stirred mills as fundamentally different from both ball mills and Tower Mills, as demonstrated by Tables 1 and 2. [Table 1: Typical power intensities of different grinding devices — table contents not recoverable from the extraction.]
An effect of a grinding method, that is ball mill and high pressure grinding rolls (HPGR), on the particle size, specific surface area and particle shape of an iron ore concentrate was studied.
The particle size distribution was meticulously examined by sieve, laser and image analyses.
Each grinding ball is a round ball with precise dimensions. The sizes of Fote grinding steel balls can be designed according to customer requirements. Generally, the ball diameter is between 20 mm and 125 mm. You can also order 10 mm, 11 mm or other diameters of steel balls. Small steel balls: 40 mm or 60 mm.
Steel balls as traditional grinding media are prone to excessive fines generation and high energy consumption. Therefore, in light of this problem, the authors investigated another medium, ceramic balls, based on the output characteristics of fine particles. This study discusses the effect of ceramic balls on the change of the particle size distribution, zero-order output characteristics, micro ...
A Complete Algorithm
Now consider designing a complete algorithm that solves the problem in the case of a single pursuer. To be complete, it must find a solution if one exists; otherwise, it correctly reports that no
solution is possible. Recall from Figure 12.38 that the nondeterministic I-state changed in an interesting way only after a critical boundary was crossed. The pursuit-evasion problem can be solved by
carefully analyzing all of the cases in which these critical changes can occur. It turns out that these are exactly the same cases as considered in Section 12.3.4: crossing inflection rays and
bitangent rays. Figure 12.38 is an example of crossing an inflection ray. Figure 12.41 indicates the connection between the gaps of Section 12.3.4 and the parts of the environment that may contain
the evader.
Figure 12.41: Recall Figure 11.15. Beyond each gap is a portion of the environment that may or may not contain the evader.
Recall that the shadow region is the set of all points not visible from the pursuer's current position. Every critical event changes the number of shadow components. If an inflection ray is crossed, then a shadow component either appears or disappears, depending on the direction. If a bitangent ray is crossed, then either two components merge into one or one component splits into two. To keep track of the nondeterministic I-state, it must be determined whether each component of the shadow region is cleared, which means it certainly does not contain the evader, or contaminated, which means that it might contain the evader. Initially, all components are labeled as contaminated, and as the pursuer moves, cleared components can emerge. Solving the pursuit-evasion problem amounts to moving the pursuer until all shadow components are cleared. At this point, it is known that there are no places left where the evader could be hiding.
If the pursuer crosses an inflection ray and a new shadow component appears, it must always be labeled as cleared because this is a portion of the environment that was just visible. If the pursuer
crosses a bitangent ray and a split occurs, then the labels are distributed across the two components: A contaminated shadow component splits into two contaminated components, and a cleared component
splits into two cleared components. If the bitangent ray is crossed in the other direction, resulting in a merge of components, then the situation is more complicated. If one component is cleared and
the other is contaminated, then the merged component is contaminated. The merged component may only be labeled as cleared if both of the original components are already cleared. Note that among the
four critical cases, only the merge has the potential to undo the work of the pursuer. In other words, it may lead to recontamination.
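The four relabeling rules just described can be captured in a few lines. The sketch below is my own encoding (the text fixes the rules but not any data structure): the shadow components of the current cell are a list of booleans, with True meaning contaminated.

```python
# Shadow-component bookkeeping for the four critical events described above.
# Labels: True = contaminated, False = cleared.

def appear(labels):
    # Crossing an inflection ray so a new shadow component appears:
    # it was just visible, so it must be labeled cleared.
    return labels + [False]

def disappear(labels, i):
    # Crossing an inflection ray so component i vanishes.
    return labels[:i] + labels[i + 1:]

def split(labels, i):
    # Crossing a bitangent ray: component i splits; both parts inherit its label.
    return labels[:i] + [labels[i], labels[i]] + labels[i + 1:]

def merge(labels, i, j):
    # Crossing a bitangent ray the other way: the merged component is cleared
    # only if both originals were cleared -- the one event that can undo work.
    merged = labels[i] or labels[j]
    rest = [lab for k, lab in enumerate(labels) if k not in (i, j)]
    return rest + [merged]

def solved(labels):
    # The pursuit-evasion problem is solved once every component is cleared.
    return not any(labels)
```

Note that `merge` is the only rule that can turn previously cleared territory back into contaminated territory, which is exactly the recontamination hazard mentioned above.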
Figure 12.42: The environment is decomposed into cells based on inflections and bitangents, which are the only critical visibility events.
Consider decomposing the environment into cells based on inflection rays and bitangent rays, as shown in Figure 12.42. These cells have the following information-conservative property: If the pursuer travels along
any loop path that stays within a 2D cell, then the I-state remains the same upon returning to the start. This implies that the particular path taken by the pursuer through a cell is not important. A
solution to the pursuit-evasion problem can be described as a sequence of adjacent 2D cells that must be visited. Due to the information-conservative property, the particular path through a sequence
of cells can be chosen arbitrarily.
Searching the cells for a solution is more complicated than searching for paths in Chapter 6 because the search must be conducted in the I-space. The pursuer may visit the same cell on different occasions but with different knowledge about which components are cleared and contaminated. A directed graph can be constructed as follows. For each 2D cell and each possible labeling of its shadow components, a vertex is defined. For example, if the shadow region of a cell has three components, then there are eight corresponding vertices, one for each labeling. An edge exists between two vertices if: 1) their corresponding cells are adjacent, and 2) the labels of the components are consistent with the changes induced by crossing the boundary between the two cells. The second condition means that the labeling rules for an appearance, disappearance, split, or merge must be followed. For example, if crossing the boundary causes a split of a contaminated shadow component, then the new components must be labeled contaminated and all other components must retain the same label. Note that the graph is directed because many motions in the I-space are not reversible. For example, if a contaminated region disappears, it cannot reappear as contaminated by reversing the path. Note that the information in this directed graph does not improve monotonically as in the case of lazy discrete localization from Section 12.2.1. In the current setting, information is potentially worse when shadow components merge because contamination can spread.
To search the graph, start with any vertex for which all shadow region components are labeled as contaminated. The particular starting cell is not important. Any of the search algorithms from Section 2.2 may be applied to find a goal vertex, which is any vertex for which all shadow components are labeled as cleared. If no such vertices are reachable from the initial state, then the algorithm can correctly declare that no solution exists. If a goal vertex is found, then the path in the graph gives the sequence of cells that must be visited to solve the problem. The actual pursuer path through the environment is then constructed from the sequence of cells. Some of the cells may not be convex; however, their shape is simple enough that a sophisticated motion planning algorithm is not needed to construct a path that traverses the cell sequence.
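The search just described can be sketched as a lazy breadth-first search over (cell, labeling) vertices. The function names and the toy two-cell example below are my own illustrations, not from the text:

```python
from collections import deque

def search_information_graph(start, neighbors, is_goal):
    """Breadth-first search over (cell, shadow-labeling) vertices.

    `neighbors(v)` should yield the vertices reachable by crossing one cell
    boundary, with labels updated by the appear/disappear/split/merge rules;
    the graph is revealed lazily rather than built in full up front.
    Returns the vertex sequence to a goal, or None if no solution exists.
    """
    frontier = deque([start])
    parent = {start: None}
    while frontier:
        v = frontier.popleft()
        if is_goal(v):
            path = []
            while v is not None:        # recover the cell sequence
                path.append(v)
                v = parent[v]
            return path[::-1]
        for w in neighbors(v):          # edges are directed: merges can
            if w not in parent:         # recontaminate, so no monotone pruning
                parent[w] = v
                frontier.append(w)
    return None                         # correctly report: no solution

# Toy example (hypothetical): two cells; entering cell 'B' clears the single
# shadow component that starts out contaminated in cell 'A'.
adj = {('A', (True,)): [('B', (False,))], ('B', (False,)): [('A', (True,))]}
plan = search_information_graph(('A', (True,)),
                                lambda v: adj.get(v, []),
                                lambda v: not any(v[1]))
print(plan)  # [('A', (True,)), ('B', (False,))]
```

Because vertices pair a cell with a labeling, the same cell can legitimately appear several times along a solution, matching the remark above that the pursuer may revisit cells with different knowledge.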
The algorithm presented here is conceptually straightforward and performs well in practice; however, its worst-case running time is exponential in the number of inflection rays. Consider a polygonal environment that is expressed with edges. There can be as many as inflections and bitangents. The number of cells is bounded by [412]. Unfortunately, the graph has an exponential number of vertices, because there can be many shadow components, and a cell with k components has 2^k possible labelings. Note that the graph does not need to be computed prior to the search. It can be revealed incrementally during the planning process. The most efficient complete algorithm, which is more complicated, was derived by first proving that any problem that can be solved by a pursuer using the visibility polygon can be solved by a pursuer that uses only two beams of light [770]. This simplifies the pursuer's sensor from a 2D region (the visibility polygon) to two rotatable rays, and dramatically reduces the complexity of the I-space.
Steven M LaValle 2020-08-14
Failing to capture stuff is not being wrong. So, for example, indicative conditionals not being material conditionals does not mean that classical logic is wrong, only that it doesn’t by itself handle the validity or otherwise of arguments involving indicative conditionals.
But the threat from truth-theoretic considerations, gaps and gluts, is different here.
Also the threat from the more general idea that there’s a relevant sense of logical consequence whereby explosion (ex falso quodlibet) isn’t valid.
In both cases things come to a head with: this argument is valid according to classical logic but really isn’t.
With the first threat, the problem is not in the notion of validity—that can stay. With the second, the frame of mind is that of wanting a conception of consequence/validity in which explosion simply
lacks that status, isn’t sort of pseudo-valid, i.e. valid in a stronger pseudo-logic where we ignore some real possibilities.
With the first, you can say: explosion instances are often good arguments in some sense, even if not strictly valid. They’re truth-preserving w.r.t. all cases not involving dialetheia. Not, of course, good arguments in the sense that you’d ever follow them from premise to conclusion! But we can say yep, anything “follows from” a contradiction if we ignore models in which dialetheia occur. Provided you know you have no dialetheia in the mix, you have your guarantee that you’re not gonna be led from truth to falsity.
On this first conception, i.e. the dialetheic one, how does classical logic err? Where does it go wrong?
‘You say it is not possible for P&~P to be true while Q is false. But it is possible, because, when you interpret P, sometimes your classical model which corresponds to reality, while it rightly
captures the fact that P is true (false), goes wrong in not also capturing the fact that P is false (true). Well, actually, in these cases, two of your classical models will correspond to reality.’
(One fix, make the valuation function a relation—on that implementation, we can say the classical model rightly maps P to T (F) but fails to also map it to F (T). Another, add a third truth value representing the dialetheic status - but that’s a different mode of presentation and so you don’t get the perspicuous sense in which the classical model just leaves something out. — In the relation mode of presentation, you can take a classical model with one letter and get two full models - the one where you do nothing, and the one where you also map it to the other value. But with the three-value mode of presentation, while a classical model is straightforwardly still a special case of one of these full-story models, it is no longer the case that you can take a classical model of a situation and make it correct by only adding something (or doing nothing) - if you have a dialetheia, you have to unmap it from T and instead map it to X. So that makes it look like the classical model has said something wrong — and of course we can look at a classical model that way, if we treat T as “true and true only” or regard the model as making an implicit claim to telling the whole story about which letters have which of the two properties, truth and falsity. But here the principle of charity, and general good sense, should tell us to not regard that as part of classical logic itself. So let’s put that aside.)
From this point of view, classical logic knows what validity is alright, and doesn’t get any individual thing wrong semantically, but the semantics is incomplete—the models lack information sometimes
(and the way the notion of model is set up precludes putting it in). And this leads to cases where counterexamples fail to show up, because the classical models miss parts of the picture without
which the picture doesn’t show a counterexample.
A true sentence like ‘John is here in this room’, and its Twin Earth counterpart, express different propositions, since they are about distinct people. And that means that propositions sometimes
constitutively involve particular external things that they are about.
What, in light of this, should we say about how, if at all, what propositions there are—what claims exist—varies across possible worlds?
One side of this issue is: could propositions like the ones expressed by a normal true use of ‘John is here in this room’ have failed to exist? Do they fail to exist in (or with respect to, perhaps?)
all worlds in which John does not exist? (I set aside Williamsonian necessitarianism about what there is.)
My notion of the internal meaning of a sentence, or the way it is used, gives me a way of agreeing that there’s something right about the idea that the meanings of sentences are just there and exist
necessarily. Given a normal occurrence of ‘John is here in this room’, the way the sentence is being used—which it has in common with its Twin Earth counterpart—may be regarded as a pure abstract
object, like a way of dancing, which we can say is just there and could in no sense have failed to exist.
Here is another question we might ask: propositions about particular people and physical things—suppose they do exist in some possible worlds apart from the actual world. But do they themselves have
different properties in worlds where the things they are about have different properties? A way of using a sentence, we might say, is just what it is and doesn’t have different intrinsic properties
at any rate in different worlds—it will of course have different extrinsic properties such as ‘having been instantiated by someone wearing a blue hat’. But if a claim constitutively involves the
object it is about, is the object with respect to the claim like a diamond set in a piece of jewellery, so that the piece’s properties change whenever the diamond’s do, since the diamond is part of
it? I think perhaps this need not be so. We could instead use the model of something which needs to be tied to something else, and which disappears, or at least ceases to be that thing, if we cut the
tie or remove the something else.
A tremendous complicating factor is that there are undoubtedly, in some sense, claims about things that do not in fact exist. We cannot here follow Kripke in Reference and Existence into the view
that these sentences do not in fact express propositions, any more than we should follow him in analyzing particular existential statements as talking about whether there is such-and-such a
proposition. (That theory is I think clearly tortured but this is not the place to mount objections but see Postscript.) And recall there that even Kripke was keen to avoid the seeming absurdity of
having to hold that the correct analysis of a statement can depend on whether it is true or false. However! It seems to me there is one thing in this general vicinity which we might indeed have to
come to terms with. Namely, that the modal profiles and identity conditions of propositions expressed by statements involving names that happen to be empty differ from those of propositions expressed
by statements which are being used in exactly the same way but where the names aren’t empty.
Someone might want to say: just because we can’t pick out particular propositions about physical objects and people etc. that do not actually exist, doesn’t mean they don’t exist. (Any more than the
fact that non-actual people can’t pick out our propositions means that they don’t exist.) But this is only really correct given something like Lewisian modal realism.
‘What if Vulcan had existed?’—Are we to follow Kripke in his view of unicorns and apply that even to the case of names, i.e. say that there is no particular possibility in question at all here? A lot
of what I am otherwise tending toward does seem to be leading me that way—but I suspect that here the shoe might really pinch, and that dwelling on this part of the issue and trying to do it justice
will lead to a breakthrough—a better view. A kind of more nuanced view which, pace recent Williamson, would not be a case of overfitting.
‘What if the claims made by some astronomers about Vulcan had been true? I don’t mean what if they had been right when they spoke. I mean, consider the claims they expressed about Vulcan. What if
those claims had been true? Is there a possible world in which they are true?’
Postscript. It seems a very important objection to Kripke’s analysis of negative existentials in R&E that he is kicking the can down the road. For how does it get to be true that ‘There is no such
proposition as that Vulcan exists’ expresses a true proposition? If we interpret it metalinguistically, it’s wrong as an analysis. So then how do we interpret it? The ‘no such’ has a soothing effect
and as it were shrouds the occurrence of ‘Vulcan’ in a haze. But we still need to account for what it’s doing there and how we get different statements when we pop in different empty names.
This post is dedicated to the memory of the late Queen Elizabeth II.
My book Meaning and Metaphysical Necessity is now out with Routledge. I began seriously developing the ideas in it in 2011 when I began my PhD, which is also the year this blog started. Many of the
posts here over the years were devoted to working out the views in the book.
Platonism is the default, almost obviously correct view about mathematical objects. One of the major things that puts pressure on Platonism is the question 'How do we know about mathematical objects,
then?'. What gives this question its power? I think three things conspire and that the third might be under-appreciated:
1. Real justificatory demands internal to mathematical discourse. For particular mathematical claims, there are very real 'How do we know?' questions, and they have substantive mathematical answers.
The impulse to ask the question then gets generalized to mathematical knowledge in general, except that then there's no substantial answer.
2. A feeling of impossibility engendered by a causal theory of knowledge. If you only think about certain kinds of knowledge, it can seem plausible that, in general, the way we get to know about
things is via their causal impacts on us. This then makes mathematical knowledge seem impossible.
3. Our deeply-ingrained habit of giving reasons. The social impulse to justify one’s claims to another is hacked by a monster: the philosophical question at the heart of the epistemology of mathematics.
If it were just 1 and 2 getting tangled up with each other, the how-question would not be so persistent. With existing philosophical understanding we’d be able to see our way past it. But 3 hasn’t
been excavated yet and that keeps the whole thing going.
Let us use 'ONE' as an abbreviation of 'This sentence token contains more than one word token'. Now consider whether the following is true:
In the Times Literary Supplement there is a review of a new biography of Gödel by philosopher Cheryl Misak. The review is called 'What are the limits of logic? How a groundbreaking logician lost control'. The following paragraph from the review contains two major inaccuracies:
Gödel proved that if a statement in first-order logic is well formed (that is to say, it follows the syntactic rules for the formal language correctly), then there is a formal proof of it. But his
second doctorate, or Habilitation, published in 1931, showed that in any formal system that includes arithmetic, there will always be statements that are both true and unprovable. The answer to the
Entscheidungsproblem was, therefore, negative.
The first one is that being well formed is like being grammatically correct - among the well formed formulas of first-order logic, there are formulas that are false no matter what (false on all
models or interpretations), formulas that can go either way, and formulas that are true no matter what (these ones are often called logical truths, or logically valid formulas). What Gödel showed is
that for every logical truth, there is a proof that it's a logical truth.
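For reference, the completeness theorem being described can be stated in standard notation (my formulation, not the review's or the post's):

```latex
% Goedel's completeness theorem (1929): every logically valid
% first-order formula has a formal proof.
\models \varphi \;\Longrightarrow\; \vdash \varphi
% The converse direction, soundness, says only valid formulas are provable:
\vdash \varphi \;\Longrightarrow\; \models \varphi
```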
The second major inaccuracy is that the answer to the decision problem (Entscheidungsproblem) is not shown to be negative by the incompleteness theorem that Misak alludes to. The negative answer
became known only in 1936, when Alonzo Church and Alan Turing independently showed it.
Final draft available on PhilPapers.
Abstract: Gillian Russell has recently proposed counterexamples to such elementary argument forms as Conjunction Introduction (e.g. ‘Snow is white. Grass is green. Therefore, snow is white and grass
is green’) and Identity (e.g. ‘Snow is white. Therefore, snow is white’). These purported counterexamples involve expressions that are sensitive to linguistic context—for example, a sentence which is
true when it appears alone but false when embedded in a larger sentence. If they are genuine counterexamples, it looks as though logical nihilism—the view that there are no valid argument forms—might
be true. In this paper, I argue that the purported counterexamples are not genuine, on the grounds that they equivocate. Having defused the threat of logical nihilism, I argue that the kind of
linguistic context sensitivity at work in Russell’s purported counterexamples, if taken seriously, far from leading to logical nihilism, reveals new, previously undreamt-of valid forms. By way of
proof of concept I present a simple logic, Solo-Only Propositional Logic (SOPL), designed to capture some of them. Along the way, some interesting subtleties about the fallacy of equivocation are
FA Cup
In four years 2001 to 2004 Arsenal have been drawn against Chelsea in the FA cup and have beaten Chelsea every time. What was the probability of this? Lots of fractions in the calculations!
In the FA Cup 64 teams play a knockout tournament. Over the four years 2001, 2002, 2003 and 2004 Arsenal have been drawn against Chelsea each year and have beaten Chelsea every time. What is the
probability of that happening?
Let's say that whenever Arsenal and Chelsea play the probability of Arsenal winning is 0.6 and otherwise, throughout the tournament, both these teams have a probability of winning of 0.7 in the first
round, 0.6 in the second round and 0.5 in the subsequent rounds.
Getting Started
You'll need to work out, for each round, the probability that Arsenal play Chelsea and also the probability that they do not play against each other but both survive to play in the following round. A
tree diagram is useful in thinking through this problem.
Student Solutions
Well done Roy M. for the following excellent solution.
To calculate the probability of Arsenal and Chelsea playing each other in the FA cup in four consecutive years and Arsenal winning each time we have to make reasonable assumptions about the
probabilities of these teams winning their games. So we assume that whenever Arsenal and Chelsea play the probability of Arsenal winning is $0.6$ and otherwise, throughout the tournament, both these
teams have a probability of winning of $0.7$ in the first round, $0.6$ in the second round and $0.5$ in the subsequent rounds.
To go about this problem I decided to find the chance of Arsenal playing and beating Chelsea round by round.
Round One: The chance of Arsenal being drawn against Chelsea and beating them is: $(1/63)\times(3/5)= 1/105$.
Round Two: The chance of Arsenal not being drawn against Chelsea in round one, and both teams winning their round one matches, multiplied by the chance of them playing each other in round two and
Arsenal winning is: $(62/63)\times(7/10)\times(7/10)\times(1/31)\times(3/5)= 7/750$
Continuing in this fashion, the probability of Arsenal beating Chelsea in any given round is equal to the product of the chances of the two teams not being drawn against each other in any former
round, times the chances of the two teams beating any team they were drawn against in all former rounds, times the chances of Arsenal being drawn against Chelsea in the given round, times the chance
of Arsenal winning.
The probabilities of Arsenal meeting and beating Chelsea in rounds 1 to 6 (round 6 being the final) are as follows:

| Round | Probability A plays C and wins | Probability A, C don't meet, both win |
|---|---|---|
| 1 | $\frac{1}{63}\cdot\frac{3}{5}=\frac{1}{105}$ | $\frac{62}{63}\cdot\left(\frac{7}{10}\right)^2$ |
| 2 | $\frac{1}{31}\cdot\frac{62}{63}\cdot\left(\frac{7}{10}\right)^2\cdot\frac{3}{5}=\frac{7}{750}$ | $\frac{30}{31}\cdot\frac{62}{63}\cdot\left(\frac{7}{10}\right)^2\cdot\left(\frac{3}{5}\right)^2$ |
| 3 | $\frac{1}{15}\cdot\frac{30}{31}\cdot\frac{62}{63}\cdot\left(\frac{7}{10}\right)^2\cdot\left(\frac{3}{5}\right)^2\cdot\frac{3}{5}=\frac{21}{3125}$ | $\frac{14}{15}\cdot\frac{30}{31}\cdot\frac{62}{63}\cdot\left(\frac{7}{10}\right)^2\cdot\left(\frac{3}{5}\right)^2\cdot\left(\frac{1}{2}\right)^2$ |
| 4 | $\frac{1}{7}\cdot\frac{14}{15}\cdot\frac{30}{31}\cdot\frac{62}{63}\cdot\left(\frac{7}{10}\right)^2\cdot\left(\frac{3}{5}\right)^2\cdot\left(\frac{1}{2}\right)^2\cdot\frac{3}{5}=\frac{21}{6250}$ | $\frac{6}{7}\cdot\frac{14}{15}\cdot\frac{30}{31}\cdot\frac{62}{63}\cdot\left(\frac{7}{10}\right)^2\cdot\left(\frac{3}{5}\right)^2\cdot\left(\frac{1}{2}\right)^4$ |
| 5 | $\frac{1}{3}\cdot\frac{6}{7}\cdot\frac{14}{15}\cdot\frac{30}{31}\cdot\frac{62}{63}\cdot\left(\frac{7}{10}\right)^2\cdot\left(\frac{3}{5}\right)^2\cdot\left(\frac{1}{2}\right)^4\cdot\frac{3}{5}=\frac{21}{12500}$ | $\frac{2}{3}\cdot\frac{6}{7}\cdot\frac{14}{15}\cdot\frac{30}{31}\cdot\frac{62}{63}\cdot\left(\frac{7}{10}\right)^2\cdot\left(\frac{3}{5}\right)^2\cdot\left(\frac{1}{2}\right)^6$ |
| 6 | $\frac{2}{3}\cdot\frac{6}{7}\cdot\frac{14}{15}\cdot\frac{30}{31}\cdot\frac{62}{63}\cdot\left(\frac{7}{10}\right)^2\cdot\left(\frac{3}{5}\right)^2\cdot\left(\frac{1}{2}\right)^6\cdot\frac{3}{5}=\frac{21}{25000}$ | |
Adding all these probabilities together (i.e. finding the chances of one of them happening) you get: $${1101\over 35000}$$ which is roughly $3.15$ per cent. So for the odds given, there is a $3.15$
per cent chance of Arsenal playing and beating Chelsea in the FA cup.
However, the question asks for the probability of it happening four years in a row, so you must raise this result to the fourth power: $$\left({1101\over 35000}\right)^4=9.792 \times 10^{-7}$$
(giving the result to 4 s.f.) which gives roughly 1 in a million chance of it happening or a probability of $0.000098$ per cent.
So for a bet of £1 you could be laughing all the way to the bank with a sum of just over £1m.
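The round-by-round arithmetic above can be checked with exact fractions. The sketch below encodes the stated assumptions: 64 teams, Arsenal beat Chelsea with probability 3/5 whenever they meet, and each team otherwise wins with probability 7/10, 3/5, then 1/2 in later rounds.

```python
from fractions import Fraction as F

p_beat_chelsea = F(3, 5)                            # Arsenal beat Chelsea when they meet
p_win_other = [F(7, 10), F(3, 5)] + [F(1, 2)] * 4   # vs any other team, rounds 1-6

total = F(0)    # P(Arsenal meet and beat Chelsea somewhere in the cup)
survive = F(1)  # P(they haven't met yet and both are still in)
teams = 64
for rnd in range(6):
    p_meet = F(1, teams - 1)                        # chance of drawing each other
    total += survive * p_meet * p_beat_chelsea
    # they avoid each other and each beats its own opponent
    survive *= (1 - p_meet) * p_win_other[rnd] ** 2
    teams //= 2

print(total)              # 1101/35000
print(float(total ** 4))  # ~9.792e-07, four years running
```

Each term added in the loop reproduces the corresponding row of the table, and the final sum agrees with the solution's 1101/35000.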
Teachers' Resources
This requires an understanding of the simple rules of probability and a lot of calculating with fractions.
A percentile (or a centile) is a measure used in statistics indicating the value below which a given percentage of observations in a group of observations fall. For example, the 20th percentile is the value (or score) below which 20% of the observations may be found.
The term percentile and the related term percentile rank are often used in the reporting of scores from norm-referenced tests. For example, if a score is at the 86th percentile, where 86 is the percentile rank, it is equal to the value below which 86% of the observations may be found. In contrast, if a score is in the 86th percentile, it is at or below the value below which 86% of the observations may be found.
(By this definition, every score is in the 100th percentile.)
The 25th percentile is also known as the first quartile (Q1), the 50th percentile as the median or second quartile (Q2), and the 75th percentile as the third quartile (Q3). In general, percentiles
and quartiles are specific types of quantiles.
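As a quick illustration, the nearest-rank method, one of several common conventions for computing percentiles, can be sketched as:

```python
import math

def percentile_nearest_rank(data, p):
    """Smallest value at or below which p% of the observations fall."""
    ordered = sorted(data)
    rank = max(math.ceil(p / 100 * len(ordered)), 1)  # 1-based ordinal rank
    return ordered[rank - 1]

scores = [15, 20, 35, 40, 50]
print(percentile_nearest_rank(scores, 50))  # 35 -> the median (Q2)
print(percentile_nearest_rank(scores, 25))  # 20 -> the first quartile (Q1)
```

Note that linear-interpolation variants (as in spreadsheet percentile functions) give different values for small samples; there is no single universal definition.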
Adapted from Wikipedia, the free encyclopedia. Internet. Accessed on June 14, 2016.
More reflections Worksheet
K-12: Materials at high school level.
Objective: Learn about reflectional symmetry as it occurs in the real world
Materials: A print out of this worksheet
1. These pictures are found in the real world.
• Draw the reflectional line of symmetry for each picture.
• Are the lines of symmetry horizontal or vertical (Circle one)?
If the picture does not have any reflectional lines of symmetry, just write, no lines of symmetry!
│Horizontal Vertical │Horizontal Vertical │
│Horizontal Vertical │Horizontal Vertical │
2. How many reflectional lines of symmetry can you find in each picture?
If the picture does not have any reflectional lines of symmetry, state that.
│How many lines of symmetry? _____ │How many lines of symmetry? _____ │
│How many lines of symmetry? _____ │How many lines of symmetry? _____ │
3. Only half of the images below are shown. Draw the other half using reflectional symmetry.
Show Me Standards: 2. Geometric and spatial sense involving measurement (including length, area, volume), trigonometry, and similarity and transformations of shapes.
NCTM Standard: Apply transformations and use symmetry to analyze mathematical situations.
Created by Alyssa Kernen
Walter A. Shewhart - HKT Consultant
Economic Theorists, Management Theorists
Walter A. Shewhart
Walter Andrew Shewhart (pronounced like “shoe-heart”; March 18, 1891 – March 11, 1967) was an American physicist, engineer and statistician, sometimes known as the father of statistical quality
control and also related to the Shewhart cycle.
W. Edwards Deming said of him:
As a statistician, he was, like so many of the rest of us, self-taught, on a good background of physics and mathematics.^[1]
Early life
Born in New Canton, Illinois to Anton and Esta Barney Shewhart, he attended the University of Illinois at Urbana–Champaign before being awarded his doctorate in physics from the University of
California, Berkeley in 1917. He married Edna Elizabeth Hart, daughter of William Nathaniel and Isabelle “Ibie” Lippencott Hart on August 4, 1914 in Pike County, Illinois.
Work on industrial quality
Bell Telephone’s engineers had been working to improve the reliability of their transmission systems. In order to impress government regulators of this natural monopoly with the high quality of their
service, Shewhart’s first assignment was to improve the voice clarity of the carbon transmitters in the company’s telephone handsets. Later he applied his statistical methods to the final
installation of central station switching systems, then to factory production. When Shewhart joined the Western Electric Company Inspection Engineering Department at the Hawthorne Works in
1918, industrial quality was limited to inspecting finished products and removing defective items. That all changed on May 16, 1924. Shewhart’s boss, George D. Edwards, recalled: “Dr. Shewhart
prepared a little memorandum only about a page in length. About a third of that page was given over to a simple diagram which we would all recognize today as a schematic control chart. That diagram,
and the short text which preceded and followed it, set forth all of the essential principles and considerations which are involved in what we know today as process quality control.”^[2] Shewhart’s
work pointed out the importance of reducing variation in a manufacturing process and the understanding that continual process-adjustment in reaction to non-conformance actually increased variation
and degraded quality.
Shewhart framed the problem in terms of assignable-cause and chance-cause variation and introduced the control chart as a tool for distinguishing between the two. Shewhart stressed that bringing a
production process into a state of statistical control, where there is only chance-cause variation, and keeping it in control, is necessary to predict future output and to manage a process
economically. Dr. Shewhart created the basis for the control chart and the concept of a state of statistical control by carefully designed experiments. While Dr. Shewhart drew from pure mathematical
statistical theories, he understood data from physical processes never produce a “normal distribution curve” (a Gaussian distribution, also commonly called a “bell curve”). He discovered that
observed variation in manufacturing data did not always behave the same way as data in nature (Brownian motion of particles). Dr. Shewhart concluded that while every process displays variation, some
processes display controlled variation that is natural to the process, while others display uncontrolled variation that is not present in the process causal system at all times.^[3]
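The distinction between chance-cause and assignable-cause variation can be illustrated with a minimal control-chart calculation. This is a simplified sketch: limits here are set three standard deviations from the mean of a baseline period, whereas classical Shewhart charts estimate dispersion from subgroup ranges.

```python
import statistics

def control_limits(baseline, sigma_mult=3.0):
    """Center line and 3-sigma control limits from an in-control baseline."""
    mean = statistics.fmean(baseline)
    sd = statistics.stdev(baseline)
    return mean, mean - sigma_mult * sd, mean + sigma_mult * sd

baseline = [10.1, 9.9, 10.0, 10.2, 9.8, 10.1, 9.9, 10.0, 10.3, 9.7]
center, lcl, ucl = control_limits(baseline)

# Points inside the limits show chance-cause variation; points outside
# signal possible assignable-cause variation worth investigating.
new_points = [10.0, 10.2, 11.8, 9.9]
flagged = [x for x in new_points if not lcl <= x <= ucl]
print(flagged)  # [11.8]
```

The key idea survives the simplification: as long as all points fall within the limits, adjusting the process in reaction to individual points only adds variation.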
Shewhart worked to advance the thinking at Bell Telephone Laboratories from their foundation in 1925 until his retirement in 1956, publishing a series of papers in the Bell System Technical Journal.
His work was summarized in his book Economic Control of Quality of Manufactured Product (1931).
Shewhart’s charts were adopted by the American Society for Testing and Materials (ASTM) in 1933 and advocated to improve production during World War II in American War Standards Z1.1–1941, Z1.2-1941
and Z1.3-1942.
Later work
From the late 1930s onwards, Shewhart’s interests expanded out from industrial quality to wider concerns in science and statistical inference. The title of his second book, Statistical Method from
the Viewpoint of Quality Control (1939), asks the question: “What can statistical practice, and science in general, learn from the experience of industrial quality control?”
Shewhart’s approach to statistics was radically different from that of many of his contemporaries. He possessed a strong operationalist outlook, largely absorbed from the writings
of pragmatist philosopher Clarence Irving Lewis, and this influenced his statistical practice. In particular, he had read Lewis’ Mind and the World Order many times. Though he lectured in England in
1932 under the sponsorship of Karl Pearson (another committed operationalist) his ideas attracted little enthusiasm within the English statistical tradition. The British Standards nominally based on
his work, in fact, diverge on serious philosophical and methodological issues from his practice.
His more conventional work led him to formulate the statistical idea of tolerance intervals and to propose his data presentation rules, which are listed below:
1. Data have no meaning apart from their context.
2. Data contain both signal and noise. To be able to extract information, one must separate the signal from the noise within the data.
Shewhart visited India in 1947–1948 under the sponsorship of P. C. Mahalanobis of the Indian Statistical Institute. He toured the country, held conferences and stimulated interest in statistical
quality control among Indian industrialists.^[4]
He died at Troy Hills, New Jersey in 1967.
In 1938 his work came to the attention of physicists W. Edwards Deming and Raymond T. Birge. The two had been deeply intrigued by the issue of measurement error in science and had published a
landmark paper in Reviews of Modern Physics in 1934. On reading of Shewhart’s insights, they wrote to the journal to wholly recast their approach in the terms that Shewhart advocated.
The encounter began a long collaboration between Shewhart and Deming that involved work on productivity during World War II and Deming’s championing of Shewhart’s ideas in Japan from 1950 onwards.
Deming developed some of Shewhart’s methodological proposals around scientific inference and named his synthesis the Shewhart cycle that later became The PDSA Cycle.^[5]
To celebrate his quasquicentennial (125th) birth anniversary, the journal Quality Technology and Quantitative Management (ISSN 1684-3703) published a special issue on "Advances in the Theory and Application of Statistical Process Control".^[6]
Achievements and honours
In his obituary for the American Statistical Association, Deming wrote of Shewhart:
As a man, he was gentle, genteel, never ruffled, never off his dignity. He knew disappointment and frustration, through failure of many writers in mathematical statistics to understand his point
of view.
He was founding editor of the Wiley Series in Mathematical Statistics, a role that he maintained for twenty years, always championing freedom of speech and remaining willing to publish views at variance with his own.
His honours included:
• Founding member, fellow and president of the Institute of Mathematical Statistics;
• Founding member, first honorary member and first Shewhart Medalist of the American Society for Quality;
• Fellow and President of the American Statistical Association;
• Fellow of the International Statistical Institute;
• Honorary fellow of the Royal Statistical Society;
• Holley medal of the American Society of Mechanical Engineers;
• Honorary Doctor of Science, Indian Statistical Institute, Calcutta.
Publications
• 1917: A Study of the Accelerated Motion of Small Drops through a Viscous Medium, Ph.D. dissertation via Hathi Trust
• 1931: Economic Control of Quality of Manufactured Product, D. Van Nostrand Company via Internet Archive
• 1939: (with W. Edwards Deming) Statistical Method from the Viewpoint of Quality Control, The Graduate School, U. S. Department of Agriculture via Internet Archive
On Seeking Alpha
August 27, 2011
The most concise mathematical formulation for profits generated from trading stocks that I have encountered is provided by Schachermayer's payoff matrix:
Profits = ∑ (H .* ∆S)   (Schachermayer's Payoff Matrix)
where H is the holding matrix (the number of shares held in each stock over time), and ∆S is the matrix of price differentials from period to period.
Multiplying element-wise (• .* •) and summing will result in the payoff or sum of profits generated by the holding function H. These calculations can easily be done using any spreadsheet where
columns are stock price variations, and rows are the inventory held in each stock by date.
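The spreadsheet calculation described above can be sketched in a few lines. The prices and the holding matrix below are hypothetical data invented for the illustration; H and ΔS play the same roles as in the payoff matrix.

```python
# Hypothetical prices: rows = dates, columns = two stocks.
prices = [[50.0, 25.0],
          [51.0, 24.5],
          [50.5, 25.5],
          [52.0, 26.0]]

# ΔS: period-to-period price differentials.
dS = [[b - a for a, b in zip(row0, row1)]
      for row0, row1 in zip(prices, prices[1:])]

# Holding matrix H: shares held over each interval; a Buy & Hold with
# h0 = [100, 200] shares keeps every row equal to h0.
h0 = [100, 200]
H = [h0[:] for _ in dS]

# Profits = Σ (H .* ΔS): element-wise multiply, then sum everything.
profits = sum(h * d for H_row, dS_row in zip(H, dS)
                    for h, d in zip(H_row, dS_row))
print(profits)  # 400.0
```

With a constant holding row h0 the telescoping sum reduces to h0 · (S_T − S_0), which is exactly the Buy & Hold payoff: (52 − 50) × 100 + (26 − 25) × 200 = 400.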
This article is a work in progress and should be viewed as such. I write sequentially, and presently, I do not know where this article is leading to. At times I fear its coming conclusions as I
think it might end with a contradiction; a kind of catch 22. Nonetheless, even at mid-point, I thought it might be of interest as what is presented is the premise of the second part. And it is all
based on common sense or acceptable knowledge.
Profits generated by any trading method can be represented using Schachermayer's notation. For instance, the Buy & Hold strategy would be:
Profits = ∑ (h[o]I .* ∆S)   (Buy & Hold Payoff Matrix)
where h[o] is the initial quantity in each stock and I a matrix composed entirely of ones with again ∆S the matrix of price differentials.
Only three variables are involved in Schachermayer's matrix equation: the quantities held in inventory, the stock price differentials, and time. Both matrices H and ∆S are of the same size and time
ordered. You have no control over time or future price differentials; for that matter. They will be the same for everyone.
Therefore, based on the same stock selection, if you want to outperform and produce more profits, it will most likely have to be by improving the holding function H itself, other things being equal.
This means implementing a trading strategy in such a way that it improves, enhances, and/or controls the inventory level as a time process. As such, the trading strategy (the way the inventory level
is controlled) becomes a critical part of your trading plan. The position sizing algorithm will take center stage as it alone can exercise control over the holding function H.
It should be noted that one can also improve performance by selecting better long-term-performing stocks. As the number of stocks in the portfolio increases, the average stock price differentials
will tend more and more to approach ∆M (the average market price differentials). It is considered sufficient to have some 30 stocks or more in a portfolio to be diversified at the 95% level. And as
my view of the game uses over-diversification (50 stocks and more), the average portfolio price for the selected stocks will also tend long term to move in sync with ∆M. Nonetheless, a better than
average stock selection will lead to higher performance when compared to the average Buy & Hold strategy.
Generating alpha becomes almost synonymous with enhancing the holding function H and will need to result in generating more profits. The more you improve the holding function, the more you will
outperform. One seeks a holding function such that H^+ > H. To the extent that H^+ is greater than H, it will generate alpha as a byproduct (H^+ - H > 0). Having a higher inventory level than the Buy
& Hold strategy appears sufficient to outperform the averages, and whatever position sizing algorithm is used to accomplish this task will be solely responsible for the alpha generation, other things
being equal.
In my first paper: Alpha Power: Adding More Alpha to Portfolio Return, I use a simple linear regression to represent a price series:
P(t) = P[o] + ax + ∑ε where the sum of residuals: ∑ε → 0
And therefore, after detrending, the most expected value for the price will be P[o].
P(t) = P[o] + (a - a)x + ∑ε since again ∑ε → 0
This raises the whole question of the value of predicting prices. There is no need to try predicting the error term since its most expected value is zero. There is no need to make estimates of P[o];
its value is already given. So, one is left with trying to predict the value of the regression line with slope “a”, and this is very much time-dependent. The shorter the time interval, the more noise
is predominant, and the longer the time interval, the more the rate of change will tend to “a”, the slope of the regression line. And the longer the time horizon, the less accurate these estimates
will be. Making 20-year predictions on single stocks should immediately appear as just a wild guess at best.
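The regression-and-detrending argument can be made concrete with an ordinary least-squares fit on synthetic data. The drift of 2 cents a day and the noise level below are assumptions chosen for the sketch, matching the $50-stock example.

```python
import random
random.seed(1)

# Synthetic series: P(t) = P0 + a*t + noise, the article's price model.
P0, a, n = 50.0, 0.02, 250  # $50 stock, 2 cents/day, ~1 trading year
prices = [P0 + a * t + random.gauss(0, 0.5) for t in range(n)]

# Ordinary least-squares fit of P(t) = b0 + b1*t.
t_mean = (n - 1) / 2
p_mean = sum(prices) / n
ss_t = sum((t - t_mean) ** 2 for t in range(n))
b1 = sum((t - t_mean) * (p - p_mean) for t, p in enumerate(prices)) / ss_t
b0 = p_mean - b1 * t_mean

# Detrending leaves only the error term; its sum is zero by construction.
residuals = [p - (b0 + b1 * t) for t, p in enumerate(prices)]
print(f"estimated drift: {b1:.4f}")              # close to the true 0.02
print(f"sum of residuals: {sum(residuals):.6f}")  # essentially 0
```

The fitted intercept recovers P[o], the slope recovers "a", and the residual sum Σε vanishes, which is precisely why the most expected value of the detrended price is P[o].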
But still, we need to predict prices; we need a reason to enter a trade. We need to have a minimum expectation of a profit. Otherwise, why take the trade?
The Schachermayer payoff equation could be rewritten as follows:
∑ (H . * ∆ (P[o] + ax + ∑ε))
where ∆S is being replaced by its regression line without loss of generality. Thereby, profits would accrue based on the slope of the regression line.
∑ (h[o]I. * P[o] ) + ∑ (h[o]I . * ∆ax) since ∑ε → 0
The above equation accepts an initial position at P[o] (the invested capital) to which is added the sum of incremental profits generated by the regression line differentials. Therefore, this equation
is just another representation of the Buy & Hold strategy.
Price Differentials
I will not argue the quasi-random nature of stock price movements. I will simply accept them as is: quasi-random like. Taking a linear regression on a long-term Buy & Hold portfolio, we should expect
a slope in the neighborhood of 10% (the prevailing (over ≈ 200 years) secular US market average). Performance, over time, would come from price appreciation and reinvested dividends. The 10% average
long-term return would imply that the most expected outcome for a $50 stock is to appreciate by about $0.02 per trading day (assuming 250 trading days a year) in order to reach $55 at year's end. For
a $25 stock, the underlying trend would be a penny a day. Detrending the price would leave only the error term, the random-like component of the price movement, with the usual bell-shaped
distribution around the mean.
A $50 stock moves a lot more than 2 cents per day! Indeed, but whatever is left after detrending the price series is only noise: an error term considered randomly distributed around zero and where
its cumulative sum over the time interval will also tend on average to zero. On this premise, one should not expect to extract (long-term) a profit from the error term since it is mostly randomly
distributed data with an expected mean value of zero.
This means that whatever trading strategy one may devise to enhance performance, because of diversification and the very nature of stock price movements; the long-term performance will tend most
likely to the average market return. In other words, long-term, H^+ → H and, therefore, H^+ - H → 0. And this translates into alpha → 0. No alpha generation, no over-performance.
This has for consequence, for me at least, that with no alpha generation, all my writings would be worth absolutely nothing. Don’t worry, this is not the case; I will make my point that alpha
generation is relatively easy to come by.
Much of the portfolio management literature over the past 50 years has adopted this stance that if there is some alpha, it will tend long-term to zero. It is a byproduct of Modern Portfolio Theory as
well as, more recently, Stochastic Portfolio Theory. And the Growth Optimal Portfolio (GOP) turns out to be the most coveted outcome in portfolio management, which leads directly to trading indexes.
However, this is saying the same thing as all you can hope for, on average, long-term, is to achieve something close to the market average. Even with such a low objective, more than 75% of portfolio
managers have a hard time beating the average and come in short of their goals. Indirectly, this justifies the pursuit of the GOP.
Are there trends after detrending? No, since that was the whole purpose: finding the best linear fit, the error term will tend to zero.
The amount of noise in stock prices is so considerable that it literally drowns the signal to such an extent that there is practically no need to extract it on a daily basis (except maybe
theoretically). When viewed from a long-term perspective, a 10% move on the $50 stock in a single day would show noise to account for over 99% of the price movement. Even a 1% move in price, which is
not unusual, would drown the signal in 96% noise.
Predicting Prices
The job of predicting prices becomes, at most, an extrapolation of the linear regression since that will be the most expected outcome should past trends prevail.
Therefore, predicting the trend becomes the ability to detect within all the noise the 2 cents of expected daily appreciation on the $50 stock. Again, this trend is buried deep in surrounding noise.
On a daily basis, the long-term trend has such minimal value that it is untradeable.
But what about the rest (the noise) in the price data series?
Detecting the signal over all the ambient noise requires changing one’s perspective and allowing that even random-like price series can have, at times, detectable trends, which may or may not last.
There lies the real problem; our inability to extract from the price movements what constitutes a trend or predicting to some extent the magnitude of the next price move. We might as well concede
that prices are totally random after all. Over the long-term trend, I would be off by about only 2 cents for the day on a $50 stock (or by 0.04%)!
Disregarding the long-term trend in calculations will have little impact on estimates of short-term price movements. Short-term price movements will tend to show more of their random nature. But then
again, discarding the long-term trend will leave only the random-like component of price movements. However, over the short term, there is no obligation for the random-like component to stay within
boundaries or have its sum of variations tend to zero, for that matter.
By the very nature of stock price movements, anyone can declare what might constitute a trend based on whatever principle and see that, at times, their trend definition holds. As a matter of fact,
all types of patterns, such as triangles, wedges, flags, pennants, trend-following indicators, oscillators, and much more, can be detected by those seeking them. You will even find them in a totally
randomly generated price series. It is not our own personal definition of a trend that the market needs or has to follow; it will follow its own course, whatever our trend definition.
Is not a sub-set of a random-like price series a random-like price series in its own right? Isn’t a series of a hundred tosses of a fair coin taken from anywhere within a series of 10,000 tosses
still a random series?
Playing Prices
A short-term trader is not that much interested in the 2 cents per day thing. There is too much price movement in the error term to be ignored. Put a lot of lines on a price chart, and you are bound
to see the price cross one or another at some point. You will start seeing all kinds of relationships and interconnections. Sometimes the price will go through a particular line and, at other times,
bounce right off. As to the meaning or interpretation of the price move, that is a totally different question. What interpretation can be given when, most likely, the cross of a line has for origin a
random-like phenomenon? You can find all kinds of explanations for past price movements, but very little that will help you anticipate future price moves except on a larger scale, and even there, you
will not know how long it could or would last.
Nonetheless, even under the random-like nature definition for price movements, one can, just by looking at a chart, see trends develop, cycles, support and resistance levels, and a lot more. Most of
which will operate about half the time, as if at whatever price, one can only go up or down even when the most expected change is no change at all.
But there are short-, mid-, and long-term trends! They are easy to see after the fact. A short-term trader is interested in the continuation of the trend; that is where he can make money. But then
again, from the same price, contrarians will bet that the continuation will fail. Only time will show who gets the reward. Both players make their bets, but only one will win, and next time may
produce the same result or a complete reversal.
The Long-Term Investor
For long-term investors, daily price fluctuations are not necessarily a concern; what they are looking for is the 2 cents a day of upward trend. They know over the long term, their expected portfolio
performance will tend to approach the long-term market average. They appear ready to accept what seems like all the market has to offer. The best can make better stock selections and push their
returns to 4 or 5 cents per day on average, which will translate to 20 – 25% per year.
Technically, the long-term investor is playing the regression line while the short-term trader is mostly playing the error term (the noise). No wonder why, at times, the short-term trader has a
nickname: noise trader.
The long-term investor/trader plays the Buy & Hold strategy which was expressed earlier as:
∑ (h[o]I. * P[o] ) + ∑ (h[o]I . * ∆ax) since ∑ε → 0
His portfolio appreciation is based on the slope of the regression line; he plays time (hold). He knows that if he waits long enough, he is bound or at least on his way to reaching the market average
or better at some distant point in the future.
The short-term trader is faced with a totally different problem.
Trading on What?
… to be continued…
Created on ... August 27, 2011, © Guy R. Fleury. All rights reserved
Missing Number Worksheet For Addition and Subtraction (to 20): #1
Note: this page contains legacy resources that are no longer supported. You are free to continue using these materials but we can only support our current worksheets, available as part of our
membership offering.
Related Resources
The various resources listed below are aligned to the same standard, (1OA08) taken from the CCSM (Common Core Standards For Mathematics) as the Addition and subtraction Worksheet shown above.
Determine the unknown whole number in an addition or subtraction equation relating three whole numbers. For example, determine the unknown number that makes the equation true in each of the equations
8 + ? = 11, 5 = _ – 3, 6 + 6 = _.
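Problems matching this standard can be generated programmatically; the sketch below is a hypothetical generator for the "a + ? = c" form, with totals capped at 20.

```python
import random
random.seed(3)

def missing_addend_problem(max_total=20):
    """One 'a + ? = c' fact with totals up to max_total (e.g. 8 + ? = 11)."""
    c = random.randint(1, max_total)
    a = random.randint(0, c)
    return f"{a} + ? = {c}", c - a  # (problem text, missing number)

for _ in range(3):
    problem, answer = missing_addend_problem()
    print(problem, "  answer:", answer)
```

Subtraction forms such as "5 = _ − 3" could be generated the same way by rearranging the same three whole numbers.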
To 10
To 20
Worksheet Generator
Similar to the above listing, the resources below are aligned to related standards in the Common Core For Mathematics that together support the following learning outcome:
Work with addition and subtraction equations
help please
Solving the quadratic inequality $5x^{2}+19x-4 > 0$, we get x > 1/5 or x < -4. So the inequality is not satisfied for the integers x = -4, -3, -2, -1 and 0. The answer is 5.
Guest Jun 28, 2023
For how many integer values of x is $5x^{2}+19x+16 > 20$ not satisfied?
Guest Jun 28, 2023
To find the number of integer values of x for which the inequality $5x^{2}+19x+16 > 20$ is not satisfied, we can start by simplifying the inequality:
$5x^{2}+19x+16 > 20$
$5x^{2}+19x-4 > 0$
Now, we can solve this inequality by finding the values of x that make the quadratic expression positive.
To solve the inequality, we can factorize the quadratic expression:
$5x^{2}+19x-4 = (5x-1)(x+4)$
We need to find the values of x for which the product $(5x-1)(x+4)$ is greater than zero.
For this inequality to be satisfied, either both factors must be positive, which gives x > 1/5, or both factors must be negative, which gives x < -4. The inequality therefore fails exactly on -4 ≤ x ≤ 1/5, which contains the five integers x = -4, -3, -2, -1, 0.
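The integers for which the inequality fails can also be found by brute force over a range wide enough to contain both roots:

```python
def satisfied(x):
    return 5 * x**2 + 19 * x + 16 > 20

failing = [x for x in range(-20, 21) if not satisfied(x)]
print(failing)       # [-4, -3, -2, -1, 0]
print(len(failing))  # 5
```

Note that x = -4 gives exactly 20, which does not satisfy the strict inequality, so it counts among the failures.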
Gray46 Jun 29, 2023
CST Studio Suite Software - 3D Electromagnetic Simulation | Simuleon
SIMULIA CST Studio Suite
Electromagnetic Field Software
High-performance 3D EM analysis for designing, analyzing and optimizing electromagnetic components and systems.
Broad Electromagnetic Simulation
CST analyses Electromagnetic systems from statics and low-frequency to high-frequency range, in a fully parametric design environment.
Specialised Solvers
CST provides specialised solvers for applications such as motors, circuit boards, cable harnesses and filters.
High Performance Computing
CST allows workstation multithreading, GPU and hardware
acceleration, and cluster distributed computing and MPI.
Coupled Simulation
CST provides coupled simulation: System-level, hybrid,
multiphysics, thermal, EM/circuit co-simulation, and co-simulation with Abaqus
CST Studio Suite
A market-leading electromagnetic field software
Extended Solvers in a single user interface
Electromagnetic field solvers for applications across the EM spectrum are contained within a single user interface in CST Studio Suite. The solvers can be coupled to perform hybrid simulations,
giving engineers the flexibility to analyze whole systems made up of multiple components in an efficient and straightforward way.
Co-design with other SIMULIA software applications
CST can be used in co-design with other SIMULIA products, so that EM simulation is integrated into the design flow and drives the development process from the earliest stages.
Extended applications
Common subjects of EM analysis include the performance and efficiency of antennas and filters, electromagnetic compatibility and interference (EMC/EMI), exposure of the human body to EM fields,
electro-mechanical effects in motors and generators, and thermal effects in high-power devices.
CST Studio Suite capabilities in Electromagnetic (EM) analysis
Diverse industry specific capabilities to capture and predict many different behaviours.
Electromagnetic Simulation Solvers
CST solver capabilities for many different purposes
CST EM Solvers
CST Studio Suite gives customers access to multiple electromagnetic (EM) simulation solvers which use methods such as the finite element method (FEM), the finite integration technique (FIT), and the
transmission line matrix method (TLM). These represent the most powerful general purpose solvers for high frequency simulation tasks. Additional solvers for specialist high frequency applications
such as electrically large or highly resonant structures complement the general purpose solvers. CST Studio Suite includes FEM solvers dedicated to static and low frequency applications such as
electromechanical devices, transformers or sensors. Alongside these are simulation methods available for charged particle dynamics, electronics, and multiphysics problems.
The seamless integration of the solvers into one user interface in CST Studio Suite enables the easy selection of the most appropriate simulation method for a given problem class, delivering improved
simulation performance and unprecedented simulation reliability through cross-verification.
High Frequency Solvers
Asymptotic
The CST Asymptotic Solver is a ray tracing solver which is efficient for extremely large structures where a full-wave solver is unnecessary. The Asymptotic Solver is based on the Shooting Bouncing
Ray (SBR) method, an extension to physical optics, and is capable of tackling simulations with an electric size of many thousands of wavelengths. Applications like; electrically very large
structures, installed performance of antennas, scattering analysis.
High Frequency Solvers
Eigenmode
The CST Eigenmode Solver is a 3D solver for simulating resonant structures, incorporating the Advanced Krylov Subspace method (AKS), and the Jacobi-Davidson method (JDM). Common applications of the
Eigenmode Solver are highly-resonant filter structures, high-Q particle accelerator cavities, and slow wave structures such as travelling wave tubes. The Eigenmode Solver supports sensitivity
analysis, allowing the detuning effect of structural deformation to be calculated directly. Applications like; filters, Cavities, metamaterials and periodic structures.
High Frequency Solvers
Filter Designer 2D
A design tool offering a range of options for filter implementation in various planar technologies, such as microstrip, stripline and suspended microstrip. It offers different configurations using
stubs, stepped impedance lines, coupled-resonators and lumped component circuits, where SMD libraries can be selected and inter-connects automatically created. With the push of a button the model is
created on the schematic with the appropriate optimizer setup. In addition a full-wave simulation project is created from the building blocks and includes the solver setup and all parameters needed
for the final tuning steps. Applications like; planar filters, circuit filters.
High Frequency Solvers
Filter Designer 3D
A synthesis tool for designing bandpass and diplexer filters, where a range of coupling matrix topologies are produced for the application in arbitrary coupled-resonator based technology. It also
offers a choice in building blocks to realize the 3D filter by making use of the Assembly Modelling. From the Component Library the user can choose between combline/interdigital coaxial cavities and
rectangular waveguides, or simply define customized building blocks of any type of single-mode technology (e.g. SIW or dielectric pucks). Additional functionality includes the coupling matrix
extraction that can directly be used as a goal for optimization of a simulation model, or for assistance in tuning complex hardware via real-time measurements using a network analyzer. Applications
like; cross-coupled filters for different electromagnetic technologies (e.g. cavities, microstrip, dielectrics), assistive tuning for filter hardware (with vector network analyzer link).
High Frequency Solvers
Frequency Domain
The CST Frequency Domain Solver is a powerful multi-purpose 3D full-wave solver, based on the finite element method (FEM), that offers excellent simulation performance for many types of component.
Because the Frequency Domain Solver can calculate all ports at the same time, it is also a very efficient way to simulate multi-port systems such as connectors and arrays. The Frequency Domain Solver
includes a model-order reduction (MOR) feature which can speed up the simulation of resonant structures such as filters. Applications like; general high-frequency applications using small-to-medium
sized models, resonant structures, multi-port systems, 3D electronics.
High Frequency Solvers
Integral Equation
The CST Integral Equation Solver is a 3D full-wave solver, based on the method of moments (MOM) technique with multilevel fast multipole method (MLFMM). The Integral Equation Solver uses a surface
integral technique, which makes it much more efficient than full volume methods when simulating large models with lots of empty space. The Integral Equation Solver includes a characteristic mode
analysis (CMA) feature which calculates the modes supported by a structure. Applications like; high-frequency applications using electrically large models, installed performance, characteristic mode analysis.
High Frequency Solvers
Multilayer
The CST Multilayer Solver is a 3D full-wave solver, based on the method of moments (MOM) technique. The Multilayer Solver uses a surface integral technique and is optimized for simulating planar
microwave structures. The Multilayer Solver includes a characteristic mode analysis (CMA) feature which calculates the modes supported by a structure. Applications like; MMIC, feeding networks,
planar antennas.
High Frequency Solvers
Time Domain
The CST Time Domain Solver is a powerful and versatile multi-purpose 3D full-wave solver, with both finite integration technique (FIT) and transmission line matrix (TLM) implementations included in a
single package. The Time Domain Solver can perform broadband simulations in a single run. Support for hardware acceleration and MPI cluster computing also makes the solver suitable for extremely
large, complex and detail-rich simulations. Applications like; general high-frequency applications using medium-to-large models, transient effects, 3D electronics
High Frequency Solvers
Hybrid Solver Task
The CST Hybrid Solver Task allows the Time Domain, Frequency Domain, Integral Equation and Asymptotic Solvers to be linked for hybrid simulation. For simulation projects that involve very wide
frequency bands or electrically large structures with very fine details, calculations can be made much more efficient by using different solvers on different parts. Simulated fields are transferred
between solvers through field sources, with a bidirectional link between the solvers for more accurate simulation. Applications like; small antennas on very large structures, EMC simulation, human
body simulation in complex environments.
Low Frequency Solvers
Electrostatic
The CST Electrostatic Solver is a 3D solver for simulating static electric fields. This solver is especially suitable for applications such as sensors where electric charge or capacitance is
important. The speed of the solver also means that it is very useful for optimizing applications such as electrodes and insulators. Applications like; sensors and touchscreens, power equipment, charged particle devices and X-ray tubes.
Low Frequency Solvers
Stationary Current
The CST Stationary Current Field Solver is a 3D solver for simulating the flow of DC currents through a device, especially one with lossy components. This solver can be used to characterize the electrical properties of a component under DC conditions, or one in which eddy currents and transient effects are irrelevant. Applications like; high-power equipment, electrical machines, PCB power distribution
Low Frequency Solvers
Magnetostatic
The CST Magnetostatic Solver is a 3D solver for simulating static magnetic fields. This solver is most useful for simulating magnets, sensors, and for simulating electrical machines such as motors
and generators in cases where transient effects and eddy currents are not critical. Applications like; sensors, electrical machines, particle beam focusing magnets.
Low Frequency Solvers
Low Frequency – Frequency Domain
The CST Low-Frequency Frequency Domain (LF-FD) Solver is a 3D solver for simulating the time-harmonic behavior in low frequency systems, and includes magneto-quasistatic (MQS), electro-quasistatic
(EQS) and fullwave implementations. This solver is most useful for simulations that involve frequency-domain effects and where the sources are coils. Applications like; sensors and non-destructive
testing (NDT), RFID and wireless power transfer, power engineering – bus bar systems.
Low Frequency Solvers
Low Frequency – Time Domain
The CST Low-Frequency Time Domain (LF-TD) Solver is a 3D solver for simulating the transient behavior in low frequency systems, and includes both magneto-quasistatic (MQS) and electro-quasistatic (EQS) implementations. The MQS solver is suitable for problems involving eddy currents, non-linear effects, and transient effects such as motion or inrush. The EQS solver is suitable for resistive-capacitive problems and HV-DC applications. Applications like; electrical machines and transformers, electromechanical – motors, generators, power engineering – insulation, bus bar systems.
Multiphysics Solvers
Thermal Steady State
The CST Thermal Steady State Solver can predict temperature distribution of a steady-state system. Heat sources can include losses generated by electric and magnetic fields, currents, particle
collisions, human bio-heat, as well as other user-defined sources. Seamlessly linked to our electromagnetic solvers, the Thermal Steady State Solver enables temperature prediction of devices and
resulting impact on their electromagnetic performance. Applications like; high-power electronics components and devices, such as printed circuit boards (PCBs), filters, antennas, etc. Medical devices
and human bio-heating.
Multiphysics Solvers
Thermal Transient
The CST Thermal Transient Solver can predict time-varying temperature response of a system. Heat sources can include losses generated by electric and magnetic fields, currents, particle collisions,
human bio-heat, as well as other user-defined sources. Seamlessly linked to our electromagnetic solvers, the Thermal Transient Solver enables transient temperature prediction of devices and resulting
impact on their electromagnetic performance. Applications like; high-power electronics components and devices, such as printed circuit boards (PCBs), filters, antennas, etc. Medical devices and human bio-heating.
Multiphysics Solvers
Conjugate Heat Transfer
The CST Conjugate Heat Transfer (CHT) Solver uses CFD techniques to predict fluid flow and temperature distribution in a system. The CHT solver includes the thermal effects from all heat transfer
modes: conduction, convection and radiation, and can include heat sources from electromagnetic losses just as the Steady State and Transient Thermal solvers do. Devices such as fans, perforated
screens, thermal interface materials can be directly modeled. Compact thermal models (CTM), such as two-resistor CTM, can also be considered. Applications like; electronics cooling: natural and
forced convection of high-power electronics components and devices, such as PCBs, filters, antennas, chassis etc. with installed cooling devices such as fans, heatsinks etc.
Multiphysics Solvers
Mechanical
The CST Mechanical Solver can predict structures’ mechanical stress and deformation caused by electromagnetic forces and thermal expansion. It is designed to be used in conjunction with the EM and
thermal solvers to assess the possible performance impact of the force and heating on the device. Applications like; filter detuning, PCB deformation, Lorentz forces on particle accelerators.
Particles Solvers
Particle-in-Cell (PIC)
The CST Particle-in-Cell (PIC) Solver is a versatile, self-consistent simulation method for particle tracking that calculates both particle trajectory and electromagnetic fields in the time-domain,
taking into account the space charge effects and mutual coupling between the two. This allows it to be used to simulate a huge variety of devices where the interaction between particles and
high-frequency fields are important, as well as high-power devices where electron multipacting is a risk. Applications like; Accelerator components, Slow-wave devices, Multipaction.
Particles Solvers
Particle Tracking
The CST Particle Tracking Solver is a 3D solver for simulating particle trajectories through electromagnetic fields. The space charge effect on the electric field can be taken into account by the Gun
Iteration option. Several emission models including fixed, space charge limited, thermionic and field emission are available, and secondary electron emissions can be simulated. Applications like;
Particle sources, Focusing and beam steering magnets, Accelerator components.
Particles Solvers
Wake Field
The CST Wakefield Solver calculates the fields around a particle beam, represented by a line current, and the wakefields produced through interactions with discontinuities in the surrounding
structure. Applications like; Cavities, Collimators, Beam position monitors.
EMC and EDA Solvers
PCBs and Packages
The PCBs and Packages Module of CST Studio Suite is a tool for signal integrity (SI), power integrity (PI), and electromagnetic compatibility (EMC) analysis on printed circuit boards (PCB). It
integrates easily into the EDA design flow by providing powerful import filters for popular layout tools from Cadence, Zuken, and Altium. Effects like resonances, reflections, crosstalk, power/ground
bounce and simultaneous switching noise (SSN) can be simulated at any stage of product development, from pre-layout to post-layout phase. CST Studio Suite includes three different solver types – a 2D
Transmission Line method, a 3D Partial Element Equivalent Circuit (PEEC) method and a 3D Finite-Element Frequency-Domain (FEFD) method – and pre-defined workflows for IR drop, PI and SI analysis.
Applications like; high-speed PCBs, Packages, Power electronics.
EMC and EDA Solvers
Rule Check
Rule Check is an EMC, SI and PI design rule checking (DRC) tool that reads popular board files from Cadence, Mentor Graphics, and Zuken as well as ODB++ (e.g. Altium) files and checks the PCB design
against a suite of EMC or SI rules. The kernel used by Rule Check is the well-known software tool EMSAT. The user can designate various nets and components that are critical for EMC, such as I/O
nets, power/ground nets, and decoupling capacitors. Rule Check relieves the tedium and removes the human error by examining each critical net in turn to check that it does not violate any of the
selected EMC or SI design rules. After the rule checking is completed, the EMC rules’ violations can be viewed graphically or as an HTML document. Applications like; Electromagnetic compatibility
(EMC) PCB design rule checking, Signal integrity and power integrity (SI/PI) PCB design rule checking.
EMC and EDA Solvers
Cable Harness
The CST Cable Harness Solver is dedicated to the three-dimensional analysis of signal integrity (SI), conducted emission (CE), radiated emission (RE), and electromagnetic susceptibility (EMS) of
complex cable structures in electrically large systems. It incorporates a fast and accurate transmission line modeling technique for cable harness configurations in 3D metallic or dielectric
environment. Hybrid simulation with the Cable Harness Solver and other high-frequency solvers allows structures containing complex cable harnesses to be simulated in 3D efficiently. Applications
like; General SI and EMC simulation of cables, Cable harness layout in vehicles and aircraft, Hybrid cables in consumer electronics.
CST Data Exchange Options
CST Workflow Integrations
Data Exchange Options
The excellent workflow integration available within CST Studio Suite provides reliable data exchange options which help to reduce the design engineer’s workload.
CST Studio Suite is renowned for its superb CAD and EDA data import capabilities. The sophisticated healing mechanisms, which restore the integrity of flawed or non-compliant data, are particularly
important as even one corrupted element can prevent the use of the whole part.
Fully parametrized models can be imported and design changes are instantly reflected in the simulation model due to the bidirectional link between CAD and simulation. This means that the results of
optimizations and parametric design studies can be imported back directly into the master model. This improves workflow integration and reduces the time and effort needed to optimize a design.
Integrations with CATIA, SOLIDWORKS & 3DEXPERIENCE Platform are possible.
CST Automatic Optimization
CST Optimization Routines and Algorithms
Automatic Optimization
CST Studio Suite offers automatic optimization routines for electromagnetic systems and devices. CST Studio Suite models can be parameterized with respect to their geometrical dimensions or material
properties. This enables users to study the behavior of a device as its properties change. Users can find the optimum design parameters to achieve a given effect or fulfill a certain goal. They can
also adapt material properties to fit measured data.
CST Studio Suite contains several automatic optimization algorithms, both local and global. Local optimizers provide fast convergence but risk converging to a local minimum rather than the overall
best solution. On the other hand, global optimizers search the entire problem space but typically require more calculations. High-performance computing techniques can be used to speed up simulation
and optimization for very complex systems, or problems with large numbers of variables. The performance of global optimizers in particular can be greatly improved with the use of distributed computing.
Covariance Matrix Adaptation Evolutionary Strategy
The Covariance Matrix Adaptation Evolutionary Strategy (CMA-ES) is the most sophisticated of the global optimizers, and has relatively fast convergence for a global optimizer. With CMA-ES, the
optimizer can “remember” previous iterations, and this history can be exploited to improve the performance of the algorithm while avoiding local optima. Suitable for: General optimization,
especially for complex problem domains.
Trust Region Framework
A powerful local optimizer, which builds a linear model on primary data in a “trust” region around the starting point. The modeled solution will be used as a new starting point until it converges to an
accurate model of the data. The Trust Region Framework can take advantage of S-parameter sensitivity information to reduce the number of simulations needed and speed up the optimization process. It
is the most robust of the optimization algorithms. Suitable for: General optimization, especially on models with sensitivity information.
Genetic Algorithm
Using an evolutionary approach to optimization, the Genetic Algorithm generates points in the parameter space and then refines them through multiple generations, with random parameter mutation. By
selecting the “fittest” sets of parameters at each generation, the algorithm converges to a global optimum. Suitable for: Complex problem domains and models with many parameters.
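The generate–select–mutate loop described above can be sketched in a few lines of Python. This toy real-valued GA is purely illustrative: it is not CST's implementation, and the objective function, bounds, and parameter values are all invented for the example.

```python
import random

def genetic_minimize(f, lo, hi, pop=40, gens=100, mut=0.1, seed=1):
    """Toy real-valued genetic algorithm: selection, crossover, mutation."""
    rng = random.Random(seed)
    population = [rng.uniform(lo, hi) for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=f)                    # rank by fitness (lower is better)
        parents = population[: pop // 2]          # keep the "fittest" half
        children = []
        while len(children) < pop - len(parents):
            a, b = rng.sample(parents, 2)
            child = (a + b) / 2                   # simple averaging crossover
            child += rng.gauss(0, mut)            # random parameter mutation
            children.append(min(max(child, lo), hi))
        population = parents + children
    return min(population, key=f)

# Minimize a simple quadratic with its optimum at x = 2.
best = genetic_minimize(lambda x: (x - 2.0) ** 2, -10, 10)
```

Because the fittest parents survive every generation, the best solution found can only improve as the generations progress.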
Particle Swarm Optimization
Another global optimizer, this algorithm treats points in parameter space as moving particles. At each iteration, the position of each particle changes according not only to its own best known position, but also to the best position of the entire swarm. Particle Swarm Optimization works well for models with many parameters. Suitable for: Models with many parameters.
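The position-update rule just described can be sketched as follows. This is a generic textbook PSO, not CST's implementation; the objective function, bounds, and parameter values are invented for illustration.

```python
import numpy as np

def pso(f, bounds, n_particles=30, n_iter=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimizer (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    lo = np.asarray(bounds[0], dtype=float)
    hi = np.asarray(bounds[1], dtype=float)
    x = rng.uniform(lo, hi, (n_particles, lo.size))   # particle positions
    v = np.zeros_like(x)                              # particle velocities
    pbest = x.copy()                                  # best position per particle
    pbest_val = np.apply_along_axis(f, 1, x)
    gbest = pbest[pbest_val.argmin()].copy()          # best position of the swarm
    for _ in range(n_iter):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        # inertia + pull toward each particle's best + pull toward the swarm's best
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        vals = np.apply_along_axis(f, 1, x)
        improved = vals < pbest_val
        pbest[improved] = x[improved]
        pbest_val[improved] = vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, pbest_val.min()

# Minimize a simple quadratic bowl with its minimum at (1, 1).
best_x, best_val = pso(lambda p: float(np.sum((p - 1.0) ** 2)), ([-5, -5], [5, 5]))
```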
Nelder Mead Simplex Algorithm
This method is a local optimization technique which uses multiple points distributed across the parameter space to find the optimum. Nelder Mead Simplex Algorithm is less dependent on the starting
point than most local optimizers. Suitable for: Complex problem domains with relatively few parameters, systems without a good initial model.
Interpolated Quasi Newton
This is a fast local optimizer which uses interpolation to approximate the gradient of the parameter space. The Interpolated Quasi Newton method has fast convergence. Suitable for: Computationally
demanding models.
Classic Powell
A simple, robust local optimizer for single-parameter problems. Although slower than the Interpolated Quasi Newton, it can sometimes be more accurate. Suitable for: Single-variable optimization.
Decap Optimization
A specialized optimizer for printed circuit board (PCB) design, the Decap Optimizer calculates the most effective placement of decoupling capacitors using the Pareto front method. This can be used to
minimize either the number of capacitors needed or the total cost while still meeting the specified impedance curve. Suitable for: PCB layout.
Electromagnetic Systems Modelling
CST Systems Assembly and Modelling
EM Systems Modelling
With System Assembly and Modeling (SAM), CST Studio Suite provides an environment that simplifies the management of simulation projects, enabling the intuitive construction of electromagnetic (EM)
systems and straightforward management of complex simulation flows using schematic modeling.
The SAM framework can be used for analyzing and optimizing an entire device that consists of multiple individual components. These are described by relevant physical quantities such as currents,
fields or S-parameters. SAM enables the use of the most efficient solver technology for each component.
SAM helps users to compare the results of different solvers or model configurations within one simulation project and perform post-processing automatically. SAM facilitates the set-up of a linked
sequence of solver runs for hybrid and multiphysics simulations. For example using the results of EM simulation to calculate thermal effects, then structural deformation, and then another EM
simulation to analyze detuning.
This combination of different levels of simulation helps to reduce the computational effort required to analyze a complex model accurately.
User Interface
CST Electromagnetic Design Environment
User Interface
The CST Studio Suite design environment is an intuitive user interface used by all the modules. It comprises a 3D interactive modeling tool, a schematic layout tool, a pre-processor for the
electromagnetic solvers and post-processing tools tailored to industry needs.
The ribbon-based interface uses tabs to display all the tools and options needed to set up, carry out and analyze a simulation, grouped according to their position in the workflow. Contextual tabs
mean that the most relevant options for the task are always just a click away. In addition, the Project Wizard and the QuickStart Guide provide guidance to new users and offer access to a wide range
of features.
The 3D interactive modeling tool at the heart of the interface uses the ACIS 3D CAD kernel. This powerful tool enables complex models to be constructed within CST Studio Suite and edited
parametrically with a simple WYSIWYG approach.
Can you perform realistic simulations yourself?
Join one of our FREE workshops and discover how easy it is to perform realistic FEA to solve your complex engineering challenges.
Bar Chart vs. Histogram: What's the Difference?
A bar chart displays categorical data with rectangular bars of heights or lengths proportional to the values they represent, while a histogram represents the frequency distribution of numerical data.
Key Differences
A bar chart represents categorical data using bars of varying heights or lengths. Each bar signifies a category and its size reflects its value. Conversely, a histogram displays the frequency
distribution of numerical data. It groups data points into ranges, and each bar's height indicates the number of data points in that range.
Bar charts are ideal for comparing discrete categories or groups. For example, sales by product type or preferences among different age groups. Histograms, on the other hand, are used for continuous
data, showing how many data points fall within each range, like age distribution in a population.
In bar charts, the x-axis typically shows categories and the y-axis represents values. Bars can be vertical or horizontal. In histograms, the x-axis represents bins or intervals into which the data
points are grouped, and the y-axis shows frequency or count of data points in each bin.
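As a concrete illustration of the bin/frequency split (using NumPy and a made-up list of ages), the counts a histogram plots can be computed directly:

```python
import numpy as np

# Made-up ages; bins are the intervals [20,30), [30,40), [40,50), [50,60), [60,70]
ages = np.array([21, 25, 34, 35, 38, 41, 44, 52, 58, 63])
counts, bin_edges = np.histogram(ages, bins=[20, 30, 40, 50, 60, 70])
print(counts)  # [2 3 2 2 1]: the frequency of data points in each bin
```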
Bar charts have spaces between bars to emphasize that each bar represents a distinct category. In contrast, histograms have no gaps between bars (unless there's no data in a bin), as they depict the
distribution of a variable over a continuous interval.
Bar charts are used to compare different groups or to track changes over time for different categories. They make it easy to compare categories visually. Histograms are used for understanding the
distribution of a variable, such as identifying skewness, peaks, or outliers in the data.
Comparison Chart
Aspect | Bar Chart | Histogram
Purpose | Comparison of categories or groups | Displaying frequency distribution
Axis Representation | Categories on one axis, values on the other | Data ranges (bins) on one axis, frequency on the other
Bar Arrangement | Spaces between bars | Bars touch each other
Usage | Identifies trends or differences between categories | Shows distribution pattern of a dataset
Bar Chart and Histogram Definitions
Bar Chart
A representation of data where each category is denoted by a bar, whose size is indicative of its numerical value.
The bar chart showed the population growth over the last decade for different cities.
Histogram
A chart illustrating the frequency distribution of a set of continuous data.
The histogram showed a bell-shaped curve, suggesting a normal distribution of the data.
Bar Chart
A graphical representation using bars of varying lengths to compare different categories.
The bar chart clearly showed that sales of Product A were twice as high as Product B.
Histogram
A graphical display of data using bars to show the frequency of numerical data.
The histogram indicated most students scored between 70 and 80 in the exam.
Bar Chart
A visual tool used to display and compare the quantity, frequency, or other measures for different categories.
The bar chart comparing monthly rainfall made it easy to see which month was the wettest.
Histogram
A representation of data where the frequency of variables is depicted by the height of the bars.
The histogram displayed a skewed distribution, with most values clustered at the lower end.
Bar Chart
A method of showing categorical data using rectangular bars with heights or lengths corresponding to the values they represent.
The company's annual performance was depicted in a bar chart, highlighting increased profits.
Histogram
A type of bar chart that represents the distribution of continuous data.
A histogram of age distribution in the survey highlighted a majority of participants were in their 30s.
Bar Chart
A chart with rectangular bars where the length of the bar is proportional to the value of the variable.
In the bar chart, longer bars indicated higher expenses in each department.
Histogram
A method to depict the frequency of occurrence of data points in successive numerical intervals.
The histogram of daily temperatures helped in understanding the climate pattern.
Histogram
A bar graph of a frequency distribution in which one axis lists each unique value (or range of continuous values) in a set of data, and the area of each bar represents the frequency (or relative
frequency) of that value (or range of continuous values).
(statistics) A graphical display of numerical data in the form of upright bars, with the area of each bar representing frequency.
(transitive) To represent (data) as a histogram.
A bar chart representing a frequency distribution; heights of the bars represent observed frequencies
When should I use a bar chart?
Use a bar chart when comparing categorical data like sales by product or survey responses by category.
What does a histogram show?
A histogram shows how data is distributed across different ranges or bins, indicating patterns like skewness or central tendency.
How is a histogram different from a bar chart?
A histogram represents the frequency distribution of numerical data without gaps between bars, while a bar chart compares categories with gaps between bars.
What is a bar chart?
A visual representation using bars to compare different categories or groups, with each bar’s length or height proportional to the value it represents.
Can histograms handle large data sets?
Yes, histograms are particularly useful for large data sets to understand distribution patterns.
How do you determine the number of bins in a histogram?
The number of bins can be determined based on data spread, data size, or using statistical formulas.
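One such statistical formula is Sturges' rule, which suggests ⌈log2(n)⌉ + 1 bins for n data points; it is a common heuristic starting point, not a universal rule:

```python
import math

def sturges_bins(n):
    """Sturges' rule: a common heuristic for the number of histogram bins."""
    return math.ceil(math.log2(n)) + 1

print(sturges_bins(100))   # 8
print(sturges_bins(1000))  # 11
```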
Can a bar chart show trends over time?
Yes, a bar chart can effectively show trends over time for different categories.
Are the bars in a histogram always of the same width?
Typically, yes. However, histograms can have variable bin widths if needed to better represent the data.
What types of data are best for bar charts?
Categorical or nominal data, like types of fruit in a fruit salad or different brands of cars.
Do bar charts have a specific orientation?
Bar charts can be horizontal or vertical, depending on the data and the desired presentation.
Why don't histograms have gaps between bars?
Gaps are absent in histograms to emphasize the continuous nature of the data they represent.
What is the key visual element of a histogram?
The key element is the series of adjacent bars that represent the frequency of data points in each range.
Is color important in bar charts?
Color can enhance the readability and differentiation of categories in bar charts.
Is it possible to have too many bars in a bar chart?
Yes, too many bars can make a bar chart cluttered and difficult to interpret.
Can bar charts be used for part-to-whole relationships?
Yes, particularly stacked bar charts are effective for showing part-to-whole relationships.
Can histograms be skewed?
Yes, the shape of a histogram can indicate skewness in the data.
Why is bin size important in histograms?
The bin size can affect the interpretation of data, as too large or too small bins can obscure data patterns.
Does a histogram always represent probability distributions?
Not always, but it is commonly used to approximate the underlying probability distribution of the data.
What is an example of a bar chart in everyday use?
A bar chart displaying the popularity of different social media platforms among teenagers.
Can a bar chart include multiple data series?
Yes, bar charts can include multiple series, often using different colors for comparison.
About Author
Written by Janet White
Edited by Aimie Carlson
Correlation vs. causation - (Critical Thinking) - Vocab, Definition, Explanations | Fiveable
Correlation vs. causation
from class:
Critical Thinking
Correlation refers to a statistical relationship between two variables, indicating that they tend to move together in some way, while causation implies that one variable directly influences or causes
a change in another. Understanding the difference is crucial, especially when evaluating data or claims, as it helps avoid misleading conclusions that arise from assuming that correlation equates to causation.
congrats on reading the definition of correlation vs. causation. now let's actually learn it.
5 Must Know Facts For Your Next Test
1. Just because two variables are correlated does not mean that one causes the other; there could be a third factor influencing both.
2. In research and data analysis, establishing causation typically requires more rigorous methods, such as controlled experiments or longitudinal studies.
3. Misinterpretation of correlation as causation can lead to poor decision-making, particularly in health and scientific claims where public policy may be affected.
4. Statistical tools like regression analysis can help determine whether an observed correlation might suggest a causal relationship, but careful consideration is still needed.
5. Recognizing the difference between correlation and causation helps critically assess claims made in media and scientific studies, encouraging better understanding of underlying issues.
Review Questions
• How can the misunderstanding between correlation and causation impact critical thinking in scientific research?
□ Misunderstanding the difference between correlation and causation can lead researchers and the public to draw incorrect conclusions from data. For example, if a study shows a correlation
between two health factors, one might mistakenly assume that one directly causes the other without investigating further. This can result in misguided health recommendations or policies based
on flawed interpretations of research findings.
• What methods can researchers use to establish causation rather than merely showing correlation in their studies?
□ Researchers can use experimental design, where they manipulate one variable while controlling others to observe its effects on a second variable. This approach helps demonstrate causation by
ruling out alternative explanations. Additionally, longitudinal studies that track changes over time can provide insight into potential causal relationships, allowing for more robust
conclusions about how variables interact.
• Analyze a real-world example where correlation was mistaken for causation and discuss the implications of this error.
□ One classic example is the correlation found between ice cream sales and drowning incidents during summer months. While both increase at the same time, this does not mean that ice cream sales
cause drownings. Instead, a third factor—hot weather—affects both variables. Misinterpreting this correlation could lead to misguided conclusions about public safety measures around swimming
and ice cream sales, illustrating how critical it is to differentiate correlation from causation when analyzing data.
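The ice cream example can be simulated in a few lines of Python (a hypothetical illustration; the coefficients and noise levels are invented for demonstration). Temperature drives both variables, and they come out strongly correlated even though neither causes the other:

```python
import random

random.seed(0)

# Hidden confounder: daily temperature drives both variables (invented coefficients)
temps = [random.uniform(10, 35) for _ in range(500)]
ice_cream_sales = [2.0 * t + random.gauss(0, 3) for t in temps]
drownings = [0.5 * t + random.gauss(0, 2) for t in temps]

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

r = pearson(ice_cream_sales, drownings)
print(f"correlation: {r:.2f}")  # strongly positive, yet neither variable causes the other
```

Dropping the shared dependence on `temps` (e.g., drawing each series independently) makes the correlation vanish, which is the signature of a confounder rather than a causal link.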
Python: Find List Index of All Occurrences of an Element • datagy
In this tutorial, you’ll learn how to use Python to find the list index of all occurrences of an element. In many cases, Python makes it simple to find the first index of an element in a list.
However, because Python lists can contain duplicate items, it can be helpful to find all of the indices of an element in a list.
By the end of this tutorial, you’ll have learned:
• How to find the indices of all occurrences of an element in a Python list using:
□ For loops,
□ List comprehensions,
□ NumPy, and
□ more_itertools
• Which method is fastest
Let’s get started!
How Python List Indices Work
Before diving into how to get the index positions of all elements in a Python list, let’s take a quick moment to understand how Python lists are indexed. Because Python lists are ordered container
objects, their order remains unless it’s explicitly altered.
Python list indices start at 0 and continue through to the length of the list minus one. The image below shows how these list indices work in Python:
Python List Indexing Explained
In the following section, you’ll learn how to use the list .index() method to find the index of an element in the list.
How the Python List Index Method Works
The Python list .index() has three parameters:
1. The element to search for,
2. The index to begin searching at, and
3. The index to search up to
The only required argument is the element to search for. By default, Python will search through the entire list, unless explicitly told otherwise.
Let’s take a look at how this method looks:

# The list.index() Method
list.index(
    value,   # The value to search for
    start,   # The index to begin searching at (optional)
    stop     # The index to search up to (optional)
)
Let’s take a look at an example. We can load a list with different elements in it and find the index of the element using the .index() method:
# Using the .index() Method
a_list = [1,2,3,4,1,2,1,2,3,4]
print(a_list.index(1))
# Returns: 0
In the example above, we applied the .index() method on our list to find the index of the element 1. The method returned the value 0, meaning the item exists at the 0^th position. However, we know
that the value exists multiple times in the list.
Why didn’t the method return more than the first index? This is how the method works. Even if an item exists more than once, only the first instance is returned.
In the following sections, you’ll learn how to get the index positions of all occurrences of an element in a list.
How to Get Index of All Occurrences of Element in a Python List with a For Loop and Enumerate
One of the most basic ways to get the index positions of all occurrences of an element in a Python list is by using a for loop and the Python enumerate function. The enumerate function is used to
iterate over an object and returns both the index and element.
Because of this, we can check whether the element matches the element we want to find. If it does, we can add the index position to another list. Let’s see how this works with an example:
# Using enumerate to Find Index Positions
a_list = [1,2,3,4,1,2,1,2,3,4]
def find_indices(list_to_check, item_to_find):
    indices = []
    for idx, value in enumerate(list_to_check):
        if value == item_to_find:
            indices.append(idx)
    return indices
print(find_indices(a_list, 1))
# Returns: [0, 4, 6]
Let’s break down what we did here:
1. We defined a function that takes a list and an element as input
2. The function then loops over the result of the enumerate() function
3. If the value of the item matches the item we’re looking for, the corresponding index is appended to the list
4. Finally, the list of all indices is returned
How to Get Index of All Occurrences of Element in a Python List with more_itertools
The third-party more_itertools library (installable with pip) comes with a number of helpful functions. One of these functions is the locate() function, which takes an iterable and a function to evaluate against.
In order to find the index positions of all elements matching an element, we can use a lambda function that simply checks if that item is equal to the item we want to check against.
Let’s take a look at an example:
# Using more_itertools to Find All Occurrences of an Element
from more_itertools import locate
a_list = [1,2,3,4,1,2,1,2,3,4]
def find_indices(list_to_check, item_to_find):
    indices = locate(list_to_check, lambda x: x == item_to_find)
    return list(indices)
print(find_indices(a_list, 1))
# Returns: [0, 4, 6]
Let’s break down what we did here:
1. We defined a function that takes both a list and the element to search for
2. The function uses the locate() function to use the list we want to search and a lambda function that checks if each item is equal to the value we’re searching for
3. Finally, the function returns a list of the result
How to Get Index of All Occurrences of Element in a Python List with Numpy
NumPy makes the process of finding all index positions of an element in a list very easy and fast. This can be done by using the where() function. The where() function returns the index positions of
all items in an array that match a given value.
Let’s take a look at an example:
# Using numpy to Find All Occurrences of an Element
import numpy as np
a_list = [1,2,3,4,1,2,1,2,3,4]
def find_indices(list_to_check, item_to_find):
    array = np.array(list_to_check)
    indices = np.where(array == item_to_find)[0]
    return list(indices)
print(find_indices(a_list, 1))
# Returns: [0, 4, 6]
Let’s break down what we did here:
1. We created a function that takes a list and an element to find
2. The list is converted into a numpy array
3. The where() function is used to evaluate the array against our item
4. We return the 0th index of that resulting array
5. We convert that array to a list
How to Get Index of All Occurrences of Element in a Python List with a List Comprehension
In this section, we’ll take a look at how to use a list comprehension to return a list of indices of an element in a list. This method works the same as the first method, the for loop, except it uses
a list comprehension.
Let’s see how we can convert the for loop into a list comprehension:
# Using a List Comprehension to Find All Occurrences of an Element
a_list = [1,2,3,4,1,2,1,2,3,4]
def find_indices(list_to_check, item_to_find):
    return [idx for idx, value in enumerate(list_to_check) if value == item_to_find]
print(find_indices(a_list, 1))
# Returns: [0, 4, 6]
The method shown above is much cleaner and easier to read than the for loop. In the next section, we’ll take a look at how these different methods compare in terms of speed.
Which Method is Fastest To Get Index of All Occurrences of an Element in a Python List
The table below breaks down how long each method took to find the indices of all occurrences in a list of one hundred million elements:
Method Time to execute
For loop and enumerate() 4.97 seconds
more_itertools locate() 7.08 seconds
numpy where() 6.05 seconds
List Comprehension and enumerate() 4.69 seconds
The execution time of each of these methods
We can see that the list comprehension method was the fastest. Not only was this method the fastest, but it was very easy to read and required no additional packages.
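Benchmarks like this can be reproduced with the standard-library timeit module. The sketch below uses a much smaller list than the article's 100-million-element benchmark, so absolute times (and even the ranking) may differ on your machine:

```python
import timeit

# A much smaller list than the article's benchmark, repeated for some bulk
a_list = list(range(1000)) * 100

def with_loop():
    indices = []
    for idx, value in enumerate(a_list):
        if value == 1:
            indices.append(idx)
    return indices

def with_comprehension():
    return [idx for idx, value in enumerate(a_list) if value == 1]

t_loop = timeit.timeit(with_loop, number=50)
t_comp = timeit.timeit(with_comprehension, number=50)
print(f"for loop:           {t_loop:.3f} s")
print(f"list comprehension: {t_comp:.3f} s")
```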
In this tutorial, you learned how to find the index positions of all occurrences of an element in a Python list. You learned how to do this using the Python enumerate() function with both for loops
and list comprehensions. You also learned how to use the numpy where() function and the more_itertools locate() function to accomplish the same task.
Guess the Number
Guess the Number Logic Number Puzzles
Give students more practice in using the hundred board to solve number clue problems. Once again, students solve the problem clue by clue. With this set, however, students must use multiplication,
money, and measurement facts to correctly solve the problem and guess the number. This is great practice for state testing, as the clues draw on many different mathematical skills.
Sample Problem: Guess the Number-1
• The number is greater than the number of pennies in a quarter.
• The number is less than the number of pennies in five dimes.
• The number is an odd number.
• If you count by 5s, you say the number.
• The sum of the digits is 8.
• What is the number?
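For teachers who want to verify answers quickly, the clues can be checked with a short Python script (a sketch; the clue encoding follows the sample problem above — a quarter is 25 pennies, five dimes are 50):

```python
def guess_the_number():
    """Return all numbers satisfying the clues of the sample problem."""
    matches = []
    for n in range(26, 50):  # more than 25 pennies in a quarter, fewer than 50 in five dimes
        # odd, said when counting by 5s, and digits summing to 8
        if n % 2 == 1 and n % 5 == 0 and sum(int(d) for d in str(n)) == 8:
            matches.append(n)
    return matches

print(guess_the_number())  # [35]
```

Only 35 survives all the clues: it is odd, a multiple of 5, between 25 and 50, and 3 + 5 = 8.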
Suggested Uses:
• Do Now! or warm-up activity
• Math Center Activity
• Problem solving activity on game days
• Writing original Guess The Number problems for a class collection
Download the Guess the Number teacher resource packet which includes 10 logic number puzzles, a blank template and answer key.
Find the derivative of sin⁻¹((3 sin x + 4 cos x)/5) with respect to x | Filo
Find the derivative of sin⁻¹((3 sin x + 4 cos x)/5) with respect to x.
Write (3 sin x + 4 cos x)/5 = sin(x + φ), where cos φ = 3/5 and sin φ = 4/5. Then sin⁻¹(sin(x + φ)) = x + φ on the principal branch, so the derivative with respect to x is 1.
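A numerical sanity check (a Python sketch, not part of the original solution) confirms that the derivative is 1 at sample points inside the principal branch:

```python
import math

def f(x):
    # The function from the problem: arcsin((3 sin x + 4 cos x) / 5)
    return math.asin((3 * math.sin(x) + 4 * math.cos(x)) / 5)

# Central-difference numerical derivative at a few sample points
h = 1e-6
derivs = [(f(x + h) - f(x - h)) / (2 * h) for x in (0.1, 0.3, 0.5)]
print([round(d, 4) for d in derivs])  # each approximately 1.0
```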
Updated On: Sep 17, 2022
Topic: Continuity and Differentiability
Subject: Mathematics
Class: Class 12
Study Guide - Solving Radical Equations
Solving Radical Equations
Radical equations are equations that contain variables in the radicand (the expression under a radical symbol), such as
[latex]\begin{array}{l}\sqrt{3x+18}\hfill&=x\hfill \\ \sqrt{x+3}\hfill&=x - 3\hfill \\ \sqrt{x+5}-\sqrt{x - 3}\hfill&=2\hfill \end{array}[/latex]
Radical equations may have one or more radical terms, and are solved by eliminating each radical, one at a time. We have to be careful when solving radical equations, as it is not unusual to find
extraneous solutions, roots that are not, in fact, solutions to the equation. These solutions are not due to a mistake in the solving method, but result from the process of raising both sides of an
equation to a power. However, checking each answer in the original equation will confirm the true solutions.
A General Note: Radical Equations
An equation containing terms with a variable in the radicand is called a radical equation.
How To: Given a radical equation, solve it.
1. Isolate the radical expression on one side of the equal sign. Put all remaining terms on the other side.
2. If the radical is a square root, then square both sides of the equation. If it is a cube root, then raise both sides of the equation to the third power. In other words, for an nth root radical,
raise both sides to the nth power. Doing so eliminates the radical symbol.
3. Solve the remaining equation.
4. If a radical term still remains, repeat steps 1–2.
5. Confirm solutions by substituting them into the original equation.
Example 6: Solving an Equation with One Radical
Solve [latex]\sqrt{15 - 2x}=x[/latex].
The radical is already isolated on the left side of the equal side, so proceed to square both sides.
[latex]\begin{array}{l}\sqrt{15 - 2x}\hfill&=x\hfill \\ {\left(\sqrt{15 - 2x}\right)}^{2}\hfill&={\left(x\right)}^{2}\hfill \\ 15 - 2x\hfill&={x}^{2}\hfill \end{array}[/latex]
We see that the remaining equation is a quadratic. Set it equal to zero and solve.
[latex]\begin{array}{l}0\hfill&={x}^{2}+2x - 15\hfill \\ \hfill&=\left(x+5\right)\left(x - 3\right)\hfill \\ x\hfill&=-5\hfill \\ x\hfill&=3\hfill \end{array}[/latex]
The proposed solutions are [latex]x=-5[/latex] and [latex]x=3[/latex]. Let us check each solution back in the original equation. First, check [latex]x=-5[/latex].
[latex]\begin{array}{l}\sqrt{15 - 2x}\hfill&=x\hfill \\ \sqrt{15 - 2\left(-5\right)}\hfill&=-5\hfill \\ \sqrt{25}\hfill&=-5\hfill \\ 5\hfill&\ne -5\hfill \end{array}[/latex]
This is an extraneous solution. While no mistake was made solving the equation, we found a solution that does not satisfy the original equation. Check [latex]x=3[/latex].
[latex]\begin{array}{l}\sqrt{15 - 2x}\hfill&=x\hfill \\ \sqrt{15 - 2\left(3\right)}\hfill&=3\hfill \\ \sqrt{9}\hfill&=3\hfill \\ 3\hfill&=3\hfill \end{array}[/latex]
The solution is [latex]x=3[/latex].
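The check step can be automated. Below is a small Python sketch (using the numbers from this example) that filters the proposed roots against the original radical equation:

```python
import math

# Proposed roots of x^2 + 2x - 15 = 0, obtained after squaring
proposed = [-5, 3]

# Keep only roots satisfying the original equation sqrt(15 - 2x) = x;
# the right-hand side must be non-negative for the square root to equal it
valid = [x for x in proposed
         if x >= 0 and math.isclose(math.sqrt(15 - 2 * x), x)]
print(valid)  # [3] -- x = -5 is extraneous
```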
Try It 5
Solve the radical equation: [latex]\sqrt{x+3}=3x - 1[/latex] Solution
Example 7: Solving a Radical Equation Containing Two Radicals
Solve [latex]\sqrt{2x+3}+\sqrt{x - 2}=4[/latex].
As this equation contains two radicals, we isolate one radical, eliminate it, and then isolate the second radical.
[latex]\begin{array}{ll}\sqrt{2x+3}+\sqrt{x - 2}\hfill& =4\hfill & \hfill \\ \sqrt{2x+3}\hfill& =4-\sqrt{x - 2}\hfill & \text{Subtract }\sqrt{x - 2}\text{ from both sides}.\hfill \\ {\left(\sqrt{2x+3}\right)}^{2}\hfill& ={\left(4-\sqrt{x - 2}\right)}^{2}\hfill & \text{Square both sides}.\hfill \end{array}[/latex]
Use the perfect square formula to expand the right side: [latex]{\left(a-b\right)}^{2}={a}^{2}-2ab+{b}^{2}[/latex].
[latex]\begin{array}{ll}2x+3\hfill& ={\left(4\right)}^{2}-2\left(4\right)\sqrt{x - 2}+{\left(\sqrt{x - 2}\right)}^{2}\hfill & \hfill \\ 2x+3\hfill& =16 - 8\sqrt{x - 2}+\left(x - 2\right)\hfill & \hfill \\ 2x+3\hfill& =14+x - 8\sqrt{x - 2}\hfill & \text{Combine like terms}.\hfill \\ x - 11\hfill& =-8\sqrt{x - 2}\hfill & \text{Isolate the second radical}.\hfill \\ {\left(x - 11\right)}^{2}\hfill& ={\left(-8\sqrt{x - 2}\right)}^{2}\hfill & \text{Square both sides}.\hfill \\ {x}^{2}-22x+121\hfill& =64\left(x - 2\right)\hfill & \hfill \end{array}[/latex]
Now that both radicals have been eliminated, set the quadratic equal to zero and solve.
[latex]\begin{array}{ll}{x}^{2}-22x+121=64x - 128\hfill & \hfill \\ {x}^{2}-86x+249=0\hfill & \hfill \\ \left(x - 3\right)\left(x - 83\right)=0\hfill & \text{Factor and solve}.\hfill \\ x=3\hfill & \hfill \\ x=83\hfill & \hfill \end{array}[/latex]
The proposed solutions are [latex]x=3[/latex] and [latex]x=83[/latex]. Check each solution in the original equation.
[latex]\begin{array}{l}\sqrt{2x+3}+\sqrt{x - 2}\hfill& =4\hfill \\ \sqrt{2x+3}\hfill& =4-\sqrt{x - 2}\hfill \\ \sqrt{2\left(3\right)+3}\hfill& =4-\sqrt{\left(3\right)-2}\hfill \\ \sqrt{9}\hfill& =4-\sqrt{1}\hfill \\ 3\hfill& =3\hfill \end{array}[/latex]
One solution is [latex]x=3[/latex]. Check [latex]x=83[/latex].
[latex]\begin{array}{l}\sqrt{2x+3}+\sqrt{x - 2}\hfill&=4\hfill \\ \sqrt{2x+3}\hfill&=4-\sqrt{x - 2}\hfill \\ \sqrt{2\left(83\right)+3}\hfill&=4-\sqrt{\left(83 - 2\right)}\hfill \\ \sqrt{169}\hfill&=4-\sqrt{81}\hfill \\ 13\hfill&\ne -5\hfill \end{array}[/latex]
The only solution is [latex]x=3[/latex]. We see that [latex]x=83[/latex] is an extraneous solution.
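As before, the extraneous-solution check can be scripted (a small sketch using this example's numbers):

```python
import math

# Proposed roots of x^2 - 86x + 249 = 0, obtained after squaring twice
proposed = [3, 83]

# Keep only roots satisfying the original equation sqrt(2x + 3) + sqrt(x - 2) = 4
solutions = [x for x in proposed
             if math.isclose(math.sqrt(2 * x + 3) + math.sqrt(x - 2), 4)]
print(solutions)  # [3] -- x = 83 is extraneous
```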
Try It 6
Solve the equation with two radicals: [latex]\sqrt{3x+7}+\sqrt{x+2}=1[/latex]. Solution
Chapter 5.2: Factoring by Grouping
First thing to do when factoring is to factor out the GCF. This GCF is often a monomial, like in the problem
Find and factor out the GCF for
By observation, one can see that both have
This means that
Find and factor out the GCF for
Both have
This means that if you factor out
The factored polynomial is written as
In the same way as factoring out a GCF from a binomial, there is a process known as grouping to factor out common binomials from a polynomial containing four terms.
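The polynomials in this chapter's worked example were lost in extraction, but a representative illustration (my own, not the original's) shows the grouping idea:

```latex
x^3 + 3x^2 + 2x + 6
  = (x^3 + 3x^2) + (2x + 6)   % split into two binomials
  = x^2(x + 3) + 2(x + 3)     % factor the GCF out of each binomial
  = (x + 3)(x^2 + 2)          % factor out the common binomial (x + 3)
```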
Find and factor out the GCF for
To do this, first split the polynomial into two binomials.
Now find the common factor from each binomial.
This means that
Factor the following polynomials.
Answers to odd questions
SciPost Submission Page
Renormalization group for open quantum systems using environment temperature as flow parameter
by K. Nestmann, M. R. Wegewijs
This is not the latest submitted version.
This Submission thread is now published as
Submission summary
Authors (as registered SciPost users): Konstantin Nestmann · Maarten Wegewijs
Submission information
Preprint Link: https://arxiv.org/abs/2111.07320v1 (pdf)
Date submitted: 2021-11-16 09:11
Submitted by: Nestmann, Konstantin
Submitted to: SciPost Physics
Ontological classification
Academic field: Physics
Specialties: • Quantum Physics
Approach: Theoretical
We present the $T$-flow renormalization group method, which computes the memory kernel for the density-operator evolution of an open quantum system by lowering the physical temperature $T$ of its
environment. This has the key advantage that it can be formulated directly in real time, making it particularly suitable for transient dynamics, while automatically accumulating the full temperature
dependence of transport quantities. We solve the $T$-flow equations numerically for the example of the single impurity Anderson model. In the stationary limit, readily accessible in real-time for
voltages on the order of the coupling or larger, we benchmark using results obtained by the functional renormalization group, density-matrix renormalization group and the quantum Monte Carlo method.
Here we find quantitative agreement even in the worst case of strong interactions and low temperatures, indicating the reliability of the method. For transient charge currents we find good agreement
with results obtained by the 2PI Green's function approach. Furthermore, we analytically show that the short-time dynamics of both local and non-local observables follow a "universal"
temperature-independent behavior when the metallic reservoirs have a flat wide band.
Current status:
Has been resubmitted
Reports on this Submission
Report #2 by Anonymous (Referee 2) on 2022-2-23 (Invited Report)
• Cite as: Anonymous, Report on arXiv:2111.07320v1, delivered 2022-02-23, doi: 10.21468/SciPost.Report.4500
1) interesting new method with high potential
2) thoroughly tested in a simple model
3) finds re-entrant effect in transient dynamics
1) sometimes a little guesswork is needed to find the definitions of concepts used in the paper
2) paper uses american spelling (I suppose this is an european journal and would have preferred to see european spelling)
This is an interesting work on the temperature renormalisation scheme, starting from high temperatures and slowly reducing T until one reaches the low temperatures of physical interest. The formalism
is well explained and is built around the central self-consistency equation (32).
The working of the method is explored in a single-impurity Anderson model, first for stationary solutions (where results are compared with those of other general approaches, mainly numerical, and
are found to agree well) and then also for transient behaviour. An amusing re-entrant behaviour of occupation number and correlations is found and is related to the non-semi-group nature of the dynamics.
An intrinsic assumption of this approach is that the progressive lowering of the temperature can also be slow enough as not to interfere with any intrinsic time-scales of the system. I suspect that
feature should limit the applicability of the technique to systems without phase-transition phenomena at some critical temperature T_c. Of course the single-impurity Anderson model studied does not
have such a phase transition.
But this remark is not meant to cast into doubt the general interest of the method proposed in this work.
Requested changes
please give a more detailed definition of the `bias' you use on p. 10 -- in the present version one has to guess what you probably mean
(you seem to consider two baths, called right and left and use two parameters \mu_L, \mu_R to finally define your bias V).
Author reply:
We thank the referee for the report and are glad he found our manuscript interesting and well explained.
**Revision** As suggested we now define the bias $V=\mu_L - \mu_R$ on p. 10 of the manuscript.
**Revision** We have checked spelling for British English.
**Revision** In further extending our numerical results in response to a question by referee 1 we noted that Fig. 2b contained curves which were insufficiently converged. This has been corrected
without affecting any part of the text and conclusions.
The suggested question regarding the applicability of our method to systems with phase transitions is intriguing, but beyond the scope of the present manuscript.
Report #1 by Anonymous (Referee 1) on 2021-12-24 (Invited Report)
• Cite as: Anonymous, Report on arXiv:2111.07320v1, delivered 2021-12-24, doi: 10.21468/SciPost.Report.4092
1 - Interesting generalization of fRG ideas, allowing one to study transient phenomena
2 - Clear presentation of the method and thorough analysis of the model
3 - Synergy of analytics and numerics
1 - A connection of the proposed scheme to other RG approaches could be described in a more extensive way
2 - Consideration of the paper appears too formal; a discussion in more physical (qualitative) terms should be extended
3 - The method is initially formulated for the general case of distinct temperatures of reservoirs; however, all the results are given for the case of equal temperatures
This is a nice attempt to formulate a renormalization-group approach to interacting systems by explicitly using the physical temperature as a flowing scale. The authors applied the developed
machinery to the Anderson model of an interacting quantum dot and demonstrated a quantitative agreement between their results and those obtained by other numerical methods.
Let me comment on the points that could be improved in the manuscript.
1. A connection of the proposed scheme to other RG approaches could be described in a more extensive way. Indeed, usually temperature serves as an infrared cutoff for the running energy scale, e.g.,
in a poor-man scaling approach. It would be interesting to see this connection on a formal level, comparing the "conventional" RG approach with temperature serving only as a cutoff with the framework
developed here. For this type of comparison, it would be nice to see a comparison of the approaches at an analytical level; for example, a derivation of the Luttinger-liquid renormalization of the
finite-temperature conductance through a barrier (either in the limit of a weak barrier, or in the limit of weak interaction) in a quantum wire would be very instructive.
2. Consideration of the paper appears too formal; a discussion in more physical (qualitative) terms should be extended. It is well known that, in interacting systems, temperature separates the energy
domains where the physics is dominated by real processes (with energy transfers smaller than T) or by virtual processes (energy transfers larger than T). The latter usually yield renormalization of
the quantities entering the effective kinetic equation. In the present framework, it is not clear whether only the latter processes are accounted for, or the real inelastic processes are also
effectively captured by the developed method.
Further, it is not quite clear whether the procedure depends on the initial density matrix of the system and how equilibration processes that lead to thermalization of the system state are described
in this approach. I strongly suggest the authors discussing such points in a qualitative manner and simple physical terms.
3. The method is initially formulated for the general case of distinct temperatures of reservoirs; however, all the results are given for the case of equal temperatures. It is extremely interesting
to see how the method is applied to a situation with different temperatures of the reservoirs, which would correspond to a non-equilibrium steady state of the system. In particular, whether the
notion of "non-equilibrium dephasing" (see, e.g., papers by Gutman et al. on non-equilibrium bosonization) would naturally emerge in this setting. But even without performing the numerical analysis
of the flow equations in this intriguing case, it would be nice to present the set of such equations explicitly.
Requested changes
1 - Extend the discussion of a relation between the proposed approach and other RG approaches with temperature serving as an infrared cutoff; present an analytical solution of the derived flow
equations in some tractable well-known model.
2 - Add a discussion of the approach in more qualitative terms (see report).
3 - Present the flow equations for the general case of non-equal temperatures of the reservoirs.
Author reply:
We thank the referee for the interesting questions and suggestions in the positive report. We address them below in the order of the requested changes and append a remark at the end. We have supplied
a color-coded PDF in which all changes have been marked including various small additional improvements and clarifications. In further extending our numerical results we noted that Fig. 2b contained
curves which were insufficiently converged. This has been corrected without affecting any part of the text and conclusions.
$1$. Extend the discussion of a relation between the proposed approach and other RG approaches with temperature serving as an infrared cutoff; present an analytical solution of the derived flow
equations in some tractable well-known model.
A connection of the proposed scheme to other RG approaches could be described in a more extensive way.
For this type of comparison, it would be nice to see a comparison of the approaches at an analytical level;
Indeed, usually temperature serves as an infrared cutoff for the running energy scale, e.g., in a poor man's scaling approach. It would be interesting to see this connection on a formal level,
comparing the "conventional" RG approach with temperature serving only as a cutoff with the framework developed here.
Revision To accommodate the referee's suggestion, we have inserted the new Appendix F "Exact solution for $U=0$" in the revised manuscript. There we explain how the exact solution of the Anderson model
for $U=0$ (spin-degenerate resonant level) is obtained within the presented $T$-flow approach.
However, there is no obvious way of comparing with methods using a different flow parameter (other than comparing final results as we already do in the benchmark tests in Section 5 of the original
manuscript). Indeed, the referee correctly points out the key complication in finding such a simple relation between our $T$-flow and "conventional" RG methods: temperature is a flow parameter and
cannot serve as (or be compared to) a cut-off scale. As we argue further in point 2 below, one may wonder whether attempting to find such a relation is necessary for physical understanding, since the
$T$-flow method already has a quite clear motivation in close analogy to Wilson's arguments as described in Section 2 of the manuscript.
Revision We have added a paragraph at the end of Section 2 discussing the above understandable concerns of the referee. Furthermore, in the Summary and Outlook Section 6 we have extended the
discussion of the most closely related RG methods which preceded the $T$-flow.
$2$. Add a discussion of the approach in more qualitative terms (see report).
We agree that further considerations of the physical underpinnings of the $T$-flow would be of interest as we stated in the Outlook Section 6. The referee inquires about various particular points:
(A) Elastic/inelastic contributions
It is well known that, in interacting systems, temperature separates the energy domains where the physics is dominated by real processes (with energy transfers smaller than T) or by virtual
processes (energy transfers larger than T). [...] In the present framework, it is not clear whether only the latter [virtual elastic] processes are accounted for, or the real inelastic processes
are also effectively captured by the developed method
Consideration of the paper appears too formal; a discussion in more physical (qualitative) terms should be extended.
I strongly suggest the authors discussing such points in a qualitative manner and simple physical terms.
Before attempting to reformulate the issue addressed by the referee and answer the question, we need to explain why the particular "simple physical terms" that the referee suggests are not applicable:
(a) In the ordinary bare perturbation expansion in the tunnel coupling, i.e., around the limit where system and reservoir are decoupled, the "energy transfers" mentioned by the referee can be
discussed (but already involve subtleties) in the context of quantum dot transport. One standardly distinguishes for example (see for a review R. Gaudenzi et al, J. Chem. Phys. 146, 092330 (2017)):
• Order $\Gamma$ contributions ("single-electron tunneling") which are significant only when the energy transferred to the dot matches the energy coming from the reservoirs as in Fermi's Golden
Rule. (Already this is a subtle matter since it is well established that this does not hold for additional "Lamb-shift" terms which are not captured by the Golden Rule but arise in the same order
$\Gamma$ (!) and can significantly contribute [theory: König, Martinek, Phys. Rev. Lett. 90, 166602 (2003); experiment: Hauptmann et al., Nature Physics 4, 373 (2008)].)
• Order $\Gamma^2$ processes ("cotunneling") are denoted "(in)elastic" when energy transfer between reservoir and dot is smaller (larger) than temperature. (Order $\Gamma^2$ Lamb-shift
contributions further complicate matters [Misiorny et al., Nature Physics 9, 801 (2013)], since the dot + reservoir energy is no longer conserved due to the significant coupling. The total
energy including the coupling is conserved.)
We are a bit confused that the convention mentioned by the referee is the opposite of the second standard convention. (We suspect this may relate to the difference between the picture of a 1-dim. wire and a 0-dim. system coupled to reservoirs; see our comment at the end.)
(b) Crucially, in the present paper we do not start from this bare perturbation theory in $\Gamma$. This means that the above standard distinctions can no longer be made:
• We start from the exact solution for $T=\infty$ accounting for all tunneling contributions in this limit: the dot and the reservoir are coupled with finite energy $\Gamma$ and the reservoir is
traced out (not: decoupled!). This means that contrary to case (a) the energy of the dot is not conserved by the reference evolution and is meaningless (the system is open). The time-evolution is
governed by the renormalized Liouvillian (15) which is not a commutator of an energy operator. (One can thus not even say that "all energy transitions are smaller than the infinite thermal
energy" since there is no dot energy associated with the time evolution.)
• The "task" of the $T$-flow is not to include the coupling to reservoirs but instead to reduce its effect from the value at $T=\infty$ to the value at finite $T < \infty$. At no point is the
coupling switched off such that the notion of "(in)elastic" becomes a meaningful way of characterizing the approximations as the referee suggests. One cannot talk about the "energy transfers"
that the referee has in mind since these presuppose one can distinguish the energy of the dot from the energy of the reservoirs (which is unavailable) and that one can ignore the energy
contribution $\Gamma$ of the coupling (which is fully accounted for at $T=\infty$).
(c) Most importantly, in our approach there is also no need to consider the suggested concepts:
• As our technical development shows, in order to define, derive and implement the $T$-flow there is no need for the standard perturbation expansion or any terminology that presupposes it.
• As our intuitive discussion emphasizes, the simple insight of Wilson is sufficient to derive the required equations and suggests the explored approximation scheme. The only point of systematic
improvement is how far one goes in the expansion in the renormalized vertices of the $T$-flow equations, which is essentially controlled by the strength of the memory effects, i.e., how low in
temperature (!) one goes.
• As our results show, the approach is surprisingly successful, underscoring the importance of Wilson's physical insights about correlations.
In summary, it is a key feature (!) of the $T$-flow method that commonly used concepts and simple pictures, referred to by the referee, are not applicable. It is this completely opposite approach
[see point (b)] that reveals a natural connection to Wilson's insightful physical considerations.
With the above in mind, we stress that our calculations do not have a mere "formal" character as suggested by the referee: They are guided down to the details by the physical considerations of
Section 2. As mentioned there the renormalized perturbation theory gives a direct handle on the relevant time-correlations which are crucial to the low-temperature (!) physics. The ordinary
perturbation expansion (and its associated considerations) does not reveal the relevant correlations since the $T=\infty$ and $T < \infty$ contributions are completely mixed up. Our paper points out,
where possible and applicable, the qualitative underpinnings appropriate to the actually presented technical development. That these are not the "traditional" concepts used in RG methods starting
from the bare perturbative expansion underscores the novelty of the presented method. That the equations implied by Wilson's considerations do not have a "simple" rationalization underscores the
nontrivial nature of the problem. Clearly, further understanding along these lines is desirable but seems beyond the scope of our first presentation of this novel method.
Revision We fully understand the referee's motivation for the above requests and have highlighted in the Summary and Outlook Section 6 of the revised submission why "traditional" considerations fail
to apply.
(B) Renormalization of the kinetic equation
The latter [virtual processes (energy transfers larger than T)] usually yield renormalization of the quantities entering the effective kinetic equation.
The renormalization of the kinetic equation by higher order contributions which the referee inquires about can be seen in several instances in the paper. The simplest example is the renormalized
Liouvillian Eq. (15) which accounts for all orders of tunneling $\Gamma$ at $T=\infty$. Further renormalizations of the kinetic equation are generated by subsequent $T$-flow steps thereby accounting
for the finite-temperature dependence of the kinetic equation. Thus, the $T$-flow can be considered as a continuous renormalization of the kinetic equation going from one temperature $T$ to the next
lower temperature $T-\delta T$.
Revision This is now pointed out after Eq. (16) and (19) in the revised manuscript (omitting the distinction between elastic and inelastic processes mentioned by the referee, which is not applicable).
(C) Initial state dependence
Further, it is not quite clear whether the procedure depends on the initial density matrix of the system
Revision This is now mentioned more explicitly on p. 5 after Eq. (6) of the revised manuscript.
The existence of a well-defined propagator independent of the initial system state is guaranteed from the very beginning by our assumption of the initial decoupling of system and environment. Related
to this is a non-trivial physical property that the exact superoperator $\Pi(t)$ obeys, called complete positivity. It asserts that the evolution correctly treats any entanglement that the system may
have with other systems. We have now verified by numerical analysis that the approximate propagator obtained by the $T$-flow indeed obeys this crucial property for all times.
Revision In the beginning of Section 5 the nontrivial complete positivity check on our propagator results is now reported and its physical significance is briefly mentioned with pertinent references.
(D) Decay to stationary state
[it is not quite clear] how equilibration processes that lead to thermalization of the system state are described in this approach.
We are not sure what the referee aims at here and similarly under point 3:
... a situation with different temperatures of the reservoirs, which would correspond to a non-equilibrium steady state of the system.
The phrasing of these questions seems to suggest that our results deal with a transient approach to equilibrium, which is not the case: due to the finite bias voltage we already have a non-equilibrium
stationary quantum dot state. (Also here we suspect the confusion is due to the picture of 1-dim. wire versus 0-dim. quantum dot, see concluding remark.)
Revision This important point is now highlighted at the beginning of Section 4.
To avoid any confusion we note that:
• The decay of the initial state of the quantum dot is completely dictated by the memory kernel $K$. This decay is not "thermalization" since it leads to a non-equilibrium stationary state.
• The $T$-flow method does not rely on any assumption about the stationary state, e.g., whether it is unique.
• The presented results benchmarked the $T$-flow method under strong non-equilibrium conditions for the example of a strongly interacting quantum dot at finite bias voltage exceeding both $T$ and $\
• Introducing (in addition to the voltage bias $\mu_L - \mu_R$ already studied) a finite temperature bias $T_L-T_R$ is possible as mentioned in the Outlook Section 6. This would drive the system
out of equilibrium further/differently.
$3$. Present the flow equations for the general case of non-equal temperatures of the reservoirs.
The method is initially formulated for the general case of distinct temperatures of reservoirs; however, all the results are given for the case of equal temperatures. It is extremely interesting
to see how the method is applied to a situation with different temperatures of the reservoirs,...
But even without performing the numerical analysis of the flow equations in this intriguing case, it would be nice to present the set of such equations explicitly.
We are glad the referee finds the ability to deal with different temperatures interesting.
Revision In the revised manuscript we have added a new Appendix D, which discusses the flow equations for different temperatures. Additionally, in the Outlook Section 6 we have further indicated
which applications to thermoelectric transport through correlated systems we envisage, by providing two relevant references.
From the suggestions made by the referee in point 1
.. for example, a derivation of the Luttinger-liquid renormalization of the finite-temperature conductance through a barrier (either in the limit of a weak barrier, or in the limit of weak
interaction) in a quantum wire would be very instructive.
and in point 3
In particular, whether the notion of "non-equilibrium dephasing" (see, e.g., papers by Gutman et al. on non-equilibrium bosonization) would naturally emerge in this setting.
we infer a strong affinity with transport through strongly interacting wires with impurities. This is a much more complicated problem than that addressed in our paper and the methods and "pictures"
appropriate to that field are quite different from the ones discussed in the present paper. (E.g. the notion of "energy" in point 2.)
In our case we focus on the impurity, a strongly interacting 0-dimensional object (quantum dot) coupled to reservoirs treated as non-interacting. This is clearly not appropriate for 1-dimensional
wires where the "reservoir interactions" are well-known to be important (Luttinger-liquid physics). Due to our focus on a relatively simpler situation we are able to address the more complicated
issue of transient phenomena.
Revision To avoid confusion we now emphasize from the outset the importance of the assumption of non-interacting reservoirs in the revised manuscript in Section 2. | {"url":"https://scipost.org/submissions/2111.07320v1/","timestamp":"2024-11-10T02:44:46Z","content_type":"text/html","content_length":"62196","record_id":"<urn:uuid:2ef31eb2-58fc-4ae9-a74f-77a5c18f86c9>","cc-path":"CC-MAIN-2024-46/segments/1730477028164.3/warc/CC-MAIN-20241110005602-20241110035602-00233.warc.gz"} |
Re: Allometric equations for predicting body mass in dinosaurs
2009/8/20 Zach Armstrong <zach.armstrong64@yahoo.com>:
> I have a question about this paper: Allometric equations for predicting body
> mass of dinosaurs. Journal of Zoology, June 21, 2009 DOI:
> 10.1111/j.1469-7998.2009.00594.x.
> I do not have the paper, but was wondering if an estimate of 18 tonnes for
> Brachiosaurus is reasonable (estimate reported here:
> http://www.livescience.com/animals/090621-dinosaur-size.html)? That seems way
> too light for a 70 ft long animal.
Yes, it's too light -- quite a bit too light (though not as much too
light as you might think). The Packard et al. paper is about getting
better regression equations for estimating mass from limb-bone
measurements, but that method is always going to give misleading
results for brachiosaurs, however good the equations, because they
have crazy-weird super-gracile humeri (and their femora are
surprisingly narrow anteroposteriorly, though they are transversely
> The reason I bring this up was because I was trying to come up with a better
> estimate of the weight of Puertasaurus (here:
> http://ztwarmstrong.deviantart.com/). In the end, I get an estimate of 58
> tonnes for Puertasaurus using Argentinosaurus as a model, and using the
> drastically reduced weight estimates of the above paper. Any thoughts would
> be appreciated.
(Nothing to say on this, sorry -- I've not really looked at
Puertasaurus. Except of course to say that any mass estimate based
on one cervical and one dorsal is going to be, shall we say, subject
to uncertainty.)
> Do these equations affect how those, like Greg Paul, get their weight
> estimates?
No. Greg Paul -- like Don Henderson, R. McNeill Alexander and (going
further back) Colbert and Gregory -- got his estimate by measuring a
model, scaling up the volume and multiplying by density. There are
potential sources of error in this approach, for sure, but there's no
allometric equation involved, and each estimate is based on its own
By the way, reputable published estimates of the mass of A SINGLE
SPECIMEN (the "Brachiosaurus" brancai lectotype HMN SII) have varied
by a factor of 5.75 (yes!) as shown by the following table from an
in-press book chapter:
Table 2. Changing mass estimates for Brachiosaurus brancai.
Author and date        Method                       Volume (l)   Density (kg/l)   Mass (kg)
Janensch 1938          Not specified                    --            --          "40 t"
Colbert 1962           displacement of sand          86953           0.9          78258
Russell et al. 1980    limb-bone allometry              --            --          13618 (1)
Anderson et al. 1985   limb-bone allometry              --            --          29000
Paul 1988a             displacement of water         36585           0.861 (2)    31500
Alexander 1989 (3)     weighing in air and water     46600           1.0          46600
Gunga et al. 1995      computer model                74420           1.0          74420
Christiansen 1997      weighing in air and water     41556           0.9          37400
Henderson 2004         computer model                32398           0.796        25789
Henderson 2006         computer model                   --            --          25922
Gunga et al. 2008      computer model                47600           0.8          38000
Taylor in press        graphic double integration    29171           0.8          23337
1. Russell et al. give the mass as "14.9t", which has usually been
interpreted as representing metric tonnes, e.g. 14900 kg. However,
they cite "the generally accepted figure of 85 tons" (p. 170), which
can only be a reference to Colbert (1962). Colbert stated a mass of
85.63 U.S. tons as well as the metric version, so we must assume that
Russell et al. were using U.S. tons throughout.
2. Paul used a density of 0.9 kg/l for most of the model, and 0.6 kg/l
for the neck, which was measured separately and found to constitute
13% of the total volume, yielding an aggregate density of (0.9 × 87%)
+ (0.6 × 13%) = 0.861 kg/l.
3. Alexander did not state which Brachiosaurus species his estimate
was for, only that it was based on the BMNH model. This model is
simply stamped "Brachiosaurus".
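The aggregate-density arithmetic in note 2 above, and the resulting mass from Paul's volume, is easy to check numerically. This is just a sketch of the calculation the note describes (the 13%/87% split and the component densities are taken directly from the note):

```python
# Paul (1988): neck = 13% of total volume at 0.6 kg/l, rest of the body at 0.9 kg/l
neck_frac = 0.13
density = 0.9 * (1 - neck_frac) + 0.6 * neck_frac  # aggregate density, kg/l
mass = 36585 * density                             # volume (l) * density (kg/l)
print(round(density, 3), round(mass))
```

This reproduces the 0.861 kg/l aggregate density and the 31500 kg mass in the table.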
> BTW, if anyone has a pdf of the paper I would be much obliged to receive a
> copy...
I'll send it offlist.
> P.S.: How many cervicals and dorsals do titanosaurs have respectively?
It's impossible to say -- these numbers vary wildly throughout other
major sauropod clades and will likely do the same in Titanosauria
(which after all includes more than a third of all sauropod genera). We
have very, very few complete titanosaurs to model on. Rapetosaurus
has 16 cervicals and 11 dorsals (Curry Rogers and Forster 2001),
Malawisaurus may have 13 cervicals but even that is an estimate based
on others' estimates (Gomani 2005).
> How would you estimate the length of the cervical series and the dorsal
> series (and caudal series) respectively?
You basically can't do it based on published measurements. Hopefully
that will start to change with the long-awaited monographic
description of Rapetosaurus, which is either in review or in press at | {"url":"http://dml.reptilis.net/2009Aug/msg00251.html","timestamp":"2024-11-03T16:55:24Z","content_type":"text/html","content_length":"10564","record_id":"<urn:uuid:e32a6de3-bf50-4c8c-8b18-506a644aa2f2>","cc-path":"CC-MAIN-2024-46/segments/1730477027779.22/warc/CC-MAIN-20241103145859-20241103175859-00553.warc.gz"} |
mmorph-1.0.0: Monad morphisms
Several people have asked me to split off MFunctor so that they could use it in their own libraries without a pipes dependency, so today I'm releasing the mmorph library, which is the new official home of MFunctor. The upcoming pipes release will depend on mmorph to provide MFunctor. The mmorph library specifically targets people who make heavy use of monad transformers. Many common problems that plague users of monad transformers have very elegant solutions inspired by category theory, and mmorph provides a standard home for these kinds of operations.
This post won't include examples because mmorph already features an extended tutorial at the bottom of its sole module, Control.Monad.Morph. The tutorial highlights several common use cases where the mmorph library comes in handy and I highly recommend you read it if you want to understand the concrete problems that the mmorph library solves.
Moving on up
mmorph takes several common Haskell idioms you know and love and lifts them to work on monads instead. The simplest example is a monad morphism:
{-# LANGUAGE RankNTypes, TypeOperators #-}
type m :-> n = forall a . m a -> n a
A monad morphism is a function between monads and all monad morphisms must satisfy the following two "monad morphism laws":
morph :: m :-> n
morph $ do x <- m   =   do x <- morph m
           f x               morph (f x)
morph (return x) = return x
Using the above type synonym for monad morphisms, we can simplify the type signature of hoist from the MFunctor type class:
class MFunctor t where
    hoist :: (Monad m) => (m :-> n) -> (t m :-> t n)
MFunctor is the higher-order analog of the Functor class (thus the name), and the resemblance becomes even more striking if you change the type variable names:
class MFunctor f where
    hoist :: (a :-> b) -> (f a :-> f b)
This parallel lets us reuse our intuition for Functors. An MFunctor wraps a monad in the same way that a Functor wraps a type, and MFunctors provide an fmap-like function, hoist, which modifies the wrapped monad.
If you've ever used monad transformers then you've probably already used MFunctors. Just check out the instance list for MFunctor and you'll see many familiar names:
instance MFunctor IdentityT where ...
instance MFunctor MaybeT where ...
instance MFunctor (StateT s) where ...
In fact, transformers has been carrying around type-specialized versions of hoist for years:
• mapIdentityT is hoist for IdentityT
• mapStateT is hoist for StateT
• mapMaybeT is hoist for MaybeT
mmorph provides a standard interface to these functions so that you can program generically over any monad transformer that implements MFunctor.
I heard you like monads
We can define a higher-order functor that wraps monads, so why not also define a higher-order monad that wraps ... monads?
It turns out this actually works!
class MMonad t where
    embed :: (Monad n) => (m :-> t n) -> (t m :-> t n)
Again, judicious renaming of type variables reveals the parallel to the Monad class:
class MMonad m where
    embed :: (Monad b) => (a :-> m b) -> (m a :-> m b)
embed is just the higher-order cousin of (=<<)! Many monad transformers have sensible definitions for embed:
instance MMonad IdentityT where ...
instance MMonad MaybeT where ...
instance (Monoid w) => MMonad (WriterT w) where ...
But wait! Where is return? Well, what type would we expect the higher-order return to have?
??? :: m :-> t m
Well, if we expand out the definition of (:->), we get:
??? :: m a -> t m a
Why, that is just the signature for lift!
But it's not enough for it to just have the right shape of type. If it's really part of a higher-order monad, then lift must obey the monad laws:
-- m >>= return = m
embed lift m = m
-- return x >>= f = f x
embed f (lift x) = f x
-- (m >>= f) >>= g = m >>= (\x -> f x >>= g)
embed g (embed f m) = embed (\x -> embed g (f x)) m
... and all the MMonad instances do satisfy these laws!
Functor design pattern
The mmorph library represents a concrete example of the functor design pattern in two separate ways.
First, the monad morphisms themselves define functors that transform Kleisli categories, and the monad morphism laws are actually functor laws:
morph :: forall a . m a -> n a
(morph .) (f >=> g) = (morph .) f >=> (morph .) g
(morph .) return = return
... so you can think of a monad morphism as just a principled way to transform one monad into another monad for compatibility purposes.
Second, the hoist function from MFunctor defines a functor that transforms monad morphisms:
hoist (f . g) = hoist f . hoist g
hoist id = id
... so you can think of hoist as just a principled way to transform one monad morphism into another monad morphism for compatibility purposes.
The mmorph library is a concrete example of how functors naturally arise as compatibility layers whenever we encounter impedance mismatch between our tools. In this case, we have an impedance mismatch between our monad transformers and we use functors to bridge between them so they can seamlessly work together.
7 comments:
1. I'm confused by the second law. How is `morph (return x)` a law? Do you mean `morph (return x) = return x`?
1. Yeah, that was a mistake. I did mean `morph (return x) = return x`. I fixed it.
2. Also `(morph .) return` should be `(morph .) return = return`.
Thanks for a nice library!
1. You're welcome! Thanks for catching that. Now that's fixed, too.
3. This is neat, thanks for the hard work!
4. Have you read "Monads, Zippers and Views: Virtualizing the Monad Stack" by Schrijvers and Oliveira? It seems like your "hoist" is exactly their "tmap." I wonder if you think one could use mmorph
to implement the same monad operations they describe in their paper?
5. I skimmed it once a long time ago back when I was learning monad transformers for the first time (it was way over my head back then). Having now read it again I see that `mmorph` basically
corresponds to sections 4 and 5 of their paper. The idea is that the combination of `hoist` and `lift` acts like their structural mask (and they use `view`/`tmap`, but it's still the same basic idea).
For example, if you have a global transformer stack of type:
total :: t1 (t2 (t3 (t4 m))) r
... but you want to ignore layers t1 and t3. Then what you do is write a computation that assumes that only layers t2 and t4 are present:
sub :: t2 (t4 m) r
... then when you are done you can merge it into the global transformer stack using `hoist` and `lift`:
(lift . hoist lift) sub :: t1 (t2 (t3 (t4 m))) r
This lets you write `sub` in such a way that it ignores layers it does not need, and then the client can worry about getting it to unify with other monad layers through judicious use of `lift`
and `hoist`. Those `lift`s and `hoist` are basically the "structural mask" they proposed.
However, there are several things in that paper that `mmorph` cannot do. For example, you cannot do bidirectional views, sophisticated liftings (of the kind described in section 6), or nominal
Thanks for bringing that paper to my attention. Now I see that there is prior art in the literature for `mmorph`. Neat! :) | {"url":"https://www.haskellforall.com/2013/03/mmorph-100-monad-morphisms.html","timestamp":"2024-11-03T05:59:11Z","content_type":"application/xhtml+xml","content_length":"96325","record_id":"<urn:uuid:3984bb3f-ce5b-41b0-8971-a0a68f746aea>","cc-path":"CC-MAIN-2024-46/segments/1730477027772.24/warc/CC-MAIN-20241103053019-20241103083019-00261.warc.gz"} |
How to Calculate Face Value of a Bond in Excel (3 Easy Ways) - ExcelDemy
In this article we will demonstrate 3 different formulas to calculate the face value of a bond in Excel.
Bond and Face Value
A bond is a fixed-income instrument used by companies, governments, and other entities to borrow money from the capital market. The owners of bonds are called debtholders or creditors, while the borrower is the bond issuer. The bond price is the present discounted value of the future cash stream generated by the bond: the sum of the present values of all expected coupon payments plus the present value of the par value at maturity.

The principal amount of the bond is called the face value of the bond; it reflects how much the bond is worth when it matures and is also known as the par value.
3 Handy Approaches to Calculate the Face Value of a Bond in Excel
To demonstrate our methods, we’ll use a dataset with 2 columns: “Bond Particulars” and “Value”. For the first 2 methods, we will find the face value of a Coupon Bond, and for the last method we will
find the face value of a Zero Coupon Bond. The following values are provided in order to perform the calculations:
• Coupon Bond Price.
• Number of Years Until Maturity (t).
• The number of compounding periods per year (n).
• Yield to Maturity – YTM (r).
• Annual Coupon Rate (for Zero Coupon Bond, this value will be zero (0%)).
• Coupon (c).
Using these values, we will find the face value of a Bond in Excel.
Method 1 – Using the Coupon to Calculate the Face Value of a Bond in Excel
For the first method, we will multiply the coupon (c) by the number of compounding periods per year (n) and divide the product by the annual coupon rate to calculate the face value of the bond. Our formula looks like this:

Face Value = (c × n) / Annual Coupon Rate
• Enter the following formula in cell C11:
• Press ENTER to return the face value of the bond.
The face value of a bond with a coupon price of $25 and a coupon rate of 5% compounded semi-annually is $1000.
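The same arithmetic can be checked outside Excel. The function name below is our own, but the formula (coupon per period times compounding periods per year, divided by the annual coupon rate) is exactly the one described above:

```python
def face_value_from_coupon(coupon, n, annual_rate):
    # face value = (coupon per period * compounding periods per year) / annual coupon rate
    return coupon * n / annual_rate

# $25 semi-annual coupon (n = 2) at a 5% annual coupon rate
print(face_value_from_coupon(25, 2, 0.05))  # approximately 1000
```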
Method 2 – Finding the Face Value from the Bond Price
Now we will derive our formula from the coupon bond price formula, then use that formula to calculate the face value. This time, the coupon (c) is not directly provided in the example, so we solve the coupon bond price equation for the face value instead.
• Enter the following formula in cell C10:
The face value of a bond with a price of $1081.76, t = 10 years, n = 2, r = 4%, and an annual coupon rate = 5% is $1000.
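One way to sanity-check this method is to solve the standard coupon bond pricing identity for the face value. The function below is our own sketch (the article's Excel cell formula is not reproduced here); it assumes the usual pricing relation, price = coupon × annuity factor + face value × discount factor, with the per-period coupon equal to face value × annual coupon rate / n:

```python
def face_value_from_price(price, t, n, ytm, annual_coupon_rate):
    i = ytm / n                 # periodic yield
    periods = n * t
    disc = (1 + i) ** -periods  # discount factor to maturity
    annuity = (1 - disc) / i    # present-value annuity factor
    # price = FV * (annual_coupon_rate / n) * annuity + FV * disc  ->  solve for FV
    return price / ((annual_coupon_rate / n) * annuity + disc)

fv = face_value_from_price(1081.76, t=10, n=2, ytm=0.04, annual_coupon_rate=0.05)
print(round(fv, 2))  # approximately 1000
```

With the worked example's inputs this recovers the $1000 face value stated above.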
Method 3 – Calculating the Face Value for a Zero Coupon Bond in Excel
Finally, we will find the face value of a zero coupon bond in Excel. The annual coupon rate is 0% for a zero coupon bond, so the face value is simply the bond price compounded forward to maturity:

Face Value = Price × (1 + r/n)^(n × t)
• Enter this formula in cell C10:
With a Zero Coupon Bond price of $1345.94, t = 10 years, n = 2, r = 4%, the face value of the bond will be $2000.
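The zero coupon case drops the annuity term entirely, leaving only the compounding step. A minimal sketch (the function name is ours; the inputs match the worked example):

```python
def zero_coupon_face_value(price, t, n, ytm):
    # with no coupons, price = FV / (1 + ytm/n)**(n*t), so FV = price compounded forward
    return price * (1 + ytm / n) ** (n * t)

fv = zero_coupon_face_value(1345.94, t=10, n=2, ytm=0.04)
print(round(fv, 2))  # approximately 2000
```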
Statement I: Two particles moving in the same direction do not lose all their energy in a completely inelastic collision - Turito
Statement I Two particles moving in the same direction do not lose all their energy in a completely inelastic collision.
Statement II Principle of conservation of momentum holds true for all kinds of collisions.
A. Statement I is true, statement II is true, statement II is the correct explanation of statement I
B. Statement I is true Statement II is true, Statement II is not correct explanation of statement I.
C. Statement I is false, Statement II is true
D. Statement I is true, Statement II is false
The correct answer is A: both statements are true, and Statement II correctly explains Statement I. Because the total momentum is conserved and is non-zero for two particles moving in the same direction, the combined mass must still be moving after a completely inelastic collision, so the final kinetic energy cannot be zero.
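Statement I is easy to verify numerically for a completely inelastic collision: with both particles moving in the same direction the total momentum is non-zero, so the stuck-together pair must keep moving and retains kinetic energy. A quick sketch with hypothetical masses and velocities:

```python
m1, v1 = 2.0, 3.0  # hypothetical masses (kg) and velocities (m/s); any same-sign velocities work
m2, v2 = 1.0, 1.0
v_final = (m1 * v1 + m2 * v2) / (m1 + m2)        # momentum conservation fixes v_final
ke_initial = 0.5 * m1 * v1**2 + 0.5 * m2 * v2**2
ke_final = 0.5 * (m1 + m2) * v_final**2          # > 0 because v_final > 0
print(v_final, ke_initial, ke_final)
```

Some kinetic energy is lost (ke_final < ke_initial), but never all of it.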
Analytical Chemistry - Online Tutor, Practice Problems & Exam Prep
We've taken a look at monoprotic acids, diprotic acids, and now we're taking a look at polyprotic acids. This means the inclusion of additional equivalence points and even more calculations involved
with titrations. As always, we look for the equivalent volume of our titrant being used. Here, our titrant is a strong base. We're going to say here, molarity of my acid times volume of my acid
equals molarity of my base times the equivalence volume of my strong base. Here, we'll plug in the values. When we divide both sides by the 0.100 molar of the strong base, we'll see that the first
equivalence volume here will be 50 mL. Since we're dealing with phosphoric acid, that means we have 3 equivalence points involved. So we're going to have 3 equivalence volumes needed. To get to the second equivalence point, we need an additional 50 mL. So that'd be 100 mL. To get to the final and third equivalence point, we would need another 50 mL. That would be 150 mL. Those are the volumes
needed for each equivalence point. Before any strong base has been added, we essentially just have a weak acid. Therefore, we can set up an ICE chart in order to determine our equilibrium expression.
Since we're dealing with removing the first acidic hydrogen from phosphoric acid, that means we're dealing with Ka1. So the acid donates an H+ to water to produce H2PO4^−, which is dihydrogen
phosphate, plus a hydronium ion. Remember, we'd have the initial concentration of our acid, but no initial concentrations of our products because they haven't yet formed. We're losing reactants in
order to make products. Bringing down everything helps us to come up with our expression. Remember, we could do our 5% approximation method to help us determine which Ka to use; we have Ka1, Ka2, and Ka3. Ka1 is 7.5 × 10^−3, Ka2 is 6.2 × 10^−8, and Ka3 is 4.8 × 10^−13. If we use the initial concentration of 0.100 molar and divide it by the Ka that we're using in this example, which is 7.5 × 10^−3, we
would not get a value greater than 500. Therefore, we have to keep the minus x in our expression and perform the quadratic formula. Using the quadratic formula, we'd find out that x, which equals our
H^+ concentration, was going to be equal to approximately 0.0239 molar. By taking the negative log of that concentration, we'd find out that our pH is approximately 1.62. At this point, we haven't
even commenced titrations yet. We haven't added any strong base. This is our initial pH based on just the concentration of phosphoric acid. Once we start adding our strong base to this solution, we
add NaOH and OH^− to our weak acid solution.
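The numbers in this walkthrough are easy to reproduce. A hedged Python sketch (it assumes 50 mL of 0.100 M H3PO4 is being titrated, which the transcript implies but never states):

```python
import math

# First equivalence volume: M_acid * V_acid = M_base * V_eq
M_acid, V_acid = 0.100, 50.0   # mol/L, mL (acid volume is an assumption)
M_base = 0.100                 # mol/L NaOH
V_eq1 = M_acid * V_acid / M_base          # 50 mL
V_eq2, V_eq3 = 2 * V_eq1, 3 * V_eq1       # 100 mL and 150 mL

# Initial pH from Ka1 = x^2 / (C - x); C/Ka1 < 500, so keep the -x
# and solve x^2 + Ka1*x - Ka1*C = 0 with the quadratic formula.
Ka1, C = 7.5e-3, 0.100
x = (-Ka1 + math.sqrt(Ka1**2 + 4 * Ka1 * C)) / 2   # [H+] ≈ 0.0239 M
pH = -math.log10(x)                                 # ≈ 1.62
```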
Does Damage to MH17 indicate or exclude a Particular Buk Launch Location?
If the speed of the missile is reduced to something like 480 m/s, the fragments will hit that area on top of the roof at the angle we see in the photos.
The original post was information/claim that AA showed evidence that the SAM did not originate in Snizhne - all of the basic speed and fragmentation parameters in the simulation are from the AA
presentation. The simulation shows that the AA claim starts breaking down when the vector calculations based on these speeds are resolved. Resorting to modifying the claimed speeds from AA would seem
to me to further impeach their evidence.
Based on the next 3 pictures:
It shows the cockpit roof.
This photo shows the same two scratches in the little red box above the cockpit window
A larger photo here
http://www.diena.lt/sites/default/files/Vilniausdiena/Vartotoju zona/rutaa/47rs141120b032.jpg
The reconstruction looks like this
Conclusion: missile exploded very close to the cockpit window in front of the captain seat.
The centre of the warhead was at the level of the red line/arrow in the first picture, maybe a few centimeters higher or lower. The damage in photos 2 & 3 is mostly not direct penetration; the fragments hit the plane at a very small angle. Also, the damage is on the starboard side of the plane while the missile exploded on the port side, which means the fragments had to cross the middle of the plane, which is slightly higher than the location of the damage shown in pictures 2 and 3.
Even if AA's point of detonation is correct, there is still a big unproven variable in it: the angle (or heading) of the missile.
At this moment the trajectory of the missile is assumed to be 20 degrees, which for me is still a very big question mark. This trajectory doesn't match the basic principle of the missile's radar or the functionality of the proximity fuse (although I have to admit I need more information on that to be sure).
Almaz-Antey published information about the plane-missile meeting course for both Zaroschenskoe and Snizhne.
The velocities involved are high, so the error margins are very small. A 1/10 second difference in timing means a gap of about 125 meters between the plane and the missile (252 m/s for the plane plus 1000 m/s for the missile); even 0.05 seconds can already mean the difference between 'success' and 'failure'.
That is nothing for the operation of the fuse: detonation happens at about 8000 m/s, much faster than the other speeds involved.
The radar of the missile has a variable angle between 30-60 degrees.
That is wrong information: the radar covers angles from -90 to +90 degrees on both axes.
The missile will adjust its trajectory depending on the result of this radar, but at the same time always makes sure the new trajectory is never outside the margins of the radar angle of 30-60 degrees.
The missile doesn't adjust its trajectory that way. The trajectory pattern for the missile is the result of calculating the following things:
1. Mode of fire
2. Location of Meeting point
3. Type of target
4. Speed and course of target
5. Relative position of TELAR (TEL) to target course
When the missile is readied for launch, all that info is downloaded to the missile's computer, so the missile knows what angle the radar seeker must choose to lock the target in the final stage. All other targets are ignored as false.
At the launch of the missile the width of the angle in km is big, but the closer it gets to its target, the narrower it gets. A launch from Zaroschenskoe will show a more curved (in the 2D horizontal plane) trajectory than a launch from Snizhne.
The missile has no target for the seeker during almost the whole flight except the last km. But the radar seeker has a very wide radiation pattern, so it doesn't track a narrow angle; it measures signal power at a few antenna positions during scanning. The course correction is the result of choosing the position of best signal power.
Look on picture
My personal opinion is that the curvature of the trajectory of a missile launched from Zaroschenskoe should be bigger; how much, I can't tell (yet).
Do you have a theory for how proportional navigation works for Z but doesn't work for S?
What I am trying to explain: there are several variables that need to be proven (launch locations and angles), and we cannot prove variable 1 by using one of the other variables. We have to go step by step.
1st: the best estimate of the point (both vertical and horizontal) of the moment of exploding.
2nd: the precise functionality of the radar and proximity fuse
Based on those answers, we could draw a conclusion about the possible launch site.
We cannot pinpoint the launch location of the missile simply because we cannot measure penetration angles on the MH17 debris. We can only debunk lies about possible penetration angles based on math, and include scenarios like a launch from S as possible (or Z as impossible) because they fit well in the math model simulated by Mick West.
The moment the missile explodes cannot be calculated with radar or fuse precision. The seeker doesn't know the distance to the target; it measures the angle to the power center. The fuse knows range only in the terminal stage, when the seeker ends its work and gives the fuse the command to be ready to explode. But the radiation pattern of the fuse is wide enough to cover a 180-degree area around the missile (each antenna).
P.S. Another reason why a missile from Z is impossible is shown in the picture above: the seeker should choose the position of best target signal power, and from the side that is the fuselage near the centerline, not the nose part.
Here is a photo (the same one as above), a bit zoomed out.
It shows the cockpit roof.
This photo shows the same two scratches in the little red box above the cockpit window
A larger photo here
http://www.diena.lt/sites/default/files/Vilniausdiena/Vartotoju zona/rutaa/47rs141120b032.jpg
The reconstruction looks like this
Conclusion: missile exploded very close to the cockpit window in front of the captain seat.
These marks seem to fit very well with AA's theory of a missile from Zaroshens'kye
These marks seem to fit very well with AA's theory of a missile from Zaroshens'kye
Sure! You just need to add 45 degrees to their lie and then the AA theory is right!
But if you missed Mick West's simulation, then you should know: AA gives wrong lancet (and pellet-flight) angles.
Wait! AA lies about the lancet too.
Sure! You just need to add 45 degrees to their lie and then the AA theory is right!
How did you calculate 45 degrees?
But if you missed Mick West's simulation, then you should know: AA gives wrong lancet (and pellet-flight) angles.
Wait! AA lies about the lancet too.
It has not been established that the missile manufacturer is wrong WRT the lancet.
If you think it has then perhaps you can explain it in plain English.
I've always been in agreement with a quote attributed to Albert Einstein.
"If you can't explain it simply, you don't understand it well enough." - Albert Einstein
thank you
That is nothing for the operation of the fuse: detonation happens at about 8000 m/s, much faster than the other speeds involved.
It is more relevant for the radar of the missile; that is what I meant.
That is wrong information: the radar covers angles from -90 to +90 degrees on both axes.
Could you please provide the source of that claim? In previous messages in this thread, proof was shown for the 30-60 degrees (depending on velocity).
Do you have a theory for how proportional navigation works for Z but doesn't work for S?
Proportional navigation is much more relevant for Z than for S, based on the launch sites compared to the position of the plane.
the seeker should choose the position of best target signal power, and from the side that is the fuselage near the centerline, not the nose part.
That is my assumption as well, but I would like to find proof for it.
How did you calculate 45 degrees?
It has not been established that the missile manufacturer is wrong WRT the lancet.
If you think it has then perhaps you can explain it in plain English.
I've always been in agreement with a quote attributed to Albert Einstein.
thank you
At least 45 degrees, or even more. Not enough to debunk their version with a missile from Z?
I already wrote about the AA lie in
this doc
It is more relevant for the radar of the missile; that is what I meant.
When the fuse receives a signal from the target's surface there is no time for the radar; it is time to explode.
Could you please provide the source of that claim? In previous messages in this thread, proof was shown for the 30-60 degrees (depending on velocity).
Do you understand the difference between the radar seeker and the radar fuse?
Proportional navigation is much more relevant for Z than for S, based on the launch sites compared to the position of the plane.
Proportional navigation is good for both scenarios. But for Z the meeting point is wrong. That is a little AA lie intended to better describe the damage to the plane.
That is my assumption as well, but I would like to find proof for it.
Sorry, proof for your assumption from me?
At least 45 degrees, or even more. Not enough to debunk their version with a missile from Z?
I already wrote about the AA lie in
this doc
Agree with that. The lancet is much exaggerated; a barrel is simply the wrong shape for the desired effect. It may have a concentration of heavier frags on the centreline, but that's it.
I already wrote about the AA lie in
this doc
You linked to an anonymous document, which might or might not be reliable. I'm not sure how that is supposed to convince anyone.
What sentences
You linked to an anonymous document, which might or might not be reliable. I'm not sure how that is supposed to convince anyone.
are hard for you to understand? Maybe you have a better description of how a barrel-like warhead such as the 9N314 can produce a lancet when all other warheads cannot? Or do you want to dispute the vector addition (in my doc, for dummies, or in Mick's simulation, a very cute and understandable visualisation of the math/geometry)? Then shoot, please.
These marks seem to fit very well with AA's theory of a missile from Zaroshens'kye
If we are to believe this detonation angle claimed by A-A, with a forward-sweeping frag beam, how can that explain the entry holes with regard to the cabin curvature at this point? Frags don't boomerang.
View attachment 13525
If we are to believe this detonation angle claimed by A-A, with a forward-sweeping frag beam, how can that explain the entry holes with regard to the cabin curvature at this point? Frags don't boomerang.
View attachment 13525
I don't see where anything has to necessarily boomerang. The best way would be to look at a 3D reconstruction. There is a video in the slides of how AA sees the detonation. Did you see that? I won't
link to it right now as I'm unable to post a screenshot right at this moment.
These marks seem to fit very well with AA's theory of a missile from Zaroshens'kye
The simulation shows that the red bar and first 2 green arrows (on the left) are impossible given the missile and aircraft speed, and the data provided by AA.
External Quote:
The main peculiarity of rocket 9M38M is the special area which is called a "lancet", or the killing lancet, which is perpendicular [inaudible], basically the area of concentration of more than 40% of all the splinter mass, and one half of the whole kinetic energy.
That means even though the area of severest damage is within the lancet, more than 50% of the shrapnel mass will be outside of the lancet, some of it behind the lancet and some of it in front of the lancet.
Looking at these two photos:
it doesn't appear the lancet moved along those two red arrows in the second photo. That is not where the area of main destruction is, nor is it the area where most of the shrapnel went.
The simulation shows that the red bar and first 2 green arrows (on the left) are impossible given the missile and aircraft speed, and the data provided by AA.
What specific assumptions are you using WRT the missile?
Added in edit: What do you know about the way this missile detonates, and what is your source?
The trajectory of the fragments is the best proof there is. Photos do not lie (if not Photoshopped). We know very little about the BUK missile. What is being told about missile characteristics could or could not be the truth. The stakes for both Ukraine and Russia are very high.
Find those photos, place them in this thread and ask others to help locate the debris.
These marks seem to fit very well with AA's theory of a missile from Zaroshens'kye
William, in your presentation above, the direction in which the fragments move is at least 45 degrees, if not more, off from how these marks actually appear on the hull:
So apart from the fact that you still seem to use the unadjusted AA fragment pattern (hint: please use Mick's dynamic tool to make your case), it is not clear AT ALL from your pictures why these
marks seem to fit very well with AA's theory of a missile from Zaroshens'kye.
In fact, they seem to contradict it outright.
The angles are about perpendicular to the plane's course.
It's not very helpful to depict what the damages would look like on the much smaller 767:
The cabin width of the 767 is 4.72 meters as opposed to 6.20 meters for the 777, so the way the sheets are arranged and the angles involved may differ.
It's not very helpful to depict what the damages would look like on the much smaller 767:
The cabin width of the 767 is 4.72 meters as opposed to 6.20 meters for the 777, so the way the sheets are arranged and the angles involved may differ.
Do you think this damage would look much different on a (wider) 777 hull ?
It's not very helpful to depict what the damages would look like on the much smaller 767:
The cabin width of the 767 is 4.72 meters as opposed to 6.20 meters for the 777, so the way the sheets are arranged and the angles involved may differ.
The nose sections of the B767 and B777 are the same, but you're right: it is better to compare the angle on B777 skin.
And your picture
gives a very understandable angle which doesn't fit the AA version of a launch from Z (even with their lie about the dispersion angles of the warhead's splinters), but fits well with a missile from S.
William, in your presentation above, the direction in which the fragments move is at least 45 degrees, if not more, off from how these marks actually appear on the hull:
Rob you posted a photo of a different plane. <scratches head>
Rob you posted a photo of a different plane. <scratches head>
Do you think this damage would look much different on a (wider) 777 hull ?
gives a very understandable angle which doesn't fit the AA version of a launch from Z (even with their lie about the dispersion angles of the warhead's splinters), but fits well with a missile from S.
ad_2015, can you explain what angles you are imagining?
1. What angle do you think the marks are?
2. What angle do you think a Z launch predicts?
3. What angle does an S launch predict?
Do you think this damage would look much different on a (wider) 777 hull ?
There was no damage in the photo you posted, just a funny red mark on a different plane that didn't seem to be relevant.
William, you have a more than 45 degree angle to deal with.
There was no damage in the photo you posted, just a funny red mark that didn't seem to be relevant.
Just a funny red mark?
That did not seem relevant?
William, you have a more than 45 degree angle to deal with.
Just a funny red mark?
That did not seem relevant?
Yes. You posted a different plane, put your own red mark on it (I presume) at your own invented angle, and tried to say it meant something. <scratches head>
When it comes to facts, I guess some people are hard to convince.
There is a certain difference in the shrapnel distribution.
When the missile was launched from Zaroschenskoe, some fragments will have a trajectory running from the nose of the aircraft towards the tail. It is impossible for a Snizhne launch to produce this pattern.
When the missile was launched from Snizhne, some of the fragments will go from the left side of the aircraft towards the right, almost perpendicular to the route of the plane. This is impossible for a Zaroschenskoe launch.
Go find that difference!
I found three separate photos which I stitched together into a single one to get a better view of the part.
Many shrapnel holes are visible. I assume the part is upside down and only one side of the steel bar is damaged.
The high resolution photos of this can be found in this album.
Question: where in the aircraft is this piece located? And what is front, back etc?
The Flickr album has various photos of this part. It might help to ID the location.
The parts were photographed at the site where the cockpit crashed.
This is an easy one. It is part of the nose cone, left-hand side.
Some damage from what could be fragments on the top, closest to the captain's cockpit window.
Just found an old Soviet picture of the SA-1 Guild missile with dispersion angles and the dynamic field of strike elements.
Warhead (the shape serves to concentrate most splinters in the backward direction, but the dispersion angle is still around 30-40 degrees because of the barrel-like shape)
And dynamic field of strike elements with VECTOR ADDITION
On top vector addition in graphic form
Vp (missile speed) + Vц (target speed) = V отн (relative speed)
Below, the dispersion angle of the strike elements varied from 84 to 118 degrees.
Important: even a barrel-like warhead constricted at the ends gives only an 11-degree rotation backward (84-118 degrees, with the center near 101 degrees)! But these are the dispersion angles for a static missile.
On the graph below, the 84-118 degree angles (the dispersion angles of the splinters for a static missile) are rotated forward by about 30 degrees, magically for AA (the result of adding the relative missile/target speed to the splinter speed and direction); in the picture, Delta дин (dynamic angle) varied from 52 to 87 degrees (compare with the static 84-118 degrees).
Important: no lancet; the center of the dynamic strike field is near 68 degrees.
This is another proof debunking the AA lie about the lancet and the angles of the dynamic strike field (and a confirmation of Mick's simulation).
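The vector addition described above is easy to reproduce. A hedged sketch: the fragment speed and closing speed below are assumed round numbers (roughly 2000 m/s fragments and a 1250 m/s missile-plus-target closing speed), not Almaz-Antey's figures, so the outputs only illustrate the forward rotation, not the exact 52-87 degree band:

```python
import math

def dynamic_angle(static_angle_deg, v_frag, v_closing):
    """Angle of a splinter's velocity in the target frame, measured from
    the missile's forward axis.  The closing speed adds to the forward
    component; the lateral component is unchanged."""
    theta = math.radians(static_angle_deg)
    fwd = v_frag * math.cos(theta) + v_closing
    lat = v_frag * math.sin(theta)
    return math.degrees(math.atan2(lat, fwd))

# Static dispersion angles from the SA-1 diagram; every angle rotates
# forward (toward the nose) once the closing speed is added.
for static in (84, 101, 118):
    print(static, round(dynamic_angle(static, v_frag=2000.0, v_closing=1250.0), 1))
```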
If the missile came from Snizhne, can somebody explain this damage? It shows the left-hand side of the cockpit, roughly just next to the captain's seat. The black round instrument is part of the angle of attack sensor.
There are various holes which run from the nose of the aircraft towards the back. The other photo shows the reconstruction. There is no damage to be seen from fragments entering this part of the side.
Original photo here
The same part is seen on the right, low
That silver circle is where the angle of attack sensor used to be attached to.
If the missile came from Snizhne, can somebody explain this damage? It shows the left-hand side of the cockpit, roughly just next to the captain's seat. The black round instrument is part of the angle of attack sensor.
There are various holes which run from the nose of the aircraft towards the back. The other photo shows the reconstruction. There is no damage to be seen from fragments entering this part of the side.
Original photo here
The same part is seen on the right, low
That silver circle is where the angle of attack sensor used to be attached to.
Fragments entered the curved cabin section before this place (look at the missing skin under the first and second windows on the left side; that is a possible entrance) and from above to below.
And dynamic field of strike elements with VECTOR ADDITION
On top vector addition in graphic form
Vp (missile speed) + Vц (target speed) = V отн (relative speed)
More correctly:
Vp (missile velocity) - Vц (target velocity) = V отн (relative velocity)
"Speed" is a scalar (just a magnitude), velocity is a vector (magnitude and direction). You have to subtract the velocity of the plane to get the relative velocity, but as they are in roughly
opposite directions, the result is like adding speeds.
Important: no lancet; the center of the dynamic strike field is near 68 degrees.
This is another proof debunking the AA lie about the lancet and the angles of the dynamic strike field (and a confirmation of Mick's simulation).
Is this diagram actually the exact same type of warhead?
Is this diagram actually the exact same type of warhead?
The SA-1 is a different warhead than used on the SA-11 BUK surface to air missile. Sa-11 uses the 9N314 warhead.
I guess the purpose of ad_2015 was to show that the barrel shaped warhead of the SA-1 has the same fragmentation beam as the SA-11 warhead which is barrel shaped as well.
However we do not know if the SA-1 is a single primer or multiple primer warhead.
My preference for this thread is to debunk one or both launch locations based on the observed damage of the aircraft.
My preference for this thread is to debunk one or both launch locations based on the observed damage of the aircraft.
Are you suggesting the angle of dispersion of fragments is irrelevant? Surely if the angle is known, then analysis of the damage can be more accurate. AA's entire thesis was based on these angles.
The sequence of research should be as transparent as possible:
1. find locations with confirmed shrapnel damage
2. match the debris with location of the fuselage
3. draw lines from the various parts to get to the source of explosion
4. try to find a match with the fragmentation beam pattern
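Step 3, drawing lines from the damaged parts back to a common source, has a standard least-squares form: find the point minimizing the summed squared perpendicular distance to all the fragment tracks. A sketch with synthetic 2-D data (all coordinates are made up):

```python
import numpy as np

def nearest_point_to_lines(points, directions):
    """Least-squares intersection: line i passes through points[i] with
    unit direction directions[i]; solve sum_i (I - d d^T) x = sum_i (I - d d^T) a."""
    A = np.zeros((2, 2))
    b = np.zeros(2)
    for a, d in zip(points, directions):
        d = d / np.linalg.norm(d)
        P = np.eye(2) - np.outer(d, d)   # projector onto the line's normal
        A += P
        b += P @ a
    return np.linalg.solve(A, b)

# Synthetic check: three fragment tracks radiating from a known source.
src = np.array([1.0, 2.0])
dirs = [np.array([1.0, 0.3]), np.array([-0.4, 1.0]), np.array([0.8, -0.6])]
hits = [src + 3.0 * d for d in dirs]      # "damage locations" on the hull
est = nearest_point_to_lines(hits, dirs)  # recovers src
```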
Quantum Optics
Quantum optics is a field of research that deals with the application of quantum mechanics to phenomena involving light and its interactions with matter. One of the main goals is to understand the
quantum nature of information and to learn how to formulate, manipulate, and process it using physical systems that operate on quantum mechanical principles.
Determine the presence of a single quantum system
Coincidence correlation with picosecond timing can be used to determine if one is actually observing a single quantum system in the form of a single photon emitter. Here one employs the knowledge
that such a system can only emit one photon at a time. This is because in typical quantum systems such as single molecules or defect centers in diamond there is a characteristic average lifetime of
the excited state that must pass before the system can be excited again. If one finds that two detectors observing the source "click" simultaneously (with statistical significance), then obviously the
source cannot be a single photon emitter.
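The logic of that test can be sketched with a toy Monte Carlo (a 50/50 beamsplitter, perfect detectors, and a pulsed source are all simplifying assumptions): a source that never emits more than one photon per cycle can never produce a same-cycle coincidence, while a Poissonian source does.

```python
import math
import random

random.seed(1)

def count_coincidences(photon_number_sampler, n_pulses=50_000):
    """Route each pulse's photons through a 50/50 beamsplitter and count
    pulses in which BOTH detectors register at least one photon."""
    both = 0
    for _ in range(n_pulses):
        n = photon_number_sampler()
        hits_a = sum(1 for _ in range(n) if random.random() < 0.5)
        if hits_a > 0 and n - hits_a > 0:
            both += 1
    return both

def poisson(mean=1.0):
    """Inverse-CDF Poisson sampler; fine for small means."""
    u, k = random.random(), 0
    p = c = math.exp(-mean)
    while u > c:
        k += 1
        p *= mean / k
        c += p
    return k

single = count_coincidences(lambda: 1)   # ideal single-photon emitter
classical = count_coincidences(poisson)  # attenuated laser for contrast
```

Under these assumptions `single` is exactly zero while `classical` comes out at roughly 15% of the pulses, which is the essence of the anticorrelation test described above.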
In case of experiments dealing with photon entanglement (example for coincidence counting of entangled photons) one effectively tries to prove or disprove correlations between measurement outcomes
using some kind of correlator. In the case of experiments with photons one may, for instance, employ polarizers to filter out quantum states of interest and then use photon detectors to determine
whether or not they occurred correspondingly at both parts of the entangled pair. Now, given that photon detectors are not 100% efficient (and actually neither is the creation of entangled pairs and
their transmission) one typically must repeat the experiment many times in order to arrive at a statistically reliable answer. Since there can also be unwanted photons from background radiation or
detector artifacts it is a smart common practice to perform the coincidence correlation with picosecond timing. The correlations can then be determined for narrow time windows where the knowledge of
the time the photons travel can be used to eliminate background.
Quantum mechanics guarantee secure communication
Quantum communication is a field of applied quantum physics closely related to quantum information processing and quantum teleportation. Its most interesting application is protecting information
channels against eavesdropping by means of quantum cryptography. The most well known and developed application of quantum cryptography is quantum key distribution (QKD). QKD describes the use of
quantum mechanical effects to perform cryptographic tasks or to break cryptographic systems. The principle of operation of a QKD system is quite straightforward: two parties (Alice and Bob) use
single photons that are randomly polarized to states representing ones and zeroes to transmit a series of random number sequences that are used as keys in cryptographic communications. Both stations
are linked together with a quantum channel and a classical channel. Alice generates a random stream of qubits that are sent over the quantum channel. Upon reception of the stream Bob and Alice —
using the classical channel — perform classical operations to check if an eavesdropper has tried to extract information on the qubit stream. The presence of an eavesdropper is revealed by the imperfect correlation between the two lists of bits obtained after the transmission of qubits between the emitter and the receiver. One important component of virtually all proper encryption schemes is true randomness, which can elegantly be generated by means of quantum optics.
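A toy version of the sifting-and-checking procedure makes the idea concrete. The sketch below is an intercept-resend BB84 model, not any production protocol; all names are mine, and real systems add error correction and privacy amplification:

```python
import random

random.seed(7)

def bb84_qber(n_rounds, eavesdrop=False):
    """Run toy BB84 rounds and return the quantum bit error rate (QBER)
    of the sifted key.  Bits and bases are coin flips; an eavesdropper
    measures in a random basis and resends, disturbing mismatched rounds."""
    key_alice, key_bob = [], []
    for _ in range(n_rounds):
        bit, basis_a = random.getrandbits(1), random.getrandbits(1)
        bit_sent, basis_sent = bit, basis_a
        if eavesdrop:
            basis_e = random.getrandbits(1)
            if basis_e != basis_a:                # wrong-basis measurement
                bit_sent = random.getrandbits(1)  # randomizes the bit
            basis_sent = basis_e                  # Eve resends in her basis
        basis_b = random.getrandbits(1)
        measured = bit_sent if basis_b == basis_sent else random.getrandbits(1)
        if basis_b == basis_a:                    # sifting over the classical channel
            key_alice.append(bit)
            key_bob.append(measured)
    errors = sum(a != b for a, b in zip(key_alice, key_bob))
    return errors / len(key_alice)

qber_clean = bb84_qber(20_000)                   # no Eve: sifted key agrees
qber_tapped = bb84_qber(20_000, eavesdrop=True)  # ≈ 25% errors reveal Eve
```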
A common quantum mechanical state of separated systems
Quantum entanglement is a physical phenomenon that occurs when quantum systems such as photons, electrons, atoms or molecules interact and then become separated, so that they subsequently share a
common quantum mechanical state. Even when a pair of such entangled particles are far apart, they remain "connected" in the sense that a measurement on one of them instantly reveals the corresponding
aspect of the quantum state of its twin partner. These "aspects" of quantum state can be position, momentum, spin, polarization, etc. While it can only be described as a superposition with indefinite
value for the entangled pair, the measurement on one of the partners produces a definite value that instantly also determines the corresponding value of the other. The surprising "remote connection"
between the partners and their instantaneous action "faster than light" that would seem to contradict relativity has been the reason for intense research efforts, both theoretically and
experimentally. In the corresponding experiments, entanglement is proven by correlation of the measurement outcomes on the separated twins.
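For photon pairs this correlation test is usually quantified with the CHSH value. The sketch below samples outcomes from the singlet-state statistics directly (outcomes agree with probability sin²((a - b)/2), so E(a, b) = -cos(a - b)); it reproduces the quantum prediction for these four fixed settings, but it is only a sampling shortcut, not a local hidden-variable model:

```python
import math
import random

random.seed(3)

def correlation(a, b, n=200_000):
    """Monte-Carlo estimate of E(a, b) for a singlet pair: the two
    outcomes agree with probability sin^2((a - b) / 2)."""
    p_same = math.sin((a - b) / 2) ** 2
    total = 0
    for _ in range(n):
        out_a = 1 if random.random() < 0.5 else -1
        out_b = out_a if random.random() < p_same else -out_a
        total += out_a * out_b
    return total / n

# Standard CHSH settings; any local-hidden-variable model obeys |S| <= 2,
# while the quantum prediction for these angles is |S| = 2*sqrt(2).
a1, a2 = 0.0, math.pi / 2
b1, b2 = math.pi / 4, 3 * math.pi / 4
S = correlation(a1, b1) - correlation(a1, b2) + correlation(a2, b1) + correlation(a2, b2)
```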
A qubit transmitted from one location to another
Quantum teleportation is closely related to entanglement of quantum systems. It may be defined as a process by which a qubit (the basic unit of quantum information) can be transmitted from one
location to another, without the qubit actually being transmitted through space. It is useful for quantum information processing and quantum communication. As with entanglement, it is applicable to
simple and more complex quantum systems such as atoms and molecules. Recent research demonstrated quantum teleportation between atomic systems over long distances.
Quantum Information Processing focuses on information processing and computing based on quantum mechanics. While current digital computers encode data in binary digits (bits), quantum computers
aren't limited to two states. They encode information as quantum bits, or qubits, which can exist in superposition. Qubits can be implemented with atoms, ions, photons or electrons and suitable
control devices that work together to act as computer memory and a processor. Because a quantum computer can contain these multiple states simultaneously, they provide an inherent parallelism. This
will enable them to solve certain problems much faster than any classical computer using the best currently known algorithms, like integer factorization or the simulation of quantum many-body
systems. Right now the quantum computer is still in its infancy. First steps on that road are the simplest building blocks such as quantum logic gates and memory based on genuine quantum effects such
as superposition and entanglement.
Finding and Taming Outliers with Cook‘s Distance in Python – TheLinuxCode
Outlier observations can secretly sway and skew the models we build, leading us down the wrong paths. But outliers don‘t have to ruin our plans if we can detect them early. In this guide, you‘ll
learn how to identify influential outliers using Cook‘s distance before they undermine your regressions.
The Outsized Impact of Outliers
Before we dive into the detection details, it‘s worth stepping back and asking – why do we really care about outliers? How much can a few crazy observations truly mess up our models?
Unfortunately, the impact is often shocking…and our models are quite fragile.
Just look at what happens when we try to fit a simple linear regression with an outlier lurking:
One rogue observation drags our predictor line way off course! Without outlier identification, we might end up with useless models and foolish predictions.
And it‘s not just linear regressions that can be led astray. Outliers plague all kinds of machine learning models, from neural networks to decision trees. Place too much trust in those models, and
your analyses will likely crash and burn if they ever encounter outliers in the real world.
The root issue is that many modeling techniques try to minimize the total error across ALL training points. So even a single wacky outlier can manipulate this process and pull models off track.
To build smart, reliable models, we need to be proactive about finding outliers first! This is where Cook‘s distance comes to the rescue…
Cook‘s Distance to the Outlier Rescue
Instead of summing error across all observations, Cook‘s distance measures the influence of EACH individual point. It calculates how much the model parameters shift when excluding that single observation.
Larger change = higher influence = potential outlier.
Here is the mathematical definition of Cook‘s distance (Di) for observation i:
Di = (di^2 / (p * MSE)) * (hi / (1 - hi)^2)
• di = Residual error for observation i
• p = Number of model coefficients
• MSE = Mean squared error across dataset
• hi = Leverage value for observation i
Don‘t sweat the mathematical details! The core idea is that Cook‘s Distance captures the model instability coming from each individual training point.
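To make the formula concrete, here is a hand-rolled NumPy sketch that computes Cook‘s distance for every observation of a simple OLS fit, using the standard definition Di = (di^2 / (p * MSE)) * hi / (1 - hi)^2. This is for intuition only (statsmodels computes it for you), and the function name and design-matrix layout are my own choices:

```python
import numpy as np

def cooks_distances(X, y):
    """Cook's distance for each row of design matrix X (intercept included).

    Implements Di = (di^2 / (p * MSE)) * hi / (1 - hi)^2 directly.
    """
    n, p = X.shape
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)   # OLS coefficients
    resid = y - X @ beta                           # residuals di
    H = X @ np.linalg.inv(X.T @ X) @ X.T           # hat matrix
    h = np.diag(H)                                 # leverages hi
    mse = resid @ resid / (n - p)                  # MSE with n - p dof
    return (resid**2 / (p * mse)) * h / (1 - h)**2

x = np.arange(1, 11, dtype=float)
X = np.column_stack([np.ones_like(x), x])          # intercept + slope columns
y = np.array([1, 2, 3, 4, 6, 5, 7, 8, 9, 100], dtype=float)
d = cooks_distances(X, y)                          # the last point dominates
```

With the outlying y-value at x = 10, the final distance dwarfs the rest, which is exactly the signal we are looking for.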
Let‘s move on to actually using Cook‘s Distance for identifying pesky outliers in Python code.
Finding Influencers with Cook‘s Distance in Python
Detecting observations with high Cook‘s Distance is straightforward in Python with statsmodels.
We just need to:
1. Fit a regression model
2. Extract the influence object via get_influence()
3. Calculate cooks_distance
Let‘s walk through an example:
import pandas as pd
from statsmodels.formula.api import ols
# Simulated data
data = pd.DataFrame({
    "x": [1, 2, 3, 4, 5, 6, 7, 8, 9, 10],
    "y": [1, 2, 3, 4, 6, 5, 7, 8, 9, 100]
})
# Linear model
model = ols("y ~ x", data).fit()
# Cook‘s Distance
distance = model.get_influence().cooks_distance[0]  # first tuple element: the distances
print(distance)
This prints out:
[0.00304905 0.00296863 0.00288361 0.00279512 0.00356833 0.00270315
0.00261547 0.00252325 0.00242541 0.96434853]
See the final value of 0.96? This confirms our suspicion of an outlier y-value at x=10 dragging our model fit off course!
By calculating Cook‘s Distance, we can reliably detect these harmful outliers before they sabotage our models.
Determining Cutoffs
So when does Cook‘s Distance indicate a clear outlier vs normal variation?
While rules of thumb exist, there is no universal cutoff. But here are two sensible thresholds:
1. 3 times the Average Distance
Points above triple the mean distance warrant further investigation as outliers.
2. 4 / (n - p - 1)
Where n = number of observations, p = number of predictors.
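Both rules of thumb are one-liners. The sketch below (function name is mine) returns the indices exceeding either cutoff:

```python
def flag_outliers(distances, n_predictors):
    """Return indices whose Cook's distance exceeds either rule of thumb:
    three times the mean distance, or 4 / (n - p - 1)."""
    n = len(distances)
    cut_mean = 3 * sum(distances) / n        # 3x the average distance
    cut_formula = 4 / (n - n_predictors - 1)  # 4 / (n - p - 1)
    return [i for i, d in enumerate(distances)
            if d > cut_mean or d > cut_formula]
```

Feeding in the distances from a fit like the one above flags only the final observation under both thresholds.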
You can also visualize the distribution to identify any high distance values clearly separated from the pack.
In our case, the Cook‘s Distance of 0.96 is alarming both ways! Now we can handle this outlier with care…
Dealing with Detected Outliers
Once we‘ve spotted influential observations using Cook‘s Distance, several options exist:
• Remove – Drop outliers from the dataset
• Impute – Replace with estimated values
• Robust modeling – Use methods resilient to outliers
• Nothing! Document and move forward
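As one concrete instance of "robust modeling": a Theil–Sen estimator fits the slope as the median of all pairwise slopes, so a single wild point cannot drag it. A minimal plain-Python sketch with illustrative names (this is an alternative technique, not part of the Cook‘s distance workflow above):

```python
import statistics

def theil_sen_slope(xs, ys):
    """Median of all pairwise slopes -- resistant to isolated outliers."""
    slopes = [(ys[j] - ys[i]) / (xs[j] - xs[i])
              for i in range(len(xs))
              for j in range(i + 1, len(xs))
              if xs[j] != xs[i]]
    return statistics.median(slopes)

xs = list(range(1, 11))
ys = [1, 2, 3, 4, 6, 5, 7, 8, 9, 100]
slope = theil_sen_slope(xs, ys)   # stays near 1 despite the y = 100 outlier
```

Even with the extreme point included, the estimated slope stays at the trend of the clean points, where ordinary least squares would be pulled far off.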
Let‘s revisit our example and rebuild the model without the outlier:
# Remove outlier row
data_updated = data.drop(index=9)
# Re-fit model
model_updated = ols("y ~ x", data_updated).fit()
With the outlier removed, our updated model fit improves substantially!
But before rushing to remove, consider if outliers represent real variability you want to retain and analyze. Proceed carefully!
Last Word on Influential Outliers
Outliers and influential observations can make or break the models we rely on for key decisions and products. By measuring the model instability caused by each individual point, Cook‘s Distance lets
us pinpoint problematic outliers early.
In this guide, we covered the motivation, math, and implementation behind using Cook‘s Distance for outlier detection in Python. By finding and handling these high leverage observations carefully,
you can avoid underwater reefs and steer your analyses smoothly into the open waters!
Now go clean up those outliers causing chaos in your datasets! Your models will thank you.
The JUSTIFIED Clause
The JUSTIFIED clause specifies non-standard positioning of data within a receiving data item.
General Format
Syntax Rules
1. The JUSTIFIED clause can be specified only at the elementary item level.
2. JUST is an abbreviation for JUSTIFIED.
3. The JUSTIFIED clause cannot be specified for any data item described as numeric or for which editing is specified.
4. The JUSTIFIED clause cannot be specified for an index data item (see the section The USAGE Clause).
General Rules
1. When a receiving data item is described with the JUSTIFIED clause and the sending data item is larger than the receiving data item, the leftmost characters are truncated. When the receiving data
item is described with the JUSTIFIED clause and it is larger than the sending data item, the data is aligned at the rightmost character position in the data item with space fill for the leftmost
character positions.
The contents of the sending data item are not taken into account, that is, trailing spaces within the sending data item are not suppressed.
For example, if a data item PIC X(4) whose value is "A   " (that is, A followed by three spaces) is moved into a data item PIC X(6) JUSTIFIED, the result is "  A   " (two leading spaces, then A, then the three trailing spaces). If the same data item is moved to one with PIC X(3) JUSTIFIED, the result is "   " (three spaces); that is, the leftmost character is truncated.
2. When a UTF-8 receiving data item is described with the JUSTIFIED clause and the sending data item is smaller than the receiving item, each unused position is filled with default UTF-8 spaces (UX'20'). For all other USAGE cases, the unused positions are filled with alphanumeric spaces.
3. When the JUSTIFIED clause is omitted, the standard rules for aligning data within an elementary item apply. (See the topic Standard Alignment Rules in the chapter Concepts of the COBOL Language.)
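The truncation and fill rules above can be illustrated with a small simulation. This is Python rather than COBOL, purely to demonstrate the semantics of a MOVE into a JUSTIFIED alphanumeric item; the function name is mine:

```python
def justified_move(sending, receiving_size):
    """Simulate MOVE into a PIC X(n) item with the JUSTIFIED clause:
    right-align, truncate on the left, space-fill on the left."""
    if len(sending) > receiving_size:
        return sending[-receiving_size:]   # leftmost characters truncated
    return sending.rjust(receiving_size)   # space fill in leading positions

# The worked example from General Rule 1:
print(repr(justified_move("A   ", 6)))   # '  A   '
print(repr(justified_move("A   ", 3)))   # '   '
```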
High-Dimensional Change-point Detection
HDCD contains efficient implementations of several multiple change-point detection algorithms, including Efficient Sparsity Adaptive Change-point estimator (ESAC) and Informative sparse projection
for estimating change-points (Inspect).
You can install the development version of HDCD from GitHub with:
This is a basic example which shows you how to run ESAC:
HMP Bodysites Case Study
Case Study of MixMC sPLS-DA with HMP Bodysite data (Repeated Measures)
The mixMC framework is one that is specifically built for microbial datasets and will be used here on the Human Microbiome Project (HMP) 16S dataset. A sPLS-DA methodology will be employed in order
to predict the bodysite a given sample was drawn from based on the OTU data (Operational Taxonomic Unit). The model will select the optimal set of OTUs to perform this prediction. This case study
focuses on the exploration and analysis of a repeated measurement design – meaning a multilevel framework will be employed.
For background information on the mixMC, multilevel or sPLS-DA methods, refer to the MixMC Method Page, Multilevel Page or sPLS-DA Method Page.
R script
The R script used for all the analysis in this case study is available here.
To begin
Load the latest version of mixOmics. Note that the seed is set such that all plots can be reproduced. This should not be included in proper use of these functions.
library(mixOmics) # import the mixOmics library
set.seed(5249) # for reproducibility, remove for normal use
The data
The data being used includes only the most diverse bodysites yielded from the HMP studies. It features a repeated measures design which will be accounted for in the following analysis. It is assumed
that the data are offset and pre-filtered, as described in mixMC pre-processing steps.
The mixOmics HMP dataset is accessed via diverse.16S and contains the following:
• diverse.16S$data.TSS (continuous matrix): 162 rows and 1674 columns. The prefiltered normalised data using Total Sum Scaling normalisation.
• diverse.16S$data.raw (continuous matrix): 162 rows and 1674 columns. The prefiltered raw count OTU data which include a 1 offset (i.e. no 0 values).
• diverse.16S$taxonomy (categorical matrix): 1674 rows and 6 columns. Contains the taxonomy (ie. Phylum, … Genus, Species) of each OTU.
• diverse.16S$indiv (categorical matrix): 162 rows and 5 columns. Contains all the sample meta data recorded.
• diverse.16S$bodysite (categorical vector): factor of length 162 indicating the bodysite with levels Antecubital_fossa, Stool and Subgingival_plaque.
• diverse.16S$sample (categorical vector): factor of length 162 indicating the unique individual ID of each sample.
The raw OTU data will be used as predictors (X dataframe) for the bodysite (Y vector). The subject corresponding to each sample is also extracted such that repeated measures can be accounted for. The
dimensions of the predictors is confirmed and the distribution of the response vector is observed (note that it is a balanced dataset).
data("diverse.16S") # extract the microbial data
X <- diverse.16S$data.raw # set the raw OTU data as the predictor dataframe
Y <- diverse.16S$bodysite # set the bodysite class as the response vector
sample <- diverse.16S$sample
dim(X) # check dimensions of predictors
## [1] 162 1674
summary(Y) # check distribution of response
## Antecubital_fossa Stool Subgingival_plaque
## 54 54 54
Initial Analysis
Preliminary Analysis with PCA
The first exploratory step involves using PCA (unsupervised analysis) to observe the general structure and clustering of the data to aid in later analysis. As these data are compositional by nature, a centered log ratio (CLR) transformation needs to be applied in order to reduce the likelihood of spurious results. This can be done by using the logratio parameter in the PCA function. The sample
object is also passed in to the multilevel parameter.
Here, a PCA with a sufficiently large number of components (ncomp = 10) is generated to choose the final reduced dimension of the model. Using the ‘elbow method’ in Figure 1, it seems that two
components will be more than sufficient for a PCA model.
Note: different log ratio transformations, normalisations and/or multilevel designs may yield differing results. Some exploration is recommended to gain an understanding of the impact of each of
these processes.
# undergo PCA with 10 components and account for repeated measures
diverse.pca = pca(X, ncomp = 10, logratio = 'CLR', multilevel = sample)
plot(diverse.pca) # plot explained variance
FIGURE 1: Bar plot of the proportion of explained variance by each principal component yielded from a PCA.
Below (in Figure 2), the samples can be seen projected onto the first two principal components. (a) shows the case where the repeated measures design was not accounted for, while (b) does control for this.
Without a multilevel approach, the total explained variation decreases from 43% to 34%. In Figure 2a, the separation of each bodysite is distinct, but not the strongest. This is vastly improved when
the multilevel framework is employed. The first component separates all three bodysites to a moderate degree, primarily discriminating between the stool and subgingival plaque. The second principal
component separates the antecubital fossa bodysite from the others.
# undergo PCA with 2 components
diverse.pca.nonRM = pca(X, ncomp = 2, logratio = 'CLR')
# undergo PCA with 2 components and account for repeated measures
diverse.pca.RM = pca(X, ncomp = 2, logratio = 'CLR', multilevel = sample)
plotIndiv(diverse.pca.nonRM, # plot samples projected onto PCs
ind.names = FALSE, # not showing sample names
group = Y, # color according to Y,
title = '(a) Diverse.16S PCA Comps 1&2 (nonRM)')
plotIndiv(diverse.pca.RM, # plot samples projected onto PCs
ind.names = FALSE, # not showing sample names
group = Y, # color according to Y
legend = TRUE,
title = '(b) Diverse.16S PCA Comps 1&2 (RM)')
FIGURE 2: Sample plots from PCA performed on the Diverse 16S OTU data. Samples are projected into the space spanned by the first two components. (a) depicts this when the repeated measures is not
accounted for. (b) does use a multilevel framework. (‘RM’ = repeated measures)
Initial sPLS-DA model
The mixMC framework uses the sPLS-DA multivariate analysis from mixOmics [3]. Hence, the next step involves generating a basic PLS-DA model such that it can be tuned and then evaluated. In many
cases, the maximum number of components needed is k-1 where k is the number of categories within the outcome vector (y) – which in this case is 3. Once again, the logratio parameter is used here to
ensure that the OTU data are transformed into a Euclidean space.
basic.diverse.plsda = plsda(X, Y, logratio = 'CLR',
ncomp = nlevels(Y),
multilevel = sample)
Tuning sPLS-DA
Selecting the number of components
The ncomp Parameter
To set a baseline from which to compare a tuned model, the performance of the basic model is assessed via the perf() function. Here, a 5-fold, 10-repeat design is utilised. To obtain a more reliable
estimation of the error rate, the number of repeats should be increased (between 50 to 100). Figure 3 shows the error rate as more components are added to the model (for all three distance metrics).
As this is a balanced dataset, the overall error rate and balanced error rate are the same (hence there seemingly being only one line on each set of axes in Figure 3).
The plot indicates a decrease in the classification error rate (i.e. an increase in classification performance) from one component to 2 components in the model. The performance does not increase
after 2 components, which suggests ncomp = 2 for a final PLS-DA model.
Note that for the sparse PLS-DA we may obtain a different optimal ncomp.
# assess the performance of the sPLS-DA model using repeated CV
basic.diverse.perf.plsda = perf(basic.diverse.plsda,
validation = 'Mfold',
folds = 5, nrepeat = 10,
progressBar = FALSE)
# extract the optimal component number
optimal.ncomp <- basic.diverse.perf.plsda$choice.ncomp["BER", "max.dist"]
plot(basic.diverse.perf.plsda, overlay = 'measure', sd=TRUE) # plot this tuning
FIGURE 3: Classification error rates for the basic sPLS-DA model on the Diverse OTU data. Includes the standard and balanced error rates across all three distance metrics.
Selecting the number of variables
The keepX Parameter
Using the tune.splsda() function, the optimal number of components can be confirmed as well as the number of features to use for each component can be determined. Once again, for real analysis a
larger number of repeats should be used compared to the 5-fold, 10-repeat structure used here. It can be seen in Figure 4 that adding a third component does not improve the performance of the model,
hence ncomp = 2 remains valid. The diamonds indicate the optimal keepX values for each of these components based on the balanced error rate.
grid.keepX = c(seq(5,150, 5))
diverse.tune.splsda = tune.splsda(X, Y,
ncomp = 3, # use optimal component number
logratio = 'CLR', # transform data to euclidean space
multilevel = sample,
test.keepX = grid.keepX,
validation = c('Mfold'),
folds = 5, nrepeat = 10, # use repeated CV
dist = 'max.dist', # maximum distance as metric
progressBar = FALSE)
# extract the optimal component number and feature count per component
optimal.ncomp = diverse.tune.splsda$choice.ncomp$ncomp
optimal.keepX = diverse.tune.splsda$choice.keepX[1:optimal.ncomp]
plot(diverse.tune.splsda) # plot this tuning
FIGURE 4: Tuning keepX for the sPLS-DA performed on the Diverse OTU data. Each coloured line represents the balanced error rate (y-axis) per component across all tested keepX values (x-axis) with the
standard deviation based on the repeated cross-validation folds.
Final Model
Following this tuning, the final sPLS-DA model can be constructed using these optimised values.
diverse.splsda = splsda(X, Y, logratio= "CLR", # form final sPLS-DA model
multilevel = sample,
ncomp = optimal.ncomp,
keepX = optimal.keepX)
Sample Plots
The sample plot found in Figure 5 depicts the projection of the samples onto the first two components of the sPLS-DA model. The subgingival plaque is adequately separated from the other two sites
along the first component. The antecubital fossa and stool sites are better separated by the second component, though the overlapping confidence ellipses show that this component is not the best at
discriminating between them.
Do not hesitate to add other components and look at the sample plot to visualise the potential benefit of adding a third component, as the current separation of bodysites could be improved.
plotIndiv(diverse.splsda, # plot samples projected onto the sPLS-DA components
          comp = c(1,2),
ind.names = FALSE,
ellipse = TRUE, # include confidence ellipses
legend = TRUE,
legend.title = "Bodysite",
title = 'Diverse OTUs, sPLS-DA Comps 1&2')
FIGURE 5: Sample plots from sPLS-DA performed on the Diverse OTU data. Samples are projected into the space spanned by the first two components.
Another way to visualise the similarity of samples is through the use of a clustered image map (CIM). Figure 6 shows some relationships between OTUs and certain bodysites. For example, the right-most
cluster of OTUs seems to be positively associated with the subgingival plaque site – while the vast majority of other OTUs have a negative association with this same site.
cim(diverse.splsda, # generate the clustered image map
    comp = c(1,2),
row.sideColors = color.mixo(Y), # colour rows based on bodysite
legend = list(legend = c(levels(Y))),
title = 'Clustered Image Map of Diverse Bodysite data')
FIGURE 6: Clustered Image Map of the Diverse OTU data after sPLS-DA modelling. Only the keepX-selected features for components 1 and 2 are shown, with the colour of each cell depicting the raw OTU value after a CLR transformation.
Variable Plots
Next, the relationship between the OTUs and the sPLS-DA components is examined. Note that cutoff = 0.5 such that any feature with a correlation vector length less than 0.5 is not shown. The three
clusters of variables within this plot correspond quite well to the three bodysite clusters in Figure 5. Interpreting Figure 7 in conjunction with Figure 5 provides key insights into what OTUs are
responsible for identifying each bodysite. For example, the cluster of 4 OTUs at the negative end of the first component (left side) in Figure 7 are likely to be key OTUs in defining the microbiome
in the subgingival area.
plotVar(diverse.splsda, # correlation circle plot of the selected OTUs
        comp = c(1,2),
cutoff = 0.5, rad.in = 0.5,
var.names = FALSE, pch = 19,
title = 'Diverse OTUs, Correlation Circle Plot Comps 1&2')
FIGURE 7: Correlation circle plot representing the OTUs selected by sPLS-DA performed on the Diverse OTU data. Only the OTUs selected by sPLS-DA are shown in components 1 and 2. A cutoff of 0.5 was used.
Evaluation of sPLS-DA
The mixOmics package also contains the ability to assess the classification performance of the sPLS-DA model that was constructed via the perf() function once again. The mean error rates per
component and the type of distance are output. It can be beneficial to increase the number of repeats for more accurate estimations. It is clear from the below output that adding the second component
drastically decreases the error rate.
set.seed(5249) # for reproducible results for this code, remove for your own code
# evaluate classification using repeated CV and maximum distance as metric
diverse.perf.splsda = perf(diverse.splsda, validation = 'Mfold',
folds = 5, nrepeat = 10,
progressBar = FALSE, dist = 'max.dist')
## $overall
## max.dist
## comp1 0.33333333
## comp2 0.01728395
## $BER
## max.dist
## comp1 0.33333333
## comp2 0.01728395
OTU Selection
The sPLS-DA selects the most discriminative OTUs that best characterize each body site. The below loading plots (Figures 9a&b) display the abundance of each OTU and in which body site they are the
most abundant for each sPLS-DA component. Viewing these bar plots in combination with Figures 5 and 7 aid in understanding the similarity between bodysites. For both components, the 20 highest
contributing features are depicted.
OTUs selected on the first component are all highly abundant in subgingival plaque samples based on the mean of each OTU per body site. This makes sense based on the interpretations of Figures 5 and
7. All OTUs selected on the second component are strongly associated with the antecubital fossa site.
plotLoadings(diverse.splsda, comp = 1,
method = 'mean', contrib = 'max',
size.name = 0.8, legend = FALSE,
ndisplay = 20,
title = "(a) Loadings of comp. 1")
plotLoadings(diverse.splsda, comp = 2,
method = 'mean', contrib = 'max',
size.name = 0.7,
ndisplay = 20,
title = "(b) Loadings of comp. 2")
FIGURE 9: The loading values of the top 20 (or 5 in the case of comp. 1) contributing OTUs to the first (a) and second (b) components of a sPLS-DA undergone on the Diverse OTU dataset. Each bar is
coloured based on which bodysite had the maximum, mean value of that OTU.
To take this a step further, the stability of each OTU on these components can be assessed via the output of the perf() function. The below values (between 0 and 1) indicate the proportion of models
(during the repeated cross validation) that used that given OTU as a contributor to the first sPLS-DA component. Those with high stabilities are likely to be the most important to defining a certain bodysite.
# determine which OTUs were selected
selected.OTU.comp1 = selectVar(diverse.splsda, comp = 1)$name
# display the stability values of these OTUs
diverse.perf.splsda$features$stable[[1]][selected.OTU.comp1]
## OTU_97.38174 OTU_97.39439 OTU_97.108 OTU_97.20 OTU_97.39456
## 1.00 0.86 0.92 0.86 0.52
More information on Plots
For a more in depth explanation of how to use and interpret the plots seen, refer to the following pages:
Birthday Paradox Variance
First, a request from David Johnson for proposals on locations for SODA 2012, both in and outside the US.
Here's an interesting approach to the birthday paradox using variances.
Suppose we have m people whose birthdays are spread uniformly over n days. We want to bound m such that the probability that at least two people share the same birthday is about one-half.
For 1 ≤ i < j ≤ m, let A[i,j] be a random variable taking value 1 if the ith and jth person share the same birthday and zero otherwise. Let A be the sum of the A[i,j]. At least two people have the
same birthday if A ≥ 1.
E(A[i,j]) = 1/n so by linearity of expectations, E(A) = m(m-1)/2n. By Markov's inequality, Prob(A ≥ 1) ≤ E(A) so if m(m-1)/2n ≤ 1/2 (approximately m ≤ n^1/2), the probability that two people have the
same birthday is less than 1/2.
How about the other direction? For that we need to compute the variance. Var(A[i,j]) = E(A[i,j]^2)-E^2(A[i,j]) = 1/n-1/n^2 = (n-1)/n^2.
A[i,j] and A[u,v] are independent random variables, obvious if {i,j}∩{u,v} = ∅ but still true even if they share an index: Prob(A[i,j]A[i,v] = 1) = Prob(the ith, jth and vth person all share the
same birthday) = 1/n^2 = Prob(A[i,j]=1)Prob(A[i,v]=1).
The variance of a sum of pairwise independent random variables is the sum of the variances so we have Var(A) = m(m-1)(n-1)/2n^2.
Since A is integral we have Prob(A < 1) = Prob(A = 0) ≤ Prob(|A-E(A)| ≥ E(A)) ≤ Var(A)/E^2(A) by Chebyshev's inequality. After simplifying we get Prob(A < 1) ≤ 2(n-1)/(m(m-1)), or approximately 2n/m^2. Setting that equal to 1/2 says that if m ≥ 2n^1/2 the probability that everyone has different birthdays is at most 1/2.
If m is the value that gives probability one-half that we have at least two people with the same birthday, we get n^1/2 ≤ m ≤ 2n^1/2, a factor of 2 difference. Not bad for a simple variance argument.
Plugging in n = 365 into the exact formulas we get 19.612 ≤ m ≤ 38.661, where the real answer is about m = 23.
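The bounds and the exact answer are easy to check numerically. A short Python sketch (function names are mine) computes the exact collision probability and solves both cutoffs, m(m-1) = n from Markov and m(m-1) = 4(n-1) from Chebyshev, as quadratics:

```python
import math

def p_collision(m, n=365):
    """Exact probability that at least two of m people share a birthday."""
    p_distinct = 1.0
    for i in range(m):
        p_distinct *= (n - i) / n   # person i avoids all earlier birthdays
    return 1.0 - p_distinct

def markov_chebyshev_bounds(n=365):
    """Positive roots of m(m-1) = n (Markov) and m(m-1) = 4(n-1) (Chebyshev)."""
    lo = (1 + math.sqrt(1 + 4 * n)) / 2
    hi = (1 + math.sqrt(1 + 16 * (n - 1))) / 2
    return lo, hi

lo, hi = markov_chebyshev_bounds()   # approximately (19.612, 38.661)
# p_collision first crosses one-half at m = 23, inside [lo, hi]
```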
Enjoy the Thanksgiving holiday. We'll be back on Monday.
14 comments:
1. A Canadian ;-)9:14 AM, November 25, 2009
Thanksgiving was approx. a month ago!
2. a hottie9:31 AM, November 25, 2009
well, the birthday problem aint really new neither is the approach so i aint sure wat this post aint about
3. Seems David Johnson is behind for about a year, next SODA in Austin is SODA 2010 ;-)
4. cool way of using basically the same inequality but with the unknown variable once in the numerator and once in the denominator
5. that should be obvious...
6. The chebyshev's inequality has greater and equal in the probability statement. Is it simply OK to remove the equality here?
7. Lance, you have some typos in the second part of the argument. In particular, you should be proving an upper bound on Prob(A = 0), not Prob(A >= 1) again.
8. last anon: all right, all right, but give us a break. it's thanks-giving. instead of typo-giving you should be giving something else.
9. hey waowao, this is supposed to be a mathematically and scientifically centered blog, having errors in math is really bad and worth pointing out whatever day of the week it is
10. a canadian8:08 AM, November 27, 2009
hey lance something you could explain a little bit. Why the need to advertise for wolfram and all the clunky wolfram alpha utilities.
I find it a little bit disappointing ... Seems like Wolfram and Co clearly won you over to his side.
11. Would this not be dedicated to "Thanksgiving holiday", then I would ask "Lance -- what is the message?"
Birthday paradox is no paradox -- it is just counting. We count with weights, forget what we count -- and here is a "paradox" ... I wonder how people (also my students) find this "strange".
Markov, Chebysachev = trivial counting. Chernoff = a bit mmore delicate counting. Nothing more.
Still, was nice to read.
12. Would this not be dedicated to "Thanksgiving holiday", then I would ask "Lance -- what is the message?"
Birthday paradox is no paradox -- it is just counting. We count with weights, forget what we count -- and here is a "paradox" ... I wonder how people (also my students) find this "strange".
Markov, Chebysachev = trivial counting. Chernoff = a bit mmore delicate counting. Nothing more.
Still, was nice to read :-)
13. I fixed the typos.
How to calculate the threshold value for numeric attributes in Quinlan's C4.5 algorithm?
I am trying to find how the C4.5 algorithm determines the threshold value for numeric attributes. I have researched and can not understand, in most places I've found this information:
The training samples are first sorted on the values of the attribute Y being considered. There are only a finite number of these values, so let us denote them in sorted order as {v1,v2, …,vm}. Any
threshold value lying between vi and vi+1 will have the same effect of dividing the cases into those whose value of the attribute Y lies in {v1, v2, …, vi} and those whose value is in {vi+1, vi+2, …,
vm}. There are thus only m-1 possible splits on Y, all of which should be examined systematically to obtain an optimal split.
It is usual to choose the midpoint of each interval: (vi +vi+1)/2 as the representative threshold. C4.5 chooses as the threshold a smaller value vi for every interval {vi, vi+1}, rather than the
midpoint itself.
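The selection procedure described above can be sketched in a few lines: sort the attribute, try every boundary between consecutive distinct values, score each split by information gain, and take the smaller value of the interval as the threshold (as C4.5 does) rather than the midpoint. The function names and the tiny example are my own; this is not Quinlan's actual code:

```python
import math
from collections import Counter

def entropy(labels):
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def best_threshold(values, labels):
    """Try every split between consecutive distinct values of a numeric
    attribute; return (threshold, information gain) for the best one."""
    pairs = sorted(zip(values, labels))
    xs = [v for v, _ in pairs]
    ys = [c for _, c in pairs]
    base = entropy(ys)
    best_t, best_gain = None, -1.0
    for i in range(1, len(xs)):
        if xs[i] == xs[i - 1]:
            continue                      # no split point between equal values
        t = xs[i - 1]                     # C4.5: the smaller value, not the midpoint
        left, right = ys[:i], ys[i:]
        gain = (base
                - len(left) / len(ys) * entropy(left)
                - len(right) / len(ys) * entropy(right))
        if gain > best_gain:
            best_t, best_gain = t, gain
    return best_t, best_gain
```

Running this on any sorted list of humidity values and their play/don't-play labels shows how a specific number emerges: it is always one of the observed values, chosen to maximize the gain.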
I am studying an example of Play/Don't Play and do not understand how you get the number 75 for the attribute humidity when the state is sunny, because the values of humidity for the sunny state are
Does anyone know?
Please use MATLAB to solve this problem, see attached picture for assignment
Question asked by Filo student
Please use MATLAB to solve this problem, see attached picture for assignment. Thanks for your help.

Assignment 1: Deformation of a rubber block

Suppose that you are given the task of predicting the horizontal and vertical displacement of a suspended rubber block (green) in response to an external force. The rubber block is fixed at the top and deflects at the bottom in response to the force (load) F. You should compute the response of the bottom point to a force of constant magnitude, exerted in directions designated by the angle θ, where θ may vary from 0 to 90 degrees.

Typically, rubber material does not behave like an ideal spring. Also, we consider here that the material is anisotropic, i.e. there are different potential contributions in the y and x directions. This, we assume, gives rise to the following potential energy for deflections (x, y):

V(x,y) = a|x|^1.3 + a|y|^1.7

where |x| denotes the absolute value and a are spring constants. You can see that the exponents in the x and y directions are less than 2, since the rubber offers less resistance at larger deflection than one would expect from an ideal, linear spring. Also, compared to the y direction, the resistance in the x direction is stiffer and is closer to an ideal spring (imagine a wire mesh embedded in the rubber material, as in a tire belt). For simplicity, we drop the units in the following. We assume a constant magnitude (length) of the load vector F = 10, and spring constants a = 8.

Assignment: Derive the external load term, which depends on θ, and add it to V(x,y). See the Chapra case. The displacements x and y should be solved in MATLAB by determining the values that yield a minimum total potential energy. Plot the displacement components x and y, as well as the length of the displacement vector (2-norm), into a single figure, using three different colors or line drawing modes, for a range of θ between 0 and 90 degrees in 1 degree steps.

Important: Add an explanation/discussion of the resulting plot in your own words. Suggested topics: What are the extrema of x and y? Is there a sweet spot where the displacement vector is short? What is the physical interpretation of these results with respect to the different exponents? If this discussion in your own words is missing, 10% (5 points) will be subtracted from the score.
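Because the total potential is separable in x and y, its minimum for 0 ≤ θ ≤ 90° has a closed form, which gives a quick cross-check before plotting. Below is an illustrative Python translation (the assignment itself asks for MATLAB); the load term -F(x cos θ + y sin θ) and all names are my reading of the problem, not a verified solution:

```python
import math

F, a = 10.0, 8.0   # load magnitude and spring constant from the problem

def displacement(theta_deg):
    """Stationary point of V_tot = a|x|**1.3 + a|y|**1.7 - F*(x*cos t + y*sin t).

    For 0 <= theta <= 90 both load components are non-negative, so x, y >= 0
    and setting dV_tot/dx = dV_tot/dy = 0 gives a closed form.
    """
    t = math.radians(theta_deg)
    fx, fy = F * math.cos(t), F * math.sin(t)
    x = (fx / (1.3 * a)) ** (1 / 0.3)   # from 1.3*a*x**0.3 = fx
    y = (fy / (1.7 * a)) ** (1 / 0.7)   # from 1.7*a*y**0.7 = fy
    return x, y, math.hypot(x, y)       # also return the 2-norm

# Sweep theta from 0 to 90 degrees in 1-degree steps, as the task requires
curve = [displacement(th) for th in range(91)]
```

In MATLAB the same sweep could instead minimize the total potential numerically (e.g. with fminsearch) and should reproduce this curve.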
Updated Jun 5, 2024
Topic All topics
Subject Physics
Class High School | {"url":"https://askfilo.com/user-question-answers-physics/please-use-matlab-to-solve-this-problem-see-attached-picture-3131363135363232","timestamp":"2024-11-11T20:02:06Z","content_type":"text/html","content_length":"71488","record_id":"<urn:uuid:685dd629-a753-42d7-8d70-773bae4ef0a4>","cc-path":"CC-MAIN-2024-46/segments/1730477028239.20/warc/CC-MAIN-20241111190758-20241111220758-00472.warc.gz"} |
Create a Quiz. The academic subject for which the text must be created - Mathematics. It should be for students studying at Year or Grade 1 ...
What to create: Quiz
Which subject: Mathematics
What age group: Year or Grade 1
What topic:
Question types: Open-ended
Number of questions: 5
Number of answers: 4
Correct answers: Exactly 1
Show correct answers:
Use images (descriptions):
Any other preferences:
Test your math skills with this quiz! Remember to read each question carefully before answering. Good luck!
1. What number comes after 5 in the counting sequence?
2. Count the apples in the picture below. How many apples are there in total?
3. Which shape has 3 sides?
4. What is the total when you add 2 and 3 together?
5. Which number comes before 7 in the counting sequence?
Answers:
1. 6
2. 6
3. Triangle
4. 5
5. 6
South Carolina Academic Standards for Mathematics: Geometry
Shodor > Interactivate > Standards > South Carolina Academic Standards for Mathematics: Geometry
South Carolina Academic Standards for Mathematics
Standard Category • Show All
Standard Category (...)
• Standard G-1: The student will understand and utilize the mathematical processes of problem solving, reasoning and proof, communication, connections, and representation.
• Standard G-2: The student will demonstrate through the mathematical processes an understanding of the properties of basic geometric figures and the relationships between and among them.
• Standard G-3: The student will demonstrate through the mathematical processes an understanding of the properties and special segments of triangles and the relationships between and among them.
• Standard G-4: The student will demonstrate through the mathematical processes an understanding of the properties of quadrilaterals and other polygons and the relationships between and among them.
• Standard G-5: The student will demonstrate through the mathematical processes an understanding of the properties of circles, the lines that intersect them, and the use of their special segments.
• Standard G-6: The student will demonstrate through the mathematical processes an understanding of transformations, coordinate geometry, and vectors.
• Standard G-7: The student will demonstrate through the mathematical processes an understanding of the surface area and volume of three-dimensional objects.
No Results Found
©1994-2024 Shodor
How can I get "All real numbers" with contextInequalities?
This is a nice try, but there is a problem with it. The
directive can only be used as an alias for items of the same type. I.e., a string alias can only be for another string, not a constant. It is an interesting idea, though, and I'll think about it.
If you want "All real numbers" to be equal to (-inf,inf), then you need to make "All real numbers" be a constant, not a string. That would be
Context()->constants->redefine("All real numbers",from=>"Interval",using=>"R");
The problem is that constant names are not allowed to contain spaces. So in order to do this, you need to change the allowed constant names first:
Context()->constants->{namePattern} = qr/.*/;
Context()->constants->redefine("All real numbers",from=>"Interval",using=>"R");
This first line allows names to be anything (any string of any characters). Note, however, that this is case-sensitive, and that students could use this just as they could (-inf,inf), so could make
formulas that include "All real numbers", as in
1 < x or all real numbers
Alternatively, you can make "All real numbers" into a string
Context()->strings->add("All real numbers"=>{});
(which will be case-insensitive), and use "All real numbers" when that is the correct answer. Students can enter a string without an error message (though it will be marked incorrect). When it is the
correct answer, however, you may want to add
to the
call in order to get the proper type checking on the student's answers.
Hope that helps. | {"url":"https://webwork.maa.org/moodle/mod/forum/discuss.php?d=85&parent=299","timestamp":"2024-11-13T17:18:14Z","content_type":"text/html","content_length":"68612","record_id":"<urn:uuid:292eb26a-2a98-4470-9dc8-3989795e791c>","cc-path":"CC-MAIN-2024-46/segments/1730477028387.69/warc/CC-MAIN-20241113171551-20241113201551-00257.warc.gz"} |
Numerical Analysis and Scientific Computing
The research deals with the analysis, development, and application of mathematical models for the integration of complex systems. The analysis is conducted using mathematical methods in several fields
such as linear algebra, approximation theory, partial differential equations, optimization and control. Solution methods are developed and applied to domains as diverse as (potential and viscous)
flow dynamics, (linear and nonlinear) structural analysis, mass transport, heat transfer and in general to multiscale and multiphysics applications. The methods have been integrated into complex
multidisciplinary systems.
Research topics
• CONTROL AND OPTIMIZATION The efficient solution of optimal control or shape optimization problems involving partial differential equations (PDEs) is a problem of interest in computational science
and engineering. The goal of an optimal control problem is the minimization/maximization of a given output of interest (expressed by suitable cost functionals) under some constraints, controlling
either suitable variables (such as sources, model coefficients or boundary values) or the shape of the domain itself. In the latter case, we deal with shape optimization or optimal shape design
• REDUCED ORDER MODELLING Model order reduction techniques provide an efficient, accurate and reliable way of solving (systems of) parametrized partial differential equations in the many-query or
real-time context thanks to offline-online computational splittings, such as (shape) optimization, flow control, characterization, parameter estimation, uncertainty quantification. Our research
is mostly based, but not limited to, on certified reduced basis methods and proper orthogonal decomposition for parametrized PDEs.
• FREE SURFACES Techniques to study the position of an interface as a part of the problem itself, when studying the dynamics of a boat, for example.
• FLUID-STRUCTURE INTERACTION Development of efficient algorithms and methods for the coupling between the fluid and structure dynamics finds applications in a large variety of fields dealing with
internal or external flows, also at the reduced order level (cardiovascular applications, naval engineering).
• PARALLEL and HIGH PERFORMANCE COMPUTING
• OPEN SOURCE SOFTWARE DEVELOPMENT Several open source software libraries are developed and maintained.
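The offline/online splitting behind reduced order modelling can be sketched with proper orthogonal decomposition (POD). The snapshot data below is synthetic and the scheme is a generic textbook illustration (NumPy assumed), not the group's certified reduced basis implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical snapshot matrix: each column is a full-order solution for one
# parameter value. The snapshots are fabricated to lie near a 3-dimensional
# subspace, mimicking a parametrized PDE with fast-decaying singular values.
n_dofs, n_snapshots = 500, 40
modes = rng.standard_normal((n_dofs, 3))
coeffs = rng.standard_normal((3, n_snapshots))
S = modes @ coeffs + 1e-6 * rng.standard_normal((n_dofs, n_snapshots))

# Offline stage (POD): truncated SVD of the snapshot matrix gives a reduced basis
# that retains a chosen fraction of the snapshot "energy" (squared singular values).
U, sigma, _ = np.linalg.svd(S, full_matrices=False)
energy = np.cumsum(sigma**2) / np.sum(sigma**2)
r = int(np.searchsorted(energy, 0.9999)) + 1
V = U[:, :r]                      # reduced basis of dimension r << n_dofs

# Online stage (sketch): a new solution is represented by r coefficients in span(V).
u_new = modes @ rng.standard_normal(3)
u_reduced = V.T @ u_new           # r numbers instead of n_dofs values
u_approx = V @ u_reduced
rel_err = np.linalg.norm(u_new - u_approx) / np.linalg.norm(u_new)
```

In a full reduced basis method the online stage would solve a small projected system rather than project a known solution, but the basis construction step is the same.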
Research Group
Visiting Professors
Main External Collaborators
Collaborating Institutes
• Politecnico di Milano, MOX, Modeling and Scientific Computing Center
• EPFL, Lausanne, Switzerland
• Massachusetts Institute of Technology, Cambridge, US
• Università di Pavia, Italy
• University of Houston, US
• University of Toronto, Canada
• Laboratoire Jacques Louis Lions, Paris VI, France
• Duke University, Durham, US
• Imperial College, London, UK
• Politecnico di Torino, Italy
• Virginia Tech, Blacksburg, Virginia, US
• Scuola Superiore S.Anna, Pisa, Italy
• University of Cambridge, UK
• University of Sevilla, Spain
• University of Santiago de Compostela, Spain
• RWTH Aachen, Germany
• University of Ghent, Belgium
ERC CoG 2015 AROMA-CFD grant 681447: Advanced Reduced Order Methods with Applications in Computational Fluid Dynamics (PI Prof. Gianluigi Rozza) | {"url":"https://www.math.sissa.it/content/numerical-analysis-and-scientific-computing","timestamp":"2024-11-12T04:11:49Z","content_type":"application/xhtml+xml","content_length":"40582","record_id":"<urn:uuid:5d0b7a98-beac-4da4-9136-86eaba9e41f5>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.50/warc/CC-MAIN-20241112014152-20241112044152-00162.warc.gz"} |
algebra 1 slope intercept form worksheet 1 answers
Algebra 1 Slope Intercept Form Worksheet Answer Key Pdf - Fill ...
Algebra 1 Worksheets | Linear Equations Worksheets
Free Answer Key Slope Intercept Form Worksheet Collection
Writing equations in slope intercept form worksheet: Fill out ...
Graphing Lines in Slope Intercept Form Worksheet
Slope-intercept Form Exercises Worksheet
Slope Intercept Form Worksheets with Answer Key
Algebra 1, 4.1: Writing Linear Equations in Slope-Intercept Form
50+ Slope-Intercept Form worksheets for 8th Grade on Quizizz ...
Algebra 1 Slope Intercept Form ≡ Fill Out Printable PDF Forms Online
Slope-Intercept Form: Writing Equations | Worksheet | Education.com
Point-Slope And Slope-Intercept Form Worksheet
Converting from Standard to Slope-Intercept Form (A)
Free Slope-Intercept Form Worksheets—with Answers — Mashup Math
50+ Slope-Intercept Form worksheets on Quizizz | Free & Printable
KutaSoftware: Algebra 1- Graphing Lines Slope Intercept Form Part 2
Slope and Slope Intercept Form Worksheet.doc - Slope Intercept ...
Edia | Free math homework in minutes
Solved Algebra 2 LA#31-9 Writing Equations of Lines (Day 1 ...
Slope Intercept Form Worksheets with Answer Key
Algebra 1 Point Slope Form Worksheet Answers - Fill Online ...
Algebra 1 Slope Intercept Form ≡ Fill Out Printable PDF Forms Online
Free Slope-Intercept Form Worksheets—with Answers — Mashup Math
Worksheet: Slope - Slope Intercept, Standard Form, Point-Slope ...
Algebra 1 Slope Intercept Form Worksheet 2020-2024 - Fill and Sign ...
Writing a Linear Equation from the Slope and y-intercept (A)
Standard & Slope-Intercept Forms Worksheets (printable, online ...
Slope-Intercept Form of a Line INB Pages | Mrs. E Teaches Math
Warm-Up #2 - 1: Write The Following Solutions in Interval Notation ...
Graphing Equations in Point Slope Form
Slope-intercept Form of Equation of a Line Worksheets
Linear Functions Unit Algebra 1 TEKS - Maneuvering the Middle
Slope Intercept Form Worksheets with Answer Key
Point-Slope Equation of a Line Worksheets | {"url":"https://worksheets.clipart-library.com/algebra-1-slope-intercept-form-worksheet-1-answers.html","timestamp":"2024-11-10T13:03:04Z","content_type":"text/html","content_length":"28910","record_id":"<urn:uuid:b81369f5-23bc-42fa-9093-33de07d9ef42>","cc-path":"CC-MAIN-2024-46/segments/1730477028186.38/warc/CC-MAIN-20241110103354-20241110133354-00546.warc.gz"} |
Neural networks are extremely popular today, thanks to major research advancements over the last 10 years. This research has culminated in deep learning algorithms and architectures. Big
technology giants such as Google, Facebook, and Microsoft are heavily investing in deep learning network research. Complex neural networks powered by deep learning are considered state of the art in
AI and machine learning. We see them being used in everyday life. For example, Google's image search is powered by deep learning. Google Translate is another application powered by deep learning
today. The field of computer vision has made several advancements thanks to deep learning.
The following diagram is a typical neural network, commonly called a multi-layer perceptron:
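Since the diagram itself is not reproduced in this excerpt, the following minimal sketch shows numerically what such a network computes: an input passes through a two-node hidden layer to a single output. The sigmoid activations and all weight values are illustrative assumptions, not taken from the book:

```python
import math

def sigmoid(z):
    """Logistic activation, squashing any real number into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

def mlp_forward(x, W1, b1, W2, b2):
    """Forward pass: input vector -> hidden layer (2 nodes) -> single output."""
    hidden = [sigmoid(sum(w * xi for w, xi in zip(row, x)) + b)
              for row, b in zip(W1, b1)]
    return sigmoid(sum(w * h for w, h in zip(W2, hidden)) + b2)

# Illustrative weights for a 2-input network with a 2-node hidden layer.
W1 = [[0.5, -0.6], [0.3, 0.8]]   # one row of weights per hidden node
b1 = [0.1, -0.2]
W2 = [1.2, -0.7]                  # hidden -> output weights
b2 = 0.05
y = mlp_forward([1.0, 0.5], W1, b1, W2, b2)   # a value in (0, 1)
```

Training would adjust these weights by backpropagation; here they are fixed just to make the layer-by-layer computation concrete.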
This network architecture has a single hidden layer with two nodes. The output layer... | {"url":"https://subscription.packtpub.com/book/data/9781788621878/4/ch04lvl1sec27/deep-neural-networks","timestamp":"2024-11-08T15:48:46Z","content_type":"text/html","content_length":"102674","record_id":"<urn:uuid:cee1e187-0455-4cff-a7b1-717dcc481c4b>","cc-path":"CC-MAIN-2024-46/segments/1730477028067.32/warc/CC-MAIN-20241108133114-20241108163114-00753.warc.gz"} |
Graduate Exam Abstract - Electrical and Computer Engineering
Gunjan Mahindre
Ph.D. Final
Sep 29, 2021, 8:00 am - 10:00 am
ECE Conference Room
Efficient Representation, Measurement, and Recovery of Spatial and Social Networks
Abstract: Massive penetration of networks such as social and communication networks and the growth of networked systems are leading to a colossal number of connections and data underneath. Techniques
are needed for analysis of network measurements to extract network features and patterns of interest. Network analytics can be used, for example, to predict how a network evolves for customer
interaction analysis to provide a better user experience, and to make effective business decisions based on client engagements. Ideally, one should have the complete set of measurements associated
with a network so that the analysis would lead to results that are applicable to real-world networks. However, computational, communication, and storage limitations do not allow that for large-scale
networks of interest. Thus, a network needs to be sampled for its connectivity or substructures to facilitate analysis. However, extensive sampling can be computationally expensive to carry out for
analysis or even unviable due to access restrictions, e.g., due to private or censored nodes. Also, currently practiced sampling techniques such as random walks are susceptible to the slow-mixing problem.
Sparse or locally targeted sampling techniques do not allow the recovery of the complete information of the original network while missing information may lead to altered network characteristics and
biased research results.
The limitations identified above give rise to the need for techniques to correctly predict missing connectivity data as well as to represent and store networks in coherent yet compressed ways that
preserve the original network characteristics for unbiased network experiments. As certain network features are derived from the complete network measurements, missing data or local samples affect
the accuracy of such estimates. Efficient prediction and graph representation techniques could facilitate network analytics from smaller sets of measurements with improved accuracy, low storage, ease
of access as well as manageable computational costs.
Our goal is to make the most use of the available network measurements and extract information about network characteristics, connectivity, and constraints to make informed predictions about the
missing measurements. Accordingly, this research is aimed at deriving and demonstrating novel techniques to efficiently measure and represent networks, and for accurate recovery of network topology
from partially measured network structures. The techniques need to be scalable in their computation, and graceful in their degradation with limited measurements.
We tackle the network analysis problem from two sides: a) efficiently representing complete graph data in compact formats and b) predicting missing network measurements using partially observed
distances. Specifically, we introduce the novel concepts of reconstruction set and link dimension of graphs for lossless compression and reconstruction. This helps to represent graphs with a small
set of path measurements rather than the complete adjacency or distance matrices. We also develop models to learn the network topology via path measurements and explore distance vector properties of
the network. This network information is then leveraged to make informed predictions about network connectivity. This helps to retain original network characteristics even while completing graphs
from partial data, in terms of links and distances, using deterministic methods such as low-rank matrix completion as well as non-deterministic prediction methods such as neural networks.
We present link dimension, a novel graph dimension based on a subset of nodes for defining distance vectors that completely capture the graph topology. Several definitions for dimensions of a graph
exist, such as metric dimension and edge dimension, to identify the least integer ‘n’ such that there exists a representation of the graph in n dimensions. Link dimension is however the only
dimension that captures the complete topology of the graph and allows for its exact reconstruction. We also propose a greedy algorithm to find such defining nodes in the network by identifying the
nodes that capture most of the information about nodes as well as the links. Several interesting properties of link dimension are also derived along with the bounds for link dimension for several
graph families.
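The distance-vector idea can be made concrete with a small sketch. The graph, anchor choices, and uniqueness check below are illustrative: they demonstrate resolving-set-style node identification from hop distances, not the paper's link-dimension reconstruction algorithm:

```python
from collections import deque

# A small undirected graph as an adjacency list (a hypothetical example).
graph = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2, 4], 4: [3, 5], 5: [4]}

def hop_distances(graph, source):
    """Breadth-first search: hop distance from `source` to every node."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in graph[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def distance_vectors(graph, anchors):
    """Distance vector of every node with respect to the anchor set."""
    per_anchor = [hop_distances(graph, a) for a in anchors]
    return {v: tuple(d[v] for d in per_anchor) for v in graph}

# Two anchors are not enough here: nodes 1 and 2 receive identical vectors.
vectors_two = distance_vectors(graph, [0, 5])
# A third anchor makes every node's vector unique (a resolving set).
vectors_three = distance_vectors(graph, [0, 1, 5])
```

Capturing the links as well as the nodes, as link dimension requires, generally needs more than node uniqueness; this sketch only shows the underlying distance-vector representation.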
Ability to recover or estimate the network topology from a small fraction of pairwise distances among nodes or even a few random distance measurements is essential when the network measurements are
expensive or infeasible. We use low-rank matrix completion to recover topology of spatial networks from a very small fraction of distance measurements. We present results for networks of different
shapes, with a range of 20% to 80% of observed measurements. This technique is especially useful in sensor networks which are constrained with respect to their storage and computational capabilities,
each node has access to only partial information about the network. It is helpful to know the connectivity, boundaries, and the overall topology of the sensor networks especially in applications such
as routing, segmentation, and anchor selection.
Not all networks are embedded in 2D or 3D physical spaces. Graphs such as friendship networks, product graphs, and software module connectivity are non-spatial. Such graphs can be measured in terms
of inter-node distances, i.e., connected nodes are said to be at one hop distance. We leverage triangle inequality as applied to distances on a graph to compute bounds on the missing distance entries
and perform bounded matrix completion on directed as well as undirected social networks. The proposed prediction techniques are evaluated for real-world social networks (such as Facebook, Email,
Collaboration networks, Twitter). The low-rank matrix completion based prediction technique is evaluated for 20% to 80% missing distances in a given network. Results for low-rank matrix completion
show that even at 40% of missing measurements, the network distances can be accurately predicted while preserving the original network characteristics.
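The low-rank completion step can be sketched generically as alternating between a truncated-SVD projection and re-imposing the observed entries. This is an illustrative textbook-style scheme on synthetic data (NumPy assumed), not necessarily the authors' exact algorithm:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic rank-2 matrix standing in for a distance-like matrix,
# with a mask marking the ~50% of entries that were "measured".
n, r = 60, 2
M = rng.random((n, r)) @ rng.random((r, n))
mask = rng.random((n, n)) < 0.5

def complete_low_rank(M_obs, mask, rank, n_iters=300):
    """Fill missing entries by alternating projections: project onto the set of
    rank-`rank` matrices (truncated SVD), then restore the observed entries."""
    X = np.where(mask, M_obs, 0.0)          # initialize unknowns at zero
    for _ in range(n_iters):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        X = (U[:, :rank] * s[:rank]) @ Vt[:rank]   # best rank-`rank` approximation
        X[mask] = M_obs[mask]                      # keep observed entries fixed
    return X

X_hat = complete_low_rank(M, mask, rank=r)
rel_err = np.linalg.norm(X_hat - M) / np.linalg.norm(M)
```

The bounded variant described above would additionally clip the reconstructed entries to the triangle-inequality bounds after each iteration.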
Many real-world networks are known to exhibit scale-free phenomenon, where the degree distribution follows the power law, at least asymptotically. In order to learn this complex relationship between
node distances, we turn to neural networks as a foundation of a new prediction model. However, neural networks need complete data to train on which is not available when only a fraction of distances
is measured in a network. We use the concepts of domain adaptation to employ a novel technique of using synthetic networks to pre-train the autoencoder. We make use of the scale free phenomenon
observed in real-world networks and generate artificial networks that also embody this power law in their degree distribution. Training on these preferentially attached synthetic networks helps in
learning about the scale-free networks, even if the training data was not derived from the real-world networks. This helps to make accurate predictions on real-world networks when only ultra-sparse
measurements are observed. This aids in two ways: a) we can generate ample amounts of training data and b) we can make sure that the training data is similar in characteristics with the real-world
network we want to eventually predict on. This helps to improve the prediction accuracy of our non-deterministic approach over the deterministic one when the fraction of observed measurements
decreases. The neural network-based model achieves efficient prediction performance with graceful degradation over ultra-sparsely sampled networks. While low-rank matrix completion predicts missing network distances to within 15% error when 20% of distances are missing, the neural network-based model keeps the estimation error within 20% even when 90% of entries are missing.
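The masked-training idea can be illustrated with a toy linear autoencoder whose loss is computed only on observed entries. The sizes, synthetic data, and training settings below are illustrative assumptions and do not reproduce the Hadamard autoencoder of the cited papers:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic training data: columns are distance-vector-like samples lying near
# a low-dimensional subspace; only ~30% of entries are "observed" (mask == 1).
n_features, n_samples, latent = 30, 200, 4
X = rng.standard_normal((n_features, latent)) @ rng.standard_normal((latent, n_samples))
mask = (rng.random((n_features, n_samples)) < 0.3).astype(float)

# Linear autoencoder: encoder W1 (latent x features), decoder W2 (features x latent).
W1 = 0.1 * rng.standard_normal((latent, n_features))
W2 = 0.1 * rng.standard_normal((n_features, latent))

lr = 1e-2
X_in = mask * X                          # the network only ever sees observed entries
m = np.sum(mask)
losses = []
for _ in range(500):
    Z = W1 @ X_in                        # encode
    X_hat = W2 @ Z                       # decode
    E = mask * (X_hat - X)               # reconstruction error on observed entries only
    losses.append(np.sum(E**2) / m)
    # Gradients of the masked squared loss with respect to W2 and W1.
    gW2 = 2.0 * E @ Z.T / m
    gW1 = 2.0 * W2.T @ E @ X_in.T / m
    W2 -= lr * gW2
    W1 -= lr * gW1
```

Restricting the loss to the mask is what lets the model train without ever seeing the missing entries; at prediction time the decoder output supplies estimates for them.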
In summary, the lossless graph representation can aid faster processing, compact storage, and novel solutions for networking algorithms. Conversely, the ability to predict missing connectivity
information from a relatively small set of distance measurements enables inference of hidden connectivity information and provides a foundation for developing novel network mining algorithms.
Adviser: Dr. Anura Jayasumana
Co-Adviser: n/a
Non-ECE Member: Michael Kirby, Mathematics
Member 3: Randy Paffenroth
Addional Members: Anthony Maciejewski
• G. Mahindre, R. Karkare, R. Paffenroth, & A. Jayasumana, "Inference in Social Networks from Ultra-Sparse Distance Measurements via Pretrained Hadamard Autoencoders," 2020 IEEE 45th Conference on
Local Computer Networks (LCN), Australia, 2020.
• G. S. Mahindre & A. P. Jayasumana, "Link Dimension for Complete and Unique Reconstruction of Graphs?", Journal of Discrete Applied Mathematics, submitted July 2020, revised June 2021 (in review).
• A. P. Jayasumana, R. Paffenroth, G. S. Mahindre, S. Ramasamy, & K. Gajamannage, "Network topology mapping from partial virtual coordinates and graph geodesics," IEEE/ACM Transactions on Networking,
• G. S. Mahindre, A. P. Jayasumana, K. Gajamannage & R. Paffenroth, "On Sampling and Recovery of Topology of Directed Social Networks: A Low-Rank Matrix Completion Based Approach," IEEE 44th
Conference on Local Computer Networks (LCN), Germany, 2019.
• G. S. Mahindre, & A. P. Jayasumana. "Efficient Representation, Measurement, and Recovery of Large-scale Networks." Ninth International Green and Sustainable Computing Conference (IGSC). IEEE, 2018.
• G. S. Mahindre & A. P. Jayasumana, "Post failure recovery of virtual coordinates in wireless sensor networks," 7th International Conference on Information and Automation for Sustainability,
Colombo, 2014.
• R. Karkare, R. Paffenroth, & G. Mahindre, "Blind Image Denoising and Inpainting Using Robust Hadamard Autoencoders." arXiv preprint arXiv:2101.10876 (2021).
• G. S. Mahindre, R. Karkare, R. Paffenroth, & A. P. Jayasumana, "Optimal Pre-training of Autoencoders to Predict Distances in Social Networks. " | to be submitted, 2021
Program of Study: | {"url":"https://www.engr.colostate.edu/ece/graduates/exam-abstract/?pass=4253","timestamp":"2024-11-03T06:44:05Z","content_type":"text/html","content_length":"58859","record_id":"<urn:uuid:18606228-d333-45f1-a068-d0d1987980a8>","cc-path":"CC-MAIN-2024-46/segments/1730477027772.24/warc/CC-MAIN-20241103053019-20241103083019-00471.warc.gz"} |
A circle is a simple closed shape. It is the set of all points in a plane that are at a given distance from a given point, the centre; equivalently it is the curve traced out by a point that moves in
a plane so that its distance from a given point is constant. The distance between any of the points and the centre is called the radius. This article is about circles in Euclidean geometry, and, in
particular, the Euclidean plane, except where otherwise noted. | {"url":"https://chapter-08-06.hugoinaction.com/blog/community/","timestamp":"2024-11-07T13:33:52Z","content_type":"text/html","content_length":"3707","record_id":"<urn:uuid:80ba70e7-b383-4ccb-8784-d95e5dd676f3>","cc-path":"CC-MAIN-2024-46/segments/1730477027999.92/warc/CC-MAIN-20241107114930-20241107144930-00654.warc.gz"} |
2006 APS March Meeting
Bulletin of the American Physical Society
Monday–Friday, March 13–17, 2006; Baltimore, MD
Session A8: Pattern Formation and Nonlinear Dynamics
Sponsoring Units: DFD GSNP
Chair: P. Palffy-Muhoray, Kent State University
Room: Baltimore Convention Center 314
A8.00001: The effects of initial seed size and transients on dendritic crystal growth
Monday, March 2006, 8:00AM - 8:12AM
Andrew Dougherty, Thomas Nunnally
The transient behavior of growing dendritic crystals can be quite complex, as a growing tip interacts with a sidebranch structure set up under an earlier set of conditions. In this work, we report on two observations of transient growth of NH$_4$Cl dendrites in aqueous solution. First, we study growth from initial nearly-spherical seeds. We have developed a technique to initiate growth from a well-characterized initial seed. We find that the approach to steady state is similar for both large and small seeds, in contrast to the simulation findings of Steinbach, Diepers, and Beckermann[1]. Second, we study the growth of a dendrite subject to rapid changes in temperature. We vary the dimensionless supersaturation $\Delta$ and monitor the tip speed $v$ and curvature $\rho$. During the transient, the tip shape is noticeably distorted from the steady-state shape, and there is considerable uncertainty in the determination of the curvature of that distorted shape. Nevertheless, it appears that the ``selection parameter'' $\sigma^* = 2 d_0 D / v \rho^2$ remains approximately constant throughout the transient. [1] I. Steinbach, H.-J. Diepers, and C. Beckermann, \textit{J. Cryst. Growth}, \textbf{275}, 624-638 (2005).
A8.00002: Control of eutectic solidification microstructures through laser spot perturbations
Monday, March 2006, 8:12AM - 8:24AM
Silvere Akamatsu, Kyuyong Lee, Wolfgang Losert
We report on a new experimental technique for controlling lamellar eutectic microstructures and testing their stability in directional solidification (solidification at fixed rate V in a uniaxial temperature gradient) in thin samples of a model transparent alloy. A eutectic binary alloy solidifies into a mixture of two crystal phases. In stationary regimes, periodic front patterns made of an alternate stacking of lamellae of the two solid phases are observed. We observe the solidification front in real time by optical microscopy. We use micromanipulation with laser spot arrays for perturbing the solidification front on a scale ranging from one to ten times the average value of the lamellar spacing (spatial period), i.e., typically 10 to 100 microns. These perturbations arise from local heating due to the absorption of the laser light by the liquid slightly ahead of the front. We use the laser spot perturbation technique as a tool for mapping out the large range of accessible lamellar spacings at given V and for creating desired patterns (smooth spatial modulation, tilt domains).
A8.00003: Pattern Formation in a NaCl Crystal undergoing Strain-enhanced Dissolution
Monday, March 2006, 8:24AM - 8:36AM
Zvi Karcz, Deniz Ertas, Richard Polizzotti, Einat Aharonov, Chris Scholz
Observations of an initially circular contact ($\sim $300$\mu $m in diameter) between the [100] face of a single-crystal NaCl shaped as a truncated cone and a flat silicate plate immersed in saturated solution indicate that the crystal deforms in two sequential stages under constant normal load. The first is characterized by contact area reduction and slow convergence rates, and the second by fluctuations in contact area and fast and fluctuating convergence rates. Fluctuations are on a timescale of $\sim $14 hours. The transition between the stages occurs at the maximum contact stress, which shortly precedes the maximum convergence rate. Confocal images indicate that the crystal dissolves coaxially during the first stage, producing a decreasing static contact. During the second stage, the contact shape is highly irregular, with channels and ridges forming inside the contact. These observations reflect a system evolving towards a non-equilibrium steady state, controlled by the interaction between strain-energy driven undercutting dissolution and plastic flow. Undercutting dissolution reduces the area of the contact, and preferentially removes regions with high dislocation density, while plastic flow increases the contact area by mobilizing dislocations that strain harden the crystal. The feedback between these two mechanisms drives the system towards a dynamic steady state.
Monday, A8.00004: Controlled Irradiative Formation of Penitentes
Vance Bergeron, Charles Berger, M. D. Betterton
March 2006, 8:36AM-8:48AM
Spike-shaped structures are produced by light-driven ablation in very different contexts. Penitentes 1-4 m high are common on Andean glaciers, where their formation changes glacier dynamics and hydrology. Laser ablation can produce cones 10-100 $\mu$m high with a variety of proposed applications in materials science. We report the first laboratory generation of centimeter-scale snow and ice penitentes. Systematically varying conditions allows identification of the essential parameters controlling the formation of ablation structures. We demonstrate that penitente initiation and coarsening requires cold temperatures, so that ablation leads to sublimation rather than melting. Once penitentes have formed, further growth of height can occur by melting. The penitentes initially appear as small structures (3 mm high) and grow by coarsening to 1-5 cm high. Our results are an important step towards understanding and controlling ablation morphologies.
Monday, A8.00005: Transient growth and controlled side branching of xenon dendrites
Marco Fell, J. H. Bilgram
March 2006, 8:48AM-9:00AM
In our experiments we study the influence of transient growth conditions on the growth of xenon dendrites from an undercooled melt. Here we report on the response of crystal growth to heating of the melt. We start heating from steady-state growth at a given temperature. The dendrite tip reacts to this change by slowing its growth rate $v$ and increasing its tip radius $R$. We observe that side branches emerge from an unstable surface. As we continue heating up to slightly above the melting temperature, the tip radius continuously decreases to a new value. The reverse temperature change unveils a hysteretic behavior: as soon as we cool down the melt from a temperature just above the melting temperature, $v$ and $R$ both increase. The curvature of the tip becomes too small to be stable at the given undercooling, and an instability leads to a new, thin tip growing out of the oversized sphere-like tip. The value $R^2v$ shows a sharp peak and then settles to a constant value in only about 20 seconds. The same instability also gives rise to side branches whose formation can be controlled by a repetitive application of the described mechanism. Highly symmetric xenon crystals can be grown by this technique.
Monday, A8.00006: Late time growth dynamics in the Cahn-Hilliard equation
Timothy S. Sullivan, P. Palffy-Muhoray
March 2006, 9:00AM-9:12AM
Numerical simulations were carried out in 2D of the scaled Cahn-Hilliard equation $\left[ {\partial \psi /\partial t=(1/2)\nabla ^2(-\psi +\psi ^3-\nabla ^2\psi )} \right]$, starting from Gaussian-distributed, random initial conditions on a 540x540 square grid. Simulations were run for a dimensionless time of 200,000, a factor of ten beyond previously reported results. The simulations also covered a broad range of values of the mean composition, including several at values that had not previously been reported. For each composition, and for time intervals of no longer than 5000 in dimensionless time, the structure factor was calculated for sixty separate runs and averaged. The pair correlation function was then calculated from the average structure factor, and its first zero crossing, $R_G (t)$, taken as a measure of the average domain size, was determined. An equation of the form $R_G (t)=at^b+c$ was then fit to our data over the dimensionless time range from 5000 to 200,000. In contrast to previous work, we find that the scaling exponent $b$ varies with mean composition and does not appear to be consistent with the Lifshitz-Slyozov result $b$ = 1/3. The largest deviation occurs at a mean composition of 0.2, where $b=0.244\pm 0.003$. We discuss the possible effects of morphology on both the scaling law and the time it takes to reach the scaling regime.
Monday, A8.00007: Domain Growth in 2D Hexagonal Patterns with Diffuse Interfaces
Daniel A. Vega, Leopoldo R. Gómez, Ricardo J. Pignol
March 2006, 9:12AM-9:24AM
The coarsening process in planar patterns has been extensively studied during the last two decades. Although progress has been made in this area, there are still many open questions concerning the basic mechanisms leading the system towards equilibrium. Some of these mechanisms (including curvature-driven growth, grain rotation and defect annihilation) have mostly been addressed in systems displaying sharp interfaces. In this work we traced the dynamics of phase separation in hexagonal patterns with diffuse interfaces through the Cahn-Hilliard model. By studying orientational and translational order and densities of topological defects we were able to identify a mechanism of coarsening simultaneously involving curvature-driven growth, front propagation and grain rotation. In this regime we found that different correlation lengths characterizing the hexagonal pattern increase logarithmically with time.
Monday, A8.00008: Oscillatory patterns near the instability threshold in extended systems with reflection symmetry
Alexander Nepomnyashchy, Irina Smagin, Vladimir Volpert, Alexander Golovin
March 2006, 9:24AM-9:36AM
It is well known that the envelope function of a modulated traveling wave spontaneously generated by a short-wave instability is governed by a complex Ginzburg-Landau equation (CGLE). Various modulation phenomena, which include the nonlinear development of a modulational instability of periodic waves in the supercritical region, as well as the formation of stable modulated waves in the subcritical region, have been extensively studied in the framework of the CGLE. The nonlinear interaction between two waves moving in opposite directions is described by a system of two non-locally coupled CGLEs that has not been studied in detail yet. We use this system for studying several phenomena related to modulations of standing waves: (i) nonlinear development of a modulational instability; (ii) propagation of defects in standing-wave patterns; (iii) subcritical modulated waves. The results are applied to problems of transverse instabilities of fronts in combustion and explosive crystallization.
Monday, A8.00009: Effects of the Depth of Quench on the Mechanisms of Pattern Formation of Sphere-Forming Block Copolymers
Leopoldo R. Gómez, Daniel A. Vega, Enrique M. Vallés
March 2006, 9:36AM-9:48AM
The disorder-order transition of a two-dimensional sphere-forming block copolymer is studied through the Cahn-Hilliard model at different depths of quench. The process of microphase separation and the kinetics of pattern formation are controlled by the spinodal and order-disorder temperatures. In the spinodal region the depth of quench strongly affects both ordering times and the density of topological defects. As the spinodal temperature is approached, the density of disclinations becomes very small and grains show perfect orientational and translational order. In a narrow region of temperatures the system relaxes towards equilibrium via the nucleation and growth mechanism. In this region the critical grain size is approximately one lattice constant in the neighborhood of the spinodal line and diverges as the order-disorder temperature is approached.
Monday, A8.00010: Feedback Control of Pattern Formation
Liam Stanton, Alexander Golovin
March 2006, 9:48AM-10:00AM
Global feedback control of spatially regular patterns described by the Swift-Hohenberg (SH) equation is studied. Two cases are considered: (i) the effect of control on the competition between roll and hexagonal patterns; (ii) the suppression of sub-critical instability by feedback control. In case (i), it is shown that control can change the stability boundaries of hexagons and rolls. In particular, for certain values of the control parameter, both hexagons and rolls are unstable, and one observes non-stationary patterns with defects. In case (ii), the feedback control suppresses the unbounded solutions of a sub-critical SH equation and leads to the formation of spatially localized patterns.
Monday, A8.00011: Grain boundary stability in stripe configurations of non potential, pattern forming systems
Jorge Vinals, Zhi-Feng Huang
March 2006, 10:00AM-10:12AM
We describe numerical solutions of nonpotential models of pattern formation in nonequilibrium systems to address the motion of grain boundaries separating large domains of stripe configurations. One of the models allows for mean flows. Wavenumber selection at the boundaries, boundary instability, and defect formation and motion at the boundary are described as a function of the distance to onset.
Monday, A8.00012: Mesoscale Theory of Grains and Cells: Crystal Plasticity and Coarsening
Surachate Limkumnerd, James Sethna
March 2006, 10:12AM-10:24AM
Line-like topological defects inside metals are called dislocations. At high temperatures, polycrystalline grains form from the melt and coarsen with time: these dislocations can both climb and glide. At low temperatures under shear, the dislocations (which allow only glide) form into cell structures. While both the microscopic laws of dislocation motion and the macroscopic laws of coarsening and plastic deformation are well studied, we have had no simple, continuum explanation for the evolution of dislocations into sharp walls. We present here a mesoscale theory of dislocation motion which provides a quantitative description of deformation and rotation, grounded in a microscopic order parameter field exhibiting the topologically conserved quantities. The topological current of the Nye dislocation density tensor is derived from a microscopic theory of glide driven by Peach-Koehler forces between dislocations using a simple closure approximation. The evolution law leads to singularity formation in finite time, both with and without dislocation climb. Implementation of finite difference simulations using the upwind scheme and the results in one and higher dimensions will be discussed.
Monday, A8.00013: Numerical Studies of annular electroconvection in the weakly nonlinear regime
Peichun Tsai, Zahir A. Daya, Stephen W. Morris
March 2006, 10:24AM-10:36AM
We study 2D electrically driven convection in an annular geometry by direct numerical simulation. The simulation models a real experiment which consists of a weakly conducting, submicron-thick liquid crystal film suspended between two concentric electrodes. The film is driven to convect by imposing a sufficiently large voltage $V$ across it. The flow is driven by a surface charge density inversion which is unstable to the electrical force. This instability is closely analogous to the mass density inversion which is unstable to the buoyancy force in conventional thermally driven Rayleigh-Bénard convection. The important dimensionless parameters are a Rayleigh-like number $R$, proportional to $V^2$, a Prandtl-like number $P$, equal to the ratio of the charge and viscous relaxation times, and the radius ratio $\alpha$, characterizing the annular geometry. The simulation uses a pseudo-spectral method with Chebyshev polynomials in the radial direction and Fourier modes in the azimuthal direction. We deduce the coefficient $g$ of the leading cubic nonlinearity in the Landau amplitude equation from the computed amplitude of convection. We investigate the dependence of $g$ on $\alpha$ and $P$ and compare the results to experimental data and to linear and nonlinear theory.
Monday, A8.00014: Demodulation of Electroconvective patterns in Nematic Liquid Crystals
Gyanu Acharya, Joshua Ladd, J.T. Gleeson, Iuliana Oprea, Gerhard Dangelmayr
March 2006, 10:36AM-10:48AM
We present the results of pattern formation in electroconvection of the liquid crystal 4-ethyl-2-fluoro-4'-[2-(trans-4-pentylcyclohexyl)-ethyl]biphenyl (I52) with planar alignment. The pattern was a function of three control parameters: applied ac voltage, driving frequency and electrical conductivity. Over a certain range of conductivity, the initial transition (a supercritical Hopf bifurcation) leads to right- and left-traveling zig and zag rolls. For the demodulation of images, the Fourier transform (FT) of a time series of images was taken with a sampling rate greater than the Hopf frequency. To demodulate the zig/zag rolls, the region of interest around $\mathbf{k}_n$ (the wave vector of a given mode) in one quarter of the FT was retained, with the rest of the FT set to zero. Taking the index of the maximum FT value in that region as the reference point, this region was again separated into four parts and redistributed to the four corners. The absolute value of the inverse FT of the modified function gives the required envelope.
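For orientation: in one dimension, the quarter-spectrum extraction described in this abstract reduces to standard complex demodulation — keep only the spectral window around the positive carrier mode, recentre it at zero frequency, inverse-transform, and take the magnitude. A hedged pure-Python sketch of that 1-D analogue (the parameters, the carrier, and the O(n²) DFT are illustrative choices of mine, not taken from the paper):

```python
import cmath
import math

def demodulate(signal, k0, half_width):
    """1-D analogue of the quarter-spectrum demodulation: keep only the
    spectral window around the positive carrier mode k0, recentre it at
    zero frequency, inverse-transform, and take the magnitude as the
    slowly varying envelope."""
    n = len(signal)
    # Forward DFT (O(n^2) on purpose -- this is a sketch, not an FFT).
    spec = [sum(signal[j] * cmath.exp(-2j * math.pi * k * j / n)
                for j in range(n)) for k in range(n)]
    # Retain modes k0-half_width .. k0+half_width, shifted so that k0 -> 0.
    shifted = [0j] * n
    for dk in range(-half_width, half_width + 1):
        shifted[dk % n] = spec[(k0 + dk) % n]
    # Inverse DFT and magnitude.
    return [abs(sum(shifted[k] * cmath.exp(2j * math.pi * k * j / n)
                    for k in range(n)) / n) for j in range(n)]

# Carrier at mode 8 with a slow amplitude modulation at mode 1.
n, k0 = 64, 8
envelope = [1.0 + 0.3 * math.cos(2 * math.pi * j / n) for j in range(n)]
signal = [envelope[j] * math.cos(2 * math.pi * k0 * j / n) for j in range(n)]
recovered = demodulate(signal, k0, half_width=3)
# The positive-frequency lobe carries half the amplitude, so the
# recovered magnitude equals envelope/2.
```

Because the test signal is band-limited, the recovered magnitude matches half the envelope to within floating-point error.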
Monday, A8.00015: Pattern Formation and Dynamics in Electroconvection of Nematic Liquid Crystals: a Theoretical and Experimental Study of the Weak Electrolyte Model
Iuliana Oprea, J.T. Gleeson, Gerhard Dangelmayr
March 2006, 10:48AM-11:00AM
The Ginzburg-Landau formalism is used in the study of electrohydrodynamic convection in a planar layer of nematic liquid crystal based on the weak electrolyte model. Stable wave patterns predicted by the weak electrolyte model near a Hopf bifurcation of the basic state are analyzed and bounds for the Eckhaus stability are obtained. The weak electrolyte model, which treats the conductivity as a dynamical variable, is tested by quantitative comparison of experimentally measured and theoretically calculated values of specific parameters, such as the recombination rate and charge transport, for the nematic I52. The experimentally observed spatiotemporal chaos evolving at onset is qualitatively compared with the spatiotemporal chaos obtained in numerical simulations of the four globally coupled Ginzburg-Landau equations describing the dynamics of the amplitudes of the bifurcated patterns.
Find only interior points from set of points
I have a collection of interior and exterior points and need to remove all exterior points from the set. The points can be in any order and any shape. The exterior shape does not have to match the
interior shape. However, the inner shape NEVER intersects the outer shape. Both shapes share a common "center" point and I have that data. For example, I could have the points like...
t1 = linspace(0,2*pi, 100);
t2 = wrapTo2Pi(t1 + 0.05);
r1 = 5;
r2 = 10;
r3 = 7;
r4 = 12;
x1 = r1*cos(t1);
y1 = r2*sin(t1);
x2 = r3*cos(t2);
y2 = r4*sin(t2);
xGiven = [x1, x2];
yGiven = [y1, y2];
scatter(xGiven , yGiven);
Assume that the only information provided is xGiven and yGiven; no information on how they were constructed is given.
Here, I only want the points of the interior ring. The common point shared is [0,0] since both shapes are symmetric about that point.
The only toolbox I have access to is the signal processing toolbox.
Any suggestions would be greatly appreciated.
Accepted Answer (Image Analyst)
Try this:
% Demo by Image Analyst: separate point sets using the convex hull.
clc;        % Clear the command window.
close all;  % Close all figures (except those of imtool.)
clear;      % Erase all existing variables. Or clearvars if you want.
workspace;  % Make sure the workspace panel is showing.
fontSize = 15;   % Assumed value; lost in the page extraction.
markerSize = 10; % Assumed value; lost in the page extraction.
% Rebuild the sample data from the question.
t1 = linspace(0, 2*pi, 100);
t2 = wrapTo2Pi(t1 + 0.05);
r1 = 5; r2 = 10; r3 = 7; r4 = 12;
x1 = r1*cos(t1); y1 = r2*sin(t1);
x2 = r3*cos(t2); y2 = r4*sin(t2);
xGiven = [x1, x2];
yGiven = [y1, y2];
subplot(1, 3, 1); % Assumed layout so the three titles do not overwrite each other.
scatter(xGiven, yGiven, 'filled');
title('All Points', 'FontSize', fontSize);
% First get the exterior points. They are the convex hull.
chIndexes = convhull(xGiven, yGiven);
xExterior = xGiven(chIndexes);
yExterior = yGiven(chIndexes);
% Now get the interior points. They are whatever is not in the convex hull.
interiorIndexes = setdiff(1:length(xGiven), chIndexes);
xInterior = xGiven(interiorIndexes);
yInterior = yGiven(interiorIndexes);
% Plot red dots for the exterior points.
subplot(1, 3, 2);
plot(xExterior, yExterior, 'r.', 'MarkerSize', markerSize);
title('Exterior Points', 'FontSize', fontSize);
% Plot magenta dots for the interior points.
subplot(1, 3, 3);
plot(xInterior, yInterior, 'm.', 'MarkerSize', markerSize);
title('Interior Points', 'FontSize', fontSize);
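For readers outside MATLAB, the same idea can be sketched in pure Python: a monotone-chain convex hull stands in for `convhull`, and an index/set complement stands in for `setdiff`. This is my sketch, not the answerer's code; the ring data mirrors the question's example, with the parameterisation simplified slightly so no endpoint is duplicated.

```python
import math

def convex_hull(points):
    """Andrew's monotone chain: returns hull vertices in CCW order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

# Two non-intersecting rings about the origin, as in the question.
ts = [2 * math.pi * i / 100 for i in range(100)]
inner_ring = [(5 * math.cos(t), 10 * math.sin(t)) for t in ts]
outer_ring = [(7 * math.cos(t + 0.05), 12 * math.sin(t + 0.05)) for t in ts]
all_pts = inner_ring + outer_ring

hull = set(convex_hull(all_pts))                  # plays the role of convhull
exterior = [p for p in all_pts if p in hull]
interior = [p for p in all_pts if p not in hull]  # plays the role of setdiff
```

Because every outer-ring point lies on a strictly convex curve, all of them end up on the hull, and the strictly interior ring is recovered as the complement.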
I was wondering that myself, but I didn't have time to delve into it. I'd have to zoom way in to see if that point is actually not on the convex hull, or if it is, if there is a bug in the function.
More Answers (1)
As another possible solution, how about using the boundary function to detect exterior points?
The following is an example:
% Sample data
x = rand(100, 1);
y = rand(100, 1);
% Identify exterior points
k = boundary(x, y);
% Create index
idxExterior = false(size(x));
idxExterior(k) = true;
idxInterior = ~idxExterior;
% Show the result
ax1 = nexttile;
scatter(x,y, 'b.')
daspect([1 1 1])
title('All points')
ax2 = nexttile;
scatter(x(idxExterior), y(idxExterior), 'r.')
daspect([1 1 1])
title('Exterior points')
ax3 = nexttile;
scatter(x(idxInterior), y(idxInterior), 'm.')
daspect([1 1 1])
title('Interior points')
linkaxes([ax1, ax2, ax3], 'xy')
Looks like with boundary the points don't have to be convex. The "outer" points can go in and out, have protrusions and bays.
help boundary
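The remark above generalises: once the outer outline is known as a polygon (convex or not), interior membership can be tested directly against it. A hedged pure-Python sketch of even-odd ray casting, with a made-up L-shaped outline whose concave notch a convex hull would bridge over:

```python
def point_in_polygon(x, y, polygon):
    """Even-odd ray casting: True if (x, y) lies inside the closed
    polygon given as a list of (x, y) vertices (concave allowed)."""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Does the horizontal ray going right from (x, y) cross this edge?
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

# An L-shaped (concave) outline: a convex hull of these vertices would
# bridge the notch, but ray casting classifies against the true outline.
l_shape = [(0, 0), (4, 0), (4, 2), (2, 2), (2, 4), (0, 4)]
```

Points in the notch, e.g. (3, 3), are correctly reported as outside, which is exactly where a convex-hull-based test would disagree.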
(UP 2002) 3. Find the values of the following: 5. ∫1−cosxdx... | Filo
Question asked by Filo student
(UP 2002) 3. Find the values of the following: 5.
Question Text: (UP 2002) 3. Find the values of the following: 5.
Updated On: Oct 12, 2022
Topic: Calculus
Subject: Mathematics
Class: Class 12
Answer Type: Video solution: 3
Upvotes: 328
Avg. Video Duration: 12 min
Author oscarbenjamin
Recipients agthorr, belopolsky, christian.heimes, ethan.furman, gregory.p.smith, mark.dickinson, oscarbenjamin, pitrou, ronaldoussoren, sjt, steven.daprano, stutzbach, terry.reedy,
tshepang, vajrasky
Date 2013-08-19.13:15:55
SpamBayes Score -1.0
Marked as Yes
Message-id <CAHVvXxSoDxb2MWeQfzRdBhFHGj-f3CAfPs0BHdGSVLwbrsZH9A@mail.gmail.com>
In-reply-to <1376884746.55.0.0278802992198.issue18606@psf.upfronthosting.co.za>
I've just checked over the new patch and it all looks good to me apart
from one quibble.
It is documented that statistics.sum() will respect rounding errors
due to decimal context (returning the same result that sum() would). I
would prefer it if statistics.sum would use compensated summation with
Decimals since in my view they are a floating point number
representation and are subject to arithmetic rounding error in the
same way as floats. I expect that the implementation of sum() will
change but it would be good to at least avoid documenting this IMO
undesirable behaviour.
So with the current implementation I can do:
>>> from decimal import Decimal as D, localcontext, Context, ROUND_DOWN
>>> data = [D("0.1375"), D("0.2108"), D("0.3061"), D("0.0419")]
>>> print(statistics.variance(data))
>>> with localcontext() as ctx:
... ctx.prec = 2
... ctx.rounding = ROUND_DOWN
... print(statistics.variance(data))
The final result is not accurate to 2 d.p. rounded down. This is
because the decimal context has affected all intermediate computations
not just the final result. Why would anyone prefer this behaviour over
an implementation that could compensate for rounding errors and return
a more accurate result?
If statistics.sum and statistics.add_partial are modified in such a
way that they use the same compensated algorithm for Decimals as they
would for floats then you can have the following:
>>> statistics.sum([D('-1e50'), D('1'), D('1e50')])
whereas it currently does:
>>> statistics.sum([D('-1e50'), D('1'), D('1e50')])
>>> statistics.sum([D('-1e50'), D('1'), D('1e50')]) == 0
It still doesn't fix the variance calculation but I'm not sure exactly
how to do better than the current implementation for that. Either way
though I don't think the current behaviour should be a documented
guarantee. The meaning of "honouring the context" implies using a
specific sum algorithm, since an alternative algorithm would give a
different result and I don't think you should constrain yourself in
that way.
RoundingFiasco: rounding variants floor, ceil and truncate for floating point operations +-*/√…
There is an exact definition for +-*/√ over the real numbers in mathematics. However, for performant, flexible and ergonomic numerical computations one has to restrict oneself to a finite subset of the rational numbers. The most common data types for such use cases are the single- and double-precision floating point formats.
Combining two representable floating point numbers with an operator yields a mathematically exact result. This exact result might not be representable as a floating point number, so one has to round. The most common way of rounding is rounding to the nearest representable float; this variant helps to minimize the accumulation of rounding errors over several floating point operations. The other rounding variants — floor, ceil and truncate — are useful for computing error bounds of chained floating point instructions: floor chooses the lesser of the two neighboring representable results, ceil chooses the greater one, and truncate chooses the float that is closest to zero.
This library implements the floating point instructions in pure Haskell. They do not use C++ with fegetround, for example. That way they can be used in the WebAssembly backend of GHC, since WebAssembly supports neither rounding variants nor fegetround.
This module is supposed to expose the fastest possible clean interface to rounding variants. Should there ever be compiler intrinsics for rounding variants, then these shall be used in a future release.
Internally the module heavily utilizes the Rational data type. First, the operation's result is calculated twice: once exactly, with the help of Rational, and once as a round-to-nearest-even-on-tie result. After that, both numbers are compared to see whether the round-to-nearest-even-on-tie result was rounded in the desired direction by chance. Should that not be the case, the other neighbor is determined and returned.
Every combination of number type (Float, Double) and operator (+,-,*,/,√,id) is exported separately. The exported functions are supposed to be useful for interval arithmetic.
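The exact-comparison trick described above translates directly to other languages. A hedged Python sketch for the addition case only — `Fraction` plays the role of `Rational`, and `math.nextafter` supplies the "other neighbor" step; the function names are mine, not the package's:

```python
import math
from fractions import Fraction

def add_floor(x: float, y: float) -> float:
    """x + y rounded toward -infinity, via the exact-Rational comparison
    sketched above: compute the round-to-nearest result, check its
    direction against the exact sum, and step to the neighbor if needed."""
    nearest = x + y                    # hardware round-to-nearest-even result
    exact = Fraction(x) + Fraction(y)  # exact rational sum (floats are exact rationals)
    if Fraction(nearest) <= exact:
        return nearest                 # nearest already rounded down (or was exact)
    return math.nextafter(nearest, -math.inf)  # it rounded up: take the lesser neighbor

def add_ceil(x: float, y: float) -> float:
    """x + y rounded toward +infinity; the same trick in the other direction."""
    nearest = x + y
    exact = Fraction(x) + Fraction(y)
    if Fraction(nearest) >= exact:
        return nearest
    return math.nextafter(nearest, math.inf)
```

For example, add_floor(0.1, 0.2) and add_ceil(0.1, 0.2) are adjacent doubles that bracket the exact sum of the two inputs — exactly the property interval arithmetic needs.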
"Both sides are odious" you immediately gave yourself away there. The actions of the armed wing of an occupied people against their occupiers and oppressors are not odious, they are entirely
justified. Are you going to tell me that the armed wing of the Warsaw Ghetto were "odious". This is classic sophistry meant to hide the truth. Without Hamas the residents of Gaza are simply unarmed
concentration camp residents, sheep ready for the Israeli slaughter and ethnic cleansing. Israel is a settler colonizing apartheid regime that wants to establish its own lebensraum based on the
Zionist interpretations of the assertions of a religious text that is about as historically accurate as the Iliad.
The answer has always been simple, the UNSC takes the lands into its management by the necessary force and imposes the 1947 boundaries, which will involve the removal of all the illegal settlers.
The school curricula will also have to be imposed, as with post-WW2 Germany to remove the teachings of religious supremacy and racist hatred.
Of course, this will not happen so Israel will continue on its course to eventual destruction as the power of the US and the West recedes. The Israelis are simply adding to the absolute hatred that
will be rent upon them rather than seeking a true compromise (which they have never, ever tried). We are seeing the desperate thrashings of a declining Empire and its vassal.
The 1947 boundaries were imposed by a UN without the Soviet Union yet taking its chair at the UNSC and China represented by Taiwan. So these boundaries, which were highly slanted to benefit the
Jewish population, were basically an imposition of the Western powers and lack any real legitimacy.
Totally agree, with one exception. See my comment about the "creation" of the state of Israel. Under the UN Partition Plan land would be given to the Zionists only if both Zionists and Arabs
approved the plan. The Arabs flatly reject it, at which point the Zionists took matters into their own hands and the result was the Nakba.
It's a tough topic and Aleks has done well.
It is certainly a tough topic if the underlying assumptions are faulty.
Assumption sometimes rhymes with arrogance.
Thank you Roger for articulating a thorough response. It is wonderful to read that someone has a full grasp on the situation. It would wonderful to read more from you...
I am not going to make this a long comment. I might write a longer one later on (though I can’t promise because I don’t want to break my promise).
You’re too easy on Israel/Zionists and Jews. I’m not saying that the bulk of Jews and Israelis are to blame for all this. What I’m saying is they’re not just innocent bystanders that are being used
by some hidden force like unwitting pawns.
I have studied both Jewish history and religion extensively, in addition to observations of modern Jews both in public and private. This is what I have to say.
The majority of Jews are brainwashed. In fact, person for person they are the most brainwashed people on earth. Since birth they are inculcated with two false axioms:
1) They are hated by everyone and everyone wants to kill them. Not because of their attitudes towards the nations they live among or their actions against those nations and/or its people. But
because of their religion/ethnicity. Only because they’re jewish, and they’re blameless in all of it while the others are pure evil.
2) They are superior. They have a superior mind. They have a superior religion. They have a superior army. In fact, they are so superior that they represent a different kind of human that there are
the Jews and then there are everyone else (goyim). I’m sure you’ve heard of the superiority of the “Jewish mind” or the purity of the “Jewish soul”. These statements are taken seriously. Some will
say that’s strictly the Talmudic Jews (Orthodox and the like). Not true. Even secular Jews believe the same. They just don’t present it in religious terms.
These two points create a sense of supremacy and victimhood wherein
1) Any act they do that is demonstrably evil is accepted by them as justified,
2) The response to any perceived slight (real or imaginary) against them needs to be so great that the destruction of their adversary needs to be total regardless of how disproportionate the
reaction is, and
3) They all need to stick together no matter how evil the act perpetrated is when the perpetrator is Jewish
Obviously that does not describe ALL Jews and there are some that have woken up from the brainwashing and speak out on the side of the truth (so called “self hating Jews”). However, you can see
clearly from the reactions to the Gaza bombing and the plight of the Palestinians for the last 75 years that it applies to the majority.
So although, like you, I accept that a nation for the Jewish people has the right to exist, I do not - NOT FOR ONE SECOND - accept that they are innocent pawns in this great game. They are not only
willing participants, they are also driving the train.
If you take that into account the calculus changes considerably.
I hope you do that.
Very good post. I accept your expertise. Thank you for your contribution. You're always free to expand your explanation 👍
Most/All people fail after having criticised Jews/Zionist when asked about what to do. Yes, these people perhaps genuinely have a superiority complex. But other people have as well. What to do?
Both, you and I say they should have their land. All further questions are never addressed. By no-one.
What to do?
That’s the so-called “Jewish Question” now isn’t it? Has been around for a very long time.
The Roman solution doesn’t work.
Them having their own land where it’s only their people is one axis of breaking down that brainwashing. When you’re confronted by that reality of who they are every day and there is no one else to
blame, people’s minds change.
Another thing is the breakdown of their religious beliefs. Did you know that Jews are forbidden from even reading the New Testament? In Israel it is illegal to sell it in bookstores. Why? Because
it would shatter the Talmudic veil that the elites (and other brainwashed Jews) use to keep them in line. Sadly most Protestant Americans have fallen under the heresy of Christian Zionism so
instead of fulfilling their role in introducing the Jews to THEIR messiah, they support keeping him away from them.
Another, especially for diaspora Jews, is to make it identifiable who is and who is not Jewish. Jews are not white. This is self evident, as there are Ethiopian Jews, Moroccan Jews, Iranian Jews etc. They are their own group of origin, even by their own admission, so it does them no insult to be identified as such. I know this brings bad memories, but throughout history that was the case. What that does is assign credit to them when they do good but also blame when they don’t. It’s actually very fair.
However, in my opinion, the best thing is for them to join humanity. They need to be de-brainwashed. Destroy the myth that it was always the others’ fault and they’re always blameless. That at other times it had nothing to do with their Jewishness but was normal empire behavior (like the Romans). In short, neither their suffering nor their excellence is due to their Jewishness, or to their being unique or special in it. It’s painful when you’ve spent your whole life believing this, but these are growing pains and that’s good for them.
However, none of this is achievable until the non-Jews accept those facts in large enough numbers. It is also important to approach it unemotionally, without hate, which only repeats the cycle.
If it was easily solvable we wouldn’t be in this situation today but here are some ideas to start.
Lastly, what about Israel/Palestine?
That’s the easiest to solve actually. And it has already been solved by the UN 242 resolution (or 181 but that’s not happening).
Two states along the borders defined in the resolution which provides for a contiguous and viable Palestinian state. Could be different as long as both are UNHAPPY with it but I’m more inclined to
just stick with the resolution. If people have to move then they move. Tough luck. Some reasonable compensation can be agreed upon.
Peace treaty with ALL their neighbors. Only Egypt and Jordan is NOT ENOUGH.
Security guarantees from the UNSC and especially from the Great Powers (Russia, China, USA).
We’re a long way from this - if there is still an Israel after this - but that is the only viable sustainable solution.
Thank you for posting your comment. Very informative. Well written.
Thank you
Other people have that sense of superiority. However, there are important differences.
It’s not just superiority but also victimhood. The combination is what makes it dangerous because then everything is excusable and sticking together as a group is paramount to everything and
supersedes any notion of good or morality. In fact, helping your group is the only thing that is good ... everything else is in service to that.
Second, network. They are global. Whether by chance or by design it is what it is. The sense of cohesion - again - comes from the two false axioms they are taught. If you’re superior and everyone
else is the enemy then you better stick to your people.
Third, verbal intelligence. They are, and that is a fact, good with words (on average). With that comes the ability to convince and persuade, which can be used to influence other people for good
but also to deceive. That element is the least of the three though. Plenty of people are good “with words” and it is a skill that can be taught.
They are the only people that can punch you in the face and sincerely say "OUCH"!
"It has the right to exist because it was established and legitimized by the United Nations. By all major nations"
By that reasoning, large & powerful nations have the "right" to steal land from small & weak nations, evict the small nations' people, & give it to someone else.
By what right? Power. That is might makes right, no more or less.
Well, you need some regulations. If there were none, we would live in anarchy. And yes, you're right. That was the situation in 1948. Might was right. Was it the right thing? I won't judge that.
I wasn't born back then. We can only write about it.
By the way, almost all nations existing currently somehow got their borders by some kind of power projection. Most of them several centuries in the past. Take America. Was it right how it was colonized and the native people displaced and killed? I won't comment on that, but does it mean that America (USA) has no right to exist?
We can't go down that road and discussion. At least, I won't do that. It brings us down to the darkest places a human being can go.
By the way... America was only one example out of dozens. But it is the most well-known.
Nevertheless, I fully understand you and your feelings. But again, it is not up to me to judge on that. We can only observe and describe. Nothing more.
America earned its right to exist by defeating the British and establishing the original 13 United States of America. What they did subsequently, however, was to kill off the Native Americans and
expand west through wars and usurpation of land (though some land they purchased, such as the Louisiana Territory and Alaska). That genocide of Native Americans and the colonial expansion were the
wrong part, not the establishment of the State.
The Zionists did what the Americans did in their expansion efforts - they took by force what wasn't theirs. Yes, you can say that "might makes right" and America today is therefore legitimate, but
I doubt you will convince the surviving Native Americans of that - at least not the ones with any pride left among them.
That it was somehow acceptable in 1948, or has been done historically, does not make it ethically right, morally right, acceptable, "the right thing to do" or in any way, shape or form, *smart.*
Any more than the fact that genocide has been committed historically.
It makes it the direct cause of "terrorism" aka asymmetric warfare by the robbed.
Right now, it is on track to lead to a hot WW3 & possible global nuclear holocaust.
The US was built on genocide; that doesn't make it right. It does make it very bad karma. The sins of the fathers are visited upon their children. Live by genocide; die by genocide. If we do not do better, we will get what our forefathers deserved...
I can follow your "morally wrong" argumentation.
You were right not to get lost in the high weeds of Zionism & who really is a Semite. Your perspective has also unplugged all the emotional generators unknowingly juicing haphazard actions, void of
any idea of repercussions. Good show.
All of us who have no skin in this game -- other than cringing at the possibility of a worst case for the homo sapiens sapiens' evolutionary platform -- comprehend exactly what you are saying.
Thank you for your efforts.
Thank you very much.
"the high weeds of Zionism"
I'd say a filthy sewer is a better metaphor.
For many, you are spot on. But there is a much, much deeper and more profound energy at work here. It overrides logic, all the positive emotions, any and all virtue. It's as if part of our genetic
autonomy has been lobotomized.
You may need to do a little research before reacting. But, what we are looking at is an embedded Conditioning (e.g. Pavlov's dogs) that has become perpetually self-reinforced. . .so well-hidden
behind the veil of religion.
Off the top of your head, what are the first two human emotions in the story of Eden? Guilt and shame. And this guilt and shame, branded in our psyches, are alive and well today whether we like it
or not.
Perhaps the question we should be asking is "How do you un-condition a Conditioned Response?" Good luck! The guilt and shame and all the subsequent negative emotions are repeated over and over
requiring no direction or input from our minds. It is a perpetual machine of (Conditioned) Stimulus --> (Conditioned) Response.
We need to figure this out. And Aleks -- like many others -- is right in the middle, doing just that. Thank you.
Thank you👍
Realization that the martyrdom sentiment you brought up here and in your previous articles might actually be the case sends cold shivers down my spine. But the more I think about it and the more I talk with people from that region, the more I realize that they are totally capable of this kind of thinking and action. Absolutely soul crushing perspective.
What makes you think Russia and China are down with the martyrdom plan? Especially considering that Iran covertly triggered the attack. This does not seem like a solution. The human lives cost is
Very good Lux. I share the same feelings.
To be honest... I can't use this as an analysis... But my personal opinion is that we are witnessing a giant chicken game between the East and the West. And that the East (Russia and China) is
counting on a withdrawal of the United States from places they occupy worldwide to avoid a global disaster. And I also believe that this can go seriously wrong.
I do not support everything BRICS does. One needs to keep one's own head working.
Exactly, it is in the interests of the rising powers to maintain peace. For the falling power, its interests are in war before it is too weak, or, like a murder-suicide, in taking everything down with it if it can't be in control.
Your analysis makes a lot of sense, Aleks. But from a humanitarian point of view I sincerely hope that the East has some other plan in this game. There should be people in each bloc who realize that this is not the way forward.
I fully agree.
Doesn't seem like a solution to me either, rather like Palestinians will be, or already are used like proxies.
What a cruel world we live in!
We do...
You just need to rise above the atheist materialist sentiment and you too will realize that Man is not an animal or an individual but a creature meant to be part of something bigger.
Self-sacrifice, including martyrdom but also in its lesser forms, is a normal and natural part of human life that is actually what enables us to be where we are today. Civilization is impossible without "turning the other cheek". And civilized peoples fight through martyrdom! (Savages fight by ambushing their opponents, thereby creating the possibility of not suffering any casualties, whereas civilized folk fight in formations which guarantee some casualties on your own side - and also necessitate that somebody take the front row, where it's virtually certain they will get killed or wounded. This is impossible to pull off without a culture of martyrdom.)
> What makes you think Russia and China are down with the martyrdom plan?
Because the people who will do the dying are OK with the plan? That plus participation in the culture of martyrdom which is an obligatory prerequisite for civilization - as hinted above.
I agree that there is a notion of self-sacrifice in Slavic and many other traditions (probably Chinese too, am not an expert). The thing is, it is noble when it is a conscious choice for the
greater good. I am not sure the people of Gaza or the cities that could potentially be nuked are conscious of the choice and their fate. What is definitely not present in Slavic culture is
sacrificing others for your own good. Hence my reservations regarding the global East being a part of such a plan. This is not the way to solve issues in general. As you said, humans are not (entirely)
animals. I also come from a deep feeling that human life is the greatest value there is.
There is no real honor or justice in sacrificing others, especially if they didn't know they were to be sacrificed. I agree with you. As for Gaza and others, I don't really know what is going on in
there. IF the decision has been made to intentionally risk getting nuked then there are a number of ways the decision could have been rationalized. One dumb and straightforward way is to say "we
have been selected by our people to provide for them but we can't do that without risking those people; this risk seems worthwhile".
¯\_(ツ)_/¯ I dunno.
Another quick one.
I believe that some elements were aware of the Hamas attack and let it happen. Not necessarily at the highest levels in the US or Israel, but someone knew and allowed it to happen. If you’re
looking for evidence it’s out there.
Israel wants the US to fight its war against Iran. There is no conceivable reason why Iran would want to start this now. Not with its entrance into BRICS and its détente with Saudi Arabia. US decline was already in progress, and in another 3-5 years it would have reached its final destination without a single shot fired. Though this doesn’t put that in jeopardy, it introduces a black swan that could either slow it down (good for the empire) or speed it up (bad).
Israel is aware of this and, due to a combination of miscalculation and hope, believes that this can help it finally achieve its goal of a greater Israel before the end of US power in the region.
I’m also not so sanguine on the US not coming to the rescue of Israel if things go really badly. You don’t take into account the influence of the Israel lobby and influence of Jewish Americans in
general in US politics. They perceive the attack on Israel as an attack on Jewish people everywhere (or trying to portray it that way) so they will not let go that easily. The only thing that will
put a limit on this is the Pentagon and the self preservation of non-Jewish elites.
Yes, I think Aleks fails to take into account that Israel, the US and the UK are all controlled by the same group.
No I don't. I wrote that literally exactly that way in the previous article. In Genesis.
Well, in that case, when you write, "To be able to protect Israel against a concentrated attack, which COULD (I’m not convinced, yet) follow after a Gaza invasion, America would need to abandon all
/most other areas. Which it won’t," in my opinion, your assessment is wrong. An attack on Israel is an attack on the US and the US will do whatever it takes to defend Israel (and to provide a
convenient distraction from political and financial collapse in the US). Whatever it takes.
I fully believe that all parties involved wanted this conflict to happen very badly.
Every party for its own reasons. Maybe I failed to point that out. I tried to hint at it in the final chapter of this article, for example.
I think that’s where we differ.
My understanding (theory) is that someone somewhere in the US and Israel knew the attack would happen and let it happen. They might not have known the exact time and date or how brutal it would be, but they knew.
If that’s the case, then Israel and/or the US wanted it to happen, not Iran (for the reasons I mentioned above). They wanted it for the same reason that, even though Israel was aware of the coming attack in the Yom Kippur war, they didn’t do a preemptive strike.
First, ethnically cleanse Gaza. They expected the barbarity of the attack would keep world opinion with them. It didn’t and they miscalculated.
Second, draw Syria and Iran in and have a pretext for that war. They are still focused on Russia. Since both are friends and allies of Russia, this - in their again wrong calculation - stretches
Third, three of the new entrants to BRICS are directly affected by this - Saudi Arabia, Egypt, and Iran (and potentially the UAE to some extent). It doesn’t stop that but a military victory might
make them question that decision. Again, wrong calculation.
I’m not married to this theory, and it’s possible that BOTH sides wanted it. However, the argument for only US/Israel with the evidence from the Israeli response on the day of the attack makes it
I like your subtitles to your articles, very appropriate:
Genesis before... now Exodus. :)
Haha thank you :) You're the first one to mention it :)))
I agree with most of what you write about this. This could have all been avoided if only the Arab and Jewish communities, both descendants of the "Holy Land", had just decided to respect each other and get along as they did even before the Roman Empire. But religious zealots from both sides can't do that. I don't believe there will be a 2 state solution; I believe there will be one State with Jewish, Christian, and Muslim faith-based communities living under Arab control. Just like the old days, natural and mostly peaceful. I believe the U.S. will continue its decline and fall due to inept, greedy, and corrupt leadership at all levels of government regardless of which party is in control. The wild card in all of this is "The Village Idiot" now sitting in the White House. No telling what he might do; he doesn't know. I am a hillbilly from WV and I have no skin in this game, just an opinion.
Good comment. Thanks👍
Multicultural States exist only through continuous power brokering by a strong hegemon: e.g. GAE works as a power brokering of the different mafias (AIPAC, CIA, Hohols, Eurocrats, Banking Cartels, China
Otherwise, they are fiction, like Lebanon: in Lebanon everything is "privatized" i.e. controlled by ethnic clique (Sunni, Shia and Maronites).
You're right :))
I pointed out what the interests of several powers might be.
What the reality could later look like is an entirely different question.
And I won't speculate yet, as long as the big games still haven't started.
But basically you're right.
Imho the Swiss are a single population that speaks three languages. That said, at a closer look Switzerland looks a lot like Lebanon.
True. Definitely the case for Ukraine (Hohols as you affectionately called them :) )
Bullshit, the classic "both sides are bad" sophistry which hides the reality of the settler colonizers and their drive for lebensraum as the real cause of the conflict. This all started with the
massive influx of settlers from the 1920s onwards and then the Nakba ethnic cleansing which was the real cause of the first Arab-Israeli War. The Israelis have gone out of their way to kill every
peace initiative and to undermine Palestinian leaders who pushed for a peaceful solution. When the US is in terminal decline then the Israelis may sue for peace, after attempting to grab as much
land as possible in the interim. The Arabs will remember their continual treachery and the result will be inevitable, the Israelis are sealing their own demise as a nation.
"The Israelis have gone out of their way to kill every peace initiative and to undermine Palestinian leaders who pushed for a peaceful solution".
They even kill their own who want peace, e.g. Yitzhak Rabin. And that had NuttyYahoo's dirty fingerprints all over it.
You twist events to mask the jewish terrorism that was introduced by the Irgun, Stern gang, and other genocidal greedy ahskeNAZI filth.
You ignore that all (Christian, Jew, and Muslim) prior to that in Palestine did respect each other and got along quite well.
I don't think you're a troll trying to insert Zionist disinfo, but rather a person who is naive and with a child like "understanding" of the issues. And unfortunately that seems to be common.
Actually, the Partition Plan was never implemented because its implementation depended on both Arab and Zionist approval. The Arabs flatly rejected it. Consequently, the Zionists took the decision
to take the land anyway. Again, Foreign Policy Journal has an excellent write-up on the history of the partition Plan:
Bottom line - The UN did NOT create Israel. The Zionists stole the land from the Palestinians, forced them off the land, killing many thousands of innocent people and bulldozing their homes. Only
by the "rights of might" (and a few bribes) did the UN finally give in and recognise Israel as a legitimate state. The subsequent 75 years since has seen nothing but pain and further loss for the
Palestinians who never won their freedom and whose land and people continue to be squeezed into smaller and smaller areas. Israel has never wanted a two-state solution as they see all this
Palestinian land (and more than just Palestine!) as God-given and will never, ever compromise on that - no matter where it takes the rest of the world (WWIII and/or Samson Option). On the other
hand, the Palestinians see this land as their ancestral lands and will never, ever accept that Israel is anything other than a cruel occupier who needs to be banished and their lands and homes
As for there being an Israel that is ruled by Arabs, this was once possible as they lived in peace with one another, but since the Zionists moved in, no such possibility exists today - or ever
will. This is a fight to the death. You say that it is not important to distinguish Jew and Zionist. That is a mistake. It is crucial to do so for a proper understanding of what is going on there.
Jews and Arabs are capable of living together in peace. Zionists are not as they are on a "holy" mission.
Judaism and Zionism cannot be separated. One gave birth to the other.
The occupiers brought terrorism to Palestine. Ask the Brits how their administrators and soldiers fared.
Begin, the hideous deformed looking creature, even admitted they were terrorists and justified it as means to an end. Looks like he was correct after all, the "end" is coming.
I also agree that a regional war is on the near horizon. My daughter knows an Australian moron in the army and he said they have been told by their US overlords to get ready for deployment.
So once again the deadheads in the AOF will participate in more war crimes and die for Zionism and empire. And the fuckwits will believe they are doing it to keep Australia safe.
This is actually a great opportunity to rid ourselves of fascists within our countries. They will die under missile barrages. Too bad. Not really.
Haha you don't like this moron, do you🤣😅👍
Well, I also have information that Australia is gearing up for war.
Whatever that might mean...
" I have also information that Australia is gearing up for war"
Beyond what the MSM reports? Do tell.
A Russian naval ship has docked in Indonesia, Kinzhals on board.
Say goodbye to Pine Gap, Uncle Sam.
Thanks for the piece. However, The elephant 🐘 in the room that you didn’t touch on is the solution that’s better than your solution! Instead of the outdated 2 state solution, have you thought about
the ONE state solution? One man. One vote. Democracy at its best. That’s the only way forward. Just as the name Rhodesia was changed to Zimbabwe, so will the name Israel change. Call it something
else. How about Israelistine?? I was disappointed to see your solution was not imaginative enough. Thank you again for the piece.
I have nothing against that solution. Really. If the international community (which one now?!🤣) agrees to such a scenario, that's good as well.
I'm a Serb and a former Yugoslav. Yugoslavia, initially in 1945, was one of the very few nations to support exactly this one state solution.
Nevertheless, it's not up to me or us...
The Two State solution was never taken seriously by the Zionists. It was a ploy to give time for more land grabs.
They can't and won't tolerate a neighbouring Palestinian nation state, as nation states have a UN seat and the right to a full military.
Two state solution died in the 1990's. Anyway, nowadays, only about 30% of Palestinians support the idea. The two state solution train has long ago left the station.
A one state solution will end in a two state solution after a civil war.
Birth rates of Palestinians and Orthodox Jews are high, with secular Jews low in relation. What you end up with is two groups that hate and want to kill each other (because ironically they are
similar) holding most of the power.
That’s a recipe for civil war. After many dead on both sides (probably again more Palestinians) they’ll decide to separate and end up with ...
... A Two State Solution.
If the Jews can free themselves from Zionism, there is no reason why they can’t coexist peacefully. They did before 1948 or certainly before the “Zionist project”. Jews have to decide whether they
want to live as part of the neighborhood as normal neighbors or they want to remain an outpost of western interests/colonialism. Fear of Israel acted as a deterrent for decades. Now, that fear is
gone for ever and so Israel is truly at a crossroad. Live/coexist as normal neighbors or continue to see the noose getting tighter and tighter. The “barbarians” (using the term as an expression)
are not only at the gate but they’ve stormed/penetrated the gate! The genie is out of the bottle. The ball is in Israel’s court. I hope they choose wisely. No reason not to coexist.
Look. The only valid reason against the two state solution is that both need to be viable and contiguous. Arguably that’s a tough task and requires detailed negotiations, but I’m sure a solution that both will be UNHAPPY about can be found.
Yes I said unhappy.
The single state runs into what I described. Demographics is destiny. There’s just no way around it. Now, by some miracle, the Palestinians might have fewer babies when their standard of living increases. But what about the Orthodox Jews? They’re pretty wealthy with a high standard of living but still have as many children as possible. Even the secular Israelis are having problems with them RIGHT NOW because of their demographics. I don’t see this ever changing.
It does look like Israel's rhetoric has backed itself into the Gaza invasion option.
This situation is like a slow moving train wreck: we can all see it coming, but can do nothing to stop it.
Unfortunately, yes…
The comments from IOF scum seem to show they are planning to continue bombing until nothing is left above ground. And then bunker buster? They don't have the courage for a ground invasion.
It's frustrating as the bombing campaign causes more Palestinian civilian casualties than a ground invasion would. The carpet, and precision, bombing should be the red line for the Resistance.
So appreciate your gifts; you have many. I say this with a bit of the “bright bulb” syndrome; I read, listen, watch, sense…and clarify, adjust, ponder my thinking/actions. My responses are measured; emotional maturity has been gained thru many decades, aka…shelf life. Learning curve. I stand to be corrected often, as I came to the study of the Great Work late in life. Sure wish you were “local”, so I could squeeze and hug you, right. Old lady with a virtual cat❤️🐈⬛
Thank you so much ma'am. I appreciate always your very nice comments 🤗
Thank You, Aleks. This is very good work. You're getting lots of flak.
We know what that means...
Sigh , you are over the target.
Thanks John 🤗👍
I'm going to be impolite and post the cow game in this situation - https://mikehampton.substack.com/p/red-cow-god-war-middle-east-al-aqsa
Never have I seen such excellent analysis of this madness as we have now in the alternative media, by people who care, independent of any State propaganda, and Aleks is doing his best to discuss it with clarity and reach understandings about this insanity.
This is an upside that has never happened before that I can see.
I’d say it’s checkmate for the USA and Israel on their cynical, delusional rules based order.
No nukes unless the USA goes first, which it did before in Japan.
The final bluff?
Thank you:))
NCERT Solutions for Class 12 Maths Chapter 7 Integrals (Ex 7.8) Exercise 7.8 - Infinity Learn by Sri Chaitanya
NCERT Solutions for Class 12 Maths Chapter 7 Exercise 7.8 (Ex 7.8), along with solutions for all other chapter exercises, prepared by qualified teachers in accordance with NCERT (CBSE) guidelines, are available as a free PDF download. Working through the Class 12 Maths Chapter 7 Integrals Exercise 7.8 questions with solutions will assist you in revising the entire syllabus and achieving higher grades. Register to receive all exercise solutions through email.
NCERT Solutions for Class 12 Maths Chapter 7 Integrals Exercise 7.8
The NCERT solutions for Ex 7.8 Class 12 Maths are widely regarded as a reliable resource for CBSE students preparing for exams. This chapter contains a lot of exercises, so we’ve included the Exercise 7.8 Class 12 Maths NCERT solutions in PDF format. You can either download this solution or study it directly on our website/app.
Infinity Learn’s in-house subject matter specialists addressed the exercise’s problems/questions with great care and precision, adhering to all CBSE criteria. Any student in Class 12 who understands
all of the concepts from the Maths textbook and is well-versed in all of the problems from the exercises can easily achieve the greatest possible grade on the final exam. Students can easily grasp
the pattern of questions that may be asked in the exam from this chapter and study the marks weightage of the chapter with this Class 12 Maths Chapter 7 Exercise 7.8 solutions to prepare for the
final examination adequately.
Apart from these NCERT solutions for Class 12 Maths Chapter 7 Exercise 7.8, this chapter contains numerous exercises with numerous questions. As previously said, our in-house subject specialists
solve/answer all of these questions. As a result, all of these are guaranteed to be high quality, and anyone can use them to study for exams. It is critical to grasp all of the topics in the
textbooks and solve the questions from the exercises supplied next to them to achieve the highest possible grade in the class.
Q. What is integral in simple words?
Ans. An integral is also known as the area under a curve in Calculus. The area encompassed by the curve of a function and the x-axis is the integral value of that particular equation, as mathematical
functions can be represented on graph paper. In class 12 calculus, you’ll come across a set of formulas for computing the integration of various functions. Integral is sometimes known as
anti-derivative, and you’ll notice that some functions’ integration formulations are the inverse of their differentiation formulas. One of the most significant topics in Class 12 Mathematics is
integral calculus.
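The "area under a curve" description above can be checked numerically (an illustrative example, not one of the textbook exercises): the anti-derivative of x² is x³/3, so the definite integral of x² from 0 to 1 should equal 1/3.

```python
# Numerically verify that the definite integral of f(x) = x^2 on [0, 1]
# matches its anti-derivative F(x) = x^3 / 3 evaluated at the bounds.
def riemann_integral(f, a, b, n=100_000):
    """Midpoint Riemann sum approximation of the integral of f on [a, b]."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

f = lambda x: x ** 2
F = lambda x: x ** 3 / 3            # anti-derivative of f

numeric = riemann_integral(f, 0.0, 1.0)
exact = F(1.0) - F(0.0)             # = 1/3
print(abs(numeric - exact) < 1e-6)  # True
```

This is exactly the inverse relationship between integration and differentiation that the answer above refers to.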
Q. What makes integrals so difficult?
Ans. Integrals could be a nightmare for a student who skips any lesson or struggles to understand the basic notion of any issue. As a result, you must practice a variety of sums and become familiar
with the various levels of difficulty. To fully comprehend any topic, practice as many sums as possible.
Q. Where can I find authentic NCERT Solutions for Chapter 7 Integrals in Class 12 Maths online?
Ans. On Infinity Learn, you can find dependable NCERT Solutions for Class 12 Maths Chapter 7- Integrals. Our team of subject matter experts created the NCERT solutions on Infinity Learn, and they are
among the best-rated integral solutions available online. All of the sums in NCERT Class 12 Maths Chapter 7 are worked out step by step so that students can double-check their answers. This also aids
them in recognizing their errors. For the NCERT Maths solutions for integrals, our subject matter specialists followed the new CBSE rules for Class 12. These NCERT solutions are available for free
download and can be used for practice. | {"url":"https://infinitylearn.com/surge/study-materials/ncert-solutions/class-12/maths/chapter-7-integrals-exercise-7-8/","timestamp":"2024-11-07T10:54:32Z","content_type":"text/html","content_length":"170252","record_id":"<urn:uuid:7c0a01a8-d1ce-4d7c-b973-4fd6a10a39b6>","cc-path":"CC-MAIN-2024-46/segments/1730477027987.79/warc/CC-MAIN-20241107083707-20241107113707-00121.warc.gz"} |
Who provides assistance with Bayesian analysis in Stata? | Hire Someone To Take My SAS Assignment
Who provides assistance with Bayesian analysis in Stata? Introduction ============ From 1984 to 2002 the prevalence of BOLD was found to be lower in male and female subjects than in adults,
but its gender and age-related differences were not clearly understood. The main aim of this paper is to argue for possible gender and age related differences between a relatively large cohort of
high level cohort workers (HCW) and one population which has recently become operational as a group study. An objective of this paper is to report data from an apparently representative HCW
population on the incidence, prevalence, incidence rate and chronology of cognitive impairment as measured by the Behavioural Analyzer (BA) at the National High Level Collaborative Study (NHLS). Our
goal is to analyze the data and to show that the BOLD data represent a reliable and valid tool for investigating cognitive functions, and which allow us to make inferences of the diagnostic value of
the BOLD method. Inequalities of such small populations could result from the lack of standardized methods to quantify the degree to which features of brain function are increased or decreased by
clinical processes. For instance, the presence of an early (mean task, memory) memory deficit can only be due to the aging process, but it can create an early memory deficit in an asymptomatic
condition. The time needed to detect and treat a memory deficit is a time-limited part of the management of brain function. Even the most acute memory impairment which may contribute to the
development of cognitive impairment usually has a few hundred brain deaths per year ([@ref-5]; [@ref-11]). In particular, it is well known that with a normal degree of memory decline the amount of
brain damage in the early stages of development usually exceeds 100 times the age-specific mean at the earliest stages ([@ref-4]). This is a factor which is clinically significant with regard to
cognitive function. Indeed, data from the U.K. [@ref-2] suggested that even these small populations with slightly more acute memory decline may have a definite cognitive deficit. Our objective here
is to present data from a relatively representative HCW population and to illustrate the nature, times and methodology of this cohort, thereby allowing us to draw from normal to very ill
patients without any health complications or inpatients with mild cognitive impairment, and to make inferences on their neuropsychological profile. Method ====== Study selection ————— We studied 130
(57 male, 70 female) first-year HCWs in a multicentre NHLS, a community-based population-based study on 2,354 Dutch adult individuals aged 19 to 55 years.
We selected subjects over 8 years with IQ (56), 2T (H 11.3, 8 of 14 possible CGMI test), 2.

Who provides assistance with Bayesian analysis in Stata? Consider a distributed life, whose individuals share
experiences. By its nature, this design supports growth by minimizing individuality (i.e., individuals belonging to the same group, or “groups”). This interaction argument can be used to justify the
control of the design while maintaining stability, to reduce the variability of the design as relevant as possible, to adjust the results perceivably according to the context in which they are actually
applied, and to induce the variation in the designs. The analysis of Stata’s design space allows us to interpret some of its behavior in terms of random errors versus in other forms of deviation.
Using this framework, we can show that the in-fact difference in distributions of different designs ($d_{\rm {In}_{N}}$ versus $d_{\rm {In}_{B}}$) gives the variance of the variance of the design –
the variance will be on average worse the design then required. There are several other frameworks, albeit related to time-invariances – which can be used to study some issues with the Inverse model.
These are: > **[Time-variability-based]{}**: Time-variability-based designs naturally include the selection, selection, transfer and differentiation of stimuli during an experiment; these types or
patterns have been used as a tool for planning experiment designs. The aim of time- variance analysis is to generalize studies of time varying designs. > > **[Random errors-based]{}**: Random error
analysis (RAE) adds analysis to study structure in micro- and macroscopic systems; this type of analysis reveals patterns that may influence the results of experiments. This type of analysis can also
be justified in terms of interactions between design variables and variables in these systems. For instance, RAE can be applied to testing designs proposed in the design space and to other
specifications of the non-linear regression approach, although it has also been used for several studies. How do we describe in the context of time-variability-based design development and testing?
Suppose we have a design of a computer system, for example, a network, whereupon the number of devices in support of the plan is now reduced in proportion to a total set of devices. Denote the number
of children included in the system, whereupon their parents will be specified. The number of devices, to which the number of children is reduced by one, will be measured. To evaluate the
effects of variation in the number of children due to the number of devices, we may consider a potential deviation in the control of the deviation (i.e., in any kind of variations of the number of
The two levels of deviation are discussed after a particular design has been developed, or at least needs to be investigated in the present day. It turns out that the scale and the deviation are not
independent. They are both correlated at high frequencies, with higher values suggesting growth. However, the model doesn’t take into account this difference. Rather, it calls for the creation of the
design spaces per unit of deviation and emphasizes the choice of model parameters. By comparing space-time dependence via two-dimensional space-time dependence, we can define an interaction between
one key elements of the design and the environment. This interaction has been used to explain that the system and environment do not interact per unit of deviation. When the system is created, it
takes a large fraction of space to take in as a whole, however, without considering whether the given space is taken as a separate population or a space itself, however, the interaction increases
with the ratio of the number of devices (to the number of devices) or the ratio of the number of nodes (to the number of nodes). By choosing the sample size so that the environment is
present, and using the standard deviation, we can say that the two factors of such a design are independent if they have non

Who provides assistance with Bayesian analysis in Stata? This is a quick
entry. If you didn’t make this post prior to when Bayesian analysis was fully conducted, this issue can easily be resolved. Since Stata is released in September, everyone who works on Bayesian
analysis of statistical models will be able to do so on July 2007. In order to make a difference, Stata is adding an index that identifies which variables are significant and how they are distributed
with a distribution. It should be mentioned that this index is going out the windows of interest, so if I were to do group analyses, I would expect a smaller difference in the distributions between
two variables per year. The same applies to Bayes factors, and that’s fine. Stata knows this as such because there are a decent number of papers available, like R4R4 and its predecessors, that show
quite how good they are for modeling, even after controlling for some confounding factors. Likewise, we don’t know when the data come out but it sort of looks like this is still pretty far away and
no good. As a recent example of making a difference, I’m going to tell you about a couple things that were done before Stata did so except for groups. The first was to start using R to generate data,
I give a date using an input date, and that will lead to a time period known as the D, which will be after the D. How do you know if you can get a D of exactly this type, or not? Why. Here are my
results about the first 2,000 covariates.
And what do you get from that? First, that is the output of Co: Each cluster you get in any time period will have a different ratio of time blocks, giving a better fit to the data than if the data
had the same binomial distribution. Second, is it also more likely that the most efficient way to pick which covariates are important in data is by using the bootstrap instead of var. I didn't
get the second variable you’ve got by guessing but certainly you get why stochastic data normally seems so likely. And, and, finally, this is a description of a cluster and the order in which the
estimates come from. I do think that Bayesian analysis in Stata is very much a case of having the confidence or variability rate so what the authors were trying to do was called the Bayesian index.
This is a quantity that was previously much of a function of the data and it is certainly different that in the MCMC approach. I expect the MCMC Bayesian methods to be even less useful in real-world
data, even before Stata set dates are added to the data. Again, I’m not sure if your findings have been of the accuracy that Stata has or whether they have a bias toward more data and/or are just
being made more or less fun out of the fact that Bayesian analysis is easier, or maybe that the results are too much like in other models (such as Random forest) where they say on the other hand, you
always don’t get the result from using a statistic like the bootstrap to sample from, or that the most likely predictors from those predictors just do the data and don’t fit the data at all. For that
reason I’m sure it gets a bit messy from Stata perspective, though. Not sure if you took the time to sort that out? That’s a great post about Stata, and one for which you probably should consider how
you were able to apply Bayesian analysis on anything other than statistical models. And, of course, there are other applications to be found in the application chapter however. You may think about
the results of those authors in explaining how they can apply Bayesian analysis on real data and how you can do most software and so on. But they don’t | {"url":"https://sashelponline.com/who-provides-assistance-with-bayesian-analysis-in-stata","timestamp":"2024-11-03T03:06:22Z","content_type":"text/html","content_length":"130902","record_id":"<urn:uuid:96bce13b-8a5c-46f9-878c-64d676b34a87>","cc-path":"CC-MAIN-2024-46/segments/1730477027770.74/warc/CC-MAIN-20241103022018-20241103052018-00430.warc.gz"} |
Far Apart In Understanding Only I Think. - an Astronomy Net God & Science Forum Message
First, I think you make it quite clear that you didn't understand my post. Your comment concerning facts being awareness based completely misses the point. Your comment, "We already have primitive
wiring which does the interpretation automatically", is an explanation: i.e., a fact which only exists within a theoretical context.
What you seem to be missing is that I am performing a holistic attack on that very problem. That is, if you want an explanation (which is what you call a theoretical context), that explanation must
be consistent with the facts which will exist within that theoretical context. The problem very clearly is that we have no idea of what the facts are until we develop that context. That being the
case, we must first conceive of a method of developing a completely general context without constraining the issue of "facts" in any way.
>>> Even logical and mathematical centered learning is not immune to this prewiring.

> QED is mainly a combination of special relativity and quantum mechanics. How can you obtain QED if you don't accept SR? In addition, the standard model (SM) has 19 or so constants which do not emerge from your definitions of 'Stafford Reality', does it?

>> If one defines downhill using a carpenter's level, then that merely tells the experimenter what the theory means when it says downhill.

Is an observation ever not theory-laden from your view?

> I don't understand the relationship between 'time' and 'the object of observation'. Usually time is defined as a fourth coordinate of a 4-D geometry introduced to account for change in a 3-D spatial geometry. Why do you leave this mathematical approach for one that is more psychologist based?

> Okay, there's many avenues we could go here, but let me ask the first question that occurs to me. An electron is an elementary particle absolutely necessary to account for electromagnetism.

> Isn't the electron real in that case?

>> If you define m (mass?) without reference to the Higgs boson, then where is the standard model (SM) in all of this (one of the more accurate theories, btw)? Surely you don't want the SM to be revised as a result of your model, right? [If so, then more theories of physics are being dumped by you than what you accept of physics.]

> The problem that I see is that you are treading into areas of deep uncertainty as to the interplay between mathematical relationships and the laws of physics.

>> When you play with mathematics enough you are liable to get physics. This is part of the mystery of the universe (at present), and I am not so sure if anyone understands why that is the case.

> No classical experiment is any more definitive than is "water flows downhill" if one defines downhill to be the direction water flows.

This is what I consider the weakest part of your argument.

> You have to strengthen why experiments give the results that our best theories expect. Just saying that they are rigged by linguistics seems to me to be an empty argument.

> I might be more tempted to think that maybe we are only finding what we choose to see (for example, [that the speed of] light is a constant having no theoretical reason that I know of for being what it is).
I think I show a very good reason for it being exactly what it is. If you could follow what I present I think you would agree with me.
I think Planck's constant is a phenomenon much more to the point. Gamow wrote a series of "Mr. Tompkins" stories. See: Mr. Tompkins in Wonderland (1939) or perhaps Mr. Tompkins Learns the Facts of Life
(1953). I suspect my reference is to the first but it could possibly be the second as my experience was after 53. As these are the only references I could find, I suspect "Mr. Tompkins goes to
Quantum Land" is a part of one or the other. Most of his presentations are quite well thought out but the one about Quantum Land is just plain wrong. Mr. Tompkins goes on safari to a land where
Planck's constant is very large and Gamow describes his experiences.
I was studying Quantum at the time and it led me to try and figure out what the universe would really look like if Planck's constant were large. At the time I was never able to solve the problem:
i.e., find a solution which was completely compatible with every connection to Planck's constant I was aware of! Every time I got to a point where it seemed I could establish some size and time
parameters (measuring sticks and clocks) it turned out that what I had, depended on the value of the constant. The effort kept me going in circles for quite a while. At the time I just decided the
whole issue was just too complex for me to hold in my head at once but it did start me wondering about exactly how we define these things.
If you examine the mathematics of "Stafford Reality" you will see that the actual numeric value of Planck's constant is of no consequence at all. Just as the speed of light is no more than the ratio
between what clocks measure and what meter sticks measure (they are each no more than coordinates in the geometry used to display the "objects" - facts?, whatever?), Planck's constant is essentially no
more than the ratio between "time" (related to the Fourier transform of energy) and "what clocks measure". Yeh, I know, "time" is "what clocks measure" by definition - I have heard that more than
once! Back to my contention, Physicists have over defined their concepts. If you define something more than once, you create illusions of relations. That is why it is very important to define things
once and only once.
With regard to the above issue, lots of "brilliant" unknown rejects of the scientific academy have designed devices which violate conservation of energy. Some of those have even presented scientific
proofs that their devices must work and have obtained a great deal of money to fund their efforts. I have had a surplus of experience examining a number of such bogus proofs. I was hired as a
consultant on several "inventions" taken very seriously by investors. By the way, none of the people who hired me ever invested in such things though many others have.
If you examine those proofs you will find that the commonest way used to achieve their results is to change definitions of terms as they go through their derivation. It is not difficult at all to get
energy from nowhere if you start doing algebra with the standard thermodynamics equations and combine equations where v is average velocity with equations where v is actual velocity (a very common
error). Most of the people who do that kind of thing actually believe their algebra is correct. I don't think I ever met one who did it intentionally as most of them are willing to spend their last
dime to promote their ideas. (And I never converted one either!)
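The average-versus-actual velocity mix-up described above is easy to demonstrate in a few lines (an illustrative sketch with made-up speed values, not one of the actual proofs examined): the mean of v² differs from the square of the mean v, so substituting one for the other in ½mv² silently creates or destroys energy.

```python
# Show that <v>^2 != <v^2> for a non-uniform set of speeds, which is why
# mixing "average velocity" and "actual velocity" in energy formulas
# breaks conservation-of-energy bookkeeping.
speeds = [1.0, 2.0, 3.0, 4.0]   # hypothetical molecular speeds
m = 1.0                          # unit mass

mean_v = sum(speeds) / len(speeds)                   # <v>   = 2.5
mean_v2 = sum(v * v for v in speeds) / len(speeds)   # <v^2> = 7.5

ke_correct = 0.5 * m * mean_v2       # 3.75  (true average kinetic energy)
ke_sloppy = 0.5 * m * mean_v ** 2    # 3.125 (uses <v> where v belongs)

print(ke_correct - ke_sloppy)        # 0.625 -- "free" energy from the algebra
```

The gap is Jensen's inequality at work: the mean of a square always exceeds the square of the mean unless every speed is identical.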
What I am getting at is the fact that defining something more than once is a very dangerous thing to do if you are interested in exact logic. As I have said elsewhere, I think most scientists would
agree with me on that; the problem is, they believe someone else (not them) has already disposed of the issue.
Reminds me of a joke: A business man and an economist are walking down the street. The business man says, "Hey look there's a twenty dollar bill in the gutter!" The economist says, "No, that couldn't
be! If it were there, someone would have picked it up!"
Have fun -- Dick | {"url":"http://www.astronomy.net/forums/god/messages/13148.shtml","timestamp":"2024-11-10T16:44:06Z","content_type":"text/html","content_length":"23367","record_id":"<urn:uuid:37dd0624-385a-4867-8053-44e3fa5d6bfd>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.60/warc/CC-MAIN-20241110134821-20241110164821-00316.warc.gz"} |
The Stacks project
Lemma 21.7.2. Let $\mathcal{C}$ and $\mathcal{D}$ be sites. Let $u : \mathcal{C} \to \mathcal{D}$ be a functor. Assume $u$ satisfies the hypotheses of Sites, Lemma 7.21.8. Let $g : \mathop{\mathit
{Sh}}\nolimits (\mathcal{C}) \to \mathop{\mathit{Sh}}\nolimits (\mathcal{D})$ be the associated morphism of topoi. For any abelian sheaf $\mathcal{F}$ on $\mathcal{D}$ we have isomorphisms
\[ R\Gamma (\mathcal{C}, g^{-1}\mathcal{F}) = R\Gamma (\mathcal{D}, \mathcal{F}), \]
in particular $H^ p(\mathcal{C}, g^{-1}\mathcal{F}) = H^ p(\mathcal{D}, \mathcal{F})$ and for any $U \in \mathop{\mathrm{Ob}}\nolimits (\mathcal{C})$ we have isomorphisms
\[ R\Gamma (U, g^{-1}\mathcal{F}) = R\Gamma (u(U), \mathcal{F}), \]
in particular $H^ p(U, g^{-1}\mathcal{F}) = H^ p(u(U), \mathcal{F})$. All of these isomorphisms are functorial in $\mathcal{F}$.
| {"url":"https://stacks.math.columbia.edu/tag/03YU","timestamp":"2024-11-14T21:04:51Z","content_type":"text/html","content_length":"15273","record_id":"<urn:uuid:9623ebf0-b0c5-467b-8621-07d48b16b146>","cc-path":"CC-MAIN-2024-46/segments/1730477395538.95/warc/CC-MAIN-20241114194152-20241114224152-00578.warc.gz"} |
Solve each equation for the specified variable. See Section 2.2. $$ F=\frac{9}{5} C+32 \text { for } C $$
Short Answer
Expert verified
\( C = \frac{5}{9} (F - 32) \)
Step by step solution
- Start with the given formula
The given equation is \( F = \frac{9}{5} C + 32 \), and we need to solve for the variable \( C \).
- Subtract 32 from both sides
To isolate \( C \), first subtract 32 from both sides of the equation: \( F - 32 = \frac{9}{5} C \)
- Multiply both sides by \( \frac{5}{9} \)
Multiply both sides by \( \frac{5}{9} \) to solve for \( C \): \( C = \frac{5}{9} (F - 32) \)
Key Concepts
These are the key concepts you need to understand to accurately answer the question.
algebraic equations
Algebraic equations are mathematical statements that show the equality between two expressions. In our example, we have the equation: \[F = \frac{9}{5} C + 32\]. Here, \(F\) and \(C\) are variables,
and our goal is to solve for the specified variable, which is \(C\). Algebraic equations can be solved through various methods, such as addition, subtraction, multiplication, and division. Each step
aims to simplify the equation until the desired variable is isolated.
isolation of variable
Isolation of a variable means rearranging the equation so that the variable in question stands alone on one side of the equation. In the given problem, we need to isolate \(C\). Here's how we do it:
• First, we have \(F = \frac{9}{5} C + 32\).
• We start by subtracting 32 from both sides to get \(F - 32 = \frac{9}{5} C\).
• To further isolate \(C\), we multiply both sides by \( \frac{5}{9} \), resulting in \( C = \frac{5}{9} (F - 32)\).
This process of isolating \( C \) involved inverse operations to 'undo' the initial operations on \( C \).
conversion formulas
Conversion formulas allow us to change values from one unit or form to another. In our exercise, the original equation \( F = \frac{9}{5} C + 32 \) converts a temperature from Celsius to Fahrenheit.
To convert it back from Fahrenheit to Celsius, we isolate \( C \) creating a new conversion formula:
• \(C = \frac{5}{9} (F - 32)\)
This formula enables us to convert a Fahrenheit temperature back to Celsius, demonstrating the versatility of algebraic equations in real-world applications.
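Both directions of the conversion can be written directly from the two formulas above (a minimal sketch; the function names are my own):

```python
def celsius_to_fahrenheit(c):
    """F = (9/5)C + 32"""
    return 9 * c / 5 + 32

def fahrenheit_to_celsius(f):
    """C = (5/9)(F - 32), the isolated form derived above."""
    return 5 * (f - 32) / 9

print(fahrenheit_to_celsius(212))                         # 100.0 (boiling point)
print(celsius_to_fahrenheit(100))                         # 212.0
print(fahrenheit_to_celsius(celsius_to_fahrenheit(100)))  # 100.0 (round trip)
```

The round trip returning the original value confirms that the two formulas really are inverses of each other.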
step-by-step solutions
Step-by-step solutions break down complex problems into manageable steps. This approach ensures clarity and aids in understanding. Our initial equation was \( F = \frac{9}{5} C + 32 \):
• Step 1: Subtract 32 from both sides to get \( F - 32 = \frac{9}{5} C \).
• Step 2: Multiply both sides by \( \frac{5}{9} \): \( C = \frac{5}{9} (F - 32) \).
Each step uses basic algebraic operations to simplify and solve the equation, making it easier for anyone to follow and understand the process. | {"url":"https://www.vaia.com/en-us/textbooks/math/intermediate-algebra-11-edition/chapter-9/problem-85-solve-each-equation-for-the-specified-variable-se/","timestamp":"2024-11-03T16:17:59Z","content_type":"text/html","content_length":"246210","record_id":"<urn:uuid:a875918c-0055-4b86-99ac-f20bc700debd>","cc-path":"CC-MAIN-2024-46/segments/1730477027779.22/warc/CC-MAIN-20241103145859-20241103175859-00529.warc.gz"} |
Research Seminars
Boris Breizman, Boris Breizman Institute for Fusion Studies UT Austin, USA
Instabilities of Alfvén eigenmodes (AEs) are of significant concern because they can enhance the cross-field transport of alpha particles beyond the neoclassical level in fusion plasmas. The
threshold value of alpha-particle pressure for exciting AEs depends critically on the damping rate of AEs. The damping mechanisms include kinetic damping due to interactions with thermal particles,
continuum damping due to Alfvén continuum crossing, and radiative damping due to emitting kinetic Alfvén waves (KAWs). The radiative damping is substantial and can even prevail in high-temperature
burning plasmas. We revisit the radiative damping theory for TAEs in low-shear plasmas. In contrast to earlier papers, we provide the damping calculations in real space rather than Fourier space.
This approach is straightforward technically and more enlightening from a physics standpoint.
Strong E×B plasma flow shear is beneficial for reducing turbulent transport. However, traditional methods of driving flow shear do not scale well to large devices such as future fusion power plants.
In this paper, we use a large number of nonlinear gyrokinetic simulations to study a novel approach to increase flow shear: decreasing the momentum diffusivity to make the plasma "easier to push". We
first use an idealized circular geometry and find that one can obtain low momentum diffusivity at tight aspect ratio, low safety factor, high magnetic shear and low temperature gradient. This is the
so-called Low Momentum Diffusivity (LMD) regime. To drive intrinsic momentum flux, we then tilt the flux surface, making it up-down asymmetric. In the LMD regime, this intrinsic momentum flux drives
strong flow shear that can significantly reduce the heat flux and increase the critical temperature gradient by up to 25%. We also consider the actual experimental geometry of the MAST tokamak to
illustrate that this strategy can be practical and create experimentally significant flow shear. Lastly, a preliminary prediction for the SMART tokamak is made.
The phase space Koopman-van Hove (KvH) equation can be derived from the asymptotic semiclassical analysis of partial differential equations. Semiclassical theory yields the Hamilton-Jacobi equation
for the complex phase factor and the transport equation for the amplitude. These two equations can be combined to form a nonlinear version of the KvH equation in configuration space. There is a
natural injection into phase space, where it becomes the standard linear KvH equation, as well as a natural projection of phase space solutions back to configuration space. For integrable systems,
the KvH spectrum is the Cartesian product of a classical and a semiclassical spectrum. If the classical spectrum is eliminated, then, with the correct choice of Jeffreys-Wentzel-Kramers-Brillouin
(JWKB) matching conditions, the semiclassical spectrum satisfies the Einstein-Brillouin-Keller quantization conditions which include the correction due to the Maslov index. However, semiclassical
analysis uses different choices for boundary conditions, continuity requirements, and the domain of definition. Finally, although KvH wavefunctions include the possibility of interference effects,
interference is not observable when all observables are approximated as local operators on phase space; interference effects require nonlocal operations.
I. Joseph, “Semiclassical Theory and the Koopman-van Hove Equation,” arXiv:2306.01865, J. Phys. A: Math. Theor. 56 (2023) 484001
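The EBK quantization conditions mentioned in the abstract can be illustrated with the standard textbook case of the 1-D harmonic oscillator (my own example, not taken from the paper): the action is J(E) = ∮ p dq = 2πE/ω, and EBK with Maslov index μ = 2 gives E_n = ħω(n + 1/2). A quick numerical check of the action integral:

```python
import math

m = 1.0; omega = 1.0; hbar = 1.0   # illustrative unit values

def action(E, n_steps=200_000):
    """Numerically evaluate J(E) = closed-orbit integral of p dq
    for H = p^2/(2m) + m*omega^2*q^2/2 at energy E (midpoint rule)."""
    a = math.sqrt(2 * E / (m * omega**2))   # classical turning point
    h = 2 * a / n_steps
    # p(q) = m*omega*sqrt(a^2 - q^2); the closed orbit covers [-a, a] twice.
    s = sum(math.sqrt(max(a * a - (-a + (i + 0.5) * h) ** 2, 0.0))
            for i in range(n_steps))
    return 2 * m * omega * s * h

# EBK with Maslov index 2: J(E_n) = 2*pi*hbar*(n + 1/2)  =>  E_n = hbar*omega*(n + 1/2)
for n in range(4):
    E_n = hbar * omega * (n + 0.5)
    print(abs(action(E_n) - 2 * math.pi * hbar * (n + 0.5)) < 1e-3)  # True
```

Since J(E) = 2πE/ω is linear in E, the EBK rule reproduces the exact oscillator spectrum, Maslov correction included.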
We report our recent work [1] on the internal transport barrier induced by fishbones in tokamak plasmas. Fishbone bursts have been observed to strongly correlate with internal transport barrier (ITB)
formation in a number of tokamak devices. A simple model incorporating the fishbone dynamics and ion pressure gradient evolution is proposed in order to investigate the key physics parameters
assisting the triggering of ITB. The time evolution of fishbone is described by the well-known predator-prey model. For each burst cycle, the energetic particles (EPs) resonantly interact with
fishbone and are radially expelled from inner region leading to a radial current. A compensating bulk plasma return current and, hence, poloidal flow can be induced if the fishbone cycle frequency is
greater than the poloidal flow damping rate. When the shear of the poloidal flow exceeds a critical value, the turbulent fluctuations are suppressed and the bulk ion pressure gradient transits to the
high-confinement state. It is shown that this process is only sensitive to the deposition rate of the trapped EPs within the q=1 surface, but not sensitive to other parameters. A quantitative formula
for the shearing rate of poloidal flow induced by fishbone bursts is derived and verified numerically.
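The predator-prey description of the burst cycle can be sketched with a Lotka-Volterra-type system (purely illustrative: the variables, coefficients, and normalization below are placeholders, not the model of Ref. [1]). The drive gradient G plays the prey, replenished by a source and depleted by the mode; the mode amplitude A plays the predator, growing when the drive exceeds its damping threshold.

```python
# Minimal Lotka-Volterra-style burst-cycle sketch (forward Euler).
# G: energetic-particle drive (the "prey"); A: mode amplitude (the "predator").
# All coefficients are arbitrary illustrative values.
def burst_cycle(steps=50_000, dt=1e-3):
    G, A = 1.0, 0.01
    source, damping, coupling = 0.5, 1.0, 1.0
    history = []
    for _ in range(steps):
        dG = source - coupling * G * A       # drive replenished, eaten by the mode
        dA = (coupling * G - damping) * A    # mode grows when drive exceeds threshold
        G += dG * dt
        A += dA * dt
        history.append(A)
    return history

amps = burst_cycle()
# Repeated bursts: the amplitude rises and falls several times over the run.
peaks = sum(1 for i in range(1, len(amps) - 1)
            if amps[i] > amps[i - 1] and amps[i] > amps[i + 1])
print(peaks >= 2)  # True: more than one burst in the run
```

Each burst transiently expels drive, which is the step that, in the paper's model, launches the return current and poloidal flow.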
In addition, an update on the Gyrokinetic-MHD Energetic particle hybrid Code GMEC will be given. The code has been recently developed for efficient and accurate simulations of energetic
particle-driven Alfven instabilities and energetic particle transport in magnetic fusion plasmas such as ITER [2,3]. The GPU version of GMEC has also been developed successfully with a large speedup
over the CPU version.
[1] Z. Y. Liu and G. Y. Fu, “A simple model for internal transport barrier induced by fishbone in tokamak plasmas”, J. Plasma Phys. 89, 905890612 (2023)
[2] P. Y. Jiang et al., "Development of a gyrokinetic-MHD energetic particle simulation code. I. MHD version”, Phys. Plasmas 31, 073904 (2024)
[3] Z. Y. Liu et al., "Development of a gyrokinetic-MHD energetic particle simulation code Part II: Linear simulations of Alfven eigenmodes driven by energetic particles", Phys. Plasmas 31, 073905 (2024)
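The burst cycle described above, in which EP pressure builds up and is periodically released by the fishbone, can be sketched with a minimal predator-prey model. This is an illustrative toy (forward-Euler integration, hypothetical coefficients), not the model of [1]:

```python
import numpy as np

def predator_prey_fishbone(p0=0.6, a0=0.5, s_dep=0.5, alpha=1.0,
                           gamma_d=0.5, dt=1e-3, nsteps=20000):
    """Toy predator-prey cycle: EP pressure p (prey) is fed by a deposition
    source s_dep and depleted by the fishbone; the fishbone amplitude a
    (predator) grows when p exceeds the damping threshold gamma_d.
    All parameter values are hypothetical."""
    p, a = p0, a0
    hist = np.empty((nsteps, 2))
    for i in range(nsteps):
        dp = s_dep - alpha * p * a       # EP drive minus fishbone-induced expulsion
        da = (p - gamma_d) * a           # mode growth above threshold, damping below
        p += dt * dp
        a += dt * da
        hist[i] = p, a
    return hist

traj = predator_prey_fishbone()
```

With these parameters the system orbits the fixed point (p, a) = (0.5, 1.0), mimicking repeated burst cycles.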
In this work, we investigate the impact of electromagnetic effects on plasma turbulence self-organization at low magnetic shear using nonlinear gyrokinetics. Our previous electrostatic studies showed
that turbulent eddies extend along magnetic field lines for hundreds of poloidal turns when the magnetic shear s is low or zero. Such "ultra-long" eddies have significant consequences on turbulent
transport due to parallel self-interaction. At low magnetic shear, parallel self-interaction induces strong corrugations in plasma profiles at low-order rational surfaces, including the formation of
stationary current layers. When electromagnetic effects are considered, turbulence-generated currents lead to the development of stationary zonal magnetic potential, locally flattening the safety
factor profile to form staircase structures or broaden the safety factor profile minimum. This represents a crucial feedback mechanism between turbulence and the imposed safety factor profile,
resulting in a reduction in turbulent transport. We study the corrugated safety factor profiles using both the local flux tube code GENE and the global particle-in-cell code ORB5. To further explore
this interaction, we employed a novel extension of the flux tube model, allowing simulations of non-uniform magnetic shear profiles, including minimum-q profiles relevant for Internal Transport
Barrier (ITB) formation. Our findings indicate that turbulence-generated current layers can flatten the imposed non-uniformity across the entire domain or substantially widen rational surface
regions, consistent with global simulation results. We believe these results are relevant for understanding ITB formation and may help explain long-standing experimental observations.
An economically viable fusion reactor must operate in scenarios optimized with respect to MHD stability, avoiding confinement losses caused by thermal-plasma instabilities and by inefficient plasma heating due to energetic-particle-driven modes. The gyro-fluid code FAR3d was developed to characterize and optimize the MHD stability of present-day stellarator and tokamak experiments, as well as to forecast the MHD stability limits and optimization trends of future fusion reactors. This talk summarizes the FAR3d project research lines dedicated to evaluating the linear and saturated phases of thermal-plasma and energetic-particle-driven instabilities in different fusion devices, as well as to identifying the optimization trends reproduced in dedicated
Schemes involving higher-beta fusion devices have led to intensified interest in electromagnetic turbulence in fusion plasmas. I will discuss this topic in two radically simplified settings:
ITG-driven turbulence in a Z-pinch and interchange-driven turbulence in open-field-line systems. Both can be described by minimal models involving just a few “fluid” fields: density and temperature
plus velocity and magnetic perturbations perpendicular to the mean magnetic field. Despite a drastic reduction in complexity that many seasoned fusion practitioners might consider excessive and
indeed deeply immoral, these simple paradigms yield a wealth of interesting behaviour, some of which seems to capture key physical signatures of electromagnetic effects reported by more realistic
numerical explorations — and might perhaps inform future such explorations (failing that, it is interesting enough to pique the curiosity of some theoreticians). Increasing beta in ITG-driven
turbulence is shown to move the threshold for the low-to-high-transport (Dimits) transition towards lower temperature gradients. This is because in the struggle between the Reynolds and diamagnetic
stresses that determines this transition in an electrostatic plasma [1], the Maxwell stress brought in by the Alfvenic activity associated with higher beta opposes the Reynolds stress and tips the
system towards higher-transport states [2]. In the interchange-driven MHD turbulence, increasing beta (which in all such systems is equivalent also to increasing the parallel size of the system,
increasing the temperature gradient or the field-line curvature) also pushes the system from an electrostatic low-transport state to an electromagnetic (very)-high-transport one — this is perhaps the
simplest conceivable system exhibiting the Dimits transition [3] (simpler even than [1]). It is shown however that when 2D interchange motions are forbidden, the low-beta electrostatic state can be
replaced by an electromagnetic one with very high transport and some rather strange properties (while it is interesting theoretically, as well as perhaps in some analogous astrophysical systems, the
relevance of this last result to fusion devices is left to post-seminar discussion). Overall, the takeaway message is that electromagnetic turbulence in fusion plasmas is a physics-rich subject that,
while challenging practical modellers, should also excite theoreticians.
[1] P. Ivanov et al. JPP 86, 855860502 (2020) & JPP 88, 905880506 (2022)
A beam of runaway electrons (REs) within a cold, resistive background plasma can be unstable to resistive hose modes. Helical deflection of the current centroid of the beam away from its initial
position is imperfectly canceled by eddy currents induced in the cold background plasma. The resulting macro-scale, over-stable, kink-like oscillations grow on a time scale determined by the
resistivity of the cold plasma. Using a fluid model for the RE beam, we find that these high frequency modes are faster growing than resistive MHD instabilities when resistivity is large [Phys.
Plasmas 31, 010701 (2024)]. Initial value calculations using the fluid RE model implemented in the extended MHD code NIMROD are compared to a 1D ODE eigenvalue code, and a semi-analytic dispersion
relation for a uniform beam-current equilibrium. Nonlinear simulations of the hose mode are presented, and the consequences for RE confinement in post-disruption tokamaks, where the background thermal plasma is cold and the resistivity is large, are discussed.
The Hasegawa-Mima equation is a local 2-D model for drift-wave turbulence in a magnetized plasma possessing a density gradient. Steady zonal flows with sinusoidal variation are known to be unstable
to Kelvin-Helmholtz instabilities when u₀ / V* > λ² / Lₙ², where u₀ is the amplitude of the flow, V* is the electron diamagnetic drift velocity, λ is the wavelength of the flow, and Lₙ is the density
gradient scale length [Zhu et al., Phys. Plasmas 25, 082121 (2018)]. With the addition of viscosity, we find that the parameter range for linear instability is modified and that an additional unstable mode exists at very low kᵧ. The growth rate of the viscous instability is shown to scale as kᵧ² at low kᵧ, and its frequency corresponds to the drift-wave frequency ω*.
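The inviscid threshold and the quadratic low-kᵧ scaling quoted above can be encoded directly; the prefactor in the growth-rate scaling is a placeholder, not the coefficient derived in the cited work:

```python
def kh_unstable(u0, v_star, lam, L_n):
    """Inviscid Kelvin-Helmholtz criterion for a sinusoidal zonal flow in the
    Hasegawa-Mima model: unstable when u0 / V* > lambda**2 / L_n**2."""
    return u0 / v_star > (lam / L_n) ** 2

def growth_rate_low_ky(ky, C=1.0):
    """Low-k_y viscous growth-rate scaling gamma ~ k_y**2; C is a placeholder
    prefactor standing in for the actual coefficient."""
    return C * ky ** 2

unstable = kh_unstable(2.0, 1.0, 1.0, 1.0)   # strong flow: True
```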
Accretion is a fundamental cosmic process involved in phenomena ranging from planet formation to X-ray binaries and the generation of jets from protostars and black holes. Evidence suggests that the magnetorotational instability (MRI) drives the turbulence that transports angular momentum outwards, allowing gas to spiral inwards in a disk. MRI-induced turbulence depends on coherent magnetic
fields, which are susceptible to dissipation unless maintained by a dynamo. The fundamental processes sustaining MRI turbulence and dynamos remain elusive due to the historical separation of
mean-field dynamo and angular momentum transport problems.
We investigate MRI turbulence and dynamo phenomena using direct statistical simulations in a zero net-flux, unstratified shearing box. Our approach begins with the development of a unified mean-field
model that integrates the traditionally decoupled problems of large-scale dynamo action and angular momentum transport in accretion disks. The model includes a hierarchical set of equations,
capturing up to second-order correlators, with a statistical closure approximation used for third-order correlators. We emphasize the complex web of interactions connecting various components of the
stress tensors—Maxwell, Reynolds, and Faraday—through shear, rotation, correlators associated with mean fields, and nonlinear terms. By identifying the dominant interactions, we pinpoint the key
mechanisms essential for the generation and maintenance of MRI turbulence. Our general mean-field model for MRI turbulence allows for a self-consistent formulation of the electromotive force,
accounting for both inhomogeneities and anisotropies. In the context of large-scale magnetic field generation, we identify two critical mechanisms: the rotation-shear-current effect, which generates
radial magnetic fields, and the rotation-shear-vorticity effect, responsible for vertical magnetic fields. We provide explicit, non-perturbative expressions for the transport coefficients associated
with these dynamo effects. Importantly, both mechanisms rely on the inherent presence of a large-scale vorticity dynamo within MRI turbulence. Lastly, I will discuss the associated magnetic helicity in
such unstratified turbulent disks.
Microtearing instability is one of the major sources of turbulent transport in high-β tokamaks. These modes lead to very localized transport at low-order rational magnetic field lines, and we show
that flattening of the local electron temperature gradient at these rational surfaces plays an important role in setting the saturated flux level in microtearing turbulence. This process depends
crucially on the density of rational surfaces, and thus on the system size, and gives rise to a worse-than-gyro-Bohm transport scaling for system sizes typical of existing tokamaks and simulations.
Entropy is a useful fluid quantity in many quasi-equilibrium systems of physics, chemistry, and information theory: it represents a thermodynamic state, quantifies heat exchange, and sets the preferential direction of irreversibility. Although the entropy concept is useful for analyzing equilibria (e.g. magnetic equilibria), its meaning for non-equilibrium, open, and highly nonlinear systems such as kinetic fusion plasmas has remained unclear. In this presentation, we demonstrate the possibility of using the entropy concept to interpret nonlinear unsteady systems through two important examples from fusion theory: (1) zonal-flow saturation of trapped electron mode (TEM) turbulence, and (2) frequency chirping in the Berk-Breizman (BB) model for
energetic particle driven instability.
Because entropy captures the energy exchange due to ohmic heating and the triad transfer in wave-particle interactions, entropy balance analysis is useful for tracing the energy flow of drift-wave turbulence between the drive from external constraints (temperature and density gradients), the electromagnetic waves, the plasma kinetic free energy, and the collisional dissipation, yielding the free-energy cascade as a function of the spatial wavevector. In this study, we quantitatively examine the contribution of the zonal flow to TEM saturation by implementing entropy balance diagnostics in CGYRO simulations. For a TEM case with a high ratio of the electron density gradient to the electron temperature gradient, the effective zonal-flow shearing rate (or the advection rate by the zonal flow) is much lower than the growth rate, so perpendicular diffusion is as important for saturation as the zonal flow, as shown by the entropy transfer evaluation. This could help quantify the roles of the zonal flow and perpendicular diffusion and improve the existing saturation models of reduced models (e.g. TGLF) for the TEM.
The BB model is widely used to interpret experimental observations of periodic or frequency-chirping phenomena caused by non-disruptive MHD instabilities driven by energetic particles. If the bump-on-tail instability is saturated by an external sink with weak collisions, a free-energy balance equation can be obtained by defining a new effective temperature and using the quasilinear part of the entropy production. The modified free-energy balance remains useful for understanding the nonlinear phenomena. In the chirping case, we find that the BB-model simulation results for the chirping frequency and the shape of the kinetic distribution agree with an analysis that maximizes the entropy of the background distribution function under the constraints of the external dissipation and Ampere’s law. This implies that equipartition through entropy increase still holds sufficiently well for nonlinear unsteady processes with a small Kubo number.
Stellarator magnetic fields must be optimized to achieve the confinement quality required for fusion reactors. Specifically, in order to exhibit radial neoclassical transport at low collisionality as
small as tokamaks, stellarators need to be approximately omnigenous [1]. A magnetic field is omnigenous if, for all particles, the radial component of the drift velocity averages out to zero over the
lowest-order orbits. Although radial neoclassical transport has typically been minimized indirectly via figures of merit such as the effective ripple [2], recent developments have made it possible to carry out this minimization using accurate calculations of radial neoclassical transport at low collisionality [3]. However, neoclassical transport across flux surfaces is not the only transport channel important for stellarator optimization. In general, parallel transport produces a net electric current, known as the bootstrap current, that can modify the magnetic field and therefore needs to be evaluated during the optimization process. This is particularly important if the goal is the design of approximately quasi-isodynamic fields (quasi-isodynamicity is the concept on which the magnetic configuration of Wendelstein 7-X is based), a subclass of approximately omnigenous fields that have the additional property of producing a small bootstrap current [4] and are compatible with island divertors. Until now, however, precise calculations of the bootstrap current have been too slow to be included in a stellarator optimization loop.
In this seminar I will present MONKES (MONoenergetic Kinetic Equation Solver) [5], a new neoclassical code that solves the same monoenergetic drift-kinetic equation as DKES [6], the workhorse of
neoclassical transport calculations in stellarators. MONKES was conceived, among other things, to satisfy the need for fast and accurate calculations of the bootstrap current. By exploiting the
tridiagonal structure of the drift-kinetic equation in a Legendre basis, it is possible to obtain accurate results for all monoenergetic transport coefficients (i.e. those giving radial as well as
parallel transport) at low collisionality using a single core in approximately one minute. These features make MONKES ideal for its inclusion in stellarator optimization suites for direct
optimization of the bootstrap current. Apart from optimization, MONKES can be used for the analysis of experimental discharges and be integrated into predictive transport frameworks.
[1] J. R. Cary and S. G. Shasharina, Phys. Plasmas 4, 3323 (1997).
[2] V. V. Nemov et al., Phys. Plasmas 6, 4622 (1999).
[3] J. L. Velasco et al., J. Comput. Phys. 418, 109512 (2020).
[4] P. Helander and J. Nührenberg, Plasma Phys. Control. Fusion 51, 055004 (2009).
[5] F. J. Escoto et al., Nucl. Fusion 64, 076030 (2024).
[6] S. P. Hirshman et al., Phys. Fluids 29, 2951 (1986).
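The key structural point above, that coupling only neighboring Legendre modes l-1, l, l+1 yields a block-tridiagonal system amenable to fast direct elimination, can be illustrated with a generic block-Thomas solve. This sketch shows only the linear-algebra idea and is not the MONKES algorithm itself:

```python
import numpy as np

def block_thomas(L, D, U, b):
    """Solve a block-tridiagonal system, as arises when an operator couples
    Legendre modes l-1, l, l+1. L[k], D[k], U[k] are the lower/diagonal/upper
    blocks of row k (L[0] and U[-1] are unused); b[k] is the right-hand side.
    Illustrative sketch only."""
    n = len(D)
    Dp = [None] * n
    bp = [None] * n
    Dp[0], bp[0] = D[0].copy(), b[0].copy()
    for k in range(1, n):                       # forward elimination
        m = L[k] @ np.linalg.inv(Dp[k - 1])
        Dp[k] = D[k] - m @ U[k - 1]
        bp[k] = b[k] - m @ bp[k - 1]
    x = [None] * n
    x[-1] = np.linalg.solve(Dp[-1], bp[-1])
    for k in range(n - 2, -1, -1):              # back substitution
        x[k] = np.linalg.solve(Dp[k], bp[k] - U[k] @ x[k + 1])
    return np.array(x)
```

The elimination touches each block row once, so the cost grows linearly with the number of Legendre modes instead of cubically with the full system size.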
Understanding the impact of tungsten on tokamak performance and its mitigation by low-Z impurities is key to the success of ITER operations. A novel self-consistent gyrokinetic model, integrated into
the XGC code [1,2], has been developed to comprehensively analyze tungsten transport and radiation. In this model, the tungsten ions are represented by a few bundles, with their fractional abundance
determined by the atomic balance between ionization and recombination processes derived from ADAS rates [3]. The electron cooling by tungsten radiation is also derived from ADAS rates. This new model
enables the study of tungsten radiation, transport dynamics, and interactions with low-Z species. Two complementary tungsten studies will be presented. First, we study the impact of nitrogen on the
collisional and turbulent peaking factors of tungsten in a WEST plasma. Analysis of tungsten radiation with synthetic diagnostics will be discussed. Second, we explore the intricate interplay between
collisional and turbulent transport in an H-mode plasma scenario of ASDEX Upgrade.
The Columbia Stellarator eXperiment (CSX), currently in the design phase at Columbia University, is focused on investigating quasi-axisymmetric plasma with a small aspect ratio, and on validating
recent developments in stellarator technology, theory, and optimization techniques. It is designed to test some of the theoretical predictions of quasi-axisymmetric plasmas, in particular plasma flow
damping, MHD stability properties, and the study of trapped particle confinement. The magnetic field is generated by a set of two circular and planar poloidal field coils (PF coils) alongside two
shaped interlinked coils (IL coils), with the potential consideration of additional coils to enhance shaping or experimental flexibility. The PF coils and vacuum vessel are repurposed from the former
Columbia Non-Neutral Torus (CNT) experiment [1]. The two IL coils will be wound "in-house" at Columbia University, using non-insulated High-Temperature Superconducting (HTS) tapes. These coils
undergo shape and strain optimization to produce the desired plasma configuration while adhering to numerous engineering constraints. Discovering a plasma shape that aligns with the physics
objectives and can be produced by such a restricted number of coils poses a significant challenge. Indeed, the constrained coil set’s limited capacity to produce varied plasma shapes hinders the
application of the traditional two-stage stellarator optimization approach. Instead, novel single-stage optimization techniques are employed, where plasma and coils are optimized concurrently.
Despite an increased problem complexity due to the larger number of degrees of freedom, these methods find optimized plasma shapes that can be generated by coils that satisfy engineering constraints.
We discuss two single-stage optimization methodologies [2, 3, 4]. We explore their application to the CSX experiment’s design, aiming to identify configurations that fulfill engineering constraints
and generate a plasma within a desired regime for the experiment’s physics objectives.
[1] Pedersen, T. S. et al. (2006). Construction and Initial Operation of the Columbia Nonneutral Torus. Fusion Science and Technology 50 (3), 372-381
[2] Jorge, R. et al. (2023). Single-stage stellarator optimization: combining coils with fixed boundary equilibria. Plasma Physics and Controlled Fusion 67 (7), 074003
[3] Giuliani, A. et al. (2022). Direct computation of magnetic surfaces in Boozer coordinates and coil optimization for quasisymmetry. Journal of Plasma Physics 88 (4), 905880401
[4] Giuliani, A. et al. (2023). Direct stellarator coil design using global optimization: application to a comprehensive exploration of quasi-axisymmetric devices. arXiv:2310.19097
New three-dimensional plasma structures, which are oscillatory and classified as non-separable ballooning modes, can emerge in inhomogeneous plasmas and undergo resonant mode-particle interactions, e.g., with a minority population, that can lead them to modify their spatial profiles. Thus, unlike previously known ballooning modes, their amplitudes are not separable functions of space and
time. The relevant resonance conditions are intrinsically different from those of the well-known Landau conditions for (ordinary) plasma waves: they involve the mode geometry and affect different
regions of the distribution in momentum space at different positions in configuration space. The novel resonant mode-particle interactions constitute a direct (linear) process to exchange energy
between different populations without the inefficiencies of nonlinear coupling processes. The new ballooning modes are relevant to circumbinary disks associated with pairs of black holes and to
fusion burning plasmas that include an initially thermal population of fusion reacting nuclei and a population of high energy nuclei (reaction products). It is reasonable to expect that the
distributions of the reacting nuclei in momentum space will not remain strictly Maxwellian and that the resulting reaction rates will be different from those evaluated for (conventional) thermalized distributions.
In this talk we summarize some of our recent work of interest to PPPL researchers, in two parts:
Part I: First implementation of gyrokinetic exact linearized Landau collision operator and comparison with models. Previous gyrokinetic simulations have used model collision operators with
approximate field-particle terms of unknown accuracy and/or have neglected collisional finite Larmor radius effects. This work demonstrates significant corrections using the first formulation [1, 2]
and implementation [3, 4] of the gyrokinetic Fokker-Planck-Landau collision operator with the exact linearized field-particle terms. Realistic nonlinear gyrokinetic simulations of fusion plasma
turbulence show significant corrections relative to the Sugama model collision operator for temperature-gradient-driven trapped electron mode turbulence and zonal flow damping, and for microtearing
modes (the exact operator is now released in GENE-3.0). Future work will extend novel spectral methods implemented for the drift-kinetic operator [5] to the gyrokinetic operator.
Part II: Broadening of the Divertor Heat Flux Profile in DIII-D QH-Modes, Matched by XGC. Multi-machine empirical scaling predicts an extremely narrow heat exhaust layer in future high magnetic field
tokamaks, producing high power densities that require mitigation. In the experiments presented [6], the width of this exhaust layer is nearly doubled using actuators to increase turbulent transport
in the plasma edge. This is achieved in low collisionality, high confinement edge pedestals with their gradients limited by turbulent transport instead of ELMs or low-n MHD modes. The exhaust heat
flux profile width and divertor leg diffusive spreading both double as a high frequency band of (TEM) turbulent fluctuations propagating in the electron diamagnetic direction doubles in amplitude.
The results are quantitatively reproduced in electromagnetic XGC particle-in-cell simulations which show the heat flux carried by electrons emerges to broaden the heat flux profile, directly
supported by Langmuir probe and infra-red imaging measurements.
[1] B. Li and D. R. Ernst, Phys. Rev. Lett. 106(19) 195002 (2011). https://doi.org/10.1103/PhysRevLett.106.195002
[2] Q. Pan and D. R. Ernst, Phys. Rev. E 99, 023201 (2019). https://doi.org/10.1103/PhysRevE.99.023201
[3] Q. Pan, D. R. Ernst, and D. Hatch, Phys. Rev. E Lett. 103, L051202 (2021). https://doi.org/10.1103/PhysRevE.103.L051202
[4] Q. Pan, D. R. Ernst and P. Crandall, Physics of Plasmas 27, 042307 (2020). https://doi.org/10.1063/1.5143374
[5] M. Landreman and D. R. Ernst, J. Comput. Phys. 243, 130 (2013). https://doi.org/10.1016/j.jcp.2013.02.041
[6] D. R. Ernst, A. Bortolon, C. S. Chang, S. Ku et al., Phys. Rev. Lett. (2024) accepted for publication. https://arxiv.org/abs/2403.00185
Particle methods for kinetic simulations are numerically stable, easy to use, and convenient for irregular geometries. Their main challenge is stochastic noise, which can make simulations unaffordable in low-speed problems (i.e., at low signal-to-noise ratio) as well as in transient problems where time-averaging schemes are invalid. Many efforts have been made to reduce noise by modifying the traditional direct simulation Monte Carlo (DSMC) method for solving Boltzmann-like equations. In this presentation, I will introduce the direct simulation BGK (DSBGK) method, which solves BGK-like equations as good approximations to the Boltzmann equation in many problems. Combining aspects of the DSMC method and the lattice Boltzmann method (LBM), the DSBGK method adopts a large number of simulated particles to represent the distribution function in phase space, as in the DSMC method but unlike the LBM, while it updates the variables of each particle by integrating the kinetic equation along the corresponding trajectory, as in the LBM but unlike the DSMC method. The increments of the particles’ variables inside each cell during each time step are obtained from this integration, and the corresponding summations are used to regulate (not recompute) the macroscopic variables of the cell according to the mass, momentum, and energy conservation laws. The previous values of the cell’s variables are kept as anchors in this auto-regulation scheme, significantly reducing the noise associated with the particles’ random movements
into and out of each cell. Simulation results in several problems will be presented to show the noise reduction as well as the accuracy validation. Performance comparison with other particle and
deterministic methods will be discussed.
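The auto-regulation idea described above, keeping the previous cell value as an anchor and applying only the summed particle increments rather than recomputing from scratch, can be sketched as follows (hypothetical variable names; a schematic, not the DSBGK implementation itself):

```python
import numpy as np

def recompute(weights):
    """DSMC-style estimate: the cell value is recomputed from the particles
    currently inside the cell, so it inherits the noise of the fluctuating
    particle population."""
    return weights.sum()

def regulate(n_prev, d_weights):
    """DSBGK-style auto-regulation (schematic): the previous cell value n_prev
    is kept as an anchor and only the summed per-particle increments are
    applied, filtering the noise of particles randomly crossing cell faces."""
    return n_prev + d_weights.sum()

n_new = regulate(1.0, np.array([0.02, -0.01, 0.005]))
```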
Resonant interactions between runaway electrons (REs) and whistler waves in a tokamak may lead to pitch angle scattering of the REs. An increase in RE pitch angles may give rise to the energy
dissipation of the runaways via synchrotron radiation. DIII-D experiments on whistler waves have indicated a possibility of intentionally launching whistler waves to mitigate the deleterious effects
of REs on the plasma facing components via resonant interactions with whistlers [1,2]. In the present work, we use the coupled KORC-AORSA model to numerically analyze the complex nature of the
interactions between whistler waves and runaway electrons in DIII-D. In this framework, we follow full orbit trajectories of large RE ensembles using the Kinetic Orbit Runaway Electron (KORC) code in
the presence of whistler wave fields calculated by All Orders Spectral Algorithm (AORSA) code in a DIII-D experimental equilibrium. The nature of RE transport (diffusive/non-diffusive) [3] is
analyzed in the presence of whistler fields and the impact of whistler field amplitudes and frequencies is observed on the pitch angle scattering of REs. Our findings indicate a significant increase
in RE energy and scattering of the runaways to large pitch angles for whistler fields exceeding a threshold amplitude. The coupled KORC-AORSA simulation model can further be used to gain physical insights into tokamak experiments on whistler wave-RE interactions.
[1] D. A. Spong et al., Phys. Rev. Lett., 120, 155002 (2018).
[2] W. W. Heidbrink et al., Plasma Phys. Control. Fusion, 61, 14007 (2019).
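The wave-particle interaction underlying this pitch-angle scattering is the standard relativistic cyclotron resonance condition (stated here in its textbook form for context, not taken from [1] or [2]):

```latex
\omega - k_\parallel v_\parallel = \frac{n\,\Omega_{ce}}{\gamma}, \qquad n \in \mathbb{Z},
```

where $\omega$ and $k_\parallel$ are the whistler frequency and parallel wavenumber, $v_\parallel$ the electron parallel velocity, $\Omega_{ce}$ the electron cyclotron frequency, and $\gamma$ the relativistic factor; REs satisfying this condition at some harmonic $n$ exchange energy and pitch with the wave.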
A comprehensive understanding of electromagnetic effects on the microinstability properties of tokamak plasmas is becoming increasingly important as experimental values of the plasma beta and,
therefore, electromagnetic fluctuations will be higher in reactor-relevant tokamak scenarios. Despite significant numerical progress in understanding the behaviour of instabilities such as the
micro-tearing mode (MTM) or kinetic ballooning mode (KBM), there is still a lack of clarity about the fundamental physical processes that are responsible for them, owing to the complexity of full
toroidal geometry. Constructing simplified models offers a path towards distilling the fundamental physical ingredients behind electromagnetic destabilisation. This talk focuses on electromagnetic
instabilities driven by the electron-temperature gradient (ETG) in a local 'toy' model of a tokamak-like plasma. The model has constant equilibrium gradients (including magnetic drifts, but no
magnetic shear) and is derived in a low-beta asymptotic limit of gyrokinetics. A new instability is shown to exist in the electromagnetic regime, the so-called 'thermo-Alfvénic instability' (TAI),
whose physical mechanism hinges on a competition between diamagnetic drifts (due to the ETG) and rapid parallel streaming along perturbed field lines. Using linear gyrokinetic simulations, the TAI's
presence is confirmed in slab geometry. The mapping of the TAI onto a more realistic tokamak equilibrium is considered, demonstrating that it survives aspects of the transition to toroidicity. A
comparison is then drawn with the properties of the MTM and KBM, contextualising the TAI within the wider 'zoo' of electromagnetic instabilities commonly observed in tokamak simulations.
High energy particle resonances play an important role in particle confinement in toroidal fusion devices, both tokamaks and stellarators. In stellarators a resonance that matches the periodicity of
the equilibrium field produces islands in particle orbits which increase in size with particle energy and can induce loss. As demonstrated in the Japanese stellarator LHD, the presence of a high
frequency resonance invariably gives rise to a strong Alfven mode that causes particle loss. In a nonsymmetric stellarator a resonance does not produce a well defined island structure in the orbits,
but typically scatters ten percent of orbits of all energies and pitches randomly, modifying mode growth and saturation properties. Avoiding the presence of high-energy particle resonances should be part of device design.
Recently, the numerical scheme presented in [1] enabled explicit gyrokinetic simulations of low-frequency electromagnetic instabilities in tokamaks at experimentally relevant values of plasma beta.
This scheme resolved the long-standing "cancellation problem" that previously hindered gyrokinetic particle-in-cell code simulations of electromagnetic phenomena with inherently small parallel
electric fields. Moreover, the scheme did not employ approximations that eliminate critical tearing-type instabilities. Here, we report on the implementation of this numerical scheme in the global
gyrokinetic particle-in-cell code GTS. Additionally, we present a comprehensive set of verification simulations of numerous electromagnetic instabilities relevant to present-day tokamaks. These
simulations encompass the kinetic ballooning mode (KBM), the internal kink mode, the tearing mode, the micro-tearing mode (MTM), and the toroidal Alfven eigenmode (TAE) destabilized by energetic ions,
which are all instrumental in understanding tokamak physics. We also showcase the preliminary nonlinear simulations of the kinetic ballooning instability and (2,1) island formation due to the tearing
mode instability. These simulations validate the accuracy of the scheme implementation and pave the way for studying how these instabilities affect plasma confinement and performance. [1] A.
Mishchenko, M. Cole, R. Kleiber, A. Konies, Phys. Plasmas 21 (2014) 052113.
Recently, a flurry of activity on isotope effects has taken place in JT-60U [1], JET [2] and DIII-D [3], showing a favorable confinement trend for heavier hydrogen isotopes. The consensus from these experimental observations was that this is an unsolved puzzle in tokamak plasmas. That is in fact not quite accurate. When the favorable effect was first observed on TFTR [4,5] in hydrogen, deuterium and tritium experiments, a theoretical attempt was indeed made by Lee and Santoro [6] to understand the results. Apparently, this paper has not attracted much attention in the community. Recently, a paper by Lee and White [7] on H-mode physics has also described the isotope effects at the H-mode pedestal. In this talk, the theoretical interpretations of these isotopic
effects based on 1) the resonance broadening theory [8] in the core as well as 2) the force balance equation for the pedestal from the gyrokinetic theory [7] will be described. The implementation of
the related physics in an initial value code such as GTC [9] and/or GTS [10] will also be discussed.
[1] H. Urano and E. Narita, Plasma Phys. Control. Fusion 63, 084003 (2021)
[2] L. Horvath, C. F. Maggi, A. Chankin et al., Nuclear Fusion 61, 046015 (2021)
[3] L. Schmitz, Phil. Trans. R. Soc. A381: 20210237 (2022)
[4] S. D. Scott, M. C. Zarnstorff, C. W. Barnes, R. Bell et al., Phys. Plasmas 2, 2299 (1995)
[5] S. D. Scott, G. W. Hammett, C. K. Phillips et al., IAEA-CN-64/A6-6 (1997)
[6] W. W. Lee and R. A. Santoro, Phys. Plasmas 4, 169 (1997)
[7] W. W. Lee and R. B. White, Phys. Plasmas 26, 040701 (2019)
[9] Z. Lin, T. S. Hahm, W. W. Lee et al., Science 281, 1835 (1998).
[10] W. X. Wang, Z. Lin, W. M. Tang, W. W. Lee et al., Phys. Plasmas 13, 092505 (2006)
First-principles gyrokinetic numerical experiments investigating the isotopic dependence of energy confinement achieve quantitative agreement with empirical experimental scaling, particularly in
Ohmic and L-mode tokamak plasmas. Mitigation of the turbulent radial electric field intensity |δEr|² and of the associated poloidal δE×B fluctuating velocity, with a radial correlation length
l_cr ∝ Mi^0.11 that strongly deviates from gyro-Bohm scaling, is identified as the principal mechanism behind the isotope effects. Three primary contributors are identified: the deviation from
gyro-Bohm scaling, zonal flows, and trapped-electron turbulence stabilization. Zonal flows enhance the isotope effects primarily by reinforcing the inverse dependence of the turbulence decorrelation
rate on isotope mass, ω_c ∝ Mi^-0.76, which markedly differs from the characteristic linear frequency. The findings offer insights into isotope effects, with critical implications for energy
confinement optimization in tokamak plasmas.
The generation of small-scale, mean, or large-scale magnetic fields in the cosmos and in astrophysical bodies is an important problem in astrophysical plasmas. A possible mechanism behind this
multi-scale magnetic energy growth is dynamo action. Shear flows [1] often coexist in astrophysical conditions, and the role of flow shear in the onset of dynamo action is only beginning to be
investigated. The investigation of the exponential growth of magnetic field caused by the interaction of small-scale velocity fluctuations with a flow shear is commonly referred to as the
“shear dynamo problem” [2]. Various laboratory experiments [3], as well as numerical studies, have been performed to understand these astrophysical scenarios in detail. According to conventional
understanding, a large-scale or mean-field dynamo requires a lack of reflectional symmetry (e.g., non-zero fluid or kinetic helicity), whereas a small-scale or fluctuation dynamo does not.
The role of fluid or kinetic helicity in the onset of dynamo action is therefore a sensible question to ask.
In the present work we have analyzed a kinematic dynamo model, i.e., a case wherein the magnetic field does not back-react on the velocity field, using a flow recently proposed by Yoshida and
Morrison (YM) [4]. An interesting and useful aspect of this flow is that finite fluid helicity can be injected into the system by systematically varying a certain physically meaningful parameter.
Using direct numerical simulation, we demonstrate that by systematically injecting finite fluid helicity, a systematic route emerges that connects the “non-dynamo” to the “dynamo” regime [5]. The
time-averaged magnetic energy spectrum is calculated for various magnitudes of injected fluid helicity, and it is observed that the spectra contain a visible maximum at a higher mode number, which
is the distinguishing feature of a small-scale dynamo (SSD) [5]. However, for a nonlinear or self-consistent dynamo model, the nonlinear effects begin to alter the flow (once the magnetic field is
large enough) and halt further growth of the magnetic field energy, i.e., the flow and magnetic field “back-react” on each other. The influence of helical and non-helical drive in such a nonlinear
or self-consistent dynamo model is shown to exhibit some crucial dynamics [6]. Evidence of small-scale dynamo (SSD) activity is found for both helical and non-helical drives [6]. The spectral analysis
shows that the kinetic energy evolution adheres to Kolmogorov’s k^−5/3 law, while the magnetic energy evolution follows Kazantsev’s k^3/2 scaling. These scalings are observed to be valid for a range
of magnetic Prandtl numbers (Pm) [6]. We have performed these studies using an in-house-developed, multi-node, multi-card, GPU-based, weakly compressible 3D magnetohydrodynamic solver (GMHD3D)
[7, 8]. Details of this study will be presented.
[1] S. Biswas & R. Ganesh, Phys. Fluids 34, 065101 (2022).
[2] S. Biswas & R. Ganesh, Phys. Plasmas 30, 112902 (2023).
[3] R. Monchaux, M. Berhanu, et al., Phys. Rev. Lett. 98, 044502 (2007).
[4] Z. Yoshida & P. J. Morrison, Phys. Rev. Lett. 119, 244501 (2017).
[5] S. Biswas & R. Ganesh, Physica Scripta, Volume 98, Number 7.
[6] S. Biswas & R. Ganesh, Manuscript under Preparation (2024).
[7] S. Biswas, R. Ganesh et al., GPU Technology Conference (GTC-2022).
[8] S. Biswas & R. Ganesh, Computers and Fluids 272 (2024) 106207.
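As a concrete illustration of how such spectral exponents are extracted from simulation data, the sketch below fits power-law slopes to synthetic kinetic and magnetic energy spectra. The spectra here are illustrative stand-ins, not GMHD3D output; the function name, amplitudes and wavenumber range are assumptions.

```python
import numpy as np

def fit_spectral_slope(k, E):
    """Least-squares power-law exponent alpha from a log-log fit, E ~ k**alpha."""
    slope, _intercept = np.polyfit(np.log(k), np.log(E), 1)
    return slope

# Synthetic spectra standing in for simulation output (illustrative only).
k = np.logspace(0.0, 2.0, 200)
E_kin = 3.0 * k ** (-5.0 / 3.0)   # Kolmogorov k^-5/3 inertial-range scaling
E_mag = 0.1 * k ** (3.0 / 2.0)    # Kazantsev k^3/2 small-scale-dynamo scaling

print(fit_spectral_slope(k, E_kin))   # close to -5/3
print(fit_spectral_slope(k, E_mag))   # close to 3/2
```

In practice the fit would be restricted to the inertial range of a time-averaged spectrum rather than applied to the whole wavenumber axis.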
Future devices like ITER will have limited capacity to drive toroidal rotation, increasing the risk of instabilities like resistive wall modes. Fortunately, many experiments have found that tokamak
plasmas rotate “intrinsically”, that is, without applied torque. The modulated-transport model shows that such rotation may be caused by the interaction of ion drift-orbit excursions with the strong
spatial variation of the turbulent momentum diffusivity [1]. The model predicts intriguing qualitative behavior, such as a strong dependence of edge intrinsic toroidal rotation on the major-radial
position of the X-point, which was subsequently measured on TCV [2]. The model has also been experimentally validated through further dedicated tests [3, 4], as well as via application in the new
European whole-device transport model IMEP [5]. However, certain applications will require a relaxation of the underlying assumptions. In particular, the original model required the turbulent
momentum diffusivity to decay exponentially in the radial direction, while experiments often exhibit a more complicated variation. In this work, we generalize the modulated-transport model to allow
the turbulent momentum diffusivity to depend on space in an axisymmetric but otherwise arbitrary way. To enable this generality, we assume that the normalized diffusivity is weak, roughly equivalent
to assuming that the pedestal-top ion transit time is short compared to the transport time across the pedestal, a condition that is almost always met for experimental applications. Given the
increased flexibility, along with a technically much easier calculation, the new approach may serve as a basis for future extensions, including shaped geometry and trapped particles as well as the
retention of momentum transport by neutrals.
[1] T. Stoltzfus-Dueck, Phys. Rev. Lett. 108, 065002 (2012).
[2] T. Stoltzfus-Dueck et al., Phys. Rev. Lett. 114, 245001 (2015).
[3] J. A. Boedo et al., Phys. Plasmas 23, 092506 (2016).
[4] A. Ashourvan, B. A. Grierson, D. J. Battaglia, S. R. Haskey, and T. Stoltzfus-Dueck, Phys. Plasmas 25, 056114 (2018).
[5] T. Luda et al., Nucl. Fusion 61, 126048 (2021).
Axisymmetric modes in elongated plasmas are normally associated with a well-known ideal instability resulting in a vertical shift of the whole plasma column. This vertical instability is stabilized
by means of passive feedback consisting of eddy currents induced by the plasma motion in a nearby wall and/or in plasma-facing components. When a thin resistive wall is considered, the n=0 mode
dispersion relation can be studied analytically with reduced ideal MHD models and is cubic. Under relevant conditions, two roots are oscillatory and weakly damped. These oscillatory modes present
Alfvénic frequencies and depend on plasma elongation and on the relative position of the plasma boundary and of the wall. The third root is unstable and represents the so-called resistive wall
mode (RWM) [1]. We focus on the two oscillatory modes, dubbed Vertical Displacement Oscillatory Modes (VDOM), that can be driven unstable due to their resonant interaction with energetic ions.
The fast-ion drive, involving MeV ions in present-day tokamak experiments such as JET, may overcome dissipative and resistive wall damping, setting an instability threshold, as described in Ref.
[2]. The effects of energetic particles are added within the framework of the hybrid kinetic-MHD model. An energetic ion distribution function with ∂F/∂E > 0 is required to drive the instability,
achievable with pitch angle anisotropy or with an isotropic distribution in velocity space with regions of positive slope as a function of energy. The latter situation can be achieved by considering
losses of fast ions or due to fast ion source modulation [3-4]. The theory presented here is partly motivated by the observation of saturated n=0 fluctuations reported in [4,5], which were initially
interpreted in terms of a saturated n=0 Global Alfvén Eigenmode (GAE). Modeling of recent JET discharges using the NIMROD [6] extended-MHD code will be presented, focusing on mode structure and
frequency dependence. It is too early to conclude whether the mode observed at JET is a VDOM rather than a GAE; nevertheless, we discuss the main points of distinction between GAE and VDOM that
may facilitate their experimental identification.
[1] T. Barberis, et al. 2022 J. Plasma Phys. 88, 905880511
[2] T. Barberis, et al 2022 Nucl. Fusion 62 06400
[3] Ya.I. Kolesnichenko and V.V. Lutsenko 2019 Nucl. Fusion 59 126005
[4] V. G. Kiptily et al 2022 Plasma Phys. Control. Fusion 64 064001
[5] H. J. C. Oliver et al. 2017 Phys. Plasmas 24, 122505
[6] C. Sovinec et al. and the NIMROD Team 2004 J. Comp. Phys. 195 355
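The three-root structure of the cubic n=0 dispersion relation discussed above can be illustrated numerically: two weakly damped oscillatory roots (the VDOM branch) and one slowly growing real root (the RWM). The root values below are hypothetical, chosen only to exhibit this structure, not derived from an actual equilibrium.

```python
import numpy as np

# Hypothetical roots: gamma = -nu +/- i*omega_A (damped VDOM pair) and a
# small positive real growth rate (RWM). Build the real cubic, then recover
# and classify its roots as an analysis code would.
vdom_pair = [-0.05 + 1.0j, -0.05 - 1.0j]
rwm_root = [0.02]
coeffs = np.poly(vdom_pair + rwm_root).real   # real cubic in the growth rate

gammas = np.roots(coeffs)
oscillatory = [g for g in gammas if abs(g.imag) > 1e-9]            # VDOM branch
unstable = [g for g in gammas if abs(g.imag) <= 1e-9 and g.real > 0]  # RWM
print(oscillatory, unstable)
```

With kinetic fast-ion drive added, the damped oscillatory pair can cross into instability while remaining distinguishable from the RWM by its finite real frequency.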
We discuss recent progress in understanding the role of transport physics in density limit phenomena. Our approach combines theory and experiment. Contrary to the conventional wisdom
that the density limit is enforced by MHD instability, our findings indicate that the L-mode density limit is associated first with the degradation of the edge E×B shear layer. The latter occurs for
k∥² v_the² / (ω ν_ei) < 1. Shear-layer decay leads to strongly enhanced turbulence spreading and increased production of density 'blobs'. Interestingly, the spreading flux increases more rapidly with
increasing n/n_G than does the particle flux. Shear-layer decay is linked to a decline in zonal flow production.
A simple model for flow, fluctuation and density evolution reveals that the edge density will increase with edge heat flux (power). This favorable trend results from increased Reynolds stress flow
drive at higher power. It provides physical insight into the power scaling of the density limit now observed in experiments. A scaling of n ∼ P^(1/3) is suggested for the case of ITG turbulence.
We briefly discuss recent density limit experiments in negative triangularity plasmas, as well as aspects of the H−mode density limit phenomenon. Implications for burning plasma are discussed.
Contributions from Ting Long, SWIP; Rameswar Singh, UCSD; Rongjie Hong, UCLA and DIII-D; Zheng Yan, Univ. Wisconsin and DIII-D; and George Tynan, UCSD are acknowledged.
Starting from the assumption that saturation of plasma turbulence driven by temperature-gradient instabilities in fusion plasmas is achieved by a local energy cascade between a long-wavelength outer
scale, where energy is injected into the fluctuations, and a small-wavelength dissipation scale, where fluctuation energy is thermalized by particle collisions, we formulate a detailed
phenomenological theory for the influence of perpendicular flow shear on magnetized-plasma turbulence. Our theory introduces two distinct regimes, called the weakly and strongly sheared regimes, each
with its own set of scaling laws for the scale and amplitude of the fluctuations and for the level of turbulent heat transport. We discover that the ratio of the typical radial and poloidal
wavenumbers of the fluctuations, i.e., their aspect ratio, plays a central role in determining the dependence of the turbulent transport on the imposed flow shear. Our theoretical predictions are
found to be in excellent agreement with numerical simulations of two models of magnetized plasma turbulence: (i) an electrostatic fluid model of slab electron-scale turbulence, and (ii)
Cyclone-base-case gyrokinetic ion-scale turbulence.
We present the first local delta-f nonlinear gyrokinetic (GK) simulations based on a gyro-moment (GM) approach, which exploits the projection of the distribution functions onto a Hermite-Laguerre
velocity-space basis. We first demonstrate that, in contrast to gyrofluid models, the GM approach reproduces the Dimits shift, notably with a coarser velocity-space resolution than the continuum GK
GENE code. In addition, we reveal that the choice of collision operator model (Dougherty, Sugama, Lorentz and Landau) significantly impacts the level of turbulent transport through multi-species
zonal flow damping.
In addition, we show for the first time that the GM approach is able to bridge the gap between GK and reduced fluid modelling through its exact equivalence to the model of Ivanov et al. (2020) when
considering the same limits. Leveraging its efficiency and multi-fidelity capability, we finally use the GM approach to explore the impact of triangularity in realistic DIII-D edge conditions across
a range of models, spanning from GK electron-ion multi-scale simulations to the reduced fluid limit.
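The core idea of the gyro-moment projection can be sketched in a reduced setting: expanding a shifted Maxwellian on a 1D probabilists' Hermite basis, for which the coefficients are known analytically to be u^n/n!. This is a minimal illustration of the velocity-space projection, not the Hermite-Laguerre machinery of the actual GM code; the flow value and quadrature size are assumptions.

```python
import numpy as np
from numpy.polynomial import hermite_e as He
from math import factorial, sqrt, pi

# Shifted Maxwellian f(v) = F_M(v) * exp(u v - u^2/2); its Hermite
# coefficients are c_n = u^n / n! (generating-function identity).
u = 0.3                    # flow velocity in thermal units (assumed)
x, w = He.hermegauss(64)   # Gauss-Hermite nodes/weights, weight exp(-x^2/2)

def gyro_moment(n):
    """n-th Hermite coefficient of the shifted Maxwellian via quadrature."""
    g = np.exp(u * x - 0.5 * u ** 2)       # f / F_M evaluated at the nodes
    He_n = He.hermeval(x, [0] * n + [1])   # He_n(x)
    return np.sum(w * g * He_n) / (factorial(n) * sqrt(2.0 * pi))

print(gyro_moment(1))   # close to u = 0.3
```

The rapid decay of c_n = u^n/n! for small u is the reason a truncated moment hierarchy can be accurate with few moments, mirroring the coarse velocity-space resolution noted above.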
Turbulence is one of the key ingredients in shaping H-mode pedestals. Identifying the relevant turbulent transport mechanisms in a pedestal, however, is a great scientific and numerical challenge.
Here, we address this challenge with global, nonlinear gyrokinetic simulations of two pedestals: one from ASDEX Upgrade (Type-I ELMy H-mode) and one from JET (hybrid-scenario H-mode). The global
simulations permit the calculation of heat fluxes due to ion-scale turbulence in the steep-gradient region, encompassing the full pedestal from top to foot. They are supported by detailed characterizations
of gyrokinetic instabilities via local, linear simulations at pedestal top, center and foot as well as dedicated nonlinear electron-scale heat flux calculations. Simulations are performed with the
gyrokinetic, Eulerian, delta-f code GENE (genecode.org) and employ a new code upgrade of its global, electromagnetic model that enables stable simulations at experimental plasma beta values.
In both investigated pedestals from AUG and JET, we find turbulent transport to have a complex radial structure that is multi-scale and multi-channel. Electron transport in the AUG pedestal is found
to transition in scale. At the pedestal top, ion-scale TEM/MTM instabilities fuel electron transport, whereas in the pedestal center electron-scale ETG transport takes over. Turbulent ion heat flux is
present at the pedestal top and strongly reduces towards the steep gradient region. Magnetic shear is found to locally contribute to the stabilization of microinstabilities and reduction of heat
flux. In the JET pedestal, transport due to ITG is found to play a much more important role, particularly at the pedestal top/outer core. In both pedestals, E×B shear is confirmed to strongly reduce
heat fluxes in the global, nonlinear simulations. We discuss implications of our results for the applicability of quasi-linear transport models in the pedestal.
A validated and predictive first-principles scaling of the operational density limit is presented. The scaling is based on consideration of plasma turbulence in the tokamak boundary supported by the
results of first-principles simulations. The scaling is validated against a multimachine database that includes results from the AUG, JET and TCV tokamaks. By revealing a dependence of the
operational density limit on the power crossing the separatrix, the result we obtain has consequences for ITER operation and the design of future fusion reactors.
Understanding the formation of large-scale structures in weakly magnetized plasmas represents a crucial step towards developing predictive design capabilities for E×B devices dedicated to
investigating fundamental plasma physics phenomena. MISTRAL is such a device, based at the PIIM laboratory, used to study plasmas in a cross-field (E⊥B) configuration. The formation of coherent
rotating structures in MISTRAL is thought to result from an interplay between various instabilities and the E×B flow. However, a definitive understanding of which instabilities are responsible for
their emergence, and of the specific triggers involved, remains elusive. An experimental investigation of MISTRAL plasmas has been performed to lay the basis for the theoretical modeling. A two-fluid model has
been developed to discuss the linear stability of rotating plasma columns. Prior works have demonstrated that rotating plasma columns are susceptible to centrifugal flute modes. However, most of the
existing models rely on the low-frequency approximation (LFA), which holds true when the instability frequency and equilibrium flow frequency are considerably smaller than the ion-cyclotron
frequency. This assumption is challenged in numerous laboratory plasma devices, including weakly magnetized plasma columns like MISTRAL. To address this limitation, a radially global dispersion
relation describing the centrifugal instability without the LFA has been derived and linear stability analysis is performed. A comparison has been made between the results obtained using the
dispersion relation with the radially local approximation and those obtained using the radially global dispersion relation. This comparison revealed the non-applicability of the local solution to
MISTRAL-like plasma systems. Due to the high fraction of neutrals in the present plasma system, the model is further extended to include the effects of ion-neutral collisions. As a first step,
the ion-neutral collision frequency is assumed to be small as compared to the ion-cyclotron frequency. The dispersion relation is then solved with finite ion-neutral collisionality and the linear
stability analysis is conducted.
In magnetic confinement fusion plasmas, many instabilities have a flute-mode character. Field-aligned coordinates bring the benefit of efficiently resolving the parallel mode structure along the
magnetic field direction. However, curvilinear coordinates make equations and codes more complex, especially for high-order PDEs.
The Compile-time Symbolic Solver (CSS) has been developed to solve PDEs and ODEs with the finite difference method directly from vector equations. CSS is a general-purpose finite-difference
framework for generating finite-difference codes easily while greatly reducing the risk of implementation mistakes.
For the physics model, CSS supports arbitrary equations in arbitrary curvilinear coordinates and multiple boundaries for both PDEs and ODEs. For memory distribution, N-dimensional distributed grids
with hybrid TBB and MPI parallelization in arbitrary dimensions are implemented. For the numerical method, CSS employs the method of lines for numerical differencing with arbitrary grid points and
offsets. N-dimensional B-splines of arbitrary order are implemented for pushing particles. CSS employs PARDISO to solve matrix problems and Runge-Kutta methods for time advance. CSS is a C++20
template metaprogramming code, which guarantees zero overhead at runtime. Furthermore, instruction optimization makes the codes generated by CSS much faster than conventional hand-written codes.
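The kind of stencil generation that a finite-difference framework like CSS automates can be illustrated by solving the standard Taylor-expansion linear system for finite-difference weights on arbitrary grid offsets. This is a minimal Python sketch of the textbook method, not the C++20 metaprogramming implementation; the function name is an assumption.

```python
import numpy as np
from math import factorial

def fd_weights(offsets, deriv):
    """Finite-difference weights on arbitrary grid offsets (unit spacing)
    for the deriv-th derivative, from the Taylor moment conditions
    sum_j w_j * o_j**k = k! * delta(k, deriv), k = 0..n-1."""
    offsets = np.asarray(offsets, dtype=float)
    n = len(offsets)
    A = np.vander(offsets, n, increasing=True).T  # A[k, j] = offsets[j]**k
    b = np.zeros(n)
    b[deriv] = factorial(deriv)
    return np.linalg.solve(A, b)

# The standard five-point, fourth-order stencil for the second derivative:
print(fd_weights([-2, -1, 0, 1, 2], 2))   # ~ [-1/12, 4/3, -5/2, 4/3, -1/12]
```

Arbitrary offsets make one-sided boundary stencils just another call, e.g. `fd_weights([0, 1, 2, 3], 1)` for a forward first derivative.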
We have used CSS to generate the gyrokinetic-MHD hybrid code GMEC, the 3D field and particle calculation code FP3D, and a fluid ITG code. For GMEC, we propose a new shifted-metric method which is
able to stabilize numerical instabilities while avoiding interpolation from MHD field-aligned grids to particle flux-coordinate grids. The equilibria can be analytical, or numerical ones calculated
by VMEC or DESC. We have used GMEC to simulate ideal ballooning modes (IBM), with or without the diamagnetic drift term, and tearing modes. The simulation results agree well with those of the
eigenvalue code MAS. An n=20 IBM run costs only 17 seconds on 448 cores. We have also used GMEC to simulate energetic-particle-driven TAEs in a circular equilibrium and a CFETR equilibrium. The
results for an n=3 TAE agree well with those of the M3D-K code.
We have also used CSS to generate the test-particle code FP3D for the calculation of magnetic surfaces, rotational transform, particle orbits and neoclassical transport in both tokamaks and
stellarators. We have used FP3D to simulate ripple losses in the EAST tokamak and neoclassical transport coefficients in NCSX. The results are consistent with previous results. FP3D has also been
used successfully in the design and optimization of stellarators.
[1] P. Y. Jiang, et al. CSS: Compile-time symbolic solver for finite difference method. To be submitted.
Accurately predicting lower hybrid current drive (LHCD) in the weak-damping regime is an outstanding challenge, which suggests important physics is missing in present-day ray-tracing/Fokker-Planck
(RTFP) models. In this work, the impact of filamentary scrape-off layer (SOL) turbulence on LH waves is investigated using a new multi-scale scattering model. When coupled to an RTFP code, the
resulting simulations of LHCD in Alcator C-Mod show RF power deposition profiles robustly peaked on-axis, leading to good agreement with experimental Motional Stark Effect and hard X-ray
measurements. It is therefore shown that the rotation of the perpendicular wave-vector due to SOL turbulence is sufficient to resolve the discrepancy between simulation and experiment. Notably, this
model predicts an asymmetric broadening of the transmitted wave-spectrum, which is attributed to full-wave scattering effects in the presence of spatially coherent turbulence. This asymmetry leads to
rotation of incident power away from the plasma core when SOL densities are sufficiently high. RTFP modeling shows this effect plays a significant role in the anomalous drop in LHCD efficiency
observed at high densities.
The multi-scale scattering model has two steps. (1) Single filament-wave interactions are solved in a full-wave formalism using a Mie-scattering technique. (2) Many such filament-wave
interactions are modeled using the radiative transfer approximation, in which a photon’s scattering probability depends on the statistical properties of the filament population. The radiative
transfer equation (RTE) is then solved using a Monte Carlo scattering term in a ray-tracing model, allowing for self-consistent coupling to RTFP codes. For verification and comparison against other
models, the RTE is also solved in a simple slab geometry using a Markov chain. This model shows good agreement with ray-tracing in the Wentzel-Kramers-Brillouin (WKB) limit, and predicts greater,
asymmetric scattering beyond the WKB limit. Good agreement is also found with numeric full-wave solutions at sufficiently low filament packing-fraction, which is consistent with the validity limit of
the radiative transfer approximation.
It should be emphasized that this multi-scale scattering model retains many important full-wave effects while remaining computationally inexpensive, allowing fast parameter scans and inter-shot
analysis. In addition, this model is highly applicable to the modeling of electron cyclotron wave scattering since the radiative transfer approximation is increasingly valid for waves at higher k.
B. Biswas et al., “Spectral broadening from turbulence in multiscale lower hybrid current drive simulations,” Nuclear Fusion, 63, 1 (2022).
B. Biswas et al., “A hybrid full-wave Markov chain approach to calculating radio-frequency wave scattering from scrape-off layer filaments,” Journal of Plasma Physics, 87, 5 (2021).
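The radiative-transfer picture of step (2) can be illustrated with a toy Monte Carlo in slab geometry: rays cross a slab of scattering filaments, taking exponential free paths, with each scattering event rotating the wave-vector by a random angle. All parameters below are hypothetical stand-ins for the filament statistics, not the actual C-Mod model.

```python
import numpy as np

def transmit_fraction(n_rays=5000, L=1.0, mfp=0.5, dtheta=0.5, seed=0):
    """Monte Carlo estimate of the fraction of rays transmitted through a
    scattering slab of thickness L: free paths are exponential with mean
    `mfp`, and each event rotates the propagation angle by up to `dtheta`.
    Parameters are illustrative assumptions."""
    rng = np.random.default_rng(seed)
    transmitted = 0
    for _ in range(n_rays):
        x, theta = 0.0, 0.0            # depth into slab, propagation angle
        while 0.0 <= x < L:
            x += rng.exponential(mfp) * np.cos(theta)   # free flight
            theta += rng.uniform(-dtheta, dtheta)       # wave-vector rotation
        transmitted += x >= L          # exit at x >= L: transmitted
    return transmitted / n_rays

print(transmit_fraction(n_rays=2000))
```

In the actual model the scattering kernel comes from the Mie-scattering solution of step (1) rather than a uniform angle distribution, and the Markov-chain solve replaces brute-force ray counting.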
Odd viscosity is a dissipationless transport coefficient that arises in certain classes of parity-broken fluids. While typically studied in 2D, the generalization to 3D parity-broken flows leads to a
much wider class of transport coefficients. In the context of plasmas this parity breaking manifests as gyroviscosity, and is associated with viscous stresses perpendicular to the flow. In this talk
I will take a phenomenological approach and outline which transport coefficients are allowed by the symmetry of isotropic 2D and 3D flows. I will then look at some specific systems that highlight a
subset of these coefficients, in particular parity-odd Hele-Shaw flows and ferrofluids. I will also discuss some microscopic mechanisms that lead to parity-broken flows, and look at some applications
to active matter and condensed matter systems.
We report the status of the Gyrokinetic-MHD Energetic particle hybrid Code GMEC being developed for simulations of energetic particle (EP)-driven Alfvén instabilities and EP transport in magnetic
fusion plasmas such as ITER. In the hybrid model, electrons are treated as a fluid, EPs and thermal ions are described by gyro-kinetic equations. The energetic particle effects enter in the
gyrokinetic vorticity equation via pressure terms, which are obtained by solving the gyrokinetic equations with a PIC method. Field-aligned coordinates and meshes are used to efficiently resolve
mode structures of high-n Alfvén modes. Five-point fourth-order finite differences and a fourth-order Runge-Kutta method are used for numerical differentiation and time advance, respectively. The
Compile-time Symbolic Solver (CSS) has been developed to generate code directly from vector equations. CSS is a C++20 template metaprogramming code. It expands vector equations into component scalar
equations at compile time, and greatly simplifies the coding of differential equations in toroidal curvilinear coordinates. Both MPI and TBB are used for parallelization. Up to now, a simplified version
of GMEC has been developed with initial verifications for ideal ballooning modes and EP-driven TAEs. The alpha particle-driven Alfven eigenmodes in the Chinese Fusion Engineering Test Reactor (CFETR)
have also been simulated successfully. Details of GMEC and its applications will be presented.
XGC-S is the stellarator version of the global gyrokinetic code XGC, originally developed for whole-volume modeling of tokamaks. We will cover the following topics: 1. The development history and
basic code descriptions, 2. A recent application to the Large Helical Device (LHD), and 3. Future capabilities. We consider isotope effects in LHD under the influence of the radial electric field
and heavier hydrogen isotopes. Both the radial electric field and the heavier isotopes have similar impacts on thermal conductivity: elongation of the mode structure, which increases the heat flux,
and mode suppression, which decreases it. Quasi-linear estimations indicate that these competing effects lead to a favorable mass-number dependence. Finally, we will present the future
capabilities of XGC-S, including new meshing schemes, an electrostatic field solver that could handle complicated magnetic fields in the edge region, extension to a total-f method and electromagnetic
physics, and GPU offloading. The electrostatic field structure may explain the up-down asymmetry of divertor particle flux observed in LHD. The advanced XGC features, such as kinetic electrons and
multi-species collisions, are yet to be implemented in XGC-S. These are also promising for addressing important issues in LHD.
Trinity3D+GX is a framework that leverages multi-scale gyrokinetic theory to model macro-scale profile evolution in fusion plasmas (tokamaks and stellarators) due to micro-scale turbulent processes.
In this talk I will first provide a brief background on the multi-scale gyrokinetic theory underpinning the model. I will then discuss the GX gyrokinetic code, which has been developed as a
GPU-native code that uses an efficient pseudo-spectral discretization scheme to target fast turbulence calculations for fusion reactor design and optimization. This enables GX to be embedded as the
micro-turbulence model in the Trinity3D transport solver for tractable fusion profile prediction (and evolution) calculations. I will highlight some preliminary results of modeling W7X plasmas with
the Trinity3D+GX system and discuss future plans for using the framework in experimental studies as well as stellarator FPP design and optimization.
The dynamics of energetic particles and tearing modes, and the interactions between them, are of great significance for magnetically confined fusion plasmas. In this review, we focus on one issue:
the influence of energetic particles on tearing modes. This influence is described on the basis of a general dispersion relation for tearing modes. The effects of energetic particles are considered
separately in the outer region and the island region of a tearing mode. The physics mainly results from the modification of the perturbed parallel current by energetic particles without
wave–particle resonance. In addition, the resonance between energetic particles and tearing modes is also reviewed. Our descriptions of the physical phenomena here are based on an analytical
approach, while experiments and simulations are used to illustrate and confirm our results. Finally, a number of open issues are discussed.
The mathematics and physics of each of the three aspects of magnetic field evolution—topology, energy, and helicity—is remarkably simple and clear. When the resistivity η is small compared to an
imposed evolution timescale, a/v, which means Rm ≡ μ0va/η >> 1, magnetic field line chaos dominates the evolution of field-line topology in three-dimensional systems. Chaos has no direct role in the
dissipation of energy. A large current density, jη ≡ vB/η, is required for energy dissipation to occur on a timescale comparable to the topological evolution. Nevertheless, chaos plus Alfvén wave
damping explain why both timescales tend to be approximately an order of magnitude longer than the evolution timescale a/v. Magnetic helicity is injected onto tubes of field lines when boundary flows
have vorticity. Chaos can spread but not destroy magnetic helicity. Resistivity has a negligible effect on helicity accumulation when Rm >> 1. Helicity accumulates within a tube of field lines until
the tube erupts and moves far from its original location.
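As a scale estimate, the two quantities defined above can be evaluated directly. The numbers for v, a, η and B below are illustrative assumptions (not taken from the abstract), chosen only to show the Rm >> 1 regime in which field-line chaos dominates topological evolution.

```python
import math

# Rm = mu0 * v * a / eta and j_eta = v * B / eta, with assumed sample values.
mu0 = 4.0e-7 * math.pi    # vacuum permeability, H/m
v = 1.0e3                 # flow speed, m/s (assumed)
a = 1.0e7                 # gradient scale length, m (assumed)
eta = 1.0e-4              # resistivity, Ohm*m (assumed)
B = 1.0e-2                # magnetic field strength, T (assumed)

Rm = mu0 * v * a / eta          # magnetic Reynolds number
j_eta = v * B / eta             # current density for comparable dissipation
print(f"Rm ~ {Rm:.2e}")         # >> 1: chaos-dominated topology evolution
print(f"j_eta ~ {j_eta:.2e} A/m^2")
```

For these values Rm ~ 10^8, so resistive dissipation is negligible on the evolution timescale a/v unless current densities approach j_eta.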
Future devices like ITER will have limited capacity to drive toroidal rotation, increasing the risk of instabilities like resistive wall modes. Fortunately, many experiments have found that tokamak
plasmas rotate “intrinsically”, that is, without applied torque. The modulated-transport model shows that such rotation may be caused by the interaction of ion drift-orbit excursions with the strong
spatial variation of the turbulent momentum diffusivity [1]. The model predicts intriguing qualitative behavior, such as a strong dependence of edge intrinsic toroidal rotation on the major-radial
position of the X-point, which was subsequently measured on TCV [2]. The model has also been experimentally validated through further dedicated tests [3, 4], as well as via application in the new
European whole-device transport model IMEP [5]. However, certain applications will require a relaxation of the underlying assumptions. In particular, the original model required the turbulent
momentum diffusivity to decay exponentially in the radial direction, while experiments often exhibit a more complicated variation. In this work, we generalize the modulated-transport model to allow
the turbulent momentum diffusivity to depend on space in an axisymmetric but otherwise arbitrary way. To enable this generality, we assume that the normalized diffusivity is weak, roughly equivalent
to assuming that the pedestal-top ion transit time is short compared to the transport time across the pedestal, a condition that is almost always met for experimental applications. Given the
increased flexibility, along with a technically much easier calculation, the new approach may serve as a basis for future extensions, including shaped geometry and trapped particles as well as the
retention of momentum transport by neutrals. [1] T. Stoltzfus-Dueck, Phys. Rev. Lett. 108, 065002 (2012). [2] T. Stoltzfus-Dueck et al., Phys. Rev. Lett. 114, 245001 (2015). [3] J. A. Boedo et al.,
Phys. Plasmas 23, 092506 (2016). [4] A. Ashourvan, B. A. Grierson, D. J. Battaglia, S. R. Haskey, and T. Stoltzfus-Dueck, Phys. Plasmas 25, 056114 (2018). [5] T. Luda et al., Nucl. Fusion 61, 126048
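The generalization from an exponentially decaying diffusivity to an arbitrary axisymmetric profile can be illustrated with a toy 1D relaxation of a toroidal velocity profile under a spatially varying momentum diffusivity. This is only a hedged sketch: the profile, boundary values, and normalizations below are invented for illustration and are not the modulated-transport model of [1].

```python
import numpy as np

def relax_momentum(chi, nr=200, r_max=1.0, dt=1e-5, steps=20000):
    """Toy 1D relaxation of a toroidal velocity profile v(r) under a
    spatially varying momentum diffusivity chi(r). Illustrative only:
    boundary values and units are invented, not the model of [1]."""
    r = np.linspace(0.0, r_max, nr)
    dr = r[1] - r[0]
    v = np.zeros(nr)
    v[-1] = 1.0                                   # fixed edge rotation (arbitrary units)
    chi_half = 0.5 * (chi(r[:-1]) + chi(r[1:]))   # diffusivity at cell faces
    for _ in range(steps):
        flux = chi_half * (v[1:] - v[:-1]) / dr   # conservative form, d/dr(chi dv/dr)
        v[1:-1] += dt * (flux[1:] - flux[:-1]) / dr
    return r, v

# The generalized model allows an arbitrary (non-exponential) chi(r):
r, v = relax_momentum(lambda r: 0.1 + r**2)
```

Replacing the lambda with any positive axisymmetric profile leaves the scheme unchanged, which is the point of the generalization.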
Neoclassical tearing modes (NTMs) are identified as one of the main performance limiting, resistive MHD instabilities that exist in tokamak plasmas. They impose a limit on fusion gain, as well as
plasma confinement time. Resulting from filamentation of the current density that flows through the tokamak plasma, they modify the equilibrium magnetic topology, breaking down the tokamak toroidal
symmetry by forming a chain of magnetic islands. For large magnetic islands (i.e. much larger than the banana orbit width of trapped ions, ρbi), the plasma thermal pressure gradient is removed across
them. This therefore reduces the total core plasma pressure, degrading tokamak confinement. The pressure profile flattening across the islands generates a hole in the bootstrap current density close
to the island centre ("O-point"), enhancing the current density filamentation even further and amplifying the island. This provides the main drive for magnetic island growth. Fortunately, the NTM
behaviour significantly differs from the above when magnetic islands are small, i.e. their width is comparable to ρbi. Experimental observations found that there is some threshold magnetic island
width, 2wc ≈ (2 − 3)ρbi [1, 2], below which the pressure gradient is partially restored inside the island, providing its "self-healing". This wc is a key parameter of the NTM theory and is
responsible for quantifying the NTM control system [3]. A novel drift island formalism is derived in [4] to quantify the NTM threshold in a low beta, large aspect ratio tokamak plasma. Reference [5]
improves this theory further by introducing plasma shaping and finite beta effects. In particular, it is found that (1) a higher triangularity plasma is more prone to NTMs (in agreement with tearing
mode onset relative frequency measurements in DIII-D, 2022), (2) the conventional (ε^{1/2}, where ε is the tokamak inverse aspect ratio) NTM threshold dependence on the tokamak inverse aspect ratio is
revisited for finite aspect ratio and (3) the NTM threshold dependence on poloidal beta is obtained and successfully benchmarked against the EAST threshold island width measurements (2022). Effects
of the background electric field on the NTM threshold are also investigated [6]. While NTMs are one of the main sources of confinement limit in a tokamak, the core pressure is also significantly
influenced by the pedestal physics. A nonlinear electromagnetic global gyrokinetic theory is derived in [7] to ensure that the effects associated with sharp pressure gradients and the consequential
high bootstrap and Pfirsch-Schlüter currents are fully captured in the nonlinear electromagnetic gyrokinetic theory, while allowing arbitrary magnetic field configurations and finite orbit width
effects and ensuring consistent ordering. A reduced version of this theory (Bθ ≪ B0, where B0 is the total equilibrium magnetic field and Bθ is its poloidal component) has been implemented in the
local turbulence code GS2 (to be referred to as NEO GS2) to quantify the impact of higher order gyrokinetics in sharp pressure gradient regions where the bootstrap current becomes large (such as the
pedestal plasma and a spherical tokamak core plasma) [8]. The dominant impact is found to be on kinetic-ballooning modes (KBMs). In particular, it is found that the KBM growth rate is significantly
suppressed by inclusion of neoclassical equilibrium effects at large density gradients, representative of the tokamak pedestal values. The latter appears only when the neoclassical electrostatic
potential, Φ_1^0 = Φ_1^0(ψ, θ), dependent on poloidal angle, θ, is calculated consistently with plasma quasi-neutrality. Electrostatic modes are also found to be impacted by the neoclassical
equilibrium physics. In contrast, the impact on micro-tearing modes (MTMs) is found to be minimal, based on the test cases considered. References [1] Z. Chang, J. D. Callen, E. D. Fredrickson et al.
Phys. Rev. Lett. 74 (1995) 4663 [2] R.J. La Haye, R.J. Buttery, S.P. Gerhardt et al. Phys. Plasmas 19 (2012) 062506 [3] E. Poli, C. Angioni, F.J. Casson et al. Nucl. Fusion 55 (2015) 013023 [4] A V
Dudkovskaia, J W Connor, D Dickinson, P Hill, K Imada, S Leigh and H R Wilson Plasma Phys. Control. Fusion 63 (2021) 054001 [5] A V Dudkovskaia, L Bardoczi, J W Connor, D Dickinson, P Hill, K Imada,
S Leigh, N Richner, T Shi and H R Wilson Nucl. Fusion 63 (2023) 016020 [6] A V Dudkovskaia, J W Connor, D Dickinson, P Hill, K Imada, S Leigh and H R Wilson Drift kinetic theory of neoclassical
tearing modes in tokamak plasmas: polarisation current and its effect on magnetic island threshold physics, to be submitted, (2023) [7] A V Dudkovskaia, H R Wilson, J W Connor, D Dickinson, F I Parra
Plasma Phys. Control. Fusion 65 (2023) 045010 [8] A V Dudkovskaia, J W Connor, D Dickinson, H R Wilson Plasma Phys. Control. Fusion 65 (2023) 054006
In magnetic confinement fusion research, predicting turbulent transport in tokamak edge plasma and its effect on fusion device operation is crucial for determining confinement properties. In order to
better understand the causes and evolution of electron heat transport in tokamak discharges, a quasilinear transport model has been developed for use in integrated predictive modeling studies. Recent
analyses of H-mode plasmas [1] suggest that small-scale instabilities localized near the rational surface, such as microtearing (MT) modes, have a significant effect on confinement. MT modes draw
on the electron temperature gradient as a free-energy source and rearrange magnetic topology through the creation of ion-Larmor-radius-scale magnetic islands, thereby playing a role in determining
pedestal characteristics. The stability of MT modes has been extensively studied theoretically, showing that a slab current sheet is stable in the absence of collisions [2]. To evaluate the
parametric dependencies of MT and determine new saturation rules, a reduced kinetic transport model for MT has been developed using an electromagnetic quasilinear theory. This reduced model solves
the Vlasov and Maxwell equations, and its evaluation inside the resistive layer is obtained from a system of two equations linking the magnetic vector potential and the electric potential. To solve
this system of equations numerically, an eigenvalue code has been developed. The reduced transport model has been tested and compared with gyrokinetic simulations using JET experimental data,
showing good agreement. Analysis of nonlinear gyrokinetic simulations shows that this quasilinear transport model for microtearing reproduces gyrokinetic trends for a variety of parameter regimes
[3]. The impact of the electric potential on nonlinear saturation is examined using this model. The electric potential plays a key role in microtearing destabilization by boosting the growth rate of
this instability in the presence of collisions. Instability and saturation physics are examined for different pedestal cases and radial positions, with a special focus on the role of electric field
fluctuations and the role of zonal flows and fields. In the saturated state, it is found that removing electrostatic fluctuations causes a flux increase, whereas linear stabilization had been
observed. This is consistent with a change in saturation mechanism from temperature corrugations to zonal-field and zonal-flow-based energy transfer. References [1] D.R. Hatch et al., Nucl. Fusion
56: 104003 (2016). [2] M. Hamed, et al., Physics of Plasmas, 26(9): 092506 (2019). [3] M. Hamed, et al., Physics of Plasmas 30: 042303 (2023)
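As a hedged illustration of the numerical approach described above (an eigenvalue solve for two fields coupled on a 1D layer), the sketch below uses placeholder operators — a Laplacian plus a harmonic well with a constant cross-coupling — rather than the actual microtearing layer equations, which are not reproduced here.

```python
import numpy as np

def coupled_layer_eigenproblem(n=100, L=10.0, coupling=0.3):
    """Eigenvalue solve for two coupled fields (standing in for the
    magnetic vector potential and the electric potential) on a 1D layer.
    Operators are placeholders: Laplacian + harmonic well, constant coupling."""
    x = np.linspace(-L / 2, L / 2, n)
    dx = x[1] - x[0]
    # Second-derivative matrix with Dirichlet boundaries
    D2 = (np.diag(np.ones(n - 1), 1) - 2.0 * np.eye(n)
          + np.diag(np.ones(n - 1), -1)) / dx**2
    block = -D2 + np.diag(x**2)        # the well localizes modes in the "layer"
    C = coupling * np.eye(n)           # constant cross-coupling (illustrative)
    M = np.block([[block, C], [C, block]])
    evals, evecs = np.linalg.eigh(M)
    return x, evals, evecs

x, evals, evecs = coupled_layer_eigenproblem()
# The lowest eigenmode is localized around x = 0 and shifted by the coupling.
```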
Burning plasma in magnetic fusion relies on sufficient α-power and adequate energy confinement time. While D-T experiments with high ion temperature regime (Ti>Te) produced α-power up to ~4 MW and
τE~0.7s with an ion heating system, the present ITER, expecting ~5s of τE, has only electron heating systems. No high-power electron heating systems can substitute α-power to identify the threshold
electron heating power to achieve discharges with Ti>10 keV due to ion temperature clamping and practical problems (e.g., antennas and narrow resonance layer). In the confinement section, the
difference between L and H mode (ETB) is attributed to the configuration difference and the edge density is largely controlled by influx plasmas induced by outflux plasmas from the limiter/divertor
plates. The influx plasma is not a quiescent one and the fluctuation level should be high. The ITB position of the various improved confinement regimes such as “Supershot” and “Super H-mode” is
highly correlated with the heating profile footprints. Note that ITB positions are formed where the fluctuations are minimum. Transport models that supported the physics of ITB and/or ETB are
examined, including ExB shear, ITG marginality, etc. A reasonably large ignition device (Vp < 200 m³, much smaller than ITER's Vp ~ 800 m³) is feasible with an optimized ion heating system and device
geometric factors.
Magnetic nozzle/mirror configurations with converging-diverging magnetic field are used in numerous plasma applications for fusion, electric propulsion and material processing. In fusion devices such
as open mirrors and tokamak divertors, the diverging magnetic field is used to reduce the thermal loads to the walls. In electric propulsion, the magnetic nozzle is employed to convert the plasma thermal
energy into the kinetic energy of the directed flow producing the thrust. Ion acceleration by the magnetic nozzle is used in ion plasma sources for material processing. In many of these applications,
plasma flow and acceleration share many common patterns. We present the results of the fluid model taking into account the effects of anisotropic ion pressure. Further generalization includes the
role of the induced azimuthal magnetic field and plasma rotation, i.e., coupling with Alfven wave dynamics. It is shown that the inhomogeneous magnetic field couples the axial plasma flow with the
evolution of the azimuthal magnetic field and plasma rotation resembling the problem of the magnetically driven flow in astrophysical jets and winds. The kinetic effects have been investigated using
the quasineutral hybrid model with kinetic ions and isothermal Boltzmann electrons and full kinetic model including the ions and electrons in quasi-two dimensional (paraxial) model.
Explicit Particle-In-Cell (PIC) simulations used to model low-temperature plasmas are time consuming. We show that a new Sparse PIC method, based on sparse grid approaches combined with the combination
technique, offers a promising alternative that reduces the computational time while maintaining high accuracy in the modeling results.
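The combination technique mentioned above can be sketched in a few lines for plain interpolation (the Sparse PIC method itself involves much more; the grids, level, and test function here are illustrative assumptions):

```python
import numpy as np

def interp2(vals, xs, ys, x, y):
    """Bilinear interpolation of samples vals on the tensor grid xs × ys."""
    i = int(np.clip(np.searchsorted(xs, x) - 1, 0, len(xs) - 2))
    j = int(np.clip(np.searchsorted(ys, y) - 1, 0, len(ys) - 2))
    tx = (x - xs[i]) / (xs[i + 1] - xs[i])
    ty = (y - ys[j]) / (ys[j + 1] - ys[j])
    return ((1 - tx) * (1 - ty) * vals[i, j] + tx * (1 - ty) * vals[i + 1, j]
            + (1 - tx) * ty * vals[i, j + 1] + tx * ty * vals[i + 1, j + 1])

def combination_estimate(f, level, x, y):
    """Classical combination technique on [0,1]^2: add results from
    anisotropic grids with |l|_1 = level, subtract those with |l|_1 = level - 1."""
    def on_grid(lx, ly):
        xs = np.linspace(0.0, 1.0, 2**lx + 1)
        ys = np.linspace(0.0, 1.0, 2**ly + 1)
        return interp2(f(xs[:, None], ys[None, :]), xs, ys, x, y)
    plus = sum(on_grid(l, level - l) for l in range(1, level))
    minus = sum(on_grid(l, level - 1 - l) for l in range(1, level - 1))
    return plus - minus

f = lambda x, y: np.sin(np.pi * x) * np.sin(np.pi * y)
approx = combination_estimate(f, level=7, x=0.37, y=0.61)
```

The component grids hold O(2^n · n) points in total versus O(4^n) for the full grid, which is the source of the speedup.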
A quasilinear plasma transport theory that incorporates Fokker-Planck dynamical friction (drag) and pitch angle scattering is self-consistently derived from first principles for an isolated,
marginally-unstable mode resonating with an energetic minority species. It is found that drag fundamentally changes the structure of the wave-particle resonance, breaking its symmetry and leading to
the shifting and splitting of resonance lines. In contrast, scattering broadens the resonance in a symmetric fashion. Comparison with fully nonlinear simulations shows that the proposed quasilinear
system preserves the exact instability saturation amplitude and the corresponding particle redistribution of the fully nonlinear theory. Even in situations in which drag leads to a relatively small
resonance shift, it still underpins major changes in the redistribution of resonant particles. This novel influence of drag is equally important in plasmas and gravitational systems. In fusion
plasmas, the effects are especially pronounced for fast-ion-driven instabilities in tokamaks with low aspect ratio or negative triangularity, as evidenced by past observations. The same theory
directly maps to the resonant dynamics of the rotating galactic bar and massive bodies in its orbit, providing new techniques for analyzing galactic dynamics. Reference: V. N. Duarte et al, Phys.
Rev. Lett. (2023) https://doi.org/10.1103/PhysRevLett.130.105101.
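The symmetry-breaking role of drag versus the symmetric broadening from scattering can be illustrated with a toy resonance-function integral. The kernel below is a common model form chosen for illustration; the exact kernel derived in the paper may differ.

```python
import numpy as np

def resonance_function(omega, nu_scatt=0.0, nu_drag=0.0, t_max=50.0, nt=20001):
    """Toy broadened resonance function,
    R(w) = Re int_0^inf dt exp(i w t - (nu_scatt t)^3 / 3 + i (nu_drag t)^2 / 2),
    evaluated by trapezoidal quadrature. Illustrative model kernel only."""
    t = np.linspace(0.0, t_max, nt)
    dt = t[1] - t[0]
    integrand = np.exp(1j * omega * t - (nu_scatt * t) ** 3 / 3.0
                       + 0.5j * (nu_drag * t) ** 2)
    return float((0.5 * dt * (integrand[:-1] + integrand[1:]).sum()).real)

# Scattering alone broadens symmetrically; adding drag breaks the symmetry:
sym = [resonance_function(w, nu_scatt=0.5) for w in (-1.0, 1.0)]
asym = [resonance_function(w, nu_scatt=0.3, nu_drag=0.5) for w in (-1.0, 1.0)]
```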
The TRANSP framework has been a workhorse for the fusion community for over three decades. To meet the needs for HPC capabilities, preparation to ITER operation and FPP design, TRANSP has undergone
substantial re-factoring and modernization during the past four years. Currently, TRANSP is being adapted to use the ITER Data Model, which opens the framework to a number of additional physics
models. Opportunities for new collaborations are discussed, including ideas for gradually expanding TRANSP to model non-axisymmetric effects.
Quantum computing (QC) is gaining attention as a potential way to speed up simulations of physical systems. The main efforts are currently focused on modeling quantum systems, e.g. for chemistry and
material science. However, applying QC to classical problems, and plasma physics in particular, can also be beneficial. In this talk, I will discuss quantum modeling of linear radiofrequency (RF)
waves, which in the future could improve the accuracy and resolution of RF simulations for fusion applications. In the first part of my talk, I will describe an algorithm for solving the
initial-value problem for the propagation of RF waves in inhomogeneous cold magnetized plasma using so-called Quantum Signal Processing (QSP) [1]. In the second part of my talk, I will describe an
algorithm for solving the boundary-value problem for dissipative linear waves propagating in a medium with a prescribed inhomogeneous dielectric permittivity using so-called Quantum Singular Value
Transform (QSVT) [2]. [1] I. Novikau, E. A. Startsev, and I. Y. Dodin, Quantum signal processing for simulating cold plasma waves, Phys. Rev. A 105, 062444 (2022). [2] I. Novikau, I. Y. Dodin, and E.
A. Startsev, Simulation of linear non-Hermitian boundary-value problems with quantum singular value transformation, arxiv:2212.09113.
Reliable prediction of the turbulent dynamics of the plasma edge remains one of the last frontiers of theory in support of designing a fusion power plant. Previous results from the LLAMA diagnostic
on DIII-D revealed a striking asymmetry in the line radiation produced between the high- and low-field sides, and a strong dependence on the direction of the toroidal magnetic field. These results
have been unexplained until now. Here we present the only first-principles reproduction of the DIII-D Lyman-alpha signal using kinetic models for the plasma and neutral species with synthetic
diagnostics. It is found that the observed change in asymmetry is due to a difference in the primary recycling location which in turn is caused by changes in the plasma flow as the toroidal magnetic
field changes direction. The asymmetry in the synthetic signal matches observation when accounting for collisional and drift physics, while turbulence is necessary to capture the order of magnitude.
Edge-localized modes (ELMs) and disruptions are two transient phenomena that can cause serious damage to the vessel in reactor-scale tokamaks and thus need to be controlled or mitigated. This talk
presents extended-magnetohydrodynamic (MHD) simulations with the goal of accurately predicting ELM stability thresholds in STs and informing the massive gas injection (MGI) layout for disruption
mitigation in SPARC. ELMs are typically associated with macroscopic peeling-ballooning (PB) modes in the edge pedestal, which arise due to strong pressure and current density gradients. While in
large aspect ratio devices these ideal-MHD modes are well understood, a long-standing problem has been the reliable modeling of such stability boundaries in some ST scenarios, particularly in NSTX.
In simulations with the extended-MHD code M3D-C1, it is found that plasma resistivity can significantly alter macroscopic edge-stability in ELMing H-mode discharges in NSTX. These discharges are
limited by resistive kink-peeling modes, while the two studied ELM-free scenarios appear limited by ideal ballooning modes. We will also present some extended-MHD analysis of PB stability in MAST and
STAR, a preliminary ST-based power plant design. We show how these extended-MHD stability thresholds are incorporated into a higher-fidelity model to predict the pedestal structure in a wider range
of tokamak scenarios. The second part of the talk focuses on extended-MHD simulations of disruption mitigation in SPARC via massive gas injection. Fully three-dimensional simulations with M3D-C1 are
carried out for various injector configurations with the primary goal of determining the effect of different MGI parameters on heat loads and vessel forces. The simulations include a model for
impurity ionization, recombination, advection and radiation, as well as spatially resolved conducting structures in the wall.
Global stability of differentially rotating systems in the presence of magnetic fields is examined. Unlike general global ideal pressure- or current-driven instabilities, where an integrated
linearized self-adjoint force operator is used to obtain the free energy in flowless MHD, the non-self-adjointness of the MHD force operator with flows requires a nontrivial modified energy principle.
Following Frieman & Rotenberg, I will first discuss the modified energy principle in a differentially rotating system, and then present our results using both global eigenvalue
analysis as well as initial-value calculations. We find that global models with spatially varying fields (both magnetic and rotational) offer the richest mode spectrum, mainly as the result
of resonances in the system. A new non-local mode, a magneto-curvature instability (Ebrahimi&Pharr ApJ 2022 https://doi.org/10.3847/1538-4357/ac892d) is obtained. This non-axisymmetric instability is
triggered due to Alfven-continuum unstable modes in the presence of non-local effects of the global spatial curvature of flow shear and magnetic field. It will be shown that as the field strength is
increased, a transition from turbulence to a state dominated by global non-axisymmetric modes is obtained. The implications of this instability for accretion flows as well as laboratory plasmas will be discussed.
A mechanism has been identified by which a nonlinear perturbation involving many active triads -- a Network of Nonlinear Interactions -- triggers edge localized modes (ELMs) below the peeling-ballooning
(PB) limit [1]. The nonlinear stability of such a network of nonlinear interactions has been modeled with a network of harmonic oscillators coupled together via multiple triads [2]. This model
network has been found to transit from a quiet regime of weak nonlinear fluctuations (triads near O-points) towards a regime of strong nonlinear fluctuations (triads near X-points). During this
transition, the energy of the dominant waves is transferred to the sub-dominant waves through the strong nonlinear fluctuations, reminiscent of the ELM onset. The rapidity of this transition is found
to be inversely proportional to the intensity of the nonlinear coupling between the modes. Moreover, when the nonlinear time scale of the fluctuations is comparable to the time scale of the wave
oscillations, it is found that the system is chaotic and that the transition is the most abrupt. [1] J Dominski and A Diallo, Plasma Phys. Control. Fusion 62:095011 (2020) [2] J Dominski and A
Diallo, Physics of Plasmas 28:092306 (2021)
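A single resonant triad — the building block of the multi-triad network studied in [2] — can be sketched as follows; the coupling coefficient, initial amplitudes, and normalization are illustrative assumptions, not values from the model.

```python
import numpy as np

def three_wave_decay(a0, K=1.0, dt=1e-3, steps=10000):
    """RK4 integration of a single resonant triad (decay instability):
    da1/dt = K a2 a3, da2/dt = -K a1 conj(a3), da3/dt = -K a1 conj(a2).
    One building block of a multi-triad network; parameters invented."""
    def rhs(a):
        a1, a2, a3 = a
        return np.array([K * a2 * a3,
                         -K * a1 * np.conj(a3),
                         -K * a1 * np.conj(a2)])
    a = np.array(a0, dtype=complex)
    history = [np.abs(a) ** 2]
    for _ in range(steps):
        k1 = rhs(a)
        k2 = rhs(a + 0.5 * dt * k1)
        k3 = rhs(a + 0.5 * dt * k2)
        k4 = rhs(a + dt * k3)
        a = a + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        history.append(np.abs(a) ** 2)
    return np.array(history)

# A dominant wave decaying into two weak sub-dominant daughters:
h = three_wave_decay([1.0, 2e-3, 0.0])
```

The daughters grow exponentially and then deplete the dominant wave, while the Manley-Rowe invariants |a1|² + |a2|² and |a1|² + |a3|² stay constant — a cartoon of energy transfer from dominant to sub-dominant waves.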
A systematic simulation study of energetic particle-driven n=1 mode in tokamak plasmas has been carried out using the global kinetic-MHD hybrid code M3D-K [1,2]. This work is focused on the
interaction of energetic beam ions and n=1 mode with a monotonic safety factor q profile and q0 < 1. Linear simulations with energetic co-passing particles [1] show excitation of a low-frequency mode
of fishbone type with the corresponding resonance of ωφ + ωθ = ω, where ωφ is the energetic ion toroidal transit frequency and ωθ is the energetic ion poloidal transit frequency. The simulated mode
frequency is approximately proportional to the energetic ion injection energy and orbit width. The mode structure is similar to that of internal kink mode. These simulation results are similar to the
analytic theory of Yu et al. [3]. Furthermore, linear simulations with energetic counter-passing particles [2] show that the instability is either a m/n = 1/1 energetic particle mode (EPM) or a m/n =
1/1 global Alfven eigenmode (GAE) depending on the value of central safety factor. The mode frequencies are close to the tip of Alfven continuum spectrum at the magnetic axis. The excited modes are
radially localized near the magnetic axis well within the q = 1 surface. The main wave particle resonance is found to be ωφ + 2ωθ = ω. The nonlinear simulation results show that there is a long
period of quasi-steady-state saturation phase with frequency chirping up after initial saturation. Correspondingly, the energetic particle distribution with low energies is flattened in the core of
plasma. After this quasi-steady phase, the mode amplitude grows again with frequency jumps down to a low value corresponding to a new mode similar to the energetic co-passing particle-driven low
frequency fishbone while the energetic particle distribution is flattened for higher energies in the core of plasma.
The formation of the H-mode pedestal in magnetized plasmas remains a mystery nearly forty years after it was first experimentally observed [1]. In the ensuing years, it has been observed in nearly
every machine experiment. Many different theories have also been proposed to explain its existence. Recently, Burrell [2] argued that sheared E x B flow was most likely responsible for the formation
of the pedestal and the improved confinement. This is different from the claim by Lee and White [3] that the formation of the pedestal is the real reason for the improved confinement and,
furthermore, its formation subsequently gives rise to the E x B flow. Moreover, they claimed that the H-mode is related to a delicate force balance between the ion pressure gradient and the
gyroviscosity arising from the ion Finite Larmor Radius (FLR) effects [3]. The ion FLR effects are also found to be responsible for the creation of the radial electric field well for the H-mode
[3,4]. This electric field, together again with the FLR effects associated with the E x B drift, can produce a poloidal current, which modifies the pressure balance. Furthermore, it has been shown
that this delicate balance produces a force-free field, ∇ × B = (4π/c) J∥, which is the real physics behind the H-mode [3]. A recent experimental paper by Zweben et al. [5] has shown that there is
no noticeable change for the poloidal flow in the pedestal region just before the L-H transition, which differs from the claim by Burrell [2]. To answer this chicken-and-egg question, we propose to
use the existing fully electromagnetic gyrokinetic codes, e.g., GTC [6] and GTS [7] to simulate the physics inside the separatrix with proper modifications. Specifically, the charge separation due to
the finite Larmor radius effects in the regions with steep pressure gradient should be taken into account. Details will be discussed. 1. F. Wagner et al., Phys. Rev. Lett. 53, 1453 (1984) 2. K. H.
Burrell, Phys. Plasmas 27, 060501 (2020) 3. W. W. Lee and R. B. White, Phys. Plasmas 26, 040701 (2019) 4. W. W. Lee, Phys. Plasmas 23, 070705 (2016) 5. S. J. Zweben, A. Diallo, M. Lampert, T.
Stoltzfus-Dueck, and S. Banerjee, Phys. Plasmas 28, 032304 (2021) 6. Z. Lin, T. S. Hahm, W. W. Lee, W. M. Tang and R. B. White, Science 281, 1835 (1998) 7. W. X. Wang, Z. Lin, W. M. Tang, W. W. Lee
et al. Phys. Plasmas 13, 092505 (2006)
Particle-in-cell has been a go-to approach for modeling plasmas in the environments of compact astrophysical objects for the last decade. Yet, there is no single publicly available code that includes
all relevant radiation-plasma coupling processes and is capable of modeling global systems. In this talk I will describe the development of a new-generation PIC code for extreme astrophysical plasmas,
Entity. The code is based on the Kokkos framework, which enables efficient implicit multi-architecture portability including GPUs. The code features algorithms for various radiation-plasma coupling
processes, such as Compton scattering, production of electron-positron pairs and their annihilation. The code is designed in a general coordinate system, defined by the metric functions; this enables
Entity to also efficiently tackle the global (full-system) models of the magnetospheres of compact objects, which require algorithms on non-cartesian (spherical, cubed-sphere) non-uniform grids,
and even full general relativity.
Pulsar radio emission may be generated in pair discharges which fill the pulsar magnetosphere with plasma as an accelerating electric field is screened by freshly created pairs. In this talk we
present a simplified analytic theory for the screening of the electric field in these pair discharges and use it to estimate total radio luminosity and spectrum. The discharge has three stages.
First, the electric field is screened for the first time and starts to oscillate. Next, a nonlinear phase occurs. In this phase, the amplitude of the electric field experiences strong damping because
the field dramatically changes the momenta of newly created pairs. This strong damping ceases, and the system enters a final linear phase, when the electric field can no longer dramatically change
pair momenta. Applied to pulsars, this theory may explain several aspects of radio emission, including the observed luminosity, 10^{28} erg s^{-1}, and the observed spectrum, ω^{-1.4±1.0}.
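The final linear phase, in which the field can no longer strongly change pair momenta, can be caricatured as an oscillator with a slowly growing plasma frequency, whose amplitude then damps adiabatically. This is a cartoon under invented parameters, not the screening theory itself.

```python
def screened_field(wp2_of_t, t_max=200.0, dt=1e-3, E0=1.0):
    """Cartoon of the linear phase: E'' + wp(t)^2 E = 0 with wp^2
    proportional to the pair density n(t). For slowly growing n the
    adiabatic invariant E_peak^2 * wp is conserved, so the oscillation
    amplitude decays like n^(-1/4). Parameters are invented."""
    E, dE, t = E0, 0.0, 0.0
    peaks = []
    for _ in range(int(t_max / dt)):
        prev_dE = dE
        dE -= wp2_of_t(t) * E * dt   # semi-implicit (symplectic) Euler
        E += dE * dt
        t += dt
        if prev_dE > 0.0 >= dE:      # dE crossing zero from above: E at a maximum
            peaks.append(E)
    return peaks

# Pair density (and hence wp^2) growing linearly by a factor of 11:
peaks = screened_field(lambda t: 1.0 + 0.05 * t)
```

The final peak amplitude sits near (1 + 0.05·200)^(-1/4) ≈ 0.55 of the initial amplitude, consistent with the adiabatic n^(-1/4) damping.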
In magnetised plasma sheaths, such as the ones forming next to divertor or limiter targets in a fusion device, very strong electric fields arise on the length scales of the ion gyroradius and Debye
length [1]. These reflect electrons and reduce the electron flux so that equal outflow of electrons and ions (ambipolarity) to the targets can be achieved globally. The length and time scales of the
sheath are much shorter than those of the bulk plasma (e.g. the Scrape-Off Layer). It is therefore computationally much faster to simulate the plasma with models that average over the shorter scales
and thus do not resolve the sheath (e.g. fluid, drift-kinetic, gyrokinetic), although this requires boundary conditions that are consistent with the presence of the sheath. For example, in a kinetic
model the electrons must be reflected by the sheath electric field back into the plasma [2]. Hence, the phase space reflection-absorption boundary, or cutoff, of electrons fully specifies the
electron boundary conditions. We present computations of the cutoff including the effect of finite electron gyro-orbits, which makes the cutoff parallel velocity a function of magnetic moment. Ions
must be pre-accelerated into the sheath and satisfy a constraint known as the Bohm (unmagnetised) [3] or Chodura (magnetised) condition. We derive the kinetic generalisation of the Chodura condition
for general magnetic field angles including the effect, which becomes prominent at shallow magnetic field angles, of gradients of the electrostatic potential and distribution functions tangential to
the target (e.g. the ExB drifts from such gradients transport ions almost normal to the target, while parallel streaming only has a very small component normal to the target at shallow magnetic field
angles). We calculate a critical small magnetic field angle [4] below which a monotonic sheath solution cannot be found, and find that this critical angle increases with electron gyroradius. We also
present a first set of preliminary 2-dimensional sheath electrostatic potential solutions, including the spatial profile of small-amplitude fluctuations tangential to the target. This work assumes
collisionless sheaths, and therefore must be generalised to be applied to colder and more neutral-rich systems such as a detached divertor. [1] R. Chodura, Phys. Fluids (1982). [2] Parker et al. J.
Comput. Phys. (1993). [3] K.-U. Riemann, J. Phys. D: Appl. Phys. (1991). [4] R. Ewart, F. Parra, A. Geraldini, PPCF (2021).
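The flavor of a kinetic Bohm-type condition can be sketched numerically for a toy flowing ion distribution (chosen to vanish at v = 0 so the v^-2 moment is finite; the distribution, units, and marginal form below are illustrative assumptions, not the generalized Chodura condition derived in this work):

```python
import numpy as np

def bohm_margin(u, vt=0.3, cs=1.0, nv=20000, vmax=6.0):
    """Margin of a marginal kinetic Bohm-type criterion,
    int f / v^2 dv <= n / cs^2, for a toy flowing ion distribution
    f(v) ~ v^2 exp(-(v - u)^2 / 2 vt^2) on v > 0, which vanishes at
    v = 0 so the v^-2 moment is finite. Positive margin = satisfied."""
    v = np.linspace(1e-4, vmax, nv)
    dv = v[1] - v[0]
    f = v**2 * np.exp(-((v - u) ** 2) / (2.0 * vt**2))
    n = f.sum() * dv                 # density moment
    inv_sq = (f / v**2).sum() * dv   # v^-2 moment
    return n / cs**2 - inv_sq

# Bisect for the marginal drift; analytically ~ sqrt(cs^2 - vt^2) here:
lo, hi = 0.5, 1.5
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if bohm_margin(mid) < 0.0 else (lo, mid)
u_marginal = 0.5 * (lo + hi)
```

For this particular distribution the marginal drift is slightly below the sound speed, mirroring how thermal spread modifies the cold-ion Bohm result.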
A key challenge to achieving efficient fusion energy in toroidal devices is reducing the particle and energy losses from microinstability-driven turbulence. Efforts to optimize stellarators for
reduced turbulence transport have previously been focused on reducing the linear instability drive mechanisms, however this may conflict with other optimization constraints such as macroscopic
stability. A different approach to turbulence optimization may be formulated by attempting to reduce the nonlinearly saturated fluctuation amplitudes by increasing coupling to dissipation channels.
While multiple dissipation mechanisms exist in plasma turbulence, this talk will focus on the ability of fluctuations to nonlinearly couple to stable modes to provide an effective energy sink. This
process is largely quantified by a resonant three-wave interaction lifetime between modes. In stellarators, the strength of the three-wave interaction lifetime is dependent on the geometric
properties of magnetic field lines and can be estimated effectively from linear eigenmode calculations. This talk will demonstrate the differences in stable mode coupling mechanisms between classes
of quasisymmetric geometries and the applicability of the model for stellarator optimization.
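One common way to quantify a resonant three-wave interaction lifetime from linear eigenmode data is as the inverse of the complex frequency mismatch; the exact convention used in this work may differ, so the form below is a hedged sketch.

```python
def interaction_lifetime(omega, gamma):
    """Sketch: triad interaction lifetime estimated as
    tau = 1 / |i(w1 - w2 - w3) + g1 + g2 + g3|, built from linear
    eigenmode frequencies w_k and growth/damping rates g_k.
    Sign conventions vary; this particular form is an assumption."""
    w1, w2, w3 = omega
    return 1.0 / abs(1j * (w1 - w2 - w3) + sum(gamma))

# A frequency-matched triad (unstable mode + two damped modes) lives much
# longer than a detuned one, favoring energy transfer to the stable modes:
tau_matched = interaction_lifetime((1.0, 0.6, 0.4), (0.10, -0.08, -0.07))
tau_detuned = interaction_lifetime((1.0, 0.2, 0.3), (0.10, -0.08, -0.07))
```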
Energetic particle transport in burning plasmas depends on cross-scale interactions between macroscopic MHD modes, mesoscale Alfven eigenmodes, and microturbulence, which requires global integrated
simulations incorporating multiple physical processes. In this talk, I will first highlight the gyrokinetic toroidal code (GTC) simulation finding of regulation of reversed shear Alfven eigenmodes (RSAE) by
microturbulence, leading to excellent agreement, for the first time, of gyrokinetic simulation results with experimental measurements of RSAE amplitude and mode structure in the DIII-D tokamak [PRL
128, 185001 (2022)]. I will then present results from integrated simulations of energetic particle transport in the ITER operational scenarios carried out by several energetic particle codes in
SciDAC and ITPA collaborations. Finally, I will discuss progress on GTC global simulations with kinetic electrons in 3D toroidal geometry including RMP tokamaks and stellarators.
Plasma blobs are coherent turbulent structures of elevated plasma pressure in the scrape-off layer (SOL) of tokamaks that influence radial transport. Previous work has considered the effect of
X-point geometry on radial blob velocities [1]. Now a semi-analytic blob velocity scaling has been derived that includes the effects of elongation and triangularity. It predicts that increasing the
value of either shaping parameter results in slower radial blob velocities. Using the Gkeyll code, gyrokinetic seeded blob simulations have been carried out assuming DIII-D SOL parameters and
inner-wall limited (IWL) geometry, which confirmed scaling predictions. The effect of atomic neutral interactions was also explored. Finally, full flux simulations of positive and negative
triangularity IWL DIII-D discharges [2,3] are presented. The negative triangularity case has 50% more blobs than the positive case, and they are slightly faster on average, consistent with the
scaling prediction.
Stellarators have an advantage over tokamaks as fusion reactors in that they are disruption-free and steady-state, whereas tokamaks must be pulsed in order to produce the poloidal confining field. In
a stellarator the poloidal field is produced by making the field coils not simply vertical, but with an inclination that is periodically modulated around the torus. This modulation, however, also
means that the magnitude of the toroidal field is not constant but has a toroidal periodicity, which has three deleterious effects. Particle resonances in a
plasma are locations where passing particle orbits return to an initial point and repeat the same motion indefinitely, n periods toroidally and m periods poloidally. The location of a resonance is
a function of particle energy and pitch. At very low energy, resonances are located at values of the field line helicity equal to m/n. If the toroidal period of a resonance matches the toroidal period of B,
then: 1. Unperturbed passing particle orbits form islands, with width increasing with energy, and these islands to some degree impair confinement. 2. Local wells in B are formed along the resonances,
producing ripple-trapping loss of particles, making early loss of fusion alpha particles a problem for reactor walls. 3. In addition, because of the toroidal dependence of B, Alfven modes do not
produce local resonance islands with the effect of the mode localized to very near the resonance surface. Instead, a broad domain of chaotic orbits is produced, with particle diffusion at all
energies large even for small mode amplitude.
Transport Barriers (TBs) are crucial to magnetic fusion, in the form of edge pedestals (and also Internal Transport Barriers (ITB)). The dominant instability of confined plasmas, the coupled Ion
Temperature Gradient and Trapped Electron Mode, must be suppressed for TBs to exist. In a complete departure from previous theoretical analysis, we use statistical mechanical concepts together with
extensive gyrokinetic simulations to show that such thermalizing instabilities cannot access the enormous free energy of their gradients due to a fundamental dynamical constraint: the fluctuation-
induced charge flux must vanish. Together with basic statistical mechanics, this constraint can remove the dynamical pathway to instability. Velocity shear, often thought of as crucial for
TBs, is thereby rendered unnecessary. In fact, even for well-developed pedestals in present devices, simulations show that this constraint can be even more responsible for sustaining the barrier than
velocity shear. On future devices with considerably lower velocity shear, this will necessarily be the case. The constraint becomes restrictive when one plasma species becomes nearly adiabatic
(i.e. a Maxwellian response) due to rapid phase-space averaging (as in classical statistical mechanics). This averaging depends sensitively upon the magnetic geometry. When phase-space averaging rates are
faster than the mode dynamics, and when there are also substantial density gradients, the dynamical constraint forces low growth rates independently of the amount of free energy. Nonlinear
simulations show that heat fluxes are also commensurately small (reduced by orders of magnitude).
We present a new first-principles model that allows for the proper simulation of the plasma boundary of fusion devices, encompassing the edge and the scrape-off layer regions [1]. Built on
a set of fluid-like equations using a Hermite-Laguerre polynomial decomposition of the distribution function that retains the gyrokinetic (GK) Coulomb collision operator [2], the gyro-moment (GM)
model offers an ideal analytical and numerical framework to describe the wide range of plasma parameters found in the boundary region. Indeed, the GM model contains the core GK model and the fluid
and gyrofluid models used for scrape-off layer simulations as particular limits. We demonstrate that the GM approach can correctly retrieve the properties of microinstabilities that develop at low
plasma collisionality, strongly sensitive to kinetic features, in perfect agreement with the GK continuum GENE code. At the same time, we show that the GM model correctly retrieves the fluid limit at
high collisionality. A hierarchy of collision operator models with various physics fidelity is developed and numerically implemented using the same GM approach [3,4,5]. This allows us to perform a
comparison between collision operator models, revealing large deviations (compared to the Coulomb operator) in linear growth rates, collisional zonal flow damping, and turbulent transport levels.
Furthermore, we prove that the GM approach is numerically efficient from the low-collisionality banana regime in H-mode pedestals to the high-collisionality regime of the scrape-off layer. [1] Frei
B. J. et al., JPP 82 (2020) [2] Jorge R. et al., JPP 85 (2019) [3] Frei B. J. et al., JPP 87 (2021) [4] Frei B. J. et al., JPP 88 (2022) [5] Frei B. J. et al., PoP 29 (2022)
Many galaxies, including our own galaxy, the Milky Way, have a ‘bar’ structure at their center: an elongated collection of millions of stars that gradually rotates as if it were a solid body.
Galaxies are also embedded in massive dark matter haloes. When the rate at which the bar rotates resonates with a dark matter particle’s orbital frequency, the dark matter can suck angular momentum
out of the bar, causing it to slow down. Previous theories of the bar-halo interaction calculated this ‘dynamical friction’ on the bar in a manner directly analogous to Landau’s calculation of the
collisionless damping of an electric wave (and subsequently to O’Neil’s nonlinear generalisation thereof). This is no coincidence — bar-halo interactions are just one of a plethora of gravitational
dynamics problems that have direct plasma-kinetic analogues. In this talk I will introduce the astrophysical context of galactic bars and their host dark matter haloes, and describe some of the
aforementioned classic studies of the bar-halo interaction. However, those studies routinely ignored the fact that dark matter particles also experience random ‘diffusive’ kicks from other passing
dark matter clumps, gas clouds, and so on. I will describe recent work done in collaboration with Princeton plasma theorists on quantifying the impact of diffusion on bar-halo friction, a problem
which turns out to be mathematically identical to that of understanding particle energization in tokamaks. More broadly, I will argue that galactic dynamics has, over the last several decades,
largely failed to learn from its more well-developed cousin and therefore has a lot of catching up to do. But I will also argue that in return, we stellar dynamicists can provide you plasma theorists
with fresh contexts in which to ply your trade, and an opportunity to work on something both fantastically beautiful and totally useless.
As we approach the breakeven era of fusion, optimizing reactors to make them more efficient and less expensive will be critical to the wide-scale adoption of fusion as a commercial energy source. The
main challenge is to achieve high steady-state pressures in the core of the reactor to reach self-sustaining fusion conditions. At the same time, the boundary plasma must be kept sufficiently cool so
that the plasma exhausted from the hot core is not dangerous to the device walls. Turbulence is the main source of heat transport from the core to the boundary, which makes understanding how to
optimize the reactor design for turbulent transport a key to solving the competing (but coupled) core and boundary challenges. In this talk, I will present a vision for tackling this challenging
whole-device-modeling problem in a scalable way. The approach consists of four main modules: (1) fast-but-accurate core turbulence modeling with the GPU-native GX delta-f gyrokinetic code, which
leverages pseudo-spectral methods in both configuration (Fourier) and velocity (Hermite-Laguerre) space; (2) multi-scale modeling for predicting and evolving core profiles using a macro-scale
transport solver (Trinity) coupled to many radially-local GX micro-turbulence calculations in parallel, leveraging the scale separation between turbulence and transport at reactor scale; (3) kinetic
boundary turbulence modeling with the Gkeyll code, a full-f electromagnetic gyrokinetic model for the edge and scrape-off layer; and (4) transport optimization of fusion reactor designs by using
(1-3) as a massively-parallel whole-device model inside the optimization loop. For each of these modules I will highlight preliminary results, ongoing work, and future steps.
Recently, enormous progress has been made in obtaining quasisymmetry (QS) of outstanding precision through numerical optimization. Significant analytical progress has also been made possible thanks
to the asymptotic expansions near the magnetic axis (NAE). A critical factor in realizing good QS through the second-order of the NAE is the choice of the magnetic axis. However, because of the
complexity of the second-order NAE equations, analytical characterization of these preferred magnetic axes for optimal QS has not been possible so far. In this talk, we shall attempt to answer this
question for quasi-axisymmetric (QA) systems. We show that the magnetic axis is well described for small rotational transforms by the same equations that govern Euler-Kirchhoff elastic rod
centerlines (Langer and Singer, SIAM review 1996, Pfefferlé et al. PoP 2018). Surprisingly, the connection to these equations can only be made partially within the NAE framework and requires several
concepts from the soliton theory. We shall present analytical and numerical evidence supporting our insights for a broad range of QA stellarators.
Collisionless physics primarily determines the transport of fusion-born alpha particles in 3D equilibria. Several transport mechanisms have been implicated in stellarator configurations, including
stochastic diffusion due to class transitions, ripple trapping, and banana drift-convective orbits. Given the guiding center dynamics in a set of six quasihelical and quasiaxisymmetric equilibria, we
perform a classification of trapping states and transport mechanisms. In addition to banana drift convection and ripple transport, diffusive banana tip motion associated with the non-conservation of
the parallel adiabatic invariant is substantial among prompt losses, especially in equilibria close to quasiaxisymmetry. Furthermore, many lost trajectories undergo transitions between trapping
classes on longer time scales, either with periodic or irregular behavior. We discuss possible optimization strategies for each of the relevant transport mechanisms and perform a comparison between
classified guiding center losses and recently-developed metrics for banana drift convection transport. Equilibrium characteristics responsible for distinctions in transport are discussed.
Quasihelical configurations are found to have natural protection against both ripple trapping and diffusive banana tip motion leading to a reduction in prompt losses.
I will introduce our integrated program for both developing structure-preserving (SP) methods for kinetic applications and deploying these methods as high-performance tools in PETSc (Portable,
Extensible, Toolkit for Scientific computing). The metriplectic formalism used to develop these methods is introduced along with a strictly conservative, monotonic entropy particle-in-cell (PIC)
application, PETSc-PIC. We briefly discuss several methods developed for this code: a mixed Poisson solver for a C^0 electric field; strictly conservative mapping between particle and finite element
bases; monotonic entropy time integrators for collisions; and two new particle based Landau collision operators. A mature, high-order accurate, finite element based Landau collision operator with
adaptive mesh refinement, that has been optimized for accelerator architectures using the portable Kokkos programming language, is presented with results using the NVIDIA A100 and AMD MI250X
architectures. We present verification studies using plasma resistivity and compare with Spitzer resistivity. New batch GPU linear solvers have been developed for this work. This work is integrated
into the PETSc “solver” framework to provide a fully GPU time advance of the Landau collision operator. We show that collision time advance with many species is practical for realistic models of
tokamaks on today’s large-scale computers in cylindrical coordinates and that fully 3V models should be feasible in the near future.
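The Spitzer comparison mentioned above rests on a simple closed form: the parallel Spitzer resistivity falls off as T_e^(-3/2). A minimal sketch of that scaling (coefficient taken from the NRL Plasma Formulary form; this is an illustrative check, not code from PETSc-PIC):

```python
def spitzer_eta_parallel(Te_eV, Z=1.0, ln_lambda=10.0):
    """Parallel Spitzer resistivity in Ohm*m for electron temperature in eV.

    eta_par ~ 5.2e-5 * Z * ln(Lambda) / Te**1.5  (NRL Plasma Formulary form).
    """
    return 5.2e-5 * Z * ln_lambda / Te_eV**1.5

# Quadrupling the temperature reduces the resistivity by 4**1.5 = 8:
ratio = spitzer_eta_parallel(100.0) / spitzer_eta_parallel(400.0)
print(ratio)
```

This steep temperature dependence is why collisional effects dominate in cold post-thermal-quench plasmas and why resistivity is a convenient verification target for a collision operator.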
Local nonlinear gyrokinetic simulations of tokamak plasmas demonstrate that turbulent eddies can extend along magnetic field lines for hundreds of poloidal turns when the magnetic shear is very
small. By accurately modeling different field line topologies (e.g. low-order rational, almost rational, or irrational), we show that the parallel self-interaction of such "ultra long" eddies can
significantly reduce heat transport. This reveals novel strategies to improve confinement, constitutes experimentally testable predictions, and illuminates past observations of internal transport barriers.
The power density of tokamaks scales with the plasma beta as beta^2 which makes high-beta operation an attractive choice for future high-power-density tokamak devices [Menard et al Nucl. Fusion 22].
Ultra-high-beta (beta ~ 1) configurations have previously [Hsu et al., PoP 96] been explored at the level of asymptotic MHD equilibria by solving the Grad-Shafranov equation in the limit
(delta_Hsu)^2 ~ epsilon/(beta q^2) << 1. We extend this by obtaining exact global equilibria numerically. However, various instabilities may limit the utility of such equilibria. To that end, we present an
infinite-n ideal-ballooning and linear gyrokinetic analysis of ultra-high-beta (beta ~ 1) equilibria for tokamaks. In the first part, we examine ideal ballooning stability. We find that
alpha_MHD ~ 1/(delta_Hsu)^2 >> 1 is large enough to make them "second-stable" to the ideal ballooning mode. Upon ensuring ideal ballooning stability, we examine their stability to the two major sources of
electrostatic turbulence: ITG and TEM, using the initial value code GS2. To understand the trend with a changing beta, we compare these equilibria with an intermediate-beta (beta~0.1) and a low-beta
(beta~0.01) equilibrium at two different radial locations: the inner core (Normalized radius rho = 0.5) and the outer core (rho = 0.8) for two different triangularities: delta = 0.4 and delta = -0.4.
We find that the ultra-high-beta equilibria are stable to both the ITG and TEM over a wide range of gradient scale lengths (R/L_T and R/L_n). Next, we perform a linear electromagnetic study of all
the nominal local equilibria to explore the possible effects of Kinetic Ballooning Modes (KBMs). We find that all the high-beta equilibria become more unstable than their low-beta counterparts in the
inner core but turn out to be much more stable than both the low or intermediate beta equilibria in the outer core. We also find that the negative-triangularity high-beta equilibria do not show any
signs of KBMs. Using a full gyrokinetic code for linear electromagnetic studies at k_perp rho << 1 can be relatively expensive. Therefore, as an alternative, we numerically solve the KBM equations of
Tang et al. in the limit omega_bi < omega < omega_be as a reduced KBM model and compare the results with GS2.
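The beta^2 power-density scaling that motivates this study follows from p_fus ~ n^2 <sigma v> ~ p^2 at roughly fixed temperature (where <sigma v> ~ T^2 is a common D-T approximation near 10-20 keV), with plasma pressure p = beta B^2 / (2 mu0). A toy numerical illustration, with arbitrary normalization and parameter values not taken from the talk:

```python
MU0 = 4e-7 * 3.141592653589793  # vacuum permeability, T*m/A

def relative_power_density(beta, B):
    """Fusion power density up to a constant factor: (beta * B**2 / (2*mu0))**2.

    Assumes <sigma v> ~ T^2 so that p_fus ~ (n*T)^2 ~ pressure^2.
    """
    pressure = beta * B**2 / (2 * MU0)
    return pressure**2

# Doubling beta at fixed field strength quadruples the power density:
ratio = relative_power_density(0.2, 5.0) / relative_power_density(0.1, 5.0)
print(ratio)
```

The quadratic payoff in beta is what makes beta ~ 1 operation attractive despite the stability questions the abstract addresses.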
Recent significant progress has been made in the application of first-principles-based reduced turbulent transport models within integrated modelling. We focus on recent developments of the
quasilinear gyrokinetic transport model QuaLiKiz. Coupling to tokamak integrated modelling frameworks allows flux-driven core transport modelling for heat, particle, and momentum channels, with JET
discharge timescales simulated in ∼100CPUh. We sketch the basis of the QuaLiKiz transport model and its validity in comparison to nonlinear simulations. We then review validation of the model against
experimental measurements through flux-driven integrated modelling simulations including multiple transport channels and multiple ions. This capability enables the physics interpretation of
present-day experiments – where specific examples of new insight into transport mechanisms will be provided – as well as extrapolation to future machine performance. We also present a parallel
approach for fast integrated modelling based on neural network regression of an extensive QuaLiKiz run database. The neural network transport model is ×10^6 faster than QuaLiKiz itself, opening up
new possibilities for first-principles-based scenario optimization and control-oriented applications.
Ion-orbit loss is considered important to the radial electric fields Er of tokamak edge plasmas. In neoclassical equilibria, collisions can scatter ions onto the loss orbits and generate a
steady-state radial current, which may drive the edge Er away from the confined-region neoclassical value without orbit-loss. To quantitatively measure this effect, an ion-orbit-flux diagnostic has
been implemented in the axisymmetric version of the gyrokinetic particle-in-cell code XGC. The validity of the diagnostic is demonstrated by studying the collisional relaxation of Er in the core
plasmas. Then, the ion orbit-loss effect is numerically measured in the edge plasma in DIII-D geometry. It is found that the effect of collisional ion orbit loss is more significant for an
L-mode plasma than for an H-mode plasma.
In this seminar, I present a survey of the most commonly used fusion reactor systems codes in the literature, and suggest promising avenues toward increased flexibility, fidelity, and utility.
Systems codes are the "jack of all trades, master of none" of the fusion reactor modeling world. They aim to model the entire facility at a low fidelity, rather than any one phenomenon or system at
high fidelity. They integrate models from plasma physics, engineering, and economics. Systems codes range in complexity from simple spreadsheets to codes which implement numerical optimizers with
user-configurable constraints and iteration variables. In this seminar I argue that future systems codes should be modular, flexible, and extensible, allowing the user to implement different
workflows and mix-and-match models in their analyses. I identify holes in the landscape of low- and medium-fidelity models, especially those applicable to stellarators.
Due to the success of the Maryland Centrifugal Experiment (MCX) [R. F. Ellis et. al. Phys. Plasmas 8, 2057 (2000)] and initial theoretical analyses, the Centrifugal Mirror concept is being further
explored by the construction of the Centrifugal Mirror Fusion Experiment (CMFX) at the University of Maryland. This prompts a deeper inquiry into the underlying confinement and stability properties
of centrifugal mirrors as a class of devices. We will outline the key advances of this experiment over prior rotating mirrors, and give an updated physics basis for our modelling of the experiment
and a future reactor. We will highlight a new semi-analytical calculation for the equilibrium of such a system derived in the asymptotic limit of a rapidly-rotating plasma. This solution is used to
shed light on the ultimate limits of rotation in such plasmas. We also use this magnetic equilibrium to compute approximate end loss rates and electrical properties of the plasma. Corrections to the
parallel electric field are computed to ensure all losses are ambipolar. This provides a self-consistent basis for a 0D ``systems'' model of a centrifugal mirror machine. As examples we provide design
points corresponding to CMFX and an upgraded device capable of achieving breakeven.
Non-dissipative (i.e. Hamiltonian) dynamical systems freeze flux in phase space, just as highly-conductive plasma flows freeze magnetic flux. A time integrator for a non-dissipative system is
symplectic when it freezes flux exactly. Symplectic integration is routine in canonical coordinates, where the flux tensor takes the simplest possible form. Much less is understood about symplectic
integration in the general non-canonical case, which occurs more frequently in practice. In this talk, I will present a general approach to structure-preserving integration of noncanonical
Hamiltonian systems on exact symplectic manifolds. First, the original non-canonical Hamiltonian system is embedded in a larger (essentially) canonical system as a slow manifold. Then a canonical
symplectic integrator for the larger system is identified that has approximately the same slow manifold. Provided initial conditions are selected near the slow manifold, the integrator provides a
good approximation of the original system. There would be a problem with this approach if the discrete-time slow manifold happened to have any normal instabilities; such instabilities would carry
discrete trajectories away from the slow manifold, and the good approximation properties would break down. I will explain how this potential problem is avoided using a newly-developed theory of
nearly-periodic maps. By constraining the large system's integrator to be a non-resonant nearly-periodic map, existence of a discrete-time adiabatic invariant is guaranteed. Long-time normal
stability of the slow manifold then follows from a Lyapunov-type argument.
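For the canonical case this construction builds on, symplectic integration is standard; a minimal contrast between a symplectic leapfrog (Stormer-Verlet) step and non-symplectic forward Euler on a harmonic oscillator shows the bounded-versus-secular energy error the abstract alludes to. This is an illustrative sketch only, not the noncanonical scheme of the talk:

```python
def leapfrog(q, p, dt, n, omega=1.0):
    """Symplectic Stormer-Verlet steps for H = p^2/2 + omega^2 q^2/2."""
    for _ in range(n):
        p -= 0.5 * dt * omega**2 * q   # half kick
        q += dt * p                    # drift
        p -= 0.5 * dt * omega**2 * q   # half kick
    return q, p

def euler(q, p, dt, n, omega=1.0):
    """Non-symplectic forward Euler steps for the same Hamiltonian."""
    for _ in range(n):
        q, p = q + dt * p, p - dt * omega**2 * q
    return q, p

def energy(q, p):
    return 0.5 * p**2 + 0.5 * q**2

# Integrate for ~160 periods: leapfrog's energy error stays bounded,
# while forward Euler's energy grows secularly.
ql, pl = leapfrog(1.0, 0.0, 0.1, 10000)
qe, pe = euler(1.0, 0.0, 0.1, 10000)
print(abs(energy(ql, pl) - 0.5), abs(energy(qe, pe) - 0.5))
```

The exact flux-freezing property of the leapfrog map is what guarantees the bounded energy oscillation; the embedding strategy of the talk aims to recover this behavior for noncanonical systems.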
We develop a novel approach to gyrokinetics where multiple flux-tube simulations are coupled together in a way that consistently incorporates global profile variation while allowing the use of
Fourier basis functions. By doing so, the need for Dirichlet boundary conditions typically employed in global gyrokinetic simulation, where fluctuations are nullified at the simulation boundaries, is
obviated. This results in a smooth convergence to the local periodic limit as rho_* -> 0. In addition, our scale-separated approach allows the use of transport-averaged sources and sinks, offering a
more physically motivated alternative to the standard sources based on Krook-type operators. Having implemented this approach in the flux-tube code stella, we study the role of transport barriers and
avalanche formation in the transition region between the quiescent core and the turbulent pedestal, as well as the efficacy of intrinsic momentum generation by radial profile variation. Finally, we
show that near-marginal plasmas can exhibit a radially localized Dimits shift, where strong coherent zonal flows give way to flows which are more turbulent and smaller scale.
Electron temperature gradient (ETG) turbulence in the tokamak edge pedestal has a rich three-dimensional structure, particularly at ion-gyroradius-scales. Nonlinear multiscale gyrokinetic simulations
of Joint European Torus (JET) pedestals reveal that ETG pedestal turbulence is highly inhomogeneous in the direction parallel to the magnetic field. Its parallel distribution is determined by the
magnetic field geometry, with magnetic drift and finite Larmor radius effects being particularly important. Simulations must run sufficiently long for ion-gyroradius-scale ETG turbulence to saturate
and interact with electron-gyroradius-scale ETG turbulence. Simulations without ion-gyroradius-scale ETG turbulence produce at least 65% higher heat transport, indicating the transport-relevance of
ion-gyroradius-scale ETG for multiscale ETG-ETG interactions.
I present a solver for the Vlasov-Poisson system that computes the electric field from boundary values of the electric potential only. First, I review the formulation of boundary value problems by
integral operators and their discretization by Boundary Element Methods (BEM) with a particular focus on plasma dynamics. The reduction of dimension drastically reduces the number of unknowns while
yielding accurate values for the electric field near the boundary. Furthermore, the method exactly preserves the particles as the source of the electric field. I demonstrate the power of the BEM
approach with 3+3 dimensional numerical examples such as the formation of sheaths and a particle accelerator with complex geometry and mixed boundary values.
Particle methods for Fokker-Planck collision operators typically rely on the stochastic approach. Diffusion is interpreted as random kicks and particle motion is described by a stochastic
differential equation. This is not the only possibility, though. Diffusion can also be interpreted as a compressible vector field resulting from a gradient of an entropy functional, and the particles
pushed along this self-consistently evolving vector field deterministically. This observation puts the collisional motion to an equal footing with the Hamiltonian contribution that moves particles
along an incompressible vector field driven by the gradient of the total energy functional. In this seminar, I describe how these ideas can be exploited to discretize the non-linear Landau collision
operator, utilizing a collection of marker particles of arbitrary weights while preserving positivity, discrete-time momentum and energy conservation, and monotonic entropy evolution.
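The deterministic reading of diffusion can be sketched in 1D: writing df/dt = D d2f/dx2 as a continuity equation with velocity u = -D d(ln f)/dx, a Gaussian f of variance s2 gives u = D x / s2, and particles pushed along this field reproduce ds2/dt = 2D with no random kicks. A toy sketch of that idea (not the structure-preserving Landau discretization of the talk):

```python
import random

random.seed(1)
D, dt, nsteps = 0.5, 0.01, 200
xs = [random.gauss(0.0, 1.0) for _ in range(20000)]  # initial variance = 1

def variance(samples):
    m = sum(samples) / len(samples)
    return sum((x - m)**2 for x in samples) / len(samples)

for _ in range(nsteps):
    s2 = variance(xs)
    # Deterministic "diffusion velocity" u = -D * d(ln f)/dx = D*x/s2 for a Gaussian
    xs = [x + dt * D * x / s2 for x in xs]

# ds2/dt = 2D, so the variance should grow from 1 to 1 + 2*D*(nsteps*dt) = 3
print(variance(xs))
```

Every particle moves deterministically along the self-consistently estimated velocity field, which is what puts the collisional motion on the same footing as the Hamiltonian part.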
PTOLEMY is an experimental realization of a detection concept for neutrinos created in the first second following the Big Bang. New initiatives beginning with a setup in the D-Site Test Cell basement
in 2011 have yielded transformational developments in collaboration with PPPL in material science [1]. Today, we report on running the PTOLEMY transverse drift filter “in reverse” [2]. This new
technique for particle acceleration has remarkable possibilities for a “circular saw” approach to e-beam lithography and a novel tool for high efficiency, charged ion injection through transverse
drift into strong magnetic fields. A preliminary study of D+ injection into an NSTX-U-like geometry is explored. [1] High hydrogen coverage on graphene via low temperature plasma with applied
magnetic field, F. Zhao, Y. Raitses, X. Yang, A. Tan and C.G. Tully, Carbon 177 (2021) 244-251 [2] https://arxiv.org/abs/2108.10388
Runaway electrons generated during a tokamak disruption pose a severe threat to future reactor-scale devices. Due to the exponential sensitivity of the runaway generation rate to the plasma current,
robust avoidance and mitigation schemes cannot be fully validated in the medium-size tokamaks today, making comprehensive and validated runaway electron generation models essential for the
development of such schemes. In this contribution we present the Disruption Runaway Electron Analysis Model (DREAM), a new simulation tool specifically designed to study the generation of runaway
electrons during tokamak disruptions. The tool combines 1D fluid models for the background plasma (electric field, temperature, poloidal flux, ion charge states) with either fluid or kinetic models
for the electrons in tokamak geometry. To enable accurate and efficient simulations of the whole disruption, electrons are separated into three sub-populations based on their energies, allowing
different models to be used for thermal, superthermal, and relativistic electrons simultaneously. Notably, the thermal and runaway electrons can be treated using conventional fluid models, while the
superthermal electrons are evolved using a reduced kinetic equation, providing precise accounting of the transient---and thus inherently kinetic---hot-tail runaway generation mechanism. In addition
to the novel treatment of electrons, DREAM incorporates a number of physical mechanisms which have never before been brought together in a complete, self-consistent disruption simulation, including
radial transport of heat and electrons, dynamic evolution of ion charge states, collisions with partially ionized atoms, the effect of passive conducting structures on the electric field, and
hyperresistivity. The first studies conducted with DREAM indicate that fast electron radial transport may provide a path to effective runaway electron avoidance in ITER.
This talk will focus on mathematical modeling aspects in the context of stellarator design. We will discuss several aspects of the magnetic differential equation, both on a toroidal flux surface and
in a 3D volume filled with such surfaces. We will in particular describe some properties related to the periodic setting and the rotational transform, as well as their relation to the existence of
solutions. We will also describe how the general Fourier solution's singularities relate to the questions of existence and uniqueness of a solution. We will then open a discussion regarding the
choice of assumptions to derive equations of interest to study stellarator design.
Wide pedestal quiescent H-mode (WPQH) is an attractive scenario for future burning plasmas as it operates without ELMs. Unlike conventional H-modes, WPQH does not show degradation of
the energy confinement (tau_E) as the heating power is increased. In these experiments, the neutral beam heating power (PNBI) was varied between 3.7 and 5.5 MW, while the net torque from the beams
was kept nearly zero. As PNBI was increased, reduced transport calculated by TRANSP, as well as an increased core ExB shear rate, were observed, all suggesting the formation of an ion internal
transport barrier (i-ITB) and increased stored energy in the core. In this work, a local quasilinear turbulent transport model enabled by Trapped Gyro Landau Fluid (TGLF) was used to predict the ITB
and its stability. tau_E calculated from the newly constructed equilibria with the modelled profiles shows insensitivity to the increased PNBI; these modelled profiles use the TGYRO transport
solver with matched energy fluxes between TGLF and TRANSP. Linear stability analysis reveals that drift-wave instabilities in the core are stabilized by ExB shear, the Ti/Te ratio, and the Shafranov
shift. Detailed analysis will be presented in this talk.
The Particle-In-Cell (PIC) method using delta-f markers is widely adopted for simulating plasma turbulence because delta-f markers can capture small-amplitude perturbations well with low
statistical noise. Although the delta-f method can represent arbitrary phenomena without loss of generality, it is typically used to simulate weak turbulence within simple periodic boundaries,
limiting the usefulness of delta-f simulations. This is because the delta-f method has been considered unsuitable for implementing more general physical boundaries, such as absorbing walls or
arbitrary sources. Moreover, the typical delta-f simulation does not work well when strong forces are applied in the system. In this study, however, we show that the delta-f method can handle all
situations in the same way as the full-f method, even for arbitrary boundary conditions and forces in the system. The delta-f method can correctly solve the Vlasov equation in arbitrary situations if
the delta-f markers completely fill the entire phase-space of the simulation domain without any vacancy. We propose an efficient and practical way to generate new delta-f markers from the spatial
physical boundary and the virtual velocity boundary to fill the phase-space of the simulation domain within some statistical noise. This generalized delta-f PIC method is verified in various 1D-1V
physical problems such as Landau damping, two-stream instability, and plasma expansion to the vacuum. The method can be easily extended to multi-dimensional problems. For example, the global
gyrokinetic simulation code GTS can successfully simulate thermal quench phenomena with a much lower computational cost by using the novel generalized delta-f PIC method.
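The noise advantage underlying the delta-f method can be seen in a toy 1V estimate: markers drawn from a Maxwellian f0 can estimate a small perturbed moment either with full-f weights (1 + delta_f/f0) or with delta-f weights (delta_f/f0 alone), and only the latter's statistical error carries the small factor epsilon. A sketch under an assumed perturbation f = (1 + eps*(x^2 - 1)) * f0, not the GTS implementation:

```python
import random
import statistics as stats

random.seed(2)
EPS, N, TRIALS = 0.01, 5000, 50

def trial():
    xs = [random.gauss(0.0, 1.0) for _ in range(N)]  # markers sampled from f0
    h = [x * x - 1.0 for x in xs]                    # delta_f / f0 = EPS * h
    # full-f: total weight 1 + EPS*h; subtract the known unperturbed moment <x^2> = 1
    full = sum((1.0 + EPS * hi) * x * x for x, hi in zip(xs, h)) / N - 1.0
    # delta-f: weight EPS*h samples only the perturbation
    delta = sum(EPS * hi * x * x for x, hi in zip(xs, h)) / N
    return full, delta

fulls, deltas = zip(*(trial() for _ in range(TRIALS)))
# Both estimate delta<x^2> = 2*EPS = 0.02, but the delta-f scatter is far smaller.
print(stats.stdev(fulls), stats.stdev(deltas))
```

The full-f estimator inherits the O(1) sampling noise of the background, while the delta-f estimator's noise is proportional to the perturbation amplitude itself, which is why filling phase space with delta-f markers remains attractive even in the generalized setting described above.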
Two decades after two unsuccessful attempts to achieve breakeven (Q_DT=1) in tokamaks, this minimum fusion milestone remains out of reach. Although the tokamak program has developed all the means for
progress, the most critical reserve, i.e. suppression of recycling as a powerful mechanism of plasma cooling, has not yet been utilized. For 63 years, recycling has remained close to 100% in tokamaks. This results
in excessive heating power, in confinement too limited for burning plasma, and in unsolvable problems of power extraction. High recycling makes the entire regime complicated and the plasma
unpredictable and disruptive. In 2012 the technology of Continuously Flowing Liquid Lithium (24/78-FLlLi) was invented for reducing recycling to 50% by pumping the escaping plasma particles with a
creeping lithium layer. The suppression of edge plasma cooling leads to a new plasma regime with an order of magnitude better confinement and much simpler plasma physics, insensitive to thermal
conduction, and reduces the presently complicated plasma-surface interactions to the interaction of a flow of energetic particles with Li. Simpler plasma control gives hope for disruption avoidance. A
recent assessment of the low-recycling regime for JET-like parameters predicts Q_DT > 5 in burning plasma at NBI power P_NBI=4 (!) MW. In the author's view, such a regime would be appropriate for a
third DT campaign on JET as well as for restoring the credibility of fusion.
In addition to particle-in-cell methods, the XGC code evaluates dissipative operations such as collisions and heat sources/sinks on a 5-D grid at each fixed time step. This requires a mapping between
the marker particles and the 5-D phase-space grid. If the error in the particle energy conservation is undesirably large in this mapping, it can cause non-negligible numerical heating in a steep edge
pedestal. Here we discuss a novel mapping technique in velocity-space, based on the calculation of a pseudo-inverse, to exactly preserve moments up to the order of the discretization space. The new
interpolation method relies on a particle resampling technique which is used to create, annihilate or redistribute particles in configuration space while preserving a desired number of moments. We
will also discuss details of the resampling. [A. Mollén et al. J. Plasma Phys. 87 905870229 (2021)]
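The moment-preserving idea behind the pseudo-inverse mapping can be sketched compactly: choose new marker weights as the minimum-norm solution of the underdetermined moment equations, so the low-order velocity moments are reproduced exactly. The sketch below is an illustrative toy (random markers, monomial moments), not the XGC implementation described in the reference.

```python
import numpy as np

rng = np.random.default_rng(0)

# "old" marker set: velocities and weights whose first three velocity moments
# (density-, momentum-, energy-like) the resampling must preserve exactly
v_old = rng.normal(size=200)
w_old = rng.uniform(0.5, 1.5, size=200)
moments = np.vander(v_old, 3, increasing=True).T @ w_old

# resample onto a smaller marker set; the Moore-Penrose pseudo-inverse gives
# the minimum-norm weights that reproduce the target moments exactly
v_new = rng.normal(size=50)
M = np.vander(v_new, 3, increasing=True).T   # 3 x 50 moment matrix
w_new = np.linalg.pinv(M) @ moments
```

Since M has full row rank for distinct marker velocities, M @ w_new equals the target moments to machine precision, which is precisely the property that suppresses numerical heating in a steep pedestal.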
Axisymmetric MHD perturbations with toroidal mode number n=0 are resonant on the magnetic field lines going through the X-points of the tokamak divertor separatrix. As a consequence, current sheets
form along the separatrix, which profoundly affect the stability of these modes. In particular, current sheets at the magnetic separatrix lead to the stabilization of n=0 vertical plasma
displacements, at least on the ideal-MHD time scale, adding an important ingredient to the mechanism of passive feedback stabilization. A weakly damped n=0 mode, with a discrete oscillation frequency
close to the poloidal Alfvén frequency, is also found. This mode may be driven unstable by the resonant interaction with fast ions, adding a new item to the catalogue of energetic particle driven instabilities.
A numerical integration method for guiding-center orbits of charged particles in toroidal fusion devices with three-dimensional field geometry as described in Ref. [1, 2] is presented. Here, high
order interpolation of electromagnetic fields in space is replaced by a special linear interpolation, leading to locally linear Hamiltonian equations of motion with piecewise constant coefficients.
This approach reduces computational effort and noise sensitivity while the conservation of total energy, magnetic moment and phase space volume is retained. The underlying formulation treats motion
in piecewise linear fields exactly and thus preserves the non-canonical symplectic form. The algorithm itself is only quasi-geometric due to a series expansion in the orbit parameter. For practical
purposes an expansion to the fourth order retains geometric properties down to computer accuracy in typical examples. When applied to collisionless guiding-center orbits in an axisymmetric tokamak
and a realistic three-dimensional stellarator configuration, the method demonstrates correct long-term orbit dynamics. In Monte Carlo evaluation of transport coefficients, the computational
efficiency of quasi-geometric integration is an order of magnitude higher than with a standard fourth order Runge-Kutta integrator. Moreover, the integration method is tested for the computation of
fusion alpha losses in a realistic stellarator configuration. A Fortran program with the name “Guiding-center ORbit Integration with Local Linearization Approach” (GORILLA) is publicly available as
Open Source code on GitHub [3]. References: [1] M. Eder et al., 46th EPS Conf. on Plasma Physics, 2019, ECA Vol. 43C, P5.1100. [2] M. Eder et al., Physics of Plasmas 27, 122508 (2020), https://doi.org/10.1063/5.0022117. [3] M. Eder et al., GORILLA GitHub repository: https://github.com/itpplasma/GORILLA
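The key point above is that a piecewise-linear field turns the equations of motion locally into z' = A z + b with constant A and b, which is exactly solvable, while a truncated series in the orbit parameter recovers that solution to high order. The toy below compares the two for a rotation-generator A standing in for the local guiding-center flow; all of it is an illustrative assumption and not GORILLA's Fortran implementation.

```python
import numpy as np

# Locally linear system z' = A z + b with constant coefficients.
A = np.array([[0.0, 1.0], [-1.0, 0.0]])   # rotation generator: A^2 = -I
b = np.array([0.0, 0.1])
dt = 0.05

def step_exact(z):
    """Exact propagator: z(dt) = e^{A dt} z + A^{-1}(e^{A dt} - I) b.
    For this A the matrix exponential is a rotation, known in closed form."""
    c, s = np.cos(dt), np.sin(dt)
    P = np.array([[c, s], [-s, c]])
    return P @ z + np.linalg.solve(A, (P - np.eye(2)) @ b)

def step_series(z, order=4):
    """Truncated series in the orbit parameter (quasi-geometric step):
    z terms A^k z dt^k/k!, inhomogeneous terms A^(k-1) b dt^k/k!."""
    out = z + b * dt
    Az = z.copy()
    Ab = b * dt
    for k in range(1, order + 1):
        Az = A @ Az * dt / k          # accumulates A^k z dt^k / k!
        out = out + Az
        if k < order:
            Ab = A @ Ab * dt / (k + 1)  # accumulates A^k b dt^(k+1) / (k+1)!
            out = out + Ab
    return out
```

At fourth order the series step agrees with the exact propagator to roughly dt^5 per step, consistent with the statement that geometric properties are retained down to computer accuracy in practice.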
In this talk, I will present a STRUcture-Preserving HYbrid code - STRUPHY - for the simulation of magneto-hydrodynamic (MHD) waves interacting with a small population of energetic particles far from
thermal equilibrium (kinetic species). Such configurations appear for instance in deuterium-tritium fusion reactors, where hot α-particles or fast ions coming from external heating devices can
resonantly interact with MHD waves and thus compromise confinement time. The implemented model features linear, ideal MHD equations in curved, three-dimensional space, coupled nonlinearly to the
full-orbit Vlasov equations via a current coupling scheme (CCS). The implemented algorithm is based on finite element exterior calculus (FEEC) for MHD and particle-in-cell (PIC) methods for the
kinetic part; it provably conserves mass, energy, and the divergence-free constraint for the magnetic field, irrespective of the metric (i.e., space curvature), mesh parameters and chosen order of the
scheme. The motivation for this work stems from the need for reliable long-time simulations of energetic particle physics in complex geometries, covering the whole range of MHD waves. In STRUPHY, the
finite element spaces are built from tensor products of univariate B-splines on the logical cuboid and can be made high-order by increasing the polynomial degree. Time-stepping is based on splitting
of a skew-symmetric matrix with implicit sub-steps, mitigating CFL conditions from fast magneto-acoustic waves. High-order time splitting schemes can be used in this regard.
We explore the issue of finite grid (aliasing) instabilities in gyrokinetic particle-in-cell (PIC) algorithms. Theory for finite grid instabilities in momentum conserving PIC applied to full particle
models including charge separation effects (e.g. Vlasov-Poisson) is well established [1], leading to the requirement of resolving the Debye length. Recent studies with momentum conserving PIC applied
to quasi-neutral gyrokinetic models show that the situation can be worse in the sense that the instability is present for arbitrary spatial resolution [2,3]. We show, however, that a simple
reformulation of the equations, making use of the continuity equation, eliminates this instability for all practical purposes. Our reformulation has connections to so-called energy-conserving PIC
interpolations [4], the split-weight scheme [5], and the vorticity equation. In addition, we explore the effects of finite beta and finite drifts. We demonstrate that our approach is absolutely
stable for static plasmas for any spatial resolution, and is stable for drifting plasmas with electron Mach numbers below unity (which is generally ensured by ambipolarity in the plasmas of
interest). [1] A. B. Langdon, J. Comput. Phys. 6 (1970) 247–267, doi:10.1016/0021-9991(70)90024-0. [2] G. J. Wilkie, W. Dorland, Phys. Plasmas 23 (2016) 052111, doi:10.1063/1.4948493. [3] B. F. McMillan, Phys. Plasmas 27 (2020) 052106, doi:10.1063/1.5139957. [4] D. C. Barnes, L. Chacon, Comput. Phys. Commun. 258 (2021) 107560, doi:10.1016/j.cpc.2020.107560. [5] I. Manuilskiy, W. W. Lee, Phys. Plasmas 7 (5) (2000) 1381, doi:10.1063/1.873955.
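The seed of a finite grid instability is ordinary aliasing: a mode with wavenumber above the grid Nyquist reappears as a spurious low-k mode once it is sampled on the grid. The few lines below demonstrate that mechanism in its simplest form (pure grid sampling, no particles or fields); it is a minimal illustration, not a PIC code.

```python
import numpy as np

N = 16                                   # grid points
k_true = 20                              # unresolved mode number (> N/2)
x = np.arange(N) / N
rho = np.cos(2 * np.pi * k_true * x)     # grid-sampled density perturbation

# the discrete spectrum shows the mode folded back into the resolved band:
# k_true = 20 aliases to k_true - N = 4 on a 16-point grid
spectrum = np.abs(np.fft.rfft(rho))
k_seen = int(np.argmax(spectrum[1:])) + 1   # skip the k=0 mean
```

In a momentum-conserving PIC loop this aliased mode feeds back on the particles through the field solve, which is why the instability can persist at any resolution for quasi-neutral gyrokinetic models unless the equations are reformulated as described above.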
The collisionless plasma transport in given stochastic magnetic fields has been studied to understand the mechanisms of the thermal quench in tokamak disruptions, using the global gyrokinetic simulation code GTS. Previous studies have mostly focused on the dynamics of the passing particles along the open magnetic field lines during the thermal quench. However, we found that a considerable fraction of the electrons (up to 60%) can be trapped in the device by the magnetic mirror effect, although the magnetic field lines are open to the wall. A high-resolution vacuum-field analysis of the
stochastic layer provides rich information regarding the 3-dimensional magnetic topology for understanding the characteristics of the plasma transport in such systems. In this study, we present a
comprehensive picture of the relation between the plasma dynamics and the 3-D topology of the stochastic layer, which is essential to understand thermal quench physics. It was found that the
consistent coupling of electron and ion dynamics through the ambipolar electric fields plays a critical role in determining the electron thermal energy transport. The 3-dimensional ambipolar
potential builds up in the stochastic layer to keep the quasi-neutrality of the plasma during the thermal quench. The ambipolar potential produces the ExB vortices that mix the plasma across the
magnetic field lines. The ExB mixing helps the high-energy trapped electrons to exit to the wall through the favorable open magnetic field lines. As a result, the electron temperature decreases
steadily within the time scale of milliseconds.
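The magnetic-mirror trapping invoked above has a simple quantitative form: a particle launched at the field minimum mirrors before reaching the field maximum when (v_par/v)^2 < 1 - Bmin/Bmax, so an isotropic population has a trapped fraction of sqrt(1 - Bmin/Bmax). The Monte Carlo sketch below checks that estimate; it is a textbook illustration, not part of the GTS analysis.

```python
import numpy as np

def trapped_fraction(Bmin, Bmax, n=200_000, seed=1):
    """Monte Carlo trapped-particle fraction for an isotropic distribution
    at the field minimum Bmin: a particle mirrors (is trapped) when
    (v_par / v)^2 < 1 - Bmin/Bmax.  Uniform pitch sampling is isotropic
    in this variable.  (Illustrative sketch only.)"""
    rng = np.random.default_rng(seed)
    pitch = rng.uniform(-1.0, 1.0, n)    # v_par / v
    return float(np.mean(pitch**2 < 1.0 - Bmin / Bmax))
```

For a mirror ratio of 2 (Bmin/Bmax = 0.5) the analytic trapped fraction is sqrt(0.5), about 71%, which makes the large trapped-electron population on "open" field lines less surprising.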
Radar resonance-enhanced multi-photon ionization (Radar REMPI) is a remote diagnostic technique which selectively ionizes a species of interest using a tunable laser to create a laser-induced plasma
(REMPI) and probes this plasma via coherent microwave scattering (radar) to obtain time-resolved information about the plasma. The high selectivity of the resonance-enhanced ionization allows us to
detect a trace species of interest in a gas filled with other species so long as we tune onto an energy level resonance of the trace species. In this talk, we present the results of our theoretical
and experimental investigation of the effects of magnetic fields and low pressures (below 1 Torr) on the Radar REMPI diagnostic. In the case of applied magnetic fields, we constructed a toy model
which predicts magnetically induced depolarization of the scattered microwaves from the REMPI plasma and observe the effect in experiments. This finding suggests a novel capability of Radar REMPI for
the standoff measurement of vector magnetic fields. In the case of low pressures, we observed experimentally that increasing microwave frequency results in a faster decay rate of the scattering
signal, which would not be expected from coherent microwave scattering. This discrepancy is resolved by considering the effect of the spatial distribution of plasma on the phase of the scattered
microwaves, which we call the decoherence effect. We will delve into the physics of the magnetic field and low-pressure effects and discuss potential applications.
Establishing the mechanisms by which the solar wind enters Earth's magnetosphere is one of the biggest goals of magnetospheric physics as it forms the basis of space weather phenomena. Magnetic
reconnection is believed to be the dominant process by which solar wind plasma enters the magnetosphere. However, for periods of northward interplanetary magnetic field (IMF), reconnection is less
likely at the dayside magnetopause, and Kelvin–Helmholtz waves (KHWs) are an important agent for plasma entry and for the excitation of ultra-low-frequency (ULF) waves. Over the past two decades,
several space missions have enabled a leap forward in our understanding of this phenomenon at the Earth's magnetopause. Moreover, numerical MHD simulations have been extensively used to study the
nonlinear evolution of the KH instability. This talk highlights the contributions to our understanding, and the ongoing research on KHWs and their role in plasma transport across the magnetopause, using the THEMIS (Time History of Events and Macroscale Interactions during Substorms) mission, the Magnetospheric Multiscale (MMS) mission, and a global MHD simulation, OpenGGCM.
Ion orbit loss has long been considered a potential player in triggering and/or sustaining the turbulence suppression necessary for the L-H transition. A loss cone model for the X-point mediated loss
of thermal ions in diverted tokamaks finds that the loss plays a significant role in establishing the near-edge environment. The effective timescale of any one trajectory’s loss qualitatively changes
if transit occurs faster than the ion would be scattered out of the loss region; above this threshold the rate of loss is driven by the rate of transport into the loss cone, while below this
threshold the rate of loss is driven by the physical-space drift motions. The consideration of velocity-space relaxation processes along the loss region’s boundary serves both to demarcate the loss
cone along this qualitative change in the effective timescale and to estimate the steady-state loss current across the resulting modified boundary. The orbit loss calculations are implemented into
the transport code SOLPS as source terms, and the equilibrium radial current balance is investigated in L-mode plasmas near the L-H transition. Within the studied parameter space, the loss current is
directly proportional to the ion temperature and exhibits low- and high-density branch behaviors for a given input power. Analysis indicates that the return current primarily consists of two
components: an increased inward flow of ions associated with the perpendicular viscosity and a decreased outward diamagnetic flow of ions. The former drives the leading order plasma response, an
increase in the shear of the electric field and plasma rotations in the vicinity of the separatrix, while the latter involves lower order poloidal redistributions of the pressure about a flux
surface. Following these observations, experimental proposals on ASDEX Upgrade have been made to ascertain the role of ion orbit losses.
A two-dimensional (2D) hydrodynamic model is developed for characterizing dynamics of driven-dissipative dust cloud confined in an axisymmetric toroidal setup along with an unbounded sheared
streaming plasma [1, 2]. This formulation brings out several sources of the dust vorticity due to the background sheared plasma flow fields. The numerical solutions confirm the analytical structure
of the driven dust vortex flow in the linear limit [1], but also fairly predict experimentally observed nonlinear characteristics of the dust rotation, such as a threshold structural bifurcation, the emergence of a uniform-vorticity core region, a recovered scaling law for estimating the kinematic viscosity from experimentally measurable quantities, and the formation of steady-state multiple counter-rotating and co-rotating vortices [2, 3]. The hydrodynamic model is extended to the analysis of driven vortex characteristics in the presence of an external transverse and weak magnetic field (B) in a planar setup and in parametric regimes motivated by recent magnetized dusty plasma (MDP) experiments [4]. This analysis has shown that shear in B can produce a sheared internal field (Ea) between electrons and ions due to the E × B and ∇B × B drifts, which causes rotation of the dust cloud levitated in the plasma. The flow solution demonstrates that the neutral pressure decides the dominance between the ion drag and the Ea force. The sheared ion drag generates an anti-clockwise circular vortical structure, whereas the sheared Ea force is very localized and gives rise to a clockwise D-shaped elliptical structure, which turns into a meridional structure with decreasing B. In the regime of high pressure and lower B, the Ea force becomes comparable to or dominant over the ion drag, and peculiar counter-rotating vortex pairs develop in the domain. [1] M. Laishram, D. Sharma, and P. K. Kaw, Phys. Rev. E 91, 063110 (2015). [2] M. Laishram, D. Sharma, P. K. Chattopadhyay, and P. K. Kaw, Phys. Rev. E 95, 033204 (2017). [3] M. Laishram, D. Sharma, and P. Zhu, Journal of Physics D: Applied Physics 53, 025204 (2019). [4] M. Laishram, Driven dust vortex characteristics in plasma with external transverse and weak magnetic field (2020), arXiv:2011.03237.
Extended neoclassical plasma rotation theory has been developed based on the fluid moment equations with collisionality-extended Braginskii’s closure of the viscosity and the first-order poloidal
asymmetries in velocities, densities, and electrostatic potential [1-3]. Major recent extensions [3,4] include rotation and transport calculations with generalized D-shaped flux surfaces using the
Miller geometry [5] with Shafranov shifts. Recent rotation calculations of DIII-D and KSTAR discharges [3,4], using the nonlinear self-consistent neoclassical GTROTA code [6], indicated agreement with the measurements to within 15%, except in the very edge (rho > 0.90), thus providing confidence in the related transport calculations such as the radial electric field and the poloidal asymmetries.
Further extension efforts [7] are in progress with its major extension to Mikhailovskii-Tsypin’s closure of the viscosity and other edge physics to increase the accuracy on the edge rotation and
transport calculations, and to include non-axisymmetric toroidal perturbations and develop a neoclassical toroidal viscosity formalism. Capabilities of the GTROTA code have also been extended to
allow self-consistent rotation and transport calculations of up to four ion and electron species. Future plans on theory and code development, with its focus on edge rotation and transport, will be
discussed. [1] W. M. Stacey, A. W. Bailey, D. J. Sigmar and K. C. Shaing, Nucl. Fusion 25 , 463 (1985). [2] W. M. Stacey and C. Bae, Phys. Plasmas 16, 082501 (2009). [3] C. Bae, W.M. Stacey, W.M.
Solomon, Nucl. Fusion, 53 (2013) 043011. [4] C. Bae, W.M. Stacey, S.G. Lee, L. Terzolo, Phys. of Plasmas, 21 (2014). [5] R.L. Miller, M.S. Chu, J.M. Greene, et al., Phys. of Plasmas, 5 (1998). [6]
C. Bae, W.M. Stacey, T.D. Morley, Comp. Phys. Communications, 184 (2013). [7] W. M. Stacey and C. Bae, Phys. Plasmas 22, 062503 (2015).
The plasma core is likely to be affected by 'sawteeth', which play a significant role in core confinement and impurity transport. In particular, heavy impurities accumulated in the core plasma may lead to a radiative collapse. In order to understand and control their dynamics, the characteristics of compound sawteeth and the control of sawteeth by current or power deposition in the vicinity of the q=1 surface are explored by means of MHD simulations. In this work, the numerical tool used to simulate sawteeth is the XTOR-2F code, a non-linear three-dimensional code solving the two-fluid
MHD equations. Neoclassical transport is shown to be important for heavy impurities in the core region. Meanwhile, ASDEX-U measurements show that the impurity dynamics in presence of sawteeth differ
from the predictions made by transport codes. Fluid equations that model the transport of impurities in a highly collisional (Pfirsch-Schlüter) regime are implemented and coupled to the set of
two-fluid MHD equations. The simulations show a difference between the impurity profiles with and without sawteeth, consistent with experimental observations. This results from a competition between
neoclassical processes and sawtooth relaxations.
The effects of flow shear on the stability of a (2,1) tearing mode are examined using numerical and analytic studies on a number of model systems [1]. For a cylindrical reduced magnetohydrodynamic
(MHD) model, linear computations using the CUTIE code show that sheared axial flows have a destabilizing effect, while sheared poloidal flows tend to reduce the growth rate of the mode. These effects
are independent of the direction of the flow. For helical flows the sign of the shear in the flow matters. This symmetry breaking is also seen in the nonlinear regime where the island saturation
level is found to depend on the sign of the flows. I have subsequently extended this study to the visco-resistive kink (m=1, n=1) mode in the RMHD regime [2].
In the last two decades, the field of heliophysics has witnessed a tremendous surge. Research in heliophysics now encompasses a wide spectrum of fields ranging from space and magnetospheric physics
to solar and stellar physics. One of the primary objectives of heliophysics is to understand the Sun and its interactions with planets (including Earth), where space weather plays an important role.
In my presentation, I will begin with NASA’s Mars Atmosphere and Volatile EvolutioN (MAVEN) mission and describe my study of the Martian response to an interplanetary coronal mass ejection (ICME) - sometimes termed a solar storm - using a sophisticated multifluid magnetohydrodynamic (MHD) code. I will then discuss how an active young Sun transformed Mars from an early warmer and wetter world to a desiccated and frigid planet with a tenuous atmosphere. In recent years, it has become well recognized that heliophysics plays an increasingly important role in the rapidly growing field
of exoplanetary science. I will present my studies on exoplanetary atmospheric losses due to the impact of stellar winds and storms on planets residing within close-in habitable zones of M-type
stars, which is a key factor that determines planetary habitability. Last but not least, I will introduce the ten-moment multifluid model developed at Princeton with its applications to the global magnetosphere of Mercury, and discuss how this new model may enhance the science return from the ESA-JAXA BepiColombo mission to Mercury, on which I serve as a Co-I.
Stellarator plasmas have been observed to be nonlinearly stable even when driven beyond linear MHD stability thresholds. Hence, stellarator designs could employ nonlinear stability considerations to
relax linear stability constraints, which can often be too restrictive and costly. However, this possibility has not been systematically investigated due to the lack of a state-of-the-art nonlinear
initial-value MHD code for stellarators. We aim to fill this gap by extending the M3D-C1 code from tokamak to stellarator geometry. Our approach introduces a set of logical coordinates, in which the
computational domain is axisymmetric, to utilize the existing finite-element framework. The mapping from the logical to the physical (R, phi, Z) coordinates is then used to calculate derivatives in
the latter, in terms of which the existing physics equations are written. This way, no significant changes to the extended MHD models within M3D-C1 are required. So far, we have successfully
implemented this approach in 2D and verified its results against the existing code. Preliminary results in 3D will also be presented, including proof-of-principle resistive MHD simulations of simple
stellarator plasmas such as a rotating ellipse.
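The mapping-based approach above hinges on one piece of calculus: derivatives in the physical (R, phi, Z) coordinates are obtained from derivatives on the axisymmetric logical grid through the inverse (transpose) Jacobian of the mapping. The sketch below shows that chain rule at a single point in 2D; the diagonal mapping and all names are hypothetical and do not come from M3D-C1.

```python
import numpy as np

def physical_gradient(grad_logical, jac):
    """Given df/d(u,v) on the logical grid and the Jacobian
    J = d(R,Z)/d(u,v) of the logical-to-physical mapping, return
    df/d(R,Z) via the chain rule:
    grad_log f = J^T grad_phys f  =>  grad_phys f = J^{-T} grad_log f."""
    return np.linalg.solve(jac.T, grad_logical)

# hypothetical mapping R = 2u, Z = 3v and test function f = R + Z,
# so df/du = 2 and df/dv = 3 while the physical gradient is (1, 1)
J = np.array([[2.0, 0.0],
              [0.0, 3.0]])
grad_physical = physical_gradient(np.array([2.0, 3.0]), J)
```

Because only the derivative operators change, the existing finite-element machinery and physics equations can be reused unmodified, which is the point of the logical-coordinate construction.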
Point defects have a strong impact on the performance of semiconductor and insulator materials used in technological applications, spanning microelectronics to energy conversion and storage. They
also play a critical role in the synthesis and growth of oxide films. The nature of the dominant defect types, how they vary with processing conditions, and their impact on materials properties are
central aspects that determine the performance of a material in a certain application. This information is, however, difficult to access directly from experimental measurements. Consequently,
computational methods, based on electronic density functional theory (DFT), have found widespread use in the calculation of point-defect properties. DFT based computational methods for point defects
have inherent errors that require explicit corrections. In this talk I explain DFT-based modeling of point defects and the associated correction schemes using Cr2O3 as an example. Utilizing the point-defect data, self-diffusion in Cr2O3 is evaluated.
Edge-localized modes (ELMs) are a major concern for tokamak devices and their control is crucial for the operation of future reactor-scale machines. ELMs can be triggered when strong gradients are
present in the edge transport barrier (edge pedestal). The width and height of the pedestal is often constrained by the occurrence of ideal-MHD peeling-ballooning modes [1] as well as kinetic
ballooning modes (KBMs) in the pedestal region; this picture has been successfully applied in the EPED model [2] to predict the pedestal height and width in conventional aspect ratio tokamaks. The
predictions of the EPED model however often do not accurately describe observations in machines with low aspect ratio (e.g. spherical tokamaks). For ELMing discharges in NSTX the EPED model predicts
stability. The reasons for this discrepancy might be associated with the limitation to ideal-MHD computations or the breakdown of the assumption that local ballooning theory well approximates the
stability limit of kinetic ballooning modes in the edge of low aspect ratio plasmas. With the goal of obtaining a model to predict ELMs in spherical tokamaks and to find the limiting values for
pedestal width and height, we model peeling-ballooning modes including non-ideal effects. The extended-MHD code M3D-C1 [3,4] is applied to determine the stability thresholds of peeling-ballooning
modes in NSTX discharges. We mainly focus on the influence of plasma resistivity and rotation on edge stability, but also consider two-fluid effects. By varying pedestal parameters such as the
pressure gradient and current density for given (ELMing) discharges, we are able to locate these discharges in parameter space relative to the stability boundary. In this context, we also identify
the physics mechanisms that are important to describe these macroscopic edge modes in spherical tokamaks. We find robust resistive peeling-ballooning modes well before the ideal stability threshold
is met. These modes thus extend the region of ideal peeling-ballooning instability in the investigated ELMing NSTX discharges. It is found that the actual NSTX plasmas are localized close to, or
slightly within, the unstable side of the stability boundary calculated with our model. For large aspect ratio discharges the model is benchmarked with the ELITE code, which is employed in the frame
of the EPED model. This study of macroscopic instabilities constitutes a first step towards a model to predict pedestal width and height in H-mode discharges in spherical tokamaks.
The Gkeyll computational plasma physics code aims to provide a unified framework for fluid and kinetic studies using state-of-the-art discontinuous Galerkin (DG) methods. In developing Gkeyll we
learned a number of lessons in DG theory and application, which are central to Gkeyll's algorithms and perhaps of interest to other DG workers. Some of these lessons stem from encountering operations
as simple as division, so in this talk we will motivate DG division and how we have addressed that problem to deliver alias-free and conservative algorithms for, for example, kinetic collision
operators. The concept of a weak equality, common to applied math and FEM communities, plays a central role. We will then discuss how weak equality has been leveraged in order to formulate and
implement more complex operations, such as spectral transforms for piecewise discontinuous polynomial data and multigrid solutions of DG Poisson equations.
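Weak division can be stated in one line: find the polynomial h on a cell such that (g*h, phi_i) = (f, phi_i) for every basis function phi_i, which turns pointwise division into a small linear solve and avoids aliasing. The single-cell Legendre sketch below illustrates that construction; it is a generic FEM-style toy, not Gkeyll's implementation.

```python
import numpy as np

def weak_divide(f_coef, g_coef, p=2):
    """Weak (Galerkin) division on one cell: find h in span{P_0..P_{p-1}}
    with (g*h, P_i) = (f, P_i) for every Legendre basis function P_i.
    Coefficients are Legendre-series coefficients on [-1, 1]."""
    xq, wq = np.polynomial.legendre.leggauss(2 * p)   # exact for this demo
    phi = np.array([np.polynomial.legendre.legval(xq, np.eye(p)[i])
                    for i in range(p)])               # basis at quad nodes
    fq = np.polynomial.legendre.legval(xq, f_coef)
    gq = np.polynomial.legendre.legval(xq, g_coef)
    A = (phi * (wq * gq)) @ phi.T                     # (g P_j, P_i) matrix
    rhs = phi @ (wq * fq)
    return np.linalg.solve(A, rhs)

# exact recovery when f = g*h with h in the space and g > 0 on the cell
g_coef = np.array([1.0, 0.5])        # g = P0 + 0.5 P1, positive on [-1, 1]
h_true = np.array([0.2, 0.1])
f_coef = np.polynomial.legendre.legmul(g_coef, h_true)
h = weak_divide(f_coef, g_coef)
```

The weighted mass matrix A is symmetric positive definite whenever g > 0 on the cell, so the weak quotient is well defined even though the pointwise ratio f/g is generally not a polynomial.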
Global gyrokinetic simulations are an increasingly important tool for understanding and designing magnetic confinement fusion devices. The recent operation of the optimized stellarator Wendelstein
7-X in a turbulence-dominated regime gives renewed urgency to the need for stellarator gyrokinetics in general, while electromagnetic effects are likely to play an important role in turbulence
transport in both tokamaks and stellarators. In this talk, we report developments and deployment of the full volume gyrokinetic code XGC for stellarator physics and electromagnetic physics. XGC has
been extended to treat general 3D toroidal equilibria with modifications to permit an efficient electrostatic field solve. The code has then been used to reproduce linear electrostatic ITG calculations
in Wendelstein 7-X and PPPL's QUASAR, followed by the first global turbulence transport simulations in QUASAR geometry. In addition, implementation of improved explicit electromagnetic techniques has
allowed XGC to reproduce the transition from ion temperature gradient (ITG) to kinetic ballooning mode (KBM) dominated microinstability regimes in tokamak cyclone base case geometry. This has then been
extended to simulations of KBM in test geometry similar to NSTX-U. Recent flux tube simulations with the GENE code (Aleynikova 2018) show a similar transition from an ITG to a KBM dominated regime at
beta values lower than those envisioned for peak Wendelstein 7-X performance and future reactors. One future use of the combined global electromagnetic stellarator gyrokinetic code will be to
investigate this and predict turbulence transport in optimized stellarators.
An overview of European gyrokinetic PIC codes will be presented with a focus on numerical schemes used in the electromagnetic regime. Simulations of Alfven modes destabilized by the fast particles
and of the zonal flows generated by the unstable Alfven waves will be discussed. Gyrokinetic PIC simulations of MHD-type instabilities will be addressed using the example of internal kink modes. Simulations of the transition from the ITG-dominated regime to the KBM regime, and finally to microtearing instabilities as the plasma beta increases, will be shown.
Distributed Compute Labs (DCL) is a Canadian educational nonprofit organization responsible for developing and deploying the Distributed Compute Protocol (DCP), a lightweight, easy-to-use, idiomatic,
and powerful computing framework built on modern web technology, that allows any device — from smartphones to enterprise web servers — to contribute otherwise idle CPU and GPU capacity to secure and
configurable general-purpose computing networks. By leveraging existing devices and infrastructure — a university’s desktop fleet, for example — a large supply of latent computational resources
becomes available at no additional capital cost. DCP makes it possible for everyone — from a student in Santa Barbara to a large enterprise in New York City — to have access to large quantities of
cost-effective computing resources. In summary, the Distributed Compute Protocol democratizes access to digital infrastructure, reduces barriers, and unleashes innovation.
ITER plans to rely on RMP coils as the primary means for ELM control. However, puzzling observations on present-day experiments complicate understanding the underlying physics: plasma density is
pumped out, which can lower the fusion efficiency in ITER, while the electron heat transport is still low in the pedestal. Kinetic level understanding including most of the important physics is
needed but has not been available. Gyrokinetic total-f simulation of the plasma transport driven by n=3 resonant magnetic perturbations (RMPs) in a DIII-D H-mode plasma is performed using the
gyrokinetic code XGC. The RMP field is calculated in M3D-C1 and coupled into XGC in realistic divertor geometry with neutral particle recycling. The RMP field is stochastic around the pedestal foot
but exhibits good KAM surfaces at pedestal top and steep-slope. The simulation qualitatively reproduces the experimental phenomena: plasma density is pumped-out due to enhanced electrostatic
turbulence while electron heat transport is low. Different from earlier gyrokinetic studies, the present simulation consistently combines neoclassical and turbulent transport, a fully nonlinear
Fokker-Planck collision operator, neutral particle recycling, and the full 3-D electric field. Density pump-out is not seen without turbulence effects. Reduction of the ExB shearing rate is likely to be
responsible, mostly, for the enhanced edge turbulence, which is found to be from trapped electron modes.
The inference of velocity fields from 2D movies of evolving conserved scalars (optical flow) is fundamentally ambiguous due to the well-known “aperture problem”: velocities along isocontours of the scalar are not visible. This may even corrupt the inference of velocity fields averaged at scales longer than the typical length scale of features in the scalar field, as in drift-wave turbulence. However,
for divergence-free flows, a stream-function formulation allows us to show that the "invisible velocity" vanishes in the surface average inside any closed scalar isocontour. This error-free averaged
velocity may be used as an “anchor” for a more reliable inference of the larger-scale velocity field, or to test model-based optical-flow schemes. We have also used the stream-function formulation to
derive a new method of optical flow for divergence-free flows. We discuss the new algorithm, including details of discretization, boundary conditions, and image preprocessing that can significantly
affect its performance. A simple implementation of the new method is shown to work well for a number of synthetic movies, and is also applied to a GPI movie of edge turbulence in NSTX.
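One way to see the vanishing of the "invisible velocity" in the closed-contour average (a sketch in notation chosen here; the talk's derivation may differ): for a divergence-free flow, write $\mathbf{v} = \hat{z}\times\nabla\psi$; the invisible part of the velocity is tangent to isocontours of the scalar $c$, which corresponds to a stream-function contribution $\psi_{\rm inv} = g(c)$ for some function $g$. Averaging over the area $A$ enclosed by a closed isocontour $C$ on which $c = c_0$,

```latex
\int_A \hat{z}\times\nabla g(c)\,dA
  = \hat{z}\times\oint_C g(c)\,\hat{n}\,dl
  = g(c_0)\,\hat{z}\times\oint_C \hat{n}\,dl
  = 0 ,
% using the gradient theorem, the constancy of c on C, and the fact that
% the outward normal integrates to zero around any closed curve.
```

so the contour-averaged velocity is insensitive to the unobservable component.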
Smooth 3D MHD equilibria with non-uniform pressure may not exist but, mathematically, there exist 3D MHD equilibria with non-uniform, stepped pressure profiles. The pressure jumps occur at surfaces
with highly irrational values of rotational transform and generate singular current sheets. If physically realisable, how such states form dynamically remains to be understood. To be physically
realisable states, MHD equilibria must exist for some non-trivial timescale, meaning they must at least be ideally stable and sufficiently stable to the fastest growing resistive instabilities.
This presentation will discuss recent progress towards understanding discontinuous MHD equilibria via a stability analysis of a continuous cylindrical equilibrium model with radially localised
pressure gradients, which examines how the resistive stability characteristics of the model change as the localisation of pressure gradients is increased to approach a discontinuous pressure profile
in the zero-width limit.
The magnetic field in a tokamak defines a field line map: a mapping from a poloidal cross-section to that same cross-section by following magnetic field lines. Such a map must necessarily contain
fixed points, of which the magnetic axis is an example. The Jacobian (the matrix of partial derivatives) $\mathsf{M}$ describes the mapping around such a fixed point to first order, and is an element of the Lie group $SL(2,\mathbb{R})$. Different elements of this group act on the Euclidean plane as rotations, shear mappings or hyperbolic maps. We look at a transition from an elliptic fixed point (field lines lie on nested surfaces around the fixed point) to an alternating hyperbolic fixed point (field lines map to opposite branches of hyperbolic surfaces) that can occur at $q = 2, 2/3, 2/5, 2/7, \ldots$. Using the NOVA-K code we identify an ideally unstable mode that is localized on the axis and has a high growth rate when the safety factor is close to 2/3. This mode drives the fixed point into the alternating hyperbolic regime. The nonlinear evolution of this instability can lead to complete stochastization of a region near the axis. Though the sawtooth oscillation has long been
observed, there is still disagreement between theoretical models and observations, and no model can reconcile all observations. We speculate that the above transition could explain some of the
observations that do not fit other models, such as measurements of q below 1, snakes, and persistent Alfven Eigenmodes.
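As a concrete illustration (not taken from the talk), the fixed-point type can be read off from the trace of $\mathsf{M}$: area preservation fixes $\det\mathsf{M}=1$, and the fixed point is elliptic for $|\mathrm{tr}\,\mathsf{M}| < 2$, hyperbolic for $\mathrm{tr}\,\mathsf{M} > 2$, and alternating hyperbolic (negative eigenvalues, so points hop between opposite branches) for $\mathrm{tr}\,\mathsf{M} < -2$. A minimal sketch:

```python
import numpy as np

def classify_fixed_point(M, tol=1e-9):
    """Classify a fixed point of an area-preserving field-line map from the
    trace of its linearization M (an element of SL(2, R), so det M = 1)."""
    if abs(np.linalg.det(M) - 1.0) > 1e-6:
        raise ValueError("M must have unit determinant (area-preserving map)")
    tr = np.trace(M)
    if abs(tr) < 2 - tol:
        return "elliptic"                # eigenvalues exp(+/- i theta): nested surfaces
    if tr > 2 + tol:
        return "hyperbolic"              # real eigenvalues lam and 1/lam, both positive
    if tr < -(2 + tol):
        return "alternating hyperbolic"  # negative eigenvalues: opposite branches
    return "parabolic"                   # |tr| = 2: marginal (shear) case

# Near the axis of an idealized axisymmetric field, the map is a rotation by
# 2*pi/q per toroidal transit; half-integer 1/q (q = 2, 2/3, 2/5, ...) puts
# the trace at the marginal value -2, the threshold of the transition.
theta = 2 * np.pi / (2 / 3)  # rotation angle for q = 2/3
M = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
print(classify_fixed_point(M))  # parabolic: exactly at the transition threshold
```

Perturbing the trace slightly below $-2$ moves the point into the alternating hyperbolic regime discussed above.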
Kinetic theory has made tremendous progress in recent decades thanks to reduced models and improved computational capacity. Some problems, especially in the non-Maxwellian regime, remain difficult
even for large supercomputing clusters. In this talk, I will discuss how such problems can be solved on laptops with the right tools applied. Physical approximations can be made to reduce the burden
of predicting the interaction between turbulence and energetic particles. To complement well-established physical reductions of the nonlinear Boltzmann equation, recent advances in applied
mathematics are utilized for direct efficient solution. Throughout, I will discuss ongoing and potential applications of machine learning to improve efficiency even further.
The most extreme and surprising behaviors of black holes and neutron stars are driven by their surrounding magnetic fields and plasmas. Numerical simulations of these systems are complicated by the
exotic physical conditions, requiring new approaches. I will present a range of computational methods which are well adapted to challenges such as strongly curved spacetime, energetically dominant
electromagnetic fields, and pathological current sheets. In particular, I will describe how a new technique for general-relativistic plasma kinetics will aid in understanding black holes' particle
acceleration and jet launching, and in interpreting future observations with the Earth-spanning Event Horizon Telescope.
In the first part of this talk, I will present the modelling and interpretation of a campaign of laser experiments designed to generate high Mach number magnetized collisionless shocks on OMEGA-EP.
We compare the data to the results of 3-D PIC simulations, and describe the signatures of magnetized shock formation, including the early contact discontinuity formation stage, and a later magnetic
reflection with magnetic overshoots. We explain the geometrical effects on the radiography introduced by the density gradient in the expanding plasma and by the curvature of the imposed magnetic field. We
conclude that our experiments have reproducibly achieved magnetized shocks with Alfvenic Mach number 3 to 9 in laboratory conditions. In the second part, I will describe the gyrokinetic (GK) electron
and fully kinetic (FK) ion particle (GeFi) simulation model and the particle simulation results of waves and current sheet instabilities. In the GeFi model, the GK electron approximation removes the
high frequency electron gyromotion and plasma oscillation, but the electron finite Larmor radius effects are retained. For lower-hybrid waves, the GeFi results agree well with the fully kinetic
explicit delta-f particle code and the fully kinetic Darwin particle code. Our 3-D GeFi and FK simulation results demonstrate the existence of the lower-hybrid-drift, kink, and sausage instabilities in a current sheet under finite guide magnetic fields with the realistic proton-to-electron mass ratio.
The particle-in-cell method has been remarkably successful in understanding physical processes occurring at the kinetic scales. I will discuss the implementation of the electromagnetic
particle-in-cell method for collisionless plasmas in a self-developed code called PICTOR. I will focus on a few techniques employed to improve performance, diagnostic, and scalability of the code. I
will then briefly discuss two physics problems to illustrate the efficacy of PIC simulations in addressing a few outstanding problems in plasma physics. First, I will discuss preferential heating and
acceleration of heavy ions in Alfvenic turbulence. Then, I'll discuss how self-consistent PIC simulations, combined with measurements from the Voyager spacecraft, could be used to obtain a
comprehensive understanding of the dynamics of solar wind termination shock.
In this talk, the energy budget of a collisionless plasma subject to electrostatic fluctuations is studied. In particular, the excess of thermal energy over the minimum accessible to it under various
constraints that limit the possible forms of plasma motion is considered. This excess measures how much thermal energy is “available” for conversion into plasma instabilities, and therefore
constitutes a nonlinear measure of plasma stability. The “available energy” defined in this way becomes an interesting and useful quantity in situations where adiabatic invariants impose non-trivial
constraints on the plasma motion. For instance, microstability properties of certain stellarators can be inferred directly from the available energy, without the need to solve the gyrokinetic
equation. The technique also suggests that an electron-positron plasma confined by a dipole magnetic field could be entirely free from turbulence.
For this talk, we will first relate our previous calculations on the radial electric field at the high confinement H-mode pedestal [W. W. Lee and R. B. White, Phys. Plasmas 24, 081204 (2017)] with
the actual magnetic fusion experimental measurements. We will then discuss the new pressure balance due to the E x B current, which is induced by the resulting radial electric field, and its impact
on the gyrokinetic MHD equations as well as their conservation properties in the force-free steady state.
In contrast to tokamaks where turbulence typically dominates, a substantial fraction of the radial energy and particle transport in stellarators can often be attributed to collisional processes. The
kinetic calculation of collisional transport has for a long time relied on simplified models which use the “mono-energetic” approximation and the simple pitch-angle-scattering collision operator, and which are radially local. But not all experimental observations have been satisfactorily explained, and in recent years more advanced numerical tools have appeared which relax some of these approximations. These
improvements in the modelling can be of particular importance when analyzing the impurity transport. I will discuss how the calculation of the impurity transport has advanced in recent years, and
what the latest observations in the Wendelstein 7-X stellarator are.
A lot has happened since TRANSP made its first appearance about thirty years ago. The 'gold standard' for tokamak discharge analysis has evolved into a code capable of predicting heating and current drive, as well as thermal and particle transport. Its pool of users has expanded internationally to cover almost every tokamak operating today, demanding modernization, increased support, and additional capabilities. This talk will review the strengths and weaknesses of TRANSP and the plans for implementation of new physics modules targeting (as close as possible) a Whole Device Model. It
will discuss areas for partnership with the theory department and ongoing activities and plans in collaboration with the SciDAC projects, for extension of the transport outside the plasma boundary
and for self-consistent calculations of transport and stability, including transport induced by energetic particles.
Runaway electron phenomena are an important topic in general, and critically important in tokamak disruption studies. Given their high potential for damaging effects, it is critical to understand the physics of runaway electron generation and to find a mitigation strategy for ITER. Kinetic instabilities associated with high-energy runaway electrons have been shown to play an important role in their behavior. Recently, a new kind of instability with magnetic signals in the Alfvén frequency range has been observed in disruption experiments in DIII-D, and is found to be related to the dissipation of the runaway electron current. In this talk, a candidate explanation for this phenomenon is presented, namely resonant interaction with compressional Alfvén eigenmodes (CAEs). CAEs driven by energetic ions have been well studied in spherical tokamaks like NSTX. For CAEs driven by resonances with runaway electrons, the damping rate of the modes due to electron-ion collisions is calculated. The model is applied to a time-dependent kinetic simulation of runaway electrons, which includes the bounce-average effect and the enhanced ion pitch-angle scattering due to partial
screening. The results match with experiments qualitatively, and provide a promising way to diffuse runaway electrons before their energization. A brief overview of related research into runaway
electrons is also given, indicating how this work fits into the wider effort to find mitigation strategies.
We revisit the Hahm-Kulsrud-Taylor (HKT) problem, a classic prototype problem for studying resonant magnetic perturbations and 3D magnetohydrodynamical equilibria. We employ the boundary-layer
techniques developed by Rosenbluth, Dagazian, and Rutherford (RDR) for the internal m=1 kink instability, while addressing the subtle difference in the matching procedure for the HKT problem.
Pedagogically, the essence of RDR's approach becomes more transparent in the reduced slab geometry of the HKT problem. We then compare the boundary-layer solution, which yields a current singularity
at the resonant surface, to the numerical solution obtained using a flux-preserving Grad-Shafranov solver. The remarkable agreement between the solutions demonstrates the validity and universality of
RDR's approach. In addition, we show that RDR's approach consistently preserves the rotational transform, which hence stays continuous, contrary to a recent claim that RDR's solution contains a
discontinuity in the rotational transform.
The first gyrokinetic simulations of plasma turbulence in the Texas Helimak device are presented. These have been performed using the Gkeyll (http://gkyl.readthedocs.io/) computational framework. The
Helimak is a simple magnetized torus with a toroidal and vertical magnetic field and open field lines that terminate on conducting plates at the top and bottom of the device. It has features similar
to the scrape-off layer region of tokamaks, such as bad curvature-driven instabilities and sheath boundary conditions on the end plates, which are included in these initial gyrokinetic simulations. A
bias voltage can be applied across conducting plates to drive E x B flow and study the effect of velocity shear on turbulence suppression. Comparisons are presented between simulations and
measurements from the experiment, showing qualitative similarities, including fluctuation amplitudes and equilibrium profiles that approach experimental values. There are also some important
quantitative differences, and I discuss how certain physical and geometric effects may improve agreement in future results.
The incoming exascale capabilities of supercomputers will enable whole-device simulations based on first-principles plasma physics. To take full advantage of these new capabilities, new numerical
schemes and more complete physical models are developed in XGC. XGC is a whole-volume total-f gyrokinetic code optimized for simulation of edge plasma in magnetic fusion devices. One goal of the ECP
project is to couple XGC with a core code, such as GENE, to optimize the efficiency of whole-device simulations. The current status of this core-edge coupling project will be presented, including the
presentation of the core-edge coupling scheme and of the coupled GENE-XGC simulations. Another goal is to study the influence of tungsten and beryllium on the performance of ITER. The total-f gyrokinetic code XGC is thus being upgraded to simulate the physics of many-species impurities in the whole volume, including the SOL. A first multi-species simulation of a JET plasma under tungsten contamination will be demonstrated, showing that the lower charge states of W can move inward into the core, but the higher charge states will move outward toward the pedestal top and accumulate, as seen in ASDEX-U.
Kinetic ballooning modes (KBM) are widely believed to play a critical role in disruptive dynamics as well as turbulent transport in tokamaks. While the nonlinear evolution of ballooning modes has
been proposed as a mechanism for “detonation” in tokamak plasmas, the role of kinetic effects in such nonlinear dynamics remains largely unexplored. In this study the saturation mechanism and nonlinear dynamics of KBMs are presented, with global gyrokinetic simulation results of KBM nonlinear behavior. Instead of the finite-time singularity predicted by ideal MHD theory, the kinetic instability is shown to develop into an intermediate nonlinear regime of exponential growth, followed by a nonlinear saturation regulated by spontaneously generated zonal fields. In the intermediate nonlinear regime, rapid growth of a localized current sheet, which can mediate reconnection, is observed. In the future, the linear properties as well as nonlinear mode structures from the simulations could be
incorporated into the deep learning models for disruption predictions in the form of a new parameter/channel, as a first-principles physics guide to the AI. The deep learning model could in turn
provide feedback on the sensitivity of the parameters, including the linear stability properties of various modes and the nonlinear dynamics of these instabilities, and thus automatically select new
inputs for the first-principles codes.
Molecular hydrogen and its isotopologues are present in a range of vibrationally excited states in fusion, atmospheric, and interstellar plasmas. Electron-impact excitation cross sections resolved in
both final and initial vibrational levels of the target are required for modeling the properties and dynamics, and controlling the conditions of many low-temperature plasmas. Recently, the convergent
close-coupling (CCC) method has been utilized to provide a comprehensive set of accurate excitation, ionization, and grand total cross sections for electrons scattering on H2 in the ground
(electronic and vibrational) state, and calculations are being conducted to extend this data set to include cross sections resolved in all initial and final vibrational levels. In this talk I will
review the available e-H2 collision data, discuss the resolution of a significant discrepancy between theory and experiment for excitation of the $b\,^3\Sigma_u^+$ state, and present estimates for dissociation
of H2.
Tokamak turbulence exhibits interaction on all length scales, but standard gyrokinetic treatments consider global scale flows and gyroscale flows separately, and assume a separation between these
length scales. However, the use of a small-vorticity ordering (Dimits, 2010) allows for the presence of large, time-varying flows on large length scales, whilst providing a unified treatment including
shorter length scales near and below the gyroradius. Some examples of strong-flow generalisations of gyrokinetics are presented, followed by a description of the nuances of the equations and
numerical implementation that use the ordering of Dimits (2010). Our Euler-Lagrange and Poisson equations contain an implicit dependence that appears as a partial time derivative of the E × B flow.
This implicit dependence is analogous to the v||-formulation of gyrokinetics. However, as these implicit terms are small, an iterative scheme suffices to resolve them. Additionally, we have developed a
stand-alone Poisson solver based on that from the ORB5 code, and use this to simulate certain flow and density gradient driven instabilities in cylindrical geometry.
Within the framework of MagnetoHydroDynamics, a strong interplay exists between flow and magnetic fields leading to several interesting pathways for energy cascade. In this talk I numerically
demonstrate three examples of such interplay using our in-house developed DNS code G-MHD3D which simulates three dimensional single fluid MHD equations. I also suggest analytical arguments in some of
our numerical observations. The first problem discusses the phenomena of nonlinear interaction of magnetic and kinetic energies within the premise of two and three dimensional MHD equations leading
to periodic exchange of energy. Scaling of the energy exchange frequency with deviation from Alfven resonance and initial wave number of excitation is numerically determined. Results are
qualitatively reproduced by analysing the set of single-fluid MHD equations through low-degree-of-freedom coupled ODEs obtained via a Galerkin procedure. Secondly, in three dimensions, at Alfven resonance, for some chaotic flows, the initial flow profile is found to “recur” periodically with the time evolution of the plasma. Such recurrence is unexpected in systems with a high number of degrees of freedom (e.g. 3D MHD). The primary cause of this phenomenon is analysed using an effective number of active degrees of freedom present in the system. Finally, we present some preliminary results on large- and intermediate-scale magnetic field generation in plasmas, often called a “dynamo”, using our code. The growth rates of such ‘fast’ dynamos are compared for different parameters of the system and the fastest dynamo for each parameter set is identified. Physics details and numerical aspects of the development of the code, the numerical protocols followed, direct numerical simulation results, numerical tools to diagnose the three-dimensional grid data, and the analytical arguments in support of the numerical observations will be presented in the talk.
We present a new gyrokinetic model that retains the fundamental elements of the plasma dynamics at the tokamak periphery, namely electromagnetic fluctuations at all scales, comparable amplitudes of
background and fluctuating components, and a large range of collisionality regimes. Such model is derived within a gyrokinetic full-F approach, describing distribution functions arbitrarily far from
equilibrium, and projecting the gyrokinetic equation onto a Hermite-Laguerre velocity space polynomial basis, obtaining a gyrokinetic moment hierarchy. The treatment of arbitrary collisionalities is
performed by expressing the full Coulomb collision operator in gyrocentre phase space coordinates, and providing a closed formula for its gyroaverage in terms of the gyrokinetic moments. In the
electrostatic regime and long-wavelength limit, the novel gyrokinetic hierarchy reduces to a drift-kinetic moment hierarchy that in the high collisionality regime further reduces to an improved set
of drift-reduced Braginskii equations, which are widely used in scrape-off layer simulations. First insights on the linear modes described by our novel gyrokinetic model will be presented.
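The projection described above can be sketched schematically as follows (normalizations and notation are illustrative, not necessarily those used in the talk):

```latex
f = \sum_{p=0}^{\infty}\sum_{j=0}^{\infty}
    N^{pj}\,\frac{H_p(s_\parallel)}{\sqrt{2^p\,p!}}\,L_j(x)\,F_M ,
\qquad
s_\parallel = \frac{v_\parallel - u_\parallel}{v_{th}} ,
\quad
x = \frac{\mu B}{T} ,
% with H_p Hermite polynomials, L_j Laguerre polynomials, and F_M a
% Maxwellian; evolution equations for the coefficients N^{pj} form the
% gyrokinetic moment hierarchy, which is closed by truncating in (p, j).
```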
The resonance broadened quasi-linear (RBQ) model is developed for the problem of relaxing the fast energetic particle distribution function in constant-of-motion (COM) 3D space [N.N. Gorelenkov et
al., Nucl. Fusion 58 (2018) 082016]. The model is generalized by using the QL theory [H. Berk et al., Phys. Plasmas'96] and carefully reexamining the wave particle interaction (WPI) in the presence
of realistic AE mode structures and pitch angle scattering with the help of the guiding center code ORBIT. The RBQ model applied in realistic plasma conditions is improved by going beyond the
perturbative-pendulum-like approximation for the wave particle dynamics near the resonance. The resonance region is broadened but remains 2-3 times smaller than predicted by an earlier bump-on-tail
QL model. In addition the resonance broadening includes the Coulomb collisional or anomalous pitch angle scattering. The RBQ code takes into account the beam ion diffusion in the direction of the
canonical toroidal momentum. The wave particle interaction is reduced to one-dimensional dynamics where for the Alfvénic modes typically the particle kinetic energy is nearly constant. The diffusion
equation is solved simultaneously for all particles together with the evolution equation for the mode amplitudes. We apply the RBQ code to a DIII-D plasma with elevated q-profile where the beam ion
profiles show stiff transport properties [C. Collins et al. PRL'16]. The sources and sinks are included via the Krook operator. The properties of AE driven fast ion distribution relaxation are
studied for validations of the applied QL model to DIII-D discharges. Initial results show that the model is robust, numerically efficient, and can predict intermittent fast ion relaxation in present
and future burning plasmas.
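The structure of the coupled quasi-linear system solved by RBQ can be sketched schematically as follows (notation is illustrative; see Gorelenkov et al., Nucl. Fusion 58 (2018) 082016 for the actual model):

```latex
\frac{\partial f}{\partial t}
 = \frac{\partial}{\partial P_\varphi}
   \!\left[\, \sum_k D_k(P_\varphi;\, C_k)\,
          \frac{\partial f}{\partial P_\varphi} \right]
 + C_{\rm coll}(f) + S - L ,
\qquad
\frac{d C_k^2}{dt} = 2\left(\gamma_{L,k} - \gamma_{d,k}\right) C_k^2 ,
% where D_k is nonzero only in the (broadened) resonance region of mode k,
% \gamma_{L,k} is the linear drive evaluated from the instantaneous
% distribution f, \gamma_{d,k} is the background damping, and the sources S
% and sinks L enter via a Krook operator.
```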
For decades, coronal heating has been a buzzword used to motivate coronal research. Depending on the level of detail one is interested in, one could characterize this question as anything ranging from answered to not understood at all. 3D MHD models can now produce a corona in a numerical experiment that comes close to the real Sun in complexity. The fact alone that these models produce a three-dimensional, loop-dominated, time-variable corona could be used as an argument that the problem of coronal heating is solved. However, careful inspection of these model results shows that despite their success they leave many fundamental questions unanswered. In this talk I will address some of these aspects, including the mass and energy exchange between chromosphere and corona, the apparent width of coronal loops, the energy source of hot active region core loops, and the internal structure of loops. In this sense this talk will pose more questions than it provides answers.
This talk presents a mass-, momentum-, and energy-conserving discretization of the nonlinear Landau collision integral. The semi-discrete form is achieved using a modal discontinuous Galerkin method on a tensor-product mesh and a recovery method to define second derivatives at the element boundaries. Combined with an explicit time-stepping scheme, this gives a fully discrete conservative form. The conservation properties are proven algebraically and shown numerically for a two-dimensional relaxation test problem.
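For reference, the operator being discretized is the Landau collision integral, written here in its standard form (the talk's normalization may differ):

```latex
C_{ab}(f_a, f_b) = \nu_{ab}\,
\frac{\partial}{\partial v_i}\int U_{ij}(\mathbf{v}-\mathbf{v}')
\left[ f_b(\mathbf{v}')\,\frac{\partial f_a}{\partial v_j}
     - f_a(\mathbf{v})\,\frac{\partial f_b}{\partial v_j'} \right] d^3v' ,
\qquad
U_{ij}(\mathbf{u}) = \frac{u^2\delta_{ij} - u_i u_j}{u^3} .
% Conservation of density, momentum, and energy follows from the
% antisymmetry of the integrand under exchange of v and v' and from
% U_{ij} u_j = 0; a discretization inherits these laws when it preserves
% the corresponding discrete symmetries.
```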
Tight tolerances have been a leading driver of cost in recent stellarator experiments, so improved definition and control of tolerances can have significant impact on progress in the field. Here we
relate tolerances to the shape gradient representation that has been useful for shape optimization in industry, used for example to determine which regions of a car or aerofoil most affect drag, and
we demonstrate how the shape gradient can be computed for physics properties of toroidal plasmas. The shape gradient gives the local differential contribution to some scalar figure of merit (shape
functional) caused by normal displacement of the shape. In contrast to derivatives with respect to quantities parameterizing a shape (e.g. Fourier amplitudes), which have been used previously for
optimizing plasma and coil shapes, the shape gradient gives spatially local information and so is more easily related to engineering constraints. We present a method to determine the shape gradient
for any figure of merit using the parameter derivatives that are already routinely computed for stellarator optimization, by solving a small linear system relating shape parameter changes to normal
displacement. Examples of shape gradients for plasma and electromagnetic coil shapes are given. We also derive and present examples of an analogous representation of the local sensitivity to magnetic
field errors; this magnetic sensitivity can be rapidly computed from the shape gradient. The shape gradient and magnetic sensitivity can both be converted into local tolerances, which inform how
accurately the coils should be built and positioned, where trim coils and structural supports for coils should be placed, and where magnetic material and current leads can best be located. Both
sensitivity measures provide insight into shape optimization, enable systematic calculation of tolerances, and connect physics optimization to engineering criteria that are more easily specified in
real space than in Fourier space.
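The linear-system step described above can be illustrated with a toy calculation (all names and sizes here are hypothetical; the real computation uses parameter derivatives from stellarator optimization codes). If $dF/dp_k \approx \sum_i a_i D_{ki} G_i$, where $D_{ki}$ is the normal displacement at surface point $i$ per unit change of parameter $p_k$ and $a_i$ is an area weight, the pointwise shape gradient $G$ can be recovered by least squares:

```python
import numpy as np

# Hypothetical toy problem: recover a shape gradient G at surface points
# from parameter derivatives dF/dp_k = sum_i a_i * D[k, i] * G_i.
rng = np.random.default_rng(0)
n_params, n_points = 40, 25
D = rng.standard_normal((n_params, n_points))         # normal displacement per unit p_k
a = np.full(n_points, 1.0 / n_points)                 # area weights (quadrature)
G_true = np.sin(np.linspace(0, 2 * np.pi, n_points))  # "true" shape gradient
dF_dp = D @ (a * G_true)                              # simulated parameter derivatives

# Solve the (overdetermined) linear system for G in the least-squares sense.
G_fit, *_ = np.linalg.lstsq(D * a, dF_dp, rcond=None)
print(np.allclose(G_fit, G_true))  # True: G is recovered when D has full column rank
```

In practice the system is small because the number of shape parameters routinely available already exceeds the number of surface quadrature points needed to resolve $G$.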
This talk explores energy-, momentum-, density-, and positivity-preserving spatio-temporal discretizations for the nonlinear Landau collision operator. We discuss two approaches, namely direct
Galerkin formulations and discretizations of the underlying infinite-dimensional metriplectic structure of the collision integral. The spatial discretizations are chosen to reproduce the
time-continuous conservation laws that correspond to Casimir invariants and to guarantee the positivity of the distribution function. Both the direct and the metriplectic discretization are
demonstrated to have exact H-theorems and unique, physically exact equilibrium states. Most importantly, the two approaches are shown to coincide, given the chosen Galerkin method. A temporal
discretization, preserving all of the mentioned properties, is achieved with so-called discrete gradients. The proposed algorithm successfully translates all properties of the infinite-dimensional
time-continuous Landau collision operator to time- and space-discrete sparse-matrix equations suitable for numerical simulation.
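The discrete-gradient idea mentioned above can be stated compactly (standard definition, not specific to this talk): a discrete gradient $\bar{\nabla}H$ replaces $\nabla H$ in the time stepper and satisfies

```latex
\bar{\nabla}H(x, y)\cdot(y - x) = H(y) - H(x) ,
\qquad
\bar{\nabla}H(x, x) = \nabla H(x) .
% Advancing x^n \to x^{n+1} using \bar{\nabla}H(x^n, x^{n+1}) then respects
% the balance of H exactly at the discrete level, which is what allows the
% H-theorem and the Casimir conservation laws to carry over from the
% time-continuous system to the fully discrete one.
```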
Magnetic field reconnection is considered one of the most important mechanisms of magnetic energy conversion and reorganization acting in astrophysical and laboratory plasmas. Although our knowledge
has been greatly advanced in the last few decades, the problem of how magnetic reconnection can be triggered explosively in weakly collisional (quasi-ideal) plasmas (as observed, e.g., in solar flares,
geomagnetic substorms and sawtooth crashes in tokamaks) still remains an open field of research. Here we discuss a possible scenario for the triggering of explosive reconnection via the onset of an
‘ideal’ tearing instability within forming current sheets and we show results from MHD numerical simulations that support our scenario. We demonstrate that the same reasoning, if applied recursively,
can describe the complete nonlinear disruption of the original current sheet until microscopic marginally-stable current sheets are formed. We show that the ‘ideal’ tearing mode provides a general framework that can be extended to include other effects such as kinetic effects and different current profiles.
A new quasi-axisymmetric, two-field-period stellarator configuration has been designed following a broad study using the optimization code ROSE (Rose Optimizes Stellarator Equilibria). Because of the
toroidal symmetry of the magnetic field strength, quasi-axisymmetric stellarators share many neoclassical properties of tokamaks, such as a comparable bootstrap current, which can be employed to simplify the coil structure and is favorable for finding compact equilibria. The ROSE code optimizes the plasma boundary calculated with VMEC based on a set of physical and engineering criteria. Various aspect ratios, numbers of field periods, and iota profiles are investigated. As an evaluation of the design, the bootstrap current, the ideal MHD stability, the fast-particle losses, and the
existence of islands are examined. The main result of this extensive study – a compact, MHD-stable, two-field-period stellarator with small fast-particle loss fraction – will be presented.
Turbulence likely plays an important role in generating the solar wind, and spacecraft measurements indicate that solar-wind turbulence is largely non-compressive and Alfvén-wave-like. Although
compressive fluctuations are sub-dominant, Alfvén waves in the solar wind couple to compressive slow magnetosonic waves (“slow waves”) via the parametric-decay instability. In this instability, an
outward-propagating Alfvén wave decays into an outward-propagating slow wave and an inward-propagating Alfvén wave. In this talk, I will describe a weak-turbulence calculation of the nonlinear
evolution of the parametric instability in the solar wind at wavelengths much greater than the ion inertial length under the assumption that slow waves, once generated, are rapidly damped. I'll show
that the parametric instability leads to an inverse cascade of Alfvén-wave quanta and present several exact solutions to the wave kinetic equations. I will also present a numerical solution to the
wave kinetic equations for the solar-wind-relevant case in which most of the Alfvén waves initially propagate away from the Sun in the plasma rest frame. In this case, the outward-propagating Alfvén
waves evolve toward a $1/f$ frequency spectrum that shows promising agreement with spacecraft measurements of interplanetary turbulence in the fast solar wind. I will also present predictions that
will be tested by NASA's upcoming Solar Probe Plus mission, which will travel much closer to the Sun than any previous spacecraft.
The breaking of magnetic field line connections is of fundamental importance in essentially all applications of plasma physics, from the laboratory to astrophysics. For sixty years the theory of magnetic reconnection has been focused on two-coordinate models. When dissipative time scales far exceed natural evolution times, such models are not realistic for ordinary three-dimensional space. The ideal (dissipationless) evolution of a magnetic field is shown, in general, to lead to a state in which the magnetic field lines change their connections on an Alfvénic (inertial), not resistive, time scale.
Only a finite mass of the lightest current carrier, the electron, is required. During the reconnection, the gradient in $j_\parallel/B$ relaxes while conserving magnetic helicity in the reconnecting
region. This implies a definite amount of energy is released from the magnetic field and transferred to shear Alfvén waves, which in turn transfer their energy to the plasma. When there is a strong
non-reconnecting component of the magnetic field, called a guide field, $j_\parallel/B$ obeys the same evolution equation as that of an impurity being mixed into a fluid by stirring. Although the
enhancement of mixing by stirring has been recognized by every cook for many millennia, the analogous effect in magnetic reconnection is not generally recognized. An interesting mathematical difference is that a three-coordinate model is required for the enhancement of magnetic reconnection, while only two coordinates are required in fluid mixing. The issue is the number of spatial coordinates
required to obtain an exponential spatial separation of magnetic field lines versus streamlines of a fluid flow.
A system of exact fluid equations always involves more unknowns than equations. This is called the closure problem. An important aspect of obtaining quantitative closures is an accurate account of
collisional effects. Recently, analytical calculations of the Landau (Fokker-Planck) collision operator as well as the derivation of an infinite hierarchy of moment equations have been carried out using
expansions for the distribution function in terms of irreducible Hermite polynomials. In this talk, I will present solutions to the moment hierarchy that provide closure for the set of five-moment
fluid equations. In the collisional limit, improved Braginskii closures are obtained by increasing the number of moments and considering the ion-electron collision effects. For magnetized plasmas, I
highlight the effect of long mean free path and derive parallel integral closures for arbitrary collisionality. Finally, I will show how the integral closures can be used to study radial transport
due to magnetic field fluctuations and electron parallel transport for arbitrary collisionality.
The talk includes three parts: (1) an overview of ENN; (2) ENN research areas and achievements; and (3) the ENN fusion technology roadmap.
CLT is an explicit, three-dimensional, fully toroidal, non-reduced, Hall-MHD code developed at Zhejiang University. Using CLT, I find that electron diamagnetic rotation, which is well described by the code, can significantly modify the dynamics of the tearing mode. It can also affect the characteristics of sawtooth oscillations. In addition, I have studied the influence of driven current on tearing mode instabilities, which can explain some experimental data from EAST. CLT is now being extended to study the influence of RMPs (Resonant Magnetic Perturbations) on tearing mode instabilities. Preliminary results show that the threshold for ‘mode locking’ increases with the frequency of the RMPs, which is consistent with theoretical predictions.
Ohmic breakdown is generally used to produce initial plasmas in tokamaks. However, the complex electromagnetic structure of tokamaks has obscured the physical mechanism of ohmic breakdown for
several decades. Previous studies ignored plasma responses to external electromagnetic fields and adopted only the simplest Townsend avalanche theory. However, we found clear evidence that
experimental results cannot be explained by the Townsend theory. Here, we propose a completely new type of breakdown mechanism that systematically considers multi-dimensional plasma responses in the
complex electromagnetic topology. As the plasma response, self-electric fields produced by space-charge were found to be crucial for significantly reducing plasma density growth rate and enhancing
perpendicular transport via $\small {\bf E}\times{\bf B}$ drifts. A particle simulation code, BREAK, clearly captured these effects and provided a remarkable reproduction of the mysterious
experimental results in KSTAR. These new physical insights into the complex electromagnetic topology provide general design guidelines for a robust breakdown scenario in a tokamak fusion reactor.
This is joint work with M. O'Neil, L. Greengard, and L.-M. Imbert-Gerard.
A basic point in understanding the evolution of instabilities and turbulence in plasma shear flows across the magnetic field is the proper treatment of the persistent distortion of perturbations by the shearing flow, particularly when spectral transforms are applied to the governing equations. The problem of extracting a separate spatial Fourier mode in the stability theory of plasma shear flows, and with it the question of the limits of applicability of spectral methods based on dispersion equations, may be resolved by employing non-modal fluid and kinetic theory, which is grounded in the methodology of shearing modes and fully accounts for the persistent deformation of perturbations by the sheared flow. That theory shows that applying the methodology of shearing modes and convective-shearing coordinates, and solving the initial value problem in wave-vector and time variables instead of applying static spatial Fourier modes and a spectral transform in time, has a decisive impact on understanding wave-particle interaction in shearing flows. The theory recovers the main linear and nonlinear processes and the numerous corresponding characteristic time scales, which may be observable in experiments and numerical simulations but cannot be distinguished when the spectral transform in time is applied. The most famous is the “quench rule”, which was first detected in numerical simulations but was not confirmed analytically in calculations of shear-flow stability grounded in spectral transformations in time. The primary intent of this report is to show that the non-modal approach is decisive in reconciling observational evidence with the stability theory of plasma shear flows, and to suggest a more frequent use of that approach.
Boundary plasma is in a non-equilibrium statistical state governed by self-organization among multiscale physics, and needs to be modeled with total-f gyrokinetic equations. A unique
particle-in-cell technique will be introduced that has enabled XGC to be the world’s first, and only, gyrokinetic code that simulates the boundary plasma across the magnetic separatrix into the
scrape-off layer in contact with a material wall. Examples of successful boundary physics discoveries enabled by the technique will also be presented.
We studied the role of electron physics in 3D two-fluid 10-moment simulations of Ganymede’s magnetosphere. The model captures non-ideal physics like the Hall effect, the electron inertia, and
anisotropic, non-gyrotropic pressure effects. A series of analyses were carried out: 1) The resulting magnetic field topology and electron and ion convection patterns were investigated. The magnetic
fields were shown to agree reasonably well with in-situ measurements by the Galileo satellite. 2) The physics of collisionless magnetic reconnection was carefully examined in terms of the current
sheet formation and decomposition of generalized Ohm’s law. The importance of pressure anisotropy and non-gyrotropy in supporting the reconnection electric field is confirmed. 3) We compared surface
“brightness” morphology, represented by surface electron and ion pressure contours, with oxygen emission observed by the Hubble Space Telescope (HST). The correlation between the observed emission
morphology and spatial variability in electron/ion pressure was demonstrated. We also briefly discussed the relevance of this work to the future JUICE mission.
Finding an easy-to-build coil set has been a critical issue in stellarator design for decades. Conventional approaches assume a toroidal “winding” surface and suffer from the difficulties of nonlinear
optimization. A new coil design method, the FOCUS code, is introduced by representing each discrete coil as an arbitrary, closed space curve. The first and second derivatives of the target function
that covers both physical requirements and engineering constraints are calculated analytically. We have employed several advanced nonlinear optimization algorithms, like the nonlinear conjugate
gradient and the modified Newton method, for minimizing the target function. Numerical illustrations show that the new method can be applied to different types of coils for various configurations
with great flexibility and robustness. An extended application to analyzing error-field sensitivity is also presented.
Scrape-off layer plasmas feature intermittent, large-amplitude fluctuations which are attributed to the radially outward propagation of plasma blobs through this volume. We introduce a stochastic model which describes scrape-off layer time series as a superposition of uncorrelated pulses with variable amplitude. The resulting time series is Gamma distributed, with the lowest-order statistical moments given by the pulse parameters. The power spectral density is governed by the pulse shape: for a double-exponential pulse shape, the power spectral density exhibits a flat region at low frequencies and a steep power-law scaling at high frequencies. Predictions from this model are compared to fluctuation measurements of electron density, temperature and plasma potential by mirror
Langmuir probes in the SOL during an ohmic L-mode discharge in Alcator C-Mod.
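The shot-noise process described above can be sketched in a few lines of Python. This is a filtered-Poisson cartoon, not the talk's implementation, and all parameter values are illustrative assumptions; the intermittency parameter is the ratio of pulse duration to mean waiting time, and Campbell's theorem gives the lowest-order moments.

```python
import numpy as np

# Filtered-Poisson (shot-noise) sketch: uncorrelated one-sided exponential
# pulses with exponentially distributed amplitudes. All values illustrative.
rng = np.random.default_rng(0)
tau_d, tau_w, mean_A = 1.0, 0.5, 1.0   # pulse duration, mean waiting time, mean amplitude
gamma = tau_d / tau_w                  # intermittency parameter
T, dt = 2000.0, 0.05
t = np.arange(0.0, T, dt)

n_pulses = rng.poisson(T / tau_w)
arrivals = rng.uniform(0.0, T, n_pulses)
amps = rng.exponential(mean_A, n_pulses)

signal = np.zeros_like(t)
for t_k, A_k in zip(arrivals, amps):
    sel = (t >= t_k) & (t < t_k + 20.0 * tau_d)   # truncate the negligible tail
    signal[sel] += A_k * np.exp(-(t[sel] - t_k) / tau_d)

# Campbell's theorem: mean = gamma*mean_A, variance = gamma*mean_A**2, and the
# stationary amplitude distribution is Gamma with shape parameter gamma.
s = signal[t > 20.0 * tau_d]           # discard the start-up transient
m, v = s.mean(), s.var()
```

For the values chosen here (gamma = 2), the sample mean and variance should both come out near 2, matching the Gamma-distribution prediction.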
We present a numerical procedure for modeling the resonant response of energetic particles to waves in tokamaks. Using the Littlejohn Lagrangian for guiding-center motion, we use action-angle variables to reduce simulations of the fast-ion dynamics to one dimension. The transformation also involves construction of canonical straight-field-line coordinates, which render a Hamiltonian description of
the guiding center motion. This module can be integrated with the modified MHD code AEGIS to simulate wave-particle interactions.
Tearing mode instability is one of the most important dynamic processes in space and laboratory plasmas. Hall effects, resulting from the decoupling of electron and ion motions, can cause fast development and rotation of the perturbation structure of the tearing mode and thus become non-negligible. A high-accuracy nonlinear MHD code (CLT) has been developed to study Hall effects on the dynamic evolution of tearing modes in tokamak geometry. It is found that the rotation speed of the mode structure from the simulation is in good agreement with that from analytical theory for a single tearing mode. The linear growth rate increases with the ion skin depth. The self-consistently generated rotation largely alters the dynamic behavior of the double tearing mode and the sawtooth oscillation.
Increasingly accurate laboratory experiments and satellite observations have led to a "golden age" in plasma physics. Detailed kinetic features, including distribution functions, can now be measured
in-situ, putting severe strain on theory and modeling to explain the experiments/observations. The Particle-In-Cell (PIC) method remains a powerful and widely used tool to study such kinetic physics
numerically. Recently, complementing the PIC approach, significant progress has been made in discretizing the Vlasov equation directly, treating it as a partial differential equation (PDE) in 6D
phase-space. In this talk, I present a high-order discontinuous Galerkin (DG) algorithm to solve the Vlasov equation. This continuum scheme leads to noise-free solutions and, with the use of
specially optimized computational kernels, can be very efficient, in particular, for problems in which the structure of the distribution function and its higher moments is required. In addition, with a
proper choice of basis functions and numerical fluxes, the scheme conserves energy exactly, while conserving momentum to a high degree of accuracy. Applications of the scheme to (a) kinetic
saturation mechanism of the Weibel-like instability and (b) turbulence in the solar wind are presented. We demonstrate a novel mechanism for the nonlinear saturation of the Weibel instability that comes about from a balance between filamentation and a secondary electrostatic two-stream instability. We use 5D simulations of turbulent plasmas to study detailed kinetic physics of magnetized turbulence, showing that the solution contains a remarkable amount of detail in the distribution function, leading to novel insights into the nature of kinetic wave-particle exchange in turbulent plasmas.
One of the key recurrent themes in high-energy plasma astrophysics is relativistic nonthermal particle acceleration (NTPA) necessary to explain the bright X-ray and gamma-ray flaring emission with
ubiquitous power-law spectra in astrophysical objects such as pulsar wind nebulae, hot accretion flows and coronae of accreting black holes, and black-hole powered relativistic jets in active
galactic nuclei and gamma-ray bursts. Two leading physical processes often invoked as possible NTPA mechanisms are collisionless magnetic reconnection and turbulence. In order to understand these
processes, as well as their resulting observable radiation signatures, I have recently initiated a broad theoretical and computational research program in kinetic radiative plasma astrophysics. This
program employs large-scale first-principles particle-in-cell kinetic simulations (including those that self-consistently incorporate radiation-reaction effects) coupled with analytical theory. In
this talk I will review the resulting progress that we have achieved in recent years towards understanding and quantitative characterization of NTPA in reconnection and turbulence over a broad range
of physical regimes. I will present 2D and 3D simulation results that demonstrate that both reconnection and turbulence in relativistic collisionless astrophysical plasmas can robustly produce
non-thermal energy spectra with power-law indices that show an intriguingly similar characteristic dependence on the plasma magnetization. I will also describe the effects of strong radiative cooling
on reconnection and turbulence.
The HPE code extends ESC (the Equilibrium and Stability Code) to tokamak equilibria with plasma anisotropy and toroidal rotation. In addition to the conventional 1-D input profiles of plasma pressure $p(a)$ and safety factor $q(a)$, HPE accepts the poloidal Mach number ${\cal M}(a)$ of plasma rotation and the 2-D parallel pressure profile of hot particles $p_\parallel(a,B)$ as input. Here, $a$ is the normalized radial flux coordinate of the magnetic configuration and $B$ is the strength of the magnetic field. The HPE code includes the effect of finite width of hot-particle orbits and, for the case of powerful NBI injection, can generate plasma equilibria for theory needs as well as use experimental (or kinetic-simulation) data for the interpretation of experiments.
A large amount of mass falls onto the polar region of a neutron star in X-ray binaries, and the question is: is the mass completely frozen onto the field lines, or can it diffuse through them? In this talk we present a mechanism for the latter possibility. A strong MHD instability occurs in the top layers of the neutron star, driven by the incoming mass. This instability has the same properties as the Schwarzschild instability in the solar convection zone. It gives rise to a turbulent cascade which mixes up the field lines so that lines originally far apart can come within a resistive diffusion distance and transfer mass between them. However, the lines of force themselves are not disrupted. This leads to an equilibrium which is marginal with respect to the instability, just as happens
in the Schwarzschild case.
Increased fluxes of energetic electrons and ions in the inner magnetosphere of the Earth are often associated with sudden reconfigurations of the magnetotail, referred to as substorm dipolarizations. We describe a novel model of the magnetotail which is easily controlled by several adjustable parameters, such as the thickness of the tail and the location of the transition from dipole-like to tail-like magnetic field lines. The model is fully three-dimensional and includes the day-night asymmetry of the terrestrial magnetosphere; however, the field lines are confined to
the meridional planes. Our model is well suited to studies of the magnetotail dipolarizations which we consider to be the tailward movements of the transition between dipole-like and tail-like field
lines. We also study the effects of a dipolarizing electromagnetic pulse propagating towards the Earth. The calculated electric and magnetic fields are used to describe the motion of electrons and
ions and changes in their energies. In some cases, particle energies increase by a factor of 25 or more. The energized particles are transported earthward where they are often observed by
geostationary satellites as substorm injections. The energization level obtained in our model is reasonably consistent with satellite and ground-based observations (e.g. those carried out by riometers), and
therefore we consider our scenario of the dipolarization process to be feasible.
We report the development of a new version of DCON for nonaxisymmetric toroidal plasmas, e.g. stellarators. The DCON code is widely used for fast and accurate determination of ideal MHD stability of
axisymmetric toroidal plasmas. Minimization of the ideal MHD energy principle $\delta W$ is reduced to adaptive numerical integration of a coupled set of ordinary differential equations for the
complex poloidal Fourier harmonics of the perturbed normal displacement. For a periodic cylindrical plasma, both the poloidal and toroidal coordinates are ignorable, allowing for treatment of single
harmonics $m$ and $n$. For an axisymmetric toroidal plasma, poloidal symmetry is broken, causing different $m$’s to couple and requiring simultaneous treatment of $M$ harmonics. For a nonaxisymmetric
plasma, toroidal symmetry is also broken, causing different $n$’s to couple and requiring treatment of $N$ harmonics. For a stellarator with field period $l$, e.g. $l = 5$ for W7X, each toroidal
harmonic $n$ is coupled to toroidal harmonics $n+k \, l$, for all integers $k$. Both $M$ and $N$ are truncated on the basis of convergence. Singular surfaces occur at all values of safety factor $q =
m/n$ in the plasma. The DCON equations have been generalized to allow for multiple $n$’s with coupling matrices $F$, $K$, and $G$. An interface has been developed for the nonaxisymmetric equilibrium
code VMEC, which provides values for these matrices. Their Fourier harmonics are fit to cubic splines as a function of the radial coordinate $s$, allowing for adaptive integration of the ODEs. The
status of the code development will be presented.
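The toroidal coupling structure described above is easy to make concrete: for field period $l$, each toroidal harmonic $n$ couples only to $n + k\,l$, so the harmonics split into $l$ independent mode families labeled by $n \bmod l$. A few lines of Python (the truncation $N$ is an arbitrary illustrative choice) enumerate them:

```python
# For a stellarator with field period l, toroidal harmonic n couples only to
# harmonics n + k*l, so the harmonics split into l independent mode families.
# l = 5 matches the W7-X example; the truncation N is an arbitrary choice.
l, N = 5, 12
families = {}
for n in range(-N, N + 1):
    families.setdefault(n % l, []).append(n)

print(families[1])  # the family containing n = 1: [-9, -4, 1, 6, 11]
```

In the full calculation each such family would be treated together, with the coupling matrices $F$, $K$, and $G$ block-structured accordingly.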
The dynamics of Langmuir solitons in inhomogeneous plasmas is investigated numerically employing the Zakharov equations. The solitons are accelerated toward the lower background density side. For steep density gradients, the balance between the electric-field part of the soliton and the density cavity breaks down and the solitons disintegrate. The disintegration threshold is obtained by regarding the electric-field part of the soliton as a point mass moving along the self-generated potential well produced by the density cavity. When the density gradient is below the threshold, Langmuir solitons instead adjust themselves by expelling the imbalanced portion as density cavities propagating at the sound velocity, and the electric-field part of the soliton bounces back and forth within the potential well. The study is extended to kinetic simulation, and the generation mechanism of high-energy electron tails in the presence of solitons is discussed. The electron distribution function resembles a Lorentzian, and the particle acceleration is explained as a transport process toward the high-energy side due to the overlap of multiple resonance islands in
the phase space.
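The point-mass picture invoked for the disintegration threshold can be caricatured numerically: treat the field envelope as a particle in a sech²-shaped well tilted by a constant slope $g$ standing in for the background density gradient. The potential shape and all numbers below are illustrative assumptions, not the talk's model; the well can only trap the particle while $g$ stays below the maximum restoring force $4/(3\sqrt{3})$.

```python
import numpy as np

# Point mass in U(x) = -sech(x)**2 + g*x; the tilt g mimics the background
# density gradient. Illustrative caricature of the threshold argument.
def trajectory(g, x0=0.0, v0=0.0, dt=0.01, T=50.0):
    x, v = x0, v0
    xs = []
    for _ in range(int(T / dt)):
        xc = np.clip(x, -50.0, 50.0)                      # avoid cosh overflow
        force = -(2.0 * np.tanh(xc) / np.cosh(xc)**2 + g)  # -dU/dx
        v += force * dt                                    # symplectic Euler step
        x += v * dt
        xs.append(x)
    return np.array(xs)

# Trapping requires g < max|d(sech^2)/dx| = 4/(3*sqrt(3)) ~ 0.77.
trapped = trajectory(g=0.2)   # below threshold: bounces inside the well
escaped = trajectory(g=1.0)   # above threshold: accelerates away down-gradient
```

The sub-threshold trajectory oscillates within the well, while the super-threshold one runs away, mirroring the bounce-versus-disintegration behavior described above.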
Understanding the multi-scale neoclassical and turbulence physics in the edge region (pedestal + scrape-off layer) is required in order to reliably predict performance in future fusion devices. I
will present research exploring characteristics of this transport using the family of X-point Gyrokinetic Codes (XGC). First, the variation of pressure in the scrape-off layer (important to
understand in order to avoid divertor wall degradation) is widely believed to follow simple fluid prescriptions, due to high collisionality. However, simulation results in the near-SOL indicate a
significant departure from the simple fluid models, even after including additional terms from neutral drag and the Chew-Goldberger-Low form of parallel ion viscosity to the parallel momentum
balance. Second, turbulence characteristics in the edge region show nonlocal behavior, including convective transport of turbulent eddies (“blobs”) born just inside the closed field line region out
into the SOL. These large intermittent structures can be created even in the absence of collisions in the simulation. Tracking these structures shows that on average their radial velocity is very
small, even in the near-SOL, while their poloidal velocity is significant. The potential structure within these blobs is monopolar with a peak shifted from the density structure, contrary to the
dipolar structure expected from analytical models for blob generation and transport based on interchange turbulence. Finally, coherent phase-space structures in blobs are searched for; however, only
broad regions of velocity space are found to show significant structure.
Recent progress in 3D equilibrium calculation will be reported. The 3D equilibrium calculation is fundamental to understanding the magnetic topology. In particular, for stellarators, topological changes along beta-sequences have been found, and how to maintain good flux surfaces in a general 3D configuration in the presence of finite plasma beta is an important issue. On the other hand, 3D equilibrium calculation is also important for tokamaks, because RMPs are widely used to control stability and transport. In this talk, recent results of 3D equilibrium calculations based on the HINT code, a 3D equilibrium code that does not assume perfectly nested flux surfaces, are presented. Impacts of the beta-sequences and the plasma rotation will be discussed.
Magnetic helicity, a measure of the linking and knotting of magnetic field lines, is a conserved quantity in ideal MHD. In the presence of resistivity, helicity constrains the rate at which magnetic
energy can be dissipated. When a localized, helical magnetic field is set to relax in a low-resistance high-beta plasma, the magnetic pressure drives the plasma to expand whilst the helicity is still
approximately conserved. Using numerical simulations I show how this interplay gives rise to a novel MHD equilibrium: the initially linked field lines self-organize to form a structure where field
lines lie on nested toroidal surfaces of constant pressure. The Lorentz forces are balanced by the gradient in pressure, with a minimum in pressure on the magnetic axis. Interestingly, the rotational
transform is nearly constant on all magnetic surfaces, making the structure topologically nearly identical to a famous knotted structure in Topology: the Hopf fibration. I will explore the nature of
this equilibrium, and how it relates geometrically to the structure of the Hopf map. Additional dynamics give rise to phenomena that are well known from magnetic confinement devices: magnetic islands
can occur at rational surfaces, and in certain regimes the equilibrium becomes nonaxisymmetric, triggering a marginal core-interchange mechanism.
The transport of heat and particles in the relatively collisional edge regions of magnetically confined plasmas is a scientifically challenging and technologically important problem. Understanding
and predicting this transport requires the self-consistent evolution of plasma fluctuations, global profiles, and flows, but the numerical tools capable of doing this in realistic (diverted) geometry
are only now being developed. BOUT++ is one such tool that has seen many recent developments towards this goal. A novel coordinate system has been developed to improve the resolution around the X-point
and strike points in the divertor region. A 5-field reduced 2-fluid plasma model for the study of instabilities and turbulence in magnetised plasmas has been built on the BOUT++ framework that allows
the evolution of global profiles, electric fields and flows on transport timescales, with flux-driven cross-field transport determined self-consistently by electromagnetic turbulence. Models for
neutral evolution have also been included, and the interaction of these neutrals with the plasma is characterised through charge exchange, recombination, ionisation, and radiation. Simulation results
for linear devices, MAST-U, and DIII-D are presented that shed light on the nature of plasma-neutral interaction, detachment in the super-X divertor, and turbulence in diverted geometry.
A fully kinetic ion model is useful for the verification of gyrokinetic turbulence simulations in certain regimes where the gyrokinetic model may break down due to the lack of small ordering
parameters. For a fully kinetic ion model to be of value, however, it must first be able to accurately simulate low-frequency drift-type instabilities typically well within the domain of
gyrokinetics. In this talk, we present a fully kinetic ion model formulated with weak gradient drive terms and applied to the ion-temperature-gradient (ITG) instability. A $\delta f$ implementation
in toroidal geometry is discussed, where orthogonal coordinates are used for the particle dynamics, but field-line-following coordinates are used for the field equation, allowing for high resolution
of the field-aligned mode structure. Variational methods are formulated for integrating the particle equations of motion, allowing for accuracy on a long time scale with modest timestep sizes.
Finally, an implicit orbit averaging and sub-cycling scheme for the fully kinetic ion model is considered.
Following Grad’s procedure, an expansion of the velocity space distribution functions in terms of multi-index Hermite polynomials is carried out to derive a consistent set of collisional fluid
equations for plasmas. The velocity-space moments of the often troublesome nonlinear Landau collision operator are evaluated exactly, and to all orders with respect to the expansion. The collisional
moments are shown to be generated by applying gradients on two well-known functions, namely the Rosenbluth-MacDonald-Judd-Trubnikov potentials for a Gaussian distribution. The expansion can be
truncated at arbitrary order with quantifiable error, providing a consistent and systematic alternative to the Chapman-Enskog procedure which, in plasma physics, amounts to the famous Braginskii
equations. To illustrate our approach, we provide the collisional ten-moment equations and prove explicitly that the exact, nonlinear expressions for the momentum- and energy-transfer rate satisfy
the correct conservation properties.
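The Hermite-moment machinery can be illustrated with a one-dimensional toy. For a drifting Maxwellian of unit temperature, the probabilist Hermite moments obey the identity $E[\mathrm{He}_n] = u^n$, so the low-order coefficients encode the fluid moments exactly; the drift value below is an arbitrary illustration, not a number from the talk.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermeval

# Probabilist Hermite (He_n) moments of a drifting Maxwellian with unit
# temperature, f(v) = exp(-(v-u)^2/2)/sqrt(2*pi). For X ~ N(u, 1) the
# identity E[He_n(X)] = u**n holds, so the Hermite moments reproduce the
# fluid moments (density, momentum, ...) exactly.
u = 0.3                                  # illustrative drift velocity
v = np.linspace(-10.0, 10.0, 4001)
f = np.exp(-(v - u)**2 / 2.0) / np.sqrt(2.0 * np.pi)

def hermite_moment(n):
    c = np.zeros(n + 1)
    c[n] = 1.0                           # coefficient vector selecting He_n
    return np.trapz(f * hermeval(v, c), v)

moments = [hermite_moment(n) for n in range(4)]
# moments ≈ [1, u, u**2, u**3] = [1, 0.3, 0.09, 0.027]
```

Dividing the $n$-th moment by $n!$ gives the Grad-style expansion coefficient of the distribution in the Hermite basis.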
The talk presents recent advances in the variational formulation of reduced Vlasov-Maxwell equations. First, the variational formulations of guiding-center Vlasov-Maxwell theory based on Lagrange,
Euler, and Euler-Poincaré variational principles are presented. Each variational principle yields a different approach to deriving guiding-center polarization and magnetization effects into the
guiding-center Maxwell equations. The conservation laws of energy, momentum, and angular momentum are also derived by the Noether method, where the guiding-center stress tensor is now shown to be
explicitly symmetric. Next, the Eulerian variational principle for the nonlinear electromagnetic gyrokinetic Vlasov-Maxwell equations is presented in the parallel-symplectic representation, where the
gyrocenter Poisson bracket contains contributions from the perturbed magnetic field.
A conservative discretization of incompressible Navier-Stokes equations over surface simplicial meshes is developed using discrete exterior calculus (DEC). The DEC discretization is carried out for
the exterior calculus form of Navier-Stokes equations, where the velocity field is represented by a 1-form. A distinguishing feature of our method is the use of an algebraic discretization of the
interior product operator and a combinatorial discretization of the wedge product. Numerical experiments for flows over surfaces reveal a second order accuracy for the developed scheme for
structured-triangular meshes, and first order accuracy for general unstructured meshes. The mimetic character of many of the DEC operators provides exact conservation of both mass and vorticity, in
addition to superior kinetic energy conservation. The employment of various discrete Hodge star definitions based on both circumcentric and barycentric dual meshes is also demonstrated. The
barycentric Hodge star allows the discretization to admit arbitrary simplicial meshes instead of being limited only to Delaunay meshes, as in previous DEC-based discretizations. The convergence order
attained through the circumcentric Hodge operator is retained when using the barycentric Hodge. The discretization scheme is presented in detail along with numerical test cases demonstrating its
numerical convergence and conservation properties. Preliminary results regarding the implementation of hybrid (circumcentric/barycentric) Hodge star operator are also presented. We conclude with some
ideas for employing a similar method for magnetohydrodynamics.
The pursuit of commercial fusion power has driven the development of increasingly complex and complete numerical simulation tools in plasma physics. Recent work with the EUTERPE particle-in-cell code
has made possible global, electromagnetic, fully gyrokinetic and fluid-gyrokinetic hybrid simulations in a broad parameter space, where previously global gyrokinetic simulations had been hampered by
the so-called ‘cancellation problem’. This has been applied to the simulation of the interaction between Alfvén eigenmodes and energetic particles. In this talk, the range of numerical methods used
will be detailed, and it will be shown with practical examples that self-consistent global simulations may be necessary for even a qualitatively accurate prediction of the perturbation of the
magnetic field and fast particle transport due to wave-particle interaction. A brief outline will be given of the future direction of this work, such as the possibility of gyrokinetic simulation of
the interaction between fine-scale turbulence and MHD modes.
The particle-in-cell (PIC) method has long been the standard technique for kinetic plasma simulation across many applications. The downside, though, is that quantitatively accurate, 3-D simulations
require vast computing resources. A prominent reason for this complexity is that the statistical figure of merit is the number of particles per cell. In 3-D, the number of cells grows rapidly with
grid resolution, necessitating an astronomical number of particles. To address this challenge, we propose the use of sparse grids: by a clever combination of the results from a variety of grids, each
of which is well resolved in at most one coordinate direction, we achieve similar accuracy to that of a full grid, but with far fewer grid cells, thereby dramatically reducing the statistical error.
We present results from test cases that demonstrate the new scheme's accuracy and efficiency. We also discuss the limitations of the approach and, in particular, its need for an intelligent choice of
coordinate system.
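The cell-count saving behind the combination idea can be illustrated with a toy count. This is a hedged sketch: the level-set convention and combination coefficients below follow one standard textbook variant and are not necessarily those of the scheme described above.

```python
# Toy cell counts for a standard 3-D sparse-grid combination technique
# (illustrative convention, not necessarily the one used in the talk).
# A full grid at refinement level n has (2**n)**3 cells; the combination
# technique instead sums anisotropic grids with level vectors l (l_i >= 1)
# whose sums |l| lie in {n, n+1, n+2} for d = 3.

from itertools import product

def full_grid_cells(n, dim=3):
    return (2 ** n) ** dim

def combination_grids(n, dim=3):
    """Level vectors used by the combination technique at level n."""
    grids = []
    for offset in range(dim):                 # |l| = n + dim - 1 - offset
        total = n + (dim - 1) - offset
        for levels in product(range(1, total + 1), repeat=dim):
            if sum(levels) == total:
                grids.append(levels)
    return grids

def sparse_grid_cells(n, dim=3):
    # A grid with level vector l has prod_i 2**l_i = 2**|l| cells.
    return sum(2 ** sum(l) for l in combination_grids(n, dim))

print(full_grid_cells(5), sparse_grid_cells(5))  # 32768 vs 2752
```

Even at this modest resolution the sparse representation uses an order of magnitude fewer cells, and the gap widens rapidly with the level n, which is what makes the per-cell particle statistics affordable.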
Many nonlinear systems of partial differential equations admit spontaneous formation of singularities in a finite time (blow-up). Blow-up is often accompanied by a dramatic contraction of the spatial extent of the solution, which is called collapse. A collapse in the nonlinear Schrodinger equation (NLSE) describes the self-focusing of an intense laser beam in a nonlinear Kerr medium (like ordinary glass), with the propagation distance $z$ playing the role of time. The NLSE in dimension two (two transverse coordinates) corresponds to the stationary self-focusing of the laser beam, eventually causing optical damage, as has been routinely observed in experiments since the 1960s. The NLSE in dimension three (two transverse coordinates and time) is responsible for the formation of the optical bullet, making the pulse much shorter in time in addition to the spatial self-focusing. We address the universal self-similar scaling near collapse. In the critical 2D case the collapsing solutions have the form of a rescaled soliton, such that the $z$-dependence of that scale determines the $z$-dependent collapse width $L(z)$ and amplitude $\sim 1/L(z)$. At the leading order $L(z) \sim (z_c-z)^{1/2}$, where $z_c$ is the collapse point, with the required log-log modification of that scaling. The log-log scaling for NLSE was first obtained asymptotically in the 1980s and finally proven in 2006. However, it remained a puzzle that this scaling was never clearly observed in simulations or experiments. We found that the classical log-log modification of the NLSE scaling requires double-exponentially large amplitudes of the solution $\sim 10^{10^{100}}$, which is unrealistic to achieve in either physical experiments or numerical simulations. In contrast, we developed a new asymptotic theory which is valid starting from a quite moderate (about threefold) increase of the solution amplitude compared with the initial conditions. We use that new theory to propose a nonlinear combining of multiple laser beams into a diffraction-limited beam by beam self-focusing in a Kerr medium. Multiple beams with total power above critical are combined in the near field and propagated through a multimode optical fiber. Random fluctuations during propagation first trigger the formation of strong optical turbulence. During subsequent propagation, the inverse cascade of optical turbulence tends to increase the transverse spatial scale of the fluctuations until it efficiently triggers a strong optical collapse event, producing a diffraction-limited beam with the critical power.
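For orientation, the log-log-modified law referred to above is commonly quoted in the following form (a standard asymptotic result, reproduced here for context rather than taken verbatim from the talk):

```latex
L(z) \simeq \left(\frac{2\pi\,(z_c - z)}{\ln\ln\frac{1}{z_c - z}}\right)^{1/2},
\qquad z \to z_c^-,
```

so the correction to the bare $(z_c-z)^{1/2}$ law grows only as an iterated logarithm, consistent with the double-exponentially large amplitudes needed before it becomes visible.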
Over the decades, multiscale modeling efforts have resorted to powerful methods, such as asymptotic/perturbative expansions and/or averaging techniques. As a result of these procedures, finer scale
terms are typically discarded in the fundamental equations of motion. Although this process has led to well-consolidated plasma models, consistency issues may emerge in certain cases, especially concerning the energy balance. This may lead to the presence of spurious instabilities that are produced by nonphysical energy sources. The talk proposes alternative techniques based on classical
mechanics and its underlying geometric principles. Inspired by Littlejohn's guiding-center theory, the main idea is to apply physical approximations to the action principle (or the Hamiltonian
structure) underlying the fundamental system, rather than operating directly on its equations of motion. Here, I will show how this method provides new energy-conserving variants of hybrid
kinetic-MHD models, which suppress the spurious instabilities emerging in previous non-conservative schemes. Also, this method allows for quasi-neutral approximations of fully kinetic Vlasov
theories, thereby neglecting both radiation and Langmuir oscillations.
Even diffraction aside, the commonly known equations of geometrical optics (GO) are not entirely accurate. GO considers wave rays as classical particles, which are completely described by their
coordinates and momenta, but rays have another degree of freedom, namely, polarization. As a result, wave rays can behave as particles with spin. A well-known example of polarization dynamics is
wave-mode conversion, which can be interpreted as rotation of the (classical) ``wave spin.'' However, there are other less-known manifestations of the wave spin, such as polarization precession and
polarization-driven bending of ray trajectories. This talk presents recent advances in extending and reformulating GO as a first-principle Lagrangian theory, whose effective-gauge Hamiltonian governs
both mentioned polarization phenomena simultaneously. Examples and numerical results are presented. When applied to classical waves, the theory correctly predicts the polarization-driven divergence
of left- and right-polarized electromagnetic waves in isotropic media, such as dielectrics and nonmagnetized plasmas. In the case of particles with spin, the formalism also yields a point-particle Lagrangian model for the Dirac electron, i.e., the relativistic spin-1/2 electron, which includes both the Stern-Gerlach spin potential and the Bargmann-Michel-Telegdi spin precession. Additionally, the same theory contributes, perhaps unexpectedly, to the understanding of ponderomotive effects in both wave and particle dynamics; e.g., the formalism allows one to obtain the ponderomotive Hamiltonian for a Dirac electron interacting with an arbitrarily large electromagnetic laser field, with spin effects included.
This talk reports on a recent advancement in developing physical understanding and a first-principles-based model for predicting intrinsic rotation profiles in magnetic fusion experiments, including
ITER. It is shown for the first time that turbulent fluctuation-driven residual stress (a non-diffusive component of momentum flux) can account for both the shape and magnitude of the observed
intrinsic toroidal rotation profile. The model predictions of core rotation based on global gyrokinetic simulations agree well with the experimental measurements for a set of DIII-D ECH discharges.
The characteristic dependence of residual stress and intrinsic rotation profile structure on the multi-dimensional parametric space covering turbulence type, q-profile structure, collisionality and
up-down asymmetry in magnetic geometry has been studied with the goal of developing physics understanding needed for rotation profile control and optimization. Finally, the first-principles-based
model is applied to elucidating the ρ∗-scaling and predicting rotation in the ITER regime.
Collisionless shocks -- supersonic plasma flows in which the interaction length scale is much shorter than the collisional mean free path -- are common phenomena in space and astrophysical systems,
including the solar wind, coronal mass ejections, supernovae remnants, and the jets of active galactic nuclei. These systems have been studied for decades, and in many the shocks are believed to
efficiently accelerate particles to some of the highest observed energies. Only recently, however, have laser and diagnostic capabilities evolved sufficiently to allow the detailed study in the
laboratory of the microphysics of collisionless shocks over a large parameter regime. We present experiments that demonstrate the formation of collisionless shocks utilizing the Phoenix laser
laboratory and the LArge Plasma Device (LAPD) at UCLA. We also show recent observations of magnetized collisionless shocks on the Omega EP laser facility that extend the LAPD results to higher laser
energy, background magnetic field, and ambient plasma density, and that may be relevant to recent experiments on strongly driven magnetic reconnection. Lastly, we discuss a new experimental regime
for shocks with results from high-repetition (1 Hz), volumetric laser-driven measurements on the LAPD. These large parameter scales allow us to probe the formation physics of collisionless shocks
over several Alfvenic Mach numbers ($M_A$), from shock precursors (magnetosonic solitons with $M_A<1$) to subcritical ($M_A<3$) and supercritical ($M_A>3$) shocks. The results show that collisionless
shocks can be generated using a laser-driven magnetic piston, and agree well with both 2D and 3D hybrid and PIC simulations. Additionally, using radiation-hydrodynamic modeling and measurements from
multiple diagnostics, the different shock regimes are characterized with dimensionless formation parameters, allowing us to place disparate experiments in a common and predictive framework.
Runaway electrons are a critical area of research into tokamak disruptions. A thermal quench on ITER can result in avalanche production of a large amount of runaway electrons and a transfer of the
plasma current to be carried by runaway electrons. The potential damage caused by the highly energetic electron beam poses a significant challenge for ITER to achieve its mission. It is therefore
extremely important to have a quantitative understanding of the runaway electron avalanche process. It is found that the radiative energy loss and the pitch-angle scattering from radiative E&M fields play an important role in determining the runaway electron distribution in momentum space. In this talk we discuss three kinds of radiation from runaway electrons: synchrotron radiation, Cerenkov radiation, and electron cyclotron emission (ECE). Synchrotron radiation, which mainly comes from the cyclotron motion of highly relativistic runaway electrons, dominates the energy loss of runaway electrons in the high-energy regime. The Cerenkov radiation from runaway electrons gives an additional correction to the Coulomb logarithm in the collision operator, which changes the avalanche growth rate. The ECE mainly comes from electrons in the energy range 1.2<γ<3, and provides an important approach to diagnosing the runaway electron distribution in momentum and pitch angle. We developed a novel tool to self-consistently calculate normal-mode scattering of runaway electrons using the quasi-linear method, and implemented it in the well-developed runaway electron kinetic simulation code CODE. Using this tool we qualitatively reproduce the experimental ECE signal.
Electromagnetic ion cyclotron (EMIC) waves are often observed in the magnetosphere, with frequencies usually in the proton and helium cyclotron bands and sometimes in the oxygen band. The temperature anisotropy caused by injection of energetic ions or by compression of the magnetosphere can efficiently generate proton EMIC waves, but is not as efficient for helium or oxygen EMIC waves. Here we propose a new generation mechanism for helium and oxygen EMIC waves associated with weak fast magnetosonic shocks, which are observed in the magnetosphere. These shocks can be associated with either dynamic pressure enhancements or shocks in the solar wind, and can lead to the formation of a “bunch” distribution in the perpendicular velocity plane of oxygen ions. The oxygen bunch distribution can excite strong helium EMIC waves and weak oxygen and proton waves. The dominant helium EMIC waves are strongest in quasi-perpendicular propagation and show harmonics in the Fourier frequency spectrum. The proposed mechanism can explain the generation and some observed properties of helium and oxygen EMIC waves in the magnetosphere.
The nature of ideal-MHD equilibria in three-dimensional geometry is profoundly affected by resonant surfaces, which beget a non-analytic dependence of the equilibrium on the boundary. Furthermore,
non-physical currents arise in equilibria with continuously-nested magnetic surfaces and smooth pressure and rotational-transform profiles. We demonstrate that three-dimensional, ideal-MHD equilibria
with nested surfaces and δ-function current densities that produce a discontinuous rotational-transform are well defined and can be computed both perturbatively and using fully-nonlinear equilibrium
calculations. The results are of direct practical importance: we predict that resonant magnetic perturbations penetrate past the rational surface (i.e. “shielding” is incomplete, even in purely
ideal-MHD) and that the perturbation is amplified by plasma pressure, increasingly so as stability limits are approached.
The total-f edge gyrokinetic code XGC1 shows that the divertor heat-flux width $λ_q$ in three US tokamaks (DIII-D for conventional aspect ratio, NSTX for tight aspect ratio, and C-Mod for high $B_P$) obeys the experimentally observed $λ_q\propto 1/B_P^\gamma$ scaling in the so-called “sheath-limited” regime. The low-beta edge plasma is non-thermal and approaches the quasi-steady state on a kinetic, non-diffusive time scale. A nonlinear Fokker-Planck-Landau collision operator is used, and Monte-Carlo neutral atoms are recycled near the material wall. Successful validation of the XGC1 simulation results on the three US tokamak devices will be presented. It is found that $λ_q$ on DIII-D, NSTX, and lower-$B_P$ C-Mod is dominated by the neoclassical orbit dynamics of the supra-thermal ions. However, C-Mod at higher $B_P$ shows blob dominance, while still fitting the $λ_q\propto 1/B_P^\gamma$ trend. Predictive simulation of ITER shows that $λ_q$ is over 5 times greater than predicted by the empirical $λ_q\propto 1/B_P^\gamma$ scaling.
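As a concrete, entirely synthetic illustration of how a scaling exponent like γ is extracted from measurements, one can fit it by least squares in log-log space. The data values below are made up and only demonstrate the procedure:

```python
# Hypothetical illustration of extracting the exponent gamma in an
# empirical scaling lambda_q ~ 1/B_P**gamma from (B_P, lambda_q) pairs
# via a least-squares fit in log-log space. All numbers are invented.

import math

def fit_power_law(bp, lam):
    """Fit lam = C * bp**(-gamma); return gamma."""
    x = [math.log(b) for b in bp]
    y = [math.log(l) for l in lam]
    n = len(x)
    xbar = sum(x) / n
    ybar = sum(y) / n
    slope = (sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
             / sum((xi - xbar) ** 2 for xi in x))
    return -slope  # gamma = -d(log lam)/d(log B_P)

# Synthetic data obeying lambda_q = 2.0 / B_P**1.19 exactly
bp = [0.2, 0.4, 0.8, 1.2]
lam = [2.0 / b ** 1.19 for b in bp]
print(fit_power_law(bp, lam))  # ≈ 1.19
```

With real scatter in the data one would also report a confidence interval on the slope, but the log-log regression itself is the standard first step.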
In this talk I will describe the details and derivation of a new current-coupling gyrokinetic-MHD model. In particular, I will show that the model can be derived from a variational principle. Energy and hot-particle charge are conserved exactly, regardless of the form of the background magnetic field. Likewise, when the background field admits a continuous rotation or translation symmetry, the
corresponding component of the total momentum is conserved. The theory relies on a new gauge-invariant formulation of the motion of gyrocenters in prescribed electromagnetic fields, and this will be
described in detail.
Turbulence is a ubiquitous process in space and astrophysical plasmas that serves to mediate the transfer of large-scale motions to small scales at which the turbulence can be dissipated and the
plasma heated. In situ solar wind observations and direct numerical simulations demonstrate that sub-proton scale turbulence is dominated by highly anisotropic and intermittent, low frequency,
kinetic Alfvénic fluctuations. I will review recent work on the dissipation of Alfvénic turbulence observed in gyrokinetic simulations and discuss the coherent structures and intermittency associated
with the turbulence, which suggest a non-local and non-self-similar energy cascade. Moving beyond the confines of gyrokinetics, I will also briefly discuss work on a full Eulerian Vlasov-Maxwell
code, Gkeyll, being developed at Princeton and the University of Maryland.
The well-known physical mechanism for fast magnetic reconnection in collisionless plasmas is that the off-diagonal terms of the electron-pressure tensor give rise to large electric fields in the reconnection region. The electron-pressure tensor, being fully associated with electron kinetic effects, is difficult to implement in the MHD model. In this talk, we use a simple relation $E = \eta J$ (where $\eta$ is an effective resistivity) to model fast collisionless magnetic reconnection. The physical mechanism and formulation of the effective resistivity are discussed.
Accretion flows are found in a large variety of astrophysical systems, from protoplanetary disks to active galactic nuclei. Our present understanding of such flows is severely limited by both
observational and numerical resolution. I will discuss some new numerical results on zero-net-magnetic-flux shear MHD turbulence and its relation to the magnetic Prandtl number. I will then briefly discuss the effects of rotation on large-scale magnetic fields. My talk will end with some speculations about how one might construct a self-consistent model for accretion flows based on our current understanding.
Following the first operation of H-mode in KSTAR in 2009, study of edge localized modes (ELMs) has been actively conducted. A unique in-vessel control coil (IVCC) set (top, middle and bottom) capable of generating resonant (and non-resonant) magnetic perturbations (RMP) at low mode number n (=1, 2) was successfully utilized to suppress and/or mitigate the ELM crash in KSTAR. Extensive study of the dynamics of the ELMs in both the pre-crash and crash-suppressed phases under magnetic perturbation with the 2D/3D Electron Cyclotron Emission Imaging (ECEI) system revealed new phenomenology of the ELMs and ELM-crash dynamics that was not available from conventional diagnostics. Beyond the first 2D images of the ELM time evolution from growth through saturation to crash, detailed images of the ELMs leading to the crash, together with fast RF emission (<200 MHz) signals, demonstrated that the pre-crash events are complex. The measured 2D image of the ELM was validated by direct comparison with the synthetic 2D image from the BOUT++ code, and a non-linear modelling study is in progress. Recently, the observed dynamics of the ELMs at both the high and low field sides, such as asymmetries in intensity, mode number and rotation direction, cast doubt on the peeling-ballooning model. The response of the high-field-side ELM to the RMP was more pronounced than that of the low field side. Other studies include the observation of multiple modes and sudden mode-number transitions. During the ELM-crash suppression experiments, various types of ELM-crash patterns were observed, and often the suppression was marginal. The observed semi-coherent turbulence spectra under the RMP provided evidence of non-linear interaction between the ELMs and turbulence.
Classical particle systems reside at thermal equilibrium, with their velocity distribution function stabilized into a Maxwell distribution. By contrast, collisionless and correlated particle systems, such as space and astrophysical plasmas, are characterized by non-Maxwellian behavior, typically described by so-called $\kappa$ distributions, or combinations thereof. Empirical $\kappa$
distributions have become increasingly widespread across space and plasma physics. A breakthrough in the field came with the connection of $\kappa$ distributions to non-extensive statistical
mechanics. Understanding the statistical origin of $\kappa$ distributions was the cornerstone of further theoretical developments and applications, some of which will be presented in this talk: (i)
The physical meaning of thermal parameters, e.g., temperature and kappa index; (ii) the multi-particle description of $\kappa$ distributions; (iii) the generalization to phase-space $\kappa$
distribution of a Hamiltonian with non-zero potential; (iv) the Sackur-Tetrode entropy for $\kappa$ distributions, and (v) the existence of a large-scale phase-space cell, characteristic of
collisionless space plasmas, indicating a new quantization constant, $\hbar^* \sim 10^{-22}\,\mathrm{J\,s}$.
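For orientation, one common convention for the isotropic $\kappa$ distribution and its Maxwellian limit as $\kappa \to \infty$ can be sketched as follows. The normalization shown is an assumption of this sketch; conventions in the literature differ:

```python
# A sketch of one common convention for the isotropic kappa distribution
# (density n = 1, characteristic speed theta) and the Maxwellian it
# approaches as kappa -> infinity. Not taken from the talk itself.

import math

def kappa_dist(v, theta, kappa):
    """f(v) = A * (1 + v^2/(kappa*theta^2))^-(kappa+1), kappa > 3/2."""
    # Use lgamma to form the Gamma-function ratio without overflow.
    norm = (math.exp(math.lgamma(kappa + 1.0) - math.lgamma(kappa - 0.5))
            / (math.pi ** 1.5 * theta ** 3 * kappa ** 1.5))
    return norm * (1.0 + v * v / (kappa * theta * theta)) ** (-(kappa + 1.0))

def maxwellian(v, theta):
    return math.exp(-(v / theta) ** 2) / (math.pi ** 1.5 * theta ** 3)

# For large kappa the two agree closely at thermal speeds:
print(kappa_dist(1.0, 1.0, 1e4), maxwellian(1.0, 1.0))
```

The suprathermal difference is clearest in the tail: at $v \gg \theta$ the $\kappa$ form decays only as a power law, which is the feature that makes it a good empirical fit for space plasmas.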
The theorem for toroidal angular momentum conservation within gyrokinetic field theory is used as a starting point for consideration of flow equilibration at low frequencies (less than fast-Alfvén or
gyrofrequencies). Quasineutrality and perpendicular MHD force balance are inputs to the theory and therefore never violated. However, the gyrocenter densities are not ambipolar in equilibrium, since
the flow vorticity is given by their difference. From an arbitrary initial state, flows evolve acoustically and via Landau damping into divergence balance, in which radial force balance of the
electric field is a part. On collisional time scales, which in the tokamak core are longer, the neoclassical electric field is brought into balance by collisions, and it is only on these slow time
scales that the collisional transport is ambipolar (i.e., the time derivative of the vorticity is small). Computations from 2014 showing the relaxation on tokamak core spatial scales are displayed. I
will also give relevant cases of edge-layer relaxation and discuss the dependence on the finite poloidal gyroradius. Total-$f$ two-species gyrokinetic relaxation cases from 2009/10 are available to
show that the basic processes in fluid and gyrokinetic models are the same for these purposes.
A higher-order portion of the ${\bf E}\times {\bf B}$ drift causes an outward flux of co-current momentum when electrostatic potential energy is transferred to ion-parallel flows. The robust symmetry
breaking follows from the free energy flow in phase space and does not depend on any assumed linear eigenmode structure. The resulting rotation peaking is counter-current and scales as temperature
over plasma current. This peaking mechanism can only act when there are adequate fluctuations at low enough frequencies to excite ion parallel flows, which may explain some experimental observations
related to rotation reversals.
Magnetic reconnection, the change of magnetic topology in the presence of plasma, is observed in space and in the laboratory. It enables the explosive energy release by plasma instabilities, as in solar flares or magnetospheric substorms, and the change in topology allows the rapid heat transport associated with sawtooth relaxation and self-organization in RFPs. In numerous environments, especially in toroidal confinement devices, reconnection proceeds in the presence of a net guide field. We report detailed laboratory observations in MRX of the structure of reconnection current sheets with a guide field in a two-fluid plasma regime (ion gyro-radius comparable to the current sheet width). We observe experimentally for the first time the quadrupolar electron pressure variation in the ion-diffusion region, an analogue of the quadrupolar "Hall" magnetic fields in anti-parallel reconnection. The quadrupolar pressure perturbation was originally predicted by extended-MHD simulation as essential to balancing the large parallel reconnection electric fields over the ion-scale current sheet. We observe that electron density variations dominate temperature variations and
may provide a new diagnostic of reconnection with finite guide field for fusion experiments and spacecraft missions. We discuss consequences for force balance in the reconnection layer and
implications for fast reconnection in fusion devices.
A 2D full-wave simulation code (so-called FW2D) has been recently developed. This code currently solves the cold plasma wave equations using the finite element method. One advantage of using the
finite element method is that the local basis functions can be readily adapted to boundary shapes and can be packed in such a way as to provide higher resolution in regions where needed. We have
constructed a 2D triangular mesh given a specified boundary and a target mesh density function. Moreover, the density of the mesh can be specified based on the expected wavelength obtained from
solution of the local dispersion relation (except close to resonances), so that the most efficient resolution is used. Another advantage of this wave code is its short running time. For instance, for a mesh with 24,395 nodes, approximately 300 seconds of CPU time are required to obtain a solution. The wave code has been successfully applied to describe low-frequency waves in Earth's and Mercury's multi-ion magnetospheres. The results include (a) mode conversion from the incoming fast mode to the transverse wave modes at the ion-ion hybrid resonance, (b) mode coupling and polarization reversal between left-hand polarized waves (i.e., electromagnetic ion cyclotron waves: EMIC waves) and right-hand polarized waves (i.e., the fast mode), and (c) refraction and reflection of field-aligned propagating EMIC waves near the heavier-ion cyclotron frequency. Very recently the FW2D code has been extended to tokamak geometry to examine radio frequency (RF) waves in the scrape-off layer (SOL) of tokamaks,
which is the region of the plasma between last closed flux surface and tokamak vessel. The SOL region is important for RF wave heating of tokamaks because significant wave power loss can occur in
this region. This code is ideal for waves in SOL plasma, because realistic boundary shapes and arbitrary density structures can be easily adopted in the code and the SOL plasma can be approximated as
cold plasma.
In a popular description of the L-H transition, energy transfer to the mean flows directly depletes kinetic energy from turbulent fluctuations, resulting in suppression of the turbulence and a
corresponding transport bifurcation. However, electron parallel force balance couples non-zonal velocity fluctuations with electron pressure fluctuations on rapid timescales, comparable with the
electron transit time. For this reason, energy in the non-zonal velocity stays in a fairly fixed ratio to electron thermal free energy, at least for frequency scales much slower than electron
transit. In order for direct depletion of the energy in turbulent fluctuations to cause the L-H transition, energy transfer via Reynolds stress must therefore drain enough energy to significantly
reduce the sum of the free energy in non-zonal velocities and electron pressure fluctuations. At low $k$, the electron thermal free energy is much larger than the energy in non-zonal velocities,
posing a stark challenge for this model of the L-H transition.
The Speed-Limited Particle-In-Cell (SLPIC) Method reduces computational requirements for simulations that evolve on ion time scales while keeping appropriate kinetic electron effects. This method
works by introducing an ansatz for the distribution function that allows the new unknown phase-space function to be solved by the method of characteristics, where those characteristics move slowly
through phase space. Therefore, the solution can be obtained by particle-in-cell (PIC) methods in which the electrons have speeds much smaller than their actual speeds, leading to a much relaxed numerical (CFL) stability condition: the time step can be increased by the square root of the ion-electron mass ratio. SLPIC can be easily implemented in existing PIC codes, as it requires no changes to deposition and field solution. Its explicit nature makes it ideal for modern computing architectures with vector instruction sets.
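The quoted time-step gain is easy to quantify. The sketch below uses the hydrogen mass ratio and a purely hypothetical electron-CFL-limited step; the numbers are assumptions for illustration, not values from the SLPIC work:

```python
# Back-of-the-envelope time-step gain for a speed-limited PIC scheme
# (assumed numbers: hydrogen mass ratio, hypothetical baseline step).
import math

M_I_OVER_M_E = 1836.15  # proton-to-electron mass ratio

def slpic_speedup(mass_ratio=M_I_OVER_M_E):
    # Slowing electron characteristics by sqrt(m_i/m_e) relaxes the
    # CFL limit on the time step by the same factor.
    return math.sqrt(mass_ratio)

dt_pic = 1.0e-12                      # hypothetical electron-CFL-limited step (s)
dt_slpic = dt_pic * slpic_speedup()   # ion-scale step, roughly 4.3e-11 s
print(f"time-step gain: {slpic_speedup():.1f}x")
```

For heavier ions (or reduced mass ratios used in simulations) the same one-line estimate applies with the appropriate ratio.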
We present a comprehensive survey of the various computational methods for finding axisymmetric plasma equilibria. Our focus is on free-boundary plasma equilibria, where either poloidal field coil
currents or the temporal evolution of voltages in poloidal field circuit systems are given data. Centered around a piecewise linear finite element representation of the poloidal flux map, our approach allows, to a large extent, the use of established numerical schemes. The coupling of a finite element method and a boundary element method gives consistent numerical solutions for equilibrium
problems in unbounded domains. We formulate a Newton-type method for the discretized non-linear problem to tackle the various non-linearities, including the free plasma boundary. The Newton method
guarantees fast convergence and is the main building block for the inverse equilibrium problems that we discuss as well. The inverse problems aim at finding either poloidal field coil currents that
ensure a desired shape and position of the plasma or at finding the evolution of the voltages in the poloidal field circuit systems that ensure a prescribed evolution of the plasma shape and
position. We provide equilibrium simulations for the tokamaks ITER and WEST to illustrate performance and application areas.
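The flavor of such a Newton iteration for a discretized nonlinear problem can be conveyed on a hypothetical 1D model, the Bratu problem $-u'' = \lambda e^u$, chosen purely for illustration and unrelated to the authors' actual free-boundary formulation:

```python
# Minimal Newton iteration for a finite-difference discretization of the
# model problem -u'' = lam * exp(u), u(0) = u(1) = 0 (hypothetical
# stand-in for the free-boundary equilibrium problem in the abstract).
import math

def newton_bvp(n=50, lam=1.0, tol=1e-12):
    h = 1.0 / (n + 1)
    u = [0.0] * n
    for it in range(30):
        # Residual F(u)_i = -(u_{i-1} - 2u_i + u_{i+1})/h^2 - lam*e^{u_i}
        F = []
        for i in range(n):
            um = u[i - 1] if i > 0 else 0.0
            up = u[i + 1] if i < n - 1 else 0.0
            F.append(-(um - 2 * u[i] + up) / h**2 - lam * math.exp(u[i]))
        # Tridiagonal Jacobian, solved by the Thomas algorithm: J du = -F
        a = [-1.0 / h**2] * n                                # sub-diagonal
        b = [2.0 / h**2 - lam * math.exp(ui) for ui in u]    # diagonal
        c = [-1.0 / h**2] * n                                # super-diagonal
        d = [-fi for fi in F]
        for i in range(1, n):
            w = a[i] / b[i - 1]
            b[i] -= w * c[i - 1]
            d[i] -= w * d[i - 1]
        du = [0.0] * n
        du[-1] = d[-1] / b[-1]
        for i in range(n - 2, -1, -1):
            du[i] = (d[i] - c[i] * du[i + 1]) / b[i]
        u = [ui + dui for ui, dui in zip(u, du)]
        if max(abs(x) for x in du) < tol:
            return u, it + 1
    return u, 30
```

Starting from the zero guess, the iteration converges to machine precision in a handful of steps, illustrating the quadratic convergence that makes Newton-type methods the natural building block for the inverse problems as well.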
Gyrokinetic simulations -- such as those performed by the XGC code -- provide a self-consistent framework to investigate a wide range of physics in strongly magnetized, high-temperature laboratory plasmas, including global modes usually considered to be in the realm of MHD simulations. However, present simulation models generally concentrate on short-wavelength electromagnetic modes, mostly for the benefit of field-solver performance. To incorporate more global fluid-like modes, non-zonal long-wavelength physics also needs to be retained. In this work we present the development of a fully 3D mixed FEM/FDM electromagnetic field solver for use in the gyrokinetic code XGC1. We present optimization for massively parallel computational platforms, an investigation of numerical accuracy using the method of manufactured solutions, and an evaluation of the importance of field-line-length calculations for the stability of the discretization. We also invite discussion on the importance of the perpendicular vector potential for pressure-driven modes.
The fast-electron-driven beta-induced Alfvén eigenmode (e-BAE) in toroidal plasmas is investigated for the first time using global gyrokinetic particle simulations, where the fast electrons are described by the drift-kinetic model. The phase-space structure shows that only the precessional resonance is responsible for e-BAE excitation, while the fast-ion-driven BAE can be excited through all the channels, such as transit, drift-bounce, and precessional resonances. Frequency chirping is observed in nonlinear simulations with both weak and strong drives in the absence of sources and sinks, which provides a complement to the standard `bump-on-tail` paradigm for the frequency chirping of Alfvén eigenmodes. For the weakly driven case, the frequency is observed to be in phase with the particle energy flux, and the mode structure is almost the same as in the linear stage. In the strongly driven case, a BAAE is excited along with the BAE after the BAE mode saturates. Analysis of nonlinear wave-particle interactions shows that the frequency chirping is induced by the nonlinear evolution of coherent structures in the energetic-particle phase space, where the dynamics of the coherent structures is controlled by the formation and destruction of phase-space islands of energetic particles in the canonical variables. Zonal flow and zonal field are found to affect wave-particle resonance in the nonlinear e-BAE simulations.
Progress in a number of research frontiers relies upon an accurate description of the transport coefficients of warm and hot dense matter, characterized by densities near those of solids and
temperatures ranging from several to hundreds of eV. Examples include inertial confinement fusion, evolution of giant planets, exoplanets, and other compact astrophysical objects such as white dwarf
stars, as well as numerous high energy density laboratory experiments. These conditions are too dense for standard plasma theories to apply and too hot for condensed matter theories to apply. The
challenge is to account for the combined effects of strong Coulomb coupling of ions and quantum degeneracy of electrons. This seminar will discuss the first theory to provide fast and accurate
predictions of ionic transport coefficients in this regime. The approach combines two recent developments. One is the effective potential theory (EPT), which is a physically motivated approach to
extend plasma kinetic theory into the strong coupling regime. The second is a new average atom model, which provides accurate radial density distributions at high-density conditions, accounting for
effects such as pressure ionization. Results are compared with state-of-the-art orbital-free density functional theory computations, revealing that the theory is accurate from high temperature
through the warm dense matter regime, breaking down when the system exhibits liquid-like behaviors. A number of properties are considered, including diffusion, viscosity and thermal conductivity.
Fishbone is one of the most important energetic particles driven modes in tokamaks. A numerical study of the nonlinear dynamics of fishbone has been carried out in this work. Realistic parameters
with finite toroidal plasma rotation are used to understand nonlinear frequency chirping in NSTX. We have carried out a systematic study of nonlinear frequency chirping and energetic particle
dynamics. It is found that, linearly, the mode is driven by both trapped particles and passing particles, with resonance condition $\omega_{d} \simeq \omega$ for trapped particles and $\omega_{\phi}+\omega_{\theta}\simeq\omega$ for passing particles, where $\omega_{d}$ is the trapped-particle toroidal precession frequency, and $\omega_{\phi}$, $\omega_{\theta}$ are the passing-particle transit frequencies in the toroidal and poloidal directions. As the mode grows, trapped resonant particles oscillate and move outward radially, which reduces the particle precessional frequency. We believe this is the main reason for the downward chirping of the mode frequency. Finally, as the mode frequency chirps down, initially non-resonant particles with lower precessional frequencies become resonant in the nonlinear regime. This effect can sustain the quasi-steady-state mode amplitude observed in the simulation.
The plasma current in ITER can be transferred from near thermal to relativistic electrons by the runaway phenomenon. If such a current of relativistic electrons were to strike the chamber walls in
ITER, the machine could be out of commission for many months. For ITER to be operable as a research device, the shortest credible time between such events must be years. The physics of the runaway
process is remarkably simple and clear. The major uncertainty is what range of plasma conditions may arise in post thermal quench ITER plasmas. Consequently, a focused effort that includes theory,
experiments, and engineering could relatively quickly clarify whether ITER will be operable with the envisioned mitigation strategy and what mitigation strategies could enhance the credibility that
ITER will be operable.
Since the work of Sauter in 1931, it has been known that quantum electrodynamics (QED) exhibits a so-called "critical" electromagnetic field scale, at which the quantum interaction between photons and macroscopic electromagnetic fields becomes nonlinear. One prominent example is the importance of light-light interactions in vacuum at this scale, which violates the superposition principle of classical electrodynamics. Furthermore, an electromagnetic field becomes unstable in this regime, as electron-positron pairs can be spontaneously created from the vacuum at the expense of electromagnetic-field energy (Schwinger mechanism). Unfortunately, the QED critical field scale is so high that experimental investigations are challenging. One promising pathway to explore QED in the nonlinear domain with existing technology consists in the combination of modern (multi-)petawatt optical laser systems with highly energetic particles. The suitability of this approach was first demonstrated in the mid-1990s at the seminal SLAC E-144 experiment. Since then, laser technology has developed continuously, implying the dawn of a new era of strong-field QED experiments. For instance,
the basic processes nonlinear Compton scattering and Breit-Wheeler pair production are expected to influence laser-matter interactions and in particular plasma physics at soon available laser
intensities. Therefore, a considerable effort is being undertaken to include these processes into particle-in-cell (PIC) codes used for numerical simulations.
In the first part of the talk the most prominent nonlinear QED phenomena are presented and discussed on a qualitative level. Afterwards, the mathematical formalism needed for calculations with strong plane-wave background fields is introduced with an emphasis on fundamental concepts. Finally, the nonlinear Breit-Wheeler process is considered in more depth. In particular, the semiclassical approximation is elaborated, which serves as a starting point for the implementation of quantum processes into PIC codes. | {"url":"https://theory.pppl.gov/news/seminars.php?scid=1&n=research-seminars","timestamp":"2024-11-07T20:23:15Z","content_type":"text/html","content_length":"642585","record_id":"<urn:uuid:4dd4b082-a2b9-43dc-86ab-02c24d467265>","cc-path":"CC-MAIN-2024-46/segments/1730477028009.81/warc/CC-MAIN-20241107181317-20241107211317-00711.warc.gz"}
An object is made of a prism with a spherical cap on its square shaped top. The cap's base has a diameter equal to the lengths of the top. The prism's height is 7 , the cap's height is 8 , and the cap's radius is 8 . What is the object's volume? | HIX Tutor
An object is made of a prism with a spherical cap on its square-shaped top. The cap's base has a diameter equal to the side length of the top. The prism's height is 7, the cap's height is 8, and the cap's radius is 8. What is the object's volume?
Answer 1
Volume of the object = π r² H + (2/3) π r³ ≈ 2479.7638 cubic units
Volume of the object = Volume of cylindrical prism + Volume of spherical cap (or hemisphere)
Answer 2
To find the volume of the object, first calculate the volume of the prism and then add the volume of the spherical cap.
Volume of prism = base area × height
Base area of the prism = side length of square × side length of square = (side length of square)²
Height of prism = 7
Volume of prism = (side length of square)² × 7
Next, calculate the volume of the spherical cap.
Volume of spherical cap = (π/3) × h² × (3r - h)
Where: h = height of the cap, r = radius of the cap
Given: h = 8, r = 8
Volume of spherical cap = (π/3) × 8² × (3×8 - 8) = (π/3) × 64 × 16 ≈ 1072.33
Finally, add the volume of the prism and the volume of the spherical cap to find the total volume of the object.
Total volume = Volume of prism + Volume of spherical cap
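As a quick numeric check (a sketch, not part of the original answers: it assumes the prism's square side equals the cap's base diameter, 2r = 16, and uses the cap formula (π/3)h²(3r - h)):

```python
import math

r, h_cap, h_prism = 8, 8, 7
side = 2 * r                                       # square side = cap base diameter
prism = side ** 2 * h_prism                        # 16^2 * 7 = 1792
cap = math.pi / 3 * h_cap ** 2 * (3 * r - h_cap)   # a hemisphere here, since h = r
total = prism + cap
print(round(total, 2))  # -> 2864.33
```

Note that Answer 1's 2479.76 instead models the base as a cylinder of radius 8 (π r² H); with a genuinely square prism the total comes out larger.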
| {"url":"https://tutor.hix.ai/question/an-object-is-made-of-a-prism-with-a-spherical-cap-on-its-square-shaped-top-the-c-33-8f9afa400c","timestamp":"2024-11-12T07:07:47Z","content_type":"text/html","content_length":"583237","record_id":"<urn:uuid:87773299-0817-4822-a98d-255f8079071e>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.58/warc/CC-MAIN-20241112045844-20241112075844-00566.warc.gz"}
Infosys Coding Exam Solutions | Specialist Programmer | Previous Year - OnlineStudy4U
Infosys Coding Exam Solutions | Specialist Programmer | Previous Year
Infosys Coding Exam Solutions – In this post we will solve coding problem statements from the Infosys Specialist Programmer & Power Programmer roles, including questions asked in previous Infosys exams.
Question Number 1 – Infosys Coding Exam Solutions: A user is tracking their steps toward a step goal. Given the target number of steps S, the steps completed so far D, and the number of days N the user has been tracking, find the average number of steps per day the user needs to take to reach the target.
Note: Round off the output to its nearest value.
Input Specification: Infosys Coding Exam Solutions
• input1: An integer value (S) representing the total number of daily steps.
• input2: An integer value (D) representing the number of steps the user has completed.
• input3: An integer value (N) representing the number of days the user has been tracking their steps.
Output Specification: Infosys Coding Exam Solutions
• Return an integer value representing the average number of steps the user needs to take per day to reach their target.
Function Description:
class UserMainCode(object):
    @classmethod
    def goalTracker(cls, input1, input2, input3):
        # write your code here
Example 1: Infosys Coding Exam Solutions
input1: 348
input2: 327
input3: 5
• Output: 4
• Explanation:
Here, S = 348, D = 327, and N = 5. The user needs to take 21 more steps (348 - 327 = 21). Since they have been tracking their steps for 5 days, the average steps per day needed to reach the target is 21/5 = 4.2, which rounds to 4. Hence, 4 is returned as output.
The output for the given example is 4.
Solution with Explanation – Infosys Coding Exam Solutions
To solve this problem, follow these steps:
1. Calculate Remaining Steps: Determine the number of steps the user still needs to take by subtracting the completed steps D from the total steps S.
2. Calculate Average Steps Per Day Needed: Divide the remaining steps by the number of days N and round the result to the nearest integer.
Here’s the implementation of the solution:
class UserMainCode:
    @classmethod
    def goalTracker(cls, S, D, N):
        # Calculate remaining steps needed
        remaining_steps = S - D
        # Calculate average steps per day needed (round to nearest integer)
        average_steps_per_day = round(remaining_steps / N)
        return average_steps_per_day
Explanation: Infosys Coding Exam Solutions
1. Remaining Steps Calculation:
remaining_steps = S - D
• This calculates the steps left to reach the goal. For example, if S = 348 and D = 327, then remaining_steps is 348 - 327 = 21.
2. Average Steps Per Day Calculation:
average_steps_per_day = round(remaining_steps / N)
• This calculates the average steps needed per day by dividing the remaining steps by the number of days N and rounding to the nearest integer. For example, if remaining_steps is 21 and N = 5, then average_steps_per_day is round(21 / 5) = 4.
3. Return the Result:
return average_steps_per_day
By following these steps, the function correctly calculates the average number of steps the user needs to take per day to reach their target.
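The same logic can be run as a self-contained script (the plain function name below is illustrative, not part of the exam template):

```python
def goal_tracker(total_steps, completed_steps, days):
    """Average steps per day still needed, rounded to the nearest integer."""
    remaining = total_steps - completed_steps
    return round(remaining / days)

print(goal_tracker(348, 327, 5))  # -> 4
```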
Question Number 2
Problem Statement – Infosys Coding Exam Solutions: Given a binary array of length N, converting a 0 to 1 costs X coins and converting a 1 to 0 costs Y coins. Find the minimum number of coins needed to make all elements of the array equal (all 0s or all 1s).
Input Specification:
• input1: An integer N, representing the length of the binary array.
• input2: An integer X, representing the cost of converting 0 to 1.
• input3: An integer Y, representing the cost of converting 1 to 0.
• input4: An integer array of length N, representing the binary array.
Output Specification:
• Return an integer value representing the minimum amount of coins needed to convert the array as desired.
input1: 4
input2: 5
input3: 4
input4: [1, 1, 1, 0]
Here, the given array is [1, 1, 1, 0].
• The cost of converting the array to [1, 1, 1, 1] is 5 (converting the last 0 to 1).
• The cost of converting the array to [0, 0, 0, 0] is 12 (converting each 1 to 0).
Therefore, the optimal solution is to convert the last 0 to 1, resulting in a cost of 5.
Solution Explanation – Infosys Coding Exam Solutions
To solve this problem, we need to consider two potential conversion strategies:
1. Converting the entire array to all 0s.
2. Converting the entire array to all 1s.
We can calculate the cost for each strategy and then choose the one with the minimum cost:
• To convert the array to all 0s, sum the cost of converting every 1 to 0.
• To convert the array to all 1s, sum the cost of converting every 0 to 1.
Here’s the step-by-step approach in Python:
1. Initialize cost_to_convert_all_to_0 to 0. This will keep track of the cost to convert the array to all 0s.
2. Initialize cost_to_convert_all_to_1 to 0. This will keep track of the cost to convert the array to all 1s.
3. Iterate through the array:
• If the element is 0, add X to cost_to_convert_all_to_1.
• If the element is 1, add Y to cost_to_convert_all_to_0.
4. The minimum cost will be the lesser of the two computed costs.
Python Code – Infosys Coding Exam Solutions
class UserMainCode:
    @classmethod
    def minimumcost(cls, N, X, Y, arr):
        cost_to_convert_all_to_0 = 0
        cost_to_convert_all_to_1 = 0
        for num in arr:
            if num == 0:
                cost_to_convert_all_to_1 += X
            elif num == 1:
                cost_to_convert_all_to_0 += Y
        return min(cost_to_convert_all_to_0, cost_to_convert_all_to_1)

# Example usage:
N = 4
X = 5
Y = 4
arr = [1, 1, 1, 0]
print(UserMainCode.minimumcost(N, X, Y, arr))  # Output: 5
Detailed Explanation – Infosys Coding Exam Solutions
• Initialization: Start by initializing two variables to hold the conversion costs for both scenarios.
• Iteration: Traverse through each element in the array. Depending on the value of the element (0 or 1), add the respective conversion cost to the corresponding variable.
• Comparison: Finally, compare the total costs of the two scenarios and return the minimum value.
By following these steps, we ensure that we calculate the minimal cost required to convert the entire array to either all 0s or all 1s. | {"url":"https://onlinestudy4u.in/infosys-coding-exam-solutions/","timestamp":"2024-11-14T12:07:19Z","content_type":"text/html","content_length":"86818","record_id":"<urn:uuid:c6991419-6cad-4abf-a0b0-f9a33d011351>","cc-path":"CC-MAIN-2024-46/segments/1730477028558.0/warc/CC-MAIN-20241114094851-20241114124851-00877.warc.gz"} |
How to Use the BIN2HEX Function | Encyclopedia-Excel
The BIN2HEX function converts binary numbers into hexadecimal numbers.
= BIN2HEX(number, [places])
number - The binary number you wish to convert to a hexadecimal number
[places] - Optional, number of significant digits to pad the hexadecimal number
Note: binary numbers can be in text or number form, either 1001 or "1001" will work
The BIN2HEX function is part of the "Engineering" group of functions within Excel.
This function takes in binary numbers and converts them into hexadecimal
The binary number can be up to 10 characters (bits) in length; if the length exceeds this, a #NUM! error is returned.
This means the largest positive number that can be represented is "111111111", or 511.
When using the optional places argument, the places number pads the returned hexadecimal number with leading zeros.
Hexadecimal represents binary data more compactly than binary or decimal, as one hex digit can represent four binary digits (bits), making it efficient for displaying large values.
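The behavior described above can be sketched in Python for non-negative inputs (illustrative only; Excel's actual BIN2HEX additionally handles negatives via 10-bit two's complement):

```python
def bin2hex(number, places=None):
    """Convert a binary string/number to an uppercase hexadecimal string."""
    value = int(str(number), 2)          # parse the base-2 digits
    hex_str = format(value, "X")         # format as uppercase hexadecimal
    if places is not None:
        hex_str = hex_str.zfill(places)  # pad with leading zeros
    return hex_str

print(bin2hex(1001))       # -> 9
print(bin2hex("1111", 4))  # -> 000F
```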
What is a Binary Number
A binary number is a number expressed in the base-2 numeral system, which uses only two symbols, 0 and 1.
Each digit in a binary number is a bit, and the value of each bit is based on its position, with each position representing a power of 2, increasing as you move from right to left.
In this example, instead of the base increasing by 10x as we go up, the base is increased by 2x each step.
Each 0 and 1 value in the binary number is multiplied by the base, and the sum of all of the multiplied values is the corresponding decimal number.
What is a Hexadecimal Number
The hexadecimal number system uses base 16, and uses the standard 0-9 digits, but also includes the letters A-F to represent higher numbers.
This table shows the hexadecimal version of the standard decimal numbers that we are used to working with.
Because hexadecimals use base 16, as the number of digits increases, each place value increases by a power of 16: the first is 16^0 = 1, the next 16^1 = 16, then 16^2 = 256, and so on.
If we look at it visually, it would look like this:
In this example, the hexadecimal number 2B3 is calculated by multiplying the first value, "3", by 1, the second value, "B" (which represents 11), by 16, and the third value, "2", by 256, and then adding them all together to get 691.
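The same place-value arithmetic can be verified in a couple of lines (illustrative only):

```python
# Digits of 2B3 are 2, B (11), and 3, weighted by powers of 16.
total = 2 * 16 ** 2 + 11 * 16 ** 1 + 3 * 16 ** 0
print(total)           # -> 691
print(int("2B3", 16))  # -> 691, Python's built-in base-16 parser agrees
```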
How to Convert Binary Number to Hexadecimal
Let's say we have table of binary numbers, and need to convert them into hexadecimal.
The formula to convert from binary to hexadecimal would be:
If we plug this formula into the table, the formula returns the correct hexadecimal number.
Using the BIN2HEX function in the right column lets us easily convert multiple binary numbers at once. | {"url":"https://www.encyclopedia-excel.com/how-to-use-the-bin2hex-function","timestamp":"2024-11-13T11:16:16Z","content_type":"text/html","content_length":"1050487","record_id":"<urn:uuid:80bcdac8-af93-44b6-885d-0060920e5b0d>","cc-path":"CC-MAIN-2024-46/segments/1730477028347.28/warc/CC-MAIN-20241113103539-20241113133539-00361.warc.gz"} |
Nominal and Periodic Interest Rates
Hello and welcome. In this video, we will talk about interest rates. There are 2 kinds of interest rates you need to know about. Nominal rates and periodic rates. Let’s see what they are. First,
let’s look at the nominal interest rate. The Nominal Interest Rate always has a certain compounding frequency. It is written down as Jm, where m is the compounding frequency. The compounding
frequency tells us how many times per year the rate is compounded. For example, j2 – is the nominal interest rate compounded 2 times per year, it’s called nominal interest rate with semi-annual
compounding; This is the most common rate you will see in the course. J12 is the nominal interest rate compounded 12 times per year, it’s called nominal interest rate with monthly compounding. J365 –
that’s nominal interest rate compounded 365 times per year, or nominal interest rate with daily compounding Sometimes you may also encounter j4 – which is the nominal interest rate with quarterly
compounding. Another important rate that we will use in this course is j1. This is the nominal interest with annual compounding, and it’s also called effective rate. We will go over what this means
exactly later, for now just remember that j1 is a special rate and it is also called the effective rate. Now let’s talk about periodic interest rate. What is periodic interest rate? The periodic
interest rate is the rate per compounding period. Makes sense, right, periodic is rate PER PERIOD. For example, if you have a nominal interest rate with semi-annual compounding j2 = 8%, Your periodic
interest rate is the rate per compounding period, so per half of the year. So while the nominal interest rate is always the rate per year, the periodic rate is the rate per compounding period. The
periodic rate is written as i2 and it is simply half of j2, i2 = j2/2 = 4%. So we have 2 compounding periods here and each period we add 4% to the loan. Let’s look at another example. What if we have
a nominal interest rate with quarterly compounding, j4 = 8%. That means there are 4 compounding periods per year, and each period, each quarter, we add 2% to the loan, because our periodic rate i4 = j4/4 = 2%. So every compounding period we are adding 2% to the loan amount, and we do it 4 times. What if we have j12 = 12%? That's the nominal interest rate with monthly compounding. That means that our periodic monthly interest rate is j12/12; 12%/12 is 1%. So every month we add 1% to the loan, and we do it 12 times. So you can see that nominal and periodic rates are connected. If we know the nominal rate, we simply divide it by the number of compounding periods to get the periodic rate. The general formula is im (the periodic rate) equals jm (the nominal rate) divided by m, where m is the number of compounding periods. Like you saw in the examples, i2 = j2/2, i4 = j4/4, i12 = j12/12, and so on. You can also go the other way: if we know the periodic rate im, we can multiply it by the number of compounding periods m to get the nominal rate jm. So for example j4 = i4 x 4, j2 = i2 x 2, j12 = i12 x 12, and so on. Let's look at some examples of this reverse formula. Let's say you have a monthly periodic rate of 1%. That's your i12. From it, you can find the nominal interest rate with monthly compounding, j12: j12 is i12 multiplied by 12, so 1% x 12 = 12%. If you have a quarterly periodic rate of 1.5%, i4 = 1.5%, and the question is asking you for the nominal rate, you can find the nominal rate with quarterly compounding by multiplying your periodic rate by the number of compounding periods: j4 = i4 x 4 = 6%. Ok, we have learnt how periodic interest rates and nominal rates are connected and how to get one of them if you know the
other one.
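The two formulas from the video, im = jm/m and jm = im × m, amount to a one-line conversion each. Here is a small sketch with rates expressed in percent (the function names are illustrative):

```python
def periodic_rate(nominal, m):
    """i_m = j_m / m: the rate per compounding period."""
    return nominal / m

def nominal_rate(periodic, m):
    """j_m = i_m * m: the annualized nominal rate."""
    return periodic * m

print(periodic_rate(8, 2))    # -> 4.0  (j2 = 8%  gives i2 = 4%)
print(periodic_rate(12, 12))  # -> 1.0  (j12 = 12% gives i12 = 1%)
print(nominal_rate(1.5, 4))   # -> 6.0  (i4 = 1.5% gives j4 = 6%)
```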
| {"url":"https://bc-real-estate-math.teachable.com/courses/real-estate-math/lectures/850026","timestamp":"2024-11-05T13:44:20Z","content_type":"text/html","content_length":"173629","record_id":"<urn:uuid:2b641b96-a65b-4c14-8237-cccf693f08bf>","cc-path":"CC-MAIN-2024-46/segments/1730477027881.88/warc/CC-MAIN-20241105114407-20241105144407-00250.warc.gz"}
Physics:Angle modulation
Angle modulation is a class of carrier modulation that is used in telecommunications transmission systems. The class comprises frequency modulation (FM) and phase modulation (PM), and is based on
altering the frequency or the phase, respectively, of a carrier signal to encode the message signal. This contrasts with varying the amplitude of the carrier, practiced in amplitude modulation (AM)
transmission, the earliest of the major modulation methods used widely in early radio broadcasting.
In general form, an analog modulation process of a sinusoidal carrier wave may be described by the following equation:^[1]
[math]\displaystyle{ m(t) = A(t) \cdot \cos(\omega t + \phi(t))\, }[/math].
A(t) represents the time-varying amplitude of the sinusoidal carrier wave and the cosine-term is the carrier at its angular frequency [math]\displaystyle{ \omega }[/math], and the instantaneous phase
deviation [math]\displaystyle{ \phi(t) }[/math]. This description directly provides the two major groups of modulation, amplitude modulation and angle modulation. In amplitude modulation, the angle
term is held constant, while in angle modulation the term A(t) is constant and the second term of the equation has a functional relationship to the modulating message signal.
The functional form of the cosine term, which contains the expression of the instantaneous phase [math]\displaystyle{ \omega t + \phi(t) }[/math] as its argument, provides the distinction of the two
types of angle modulation, frequency modulation (FM) and phase modulation (PM).^[2] In FM the message signal causes a functional variation of the carrier frequency. These variations are controlled by
both the frequency and the amplitude of the modulating wave. In phase modulation, the instantaneous phase deviation [math]\displaystyle{ \phi(t) }[/math] of the carrier is controlled by the
modulating waveform, such that the principal frequency remains constant.
For frequency modulation, the instantaneous frequency of an angle-modulated carrier wave is given by the first derivative with respect to time of the instantaneous phase:
[math]\displaystyle{ \frac{d}{dt} [ \omega t + \phi(t) ] = \omega + \phi'(t) , }[/math]
in which [math]\displaystyle{ \phi'(t) }[/math] may be defined as the instantaneous frequency deviation, measured in rad/s.
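The FM/PM distinction can be made concrete numerically. In this sketch (illustrative parameters, not from the article), PM adds a phase deviation proportional to the message itself, while FM accumulates phase so that its derivative, the instantaneous frequency, follows the message:

```python
import math

fs = 8000        # sample rate in Hz (assumed for illustration)
fc = 1000.0      # carrier frequency, Hz
f_msg = 100.0    # message frequency, Hz
kp = 0.5         # PM phase-deviation constant, rad
kf = 200.0       # FM peak frequency deviation, Hz

pm_signal, fm_signal, phase = [], [], 0.0
for n in range(80):                                   # 10 ms of samples
    t = n / fs
    msg = math.sin(2 * math.pi * f_msg * t)
    # PM: instantaneous phase deviation phi(t) = kp * msg
    pm_signal.append(math.cos(2 * math.pi * fc * t + kp * msg))
    # FM: accumulate phase so that d(phi)/dt = 2*pi*kf*msg
    phase += 2 * math.pi * kf * msg / fs
    fm_signal.append(math.cos(2 * math.pi * fc * t + phase))
```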
In principle, the modulating signal in both frequency and phase modulation may either be analog in nature, or it may be digital. In general, however, when using digital signals to modify the carrier
wave, the method is called keying, rather than modulation.^[3] Thus, telecommunications modems use frequency-shift keying (FSK), phase-shift keying (PSK), or amplitude-phase keying (APK), or various
combinations. Furthermore, another digital modulation is line coding, which uses a baseband carrier, rather than a passband wave.
The methods of angle modulation can provide better discrimination against interference and noise than amplitude modulation.^[2] These improvements, however, are a tradeoff against increased bandwidth.
Frequency modulation
Frequency modulation is widely used for FM broadcasting of radio programming, and largely supplanted amplitude modulation for this purpose starting in the 1930s, with its invention by American
engineer Edwin Armstrong in 1933.^[4] FM also has many other applications, such as in two-way radio communications, and in FM synthesis for music synthesizers.
Phase modulation
Phase modulation is important in major application areas including cellular and satellite telecommunications, as well as in data networking methods, such as in some digital subscriber line systems,
and WiFi.
The combination of phase modulation with amplitude modulation, practiced as early as 1874 by Thomas Edison in the quadruplex telegraph for transmitting four signals, two each in both directions of
transmission, constitutes the polar modulation technique.
Further reading
• Bell Telephone Laboratories, Transmission Systems for Communications, 5th Edition, Holmdel, NJ, 1982, Chapter 6—Signal Conditioning, p.93.
Original source: https://en.wikipedia.org/wiki/Angle modulation. | {"url":"https://handwiki.org/wiki/Physics:Angle_modulation","timestamp":"2024-11-03T19:56:00Z","content_type":"text/html","content_length":"44379","record_id":"<urn:uuid:2d1d9514-82c4-4ee4-9fd1-8be62cfb0cab>","cc-path":"CC-MAIN-2024-46/segments/1730477027782.40/warc/CC-MAIN-20241103181023-20241103211023-00328.warc.gz"}
Solving Cryptarithmetic Problems Using Parallel Genetic Algorithm
Re: Solving Cryptarithmetic Problems Using Parallel Genetic Algorithm
Re: Solving Cryptarithmetic Problems Using Parallel Genetic Algorithm
But have you done it in M already?
In mathematics, you don't understand things. You just get used to them.
If it ain't broke, fix it until it is.
Always satisfy the Prime Directive of getting the right answer above all else.
No, but I'd like to see it done.
'And fun? If maths is fun, then getting a tooth extraction is fun. A viral infection is fun. Rabies shots are fun.'
'God exists because Mathematics is consistent, and the devil exists because we cannot prove it'
I'm not crazy, my mother had me tested.
Re: Solving Cryptarithmetic Problems Using Parallel Genetic Algorithm
Me too!
Re: Solving Cryptarithmetic Problems Using Parallel Genetic Algorithm
Let's pick a random person and ask him to do it.
Re: Solving Cryptarithmetic Problems Using Parallel Genetic Algorithm
I do not know any random people.
Re: Solving Cryptarithmetic Problems Using Parallel Genetic Algorithm
Have you seen a note called Agnishom's Wikis?
Re: Solving Cryptarithmetic Problems Using Parallel Genetic Algorithm
I have not. Who is this fellow and why does he have your first name?
| {"url":"https://mathisfunforum.com/viewtopic.php?pid=354773","timestamp":"2024-11-03T06:05:32Z","content_type":"application/xhtml+xml","content_length":"16262","record_id":"<urn:uuid:a27255a8-b855-4a54-aebb-f13e5b9be912>","cc-path":"CC-MAIN-2024-46/segments/1730477027772.24/warc/CC-MAIN-20241103053019-20241103083019-00286.warc.gz"}
David H. Eberly
According to our database, David H. Eberly authored at least 15 papers between 1991 and 2005.
On csauthors.net:
3D game engine architecture - engineering real-time applications with wild magic.
The Morgan Kaufmann series in interactive 3D technology, Elsevier Morgan Kaufmann, ISBN: 978-0-12-229064-0, 2005
3D game engine design - a practical approach to real-time computer graphics.
Morgan Kaufmann, ISBN: 978-1-55860-593-0, 2001
Zoom-Invariant Vision of Figural Shape: The Mathematics of Cores.
Comput. Vis. Image Underst., 1998
Ridges in Image and Data Analysis
Computational Imaging and Vision 7, Springer, ISBN: 978-94-015-8765-5, 1996
Scale-Space Boundary Evolution Initialized by Cores.
Proceedings of the Visualization in Biomedical Computing, 4th International Conference, 1996
The multiscale medial axis and its applications in image registration.
Pattern Recognit. Lett., 1994
Ridges for image analysis.
J. Math. Imaging Vis., 1994
Multiscale geometric image analysis: diffusion and cores and variable conductance diffusion and object calculation.
Proceedings of the Visualization in Biomedical Computing 1994, 1994
Volume registration using the 3D core.
Proceedings of the Visualization in Biomedical Computing 1994, 1994
The WMMR filters: a class of robust edge enhancers.
IEEE Trans. Signal Process., 1993
Statistical properties, fixed points, and decomposition with WMMR filters.
J. Math. Imaging Vis., 1992
Complete classification of roots to one-dimensional median and rank-order filters.
IEEE Trans. Signal Process., 1991
Adaptation of group algebras to signal and image processing.
CVGIP Graph. Model. Image Process., 1991
On gray scale image measurements : II. Surface area and volume.
CVGIP Graph. Model. Image Process., 1991
On gray scale image measurements : I. Arc length and area.
CVGIP Graph. Model. Image Process., 1991 | {"url":"https://www.csauthors.net/david-h-eberly/","timestamp":"2024-11-04T14:45:40Z","content_type":"text/html","content_length":"28524","record_id":"<urn:uuid:d61dec1a-56ee-4ee2-9579-2c5b28fc0c87>","cc-path":"CC-MAIN-2024-46/segments/1730477027829.31/warc/CC-MAIN-20241104131715-20241104161715-00587.warc.gz"} |
sla_gercond.f −
REAL function SLA_GERCOND (TRANS, N, A, LDA, AF, LDAF, IPIV, CMODE, C, INFO, WORK, IWORK)
SLA_GERCOND estimates the Skeel condition number for a general matrix.
Function/Subroutine Documentation
REAL function SLA_GERCOND (characterTRANS, integerN, real, dimension( lda, * )A, integerLDA, real, dimension( ldaf, * )AF, integerLDAF, integer, dimension( * )IPIV, integerCMODE, real, dimension( * )
C, integerINFO, real, dimension( * )WORK, integer, dimension( * )IWORK)
SLA_GERCOND estimates the Skeel condition number for a general matrix.
SLA_GERCOND estimates the Skeel condition number of op(A) * op2(C)
where op2 is determined by CMODE as follows
CMODE = 1 op2(C) = C
CMODE = 0 op2(C) = I
CMODE = -1 op2(C) = inv(C)
The Skeel condition number cond(A) = norminf( |inv(A)||A| )
is computed by computing scaling factors R such that
diag(R)*A*op2(C) is row equilibrated and computing the standard
infinity-norm condition number.
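For intuition, here is a conceptual sketch (not the LAPACK routine itself, which works from the LU factors and equilibration scalings) computing the Skeel condition number norminf(|inv(A)| |A|) for a small matrix:

```python
# 2x2 example in pure Python: A = [[4, 1], [2, 3]]
a, b, c, d = 4.0, 1.0, 2.0, 3.0
det = a * d - b * c
inv = [[d / det, -b / det], [-c / det, a / det]]   # inverse of a 2x2 matrix
A = [[a, b], [c, d]]
# Entrywise |inv(A)| times |A|
M = [[sum(abs(inv[i][k]) * abs(A[k][j]) for k in range(2))
      for j in range(2)] for i in range(2)]
skeel = max(sum(row) for row in M)   # infinity norm = max absolute row sum
print(round(skeel, 6))               # -> 3.0
```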
TRANS is CHARACTER*1
Specifies the form of the system of equations:
= ’N’: A * X = B (No transpose)
= ’T’: A**T * X = B (Transpose)
= ’C’: A**H * X = B (Conjugate Transpose = Transpose)
N is INTEGER
The number of linear equations, i.e., the order of the
matrix A. N >= 0.
A is REAL array, dimension (LDA,N)
On entry, the N-by-N matrix A.
LDA is INTEGER
The leading dimension of the array A. LDA >= max(1,N).
AF is REAL array, dimension (LDAF,N)
The factors L and U from the factorization
A = P*L*U as computed by SGETRF.
LDAF is INTEGER
The leading dimension of the array AF. LDAF >= max(1,N).
IPIV is INTEGER array, dimension (N)
The pivot indices from the factorization A = P*L*U
as computed by SGETRF; row i of the matrix was interchanged
with row IPIV(i).
CMODE is INTEGER
Determines op2(C) in the formula op(A) * op2(C) as follows:
CMODE = 1 op2(C) = C
CMODE = 0 op2(C) = I
CMODE = -1 op2(C) = inv(C)
C is REAL array, dimension (N)
The vector C in the formula op(A) * op2(C).
INFO is INTEGER
= 0: Successful exit.
i > 0: The ith argument is invalid.
WORK is REAL array, dimension (3*N).
IWORK is INTEGER array, dimension (N).
Univ. of Tennessee
Univ. of California Berkeley
Univ. of Colorado Denver
NAG Ltd.
September 2012
Definition at line 150 of file sla_gercond.f.
Generated automatically by Doxygen for LAPACK from the source code. | {"url":"https://manpag.es/SUSE131/3+SLA_GERCOND","timestamp":"2024-11-07T17:17:00Z","content_type":"text/html","content_length":"21667","record_id":"<urn:uuid:96f32543-e296-4b28-9f70-278dbe093ec0>","cc-path":"CC-MAIN-2024-46/segments/1730477028000.52/warc/CC-MAIN-20241107150153-20241107180153-00780.warc.gz"} |
area of a triangle worksheet 6th grade
Grade 6 Geometry Worksheets: Area of triangles | K5 Learning
Area of Right Triangle Worksheets
Area of Triangles Worksheets
Area and Perimeter of Triangles Worksheets - Math Monks
Area of Right Triangle Worksheets
Area of Triangles Worksheets
Area of Right Triangle Worksheets
Printable 6th Grade Area of a Triangle Worksheets | Education.com
Area of Triangles and Parallelograms Worksheets - Math Monks
Area of Non-Right Angled Triangles | PDF printable Measurement ...
50+ Area of a Triangle worksheets for 6th Class on Quizizz | Free ...
Area of Triangles Worksheets
Area of Triangles Worksheets
Area of Triangles Worksheets
Area of right-angled triangles | 3rd grade Math Worksheet ...
Area of Triangle worksheet | Live Worksheets
Area of Triangles Worksheets | PYP IB Grade 5
Y6 White Rose Supporting: Area of a Triangle Worksheet PDF
How to find Area of a Triangle | 6th Grade | Mathcation.com
Area of Triangles Worksheets
12 Free Area of a Triangle Worksheets | 80+ Area Problems
Geometry Worksheets | Triangle Worksheets
Angle Sum of a Triangle – 6th Grade Math Worksheet | Teach Starter
Area of 2D Shapes Worksheet | 6th Grade PDF Worksheets
Area of a Rectangle Word Problem worksheet | Grade1to6
Area of Triangles Worksheets
Grade 6 Geometry Worksheets: Classifying triangles | K5 Learning
Area Worksheets
Finding The Area Of A Triangle Worksheet
Area of a Triangle – Worksheet | Teach Starter
Area of Right Triangle Worksheets
Numeracy: Area of a triangle | Worksheet | PrimaryLeap.co.uk
Pythagorean Theorem Worksheet for 6th Grade | Lesson Planet | {"url":"https://worksheets.clipart-library.com/area-of-a-triangle-worksheet-6th-grade.html","timestamp":"2024-11-05T23:50:52Z","content_type":"text/html","content_length":"25841","record_id":"<urn:uuid:c6bcebd7-b5d6-4930-b92a-d6e623dfc1ad>","cc-path":"CC-MAIN-2024-46/segments/1730477027895.64/warc/CC-MAIN-20241105212423-20241106002423-00667.warc.gz"} |
Role: Software Engineer Intern
Stipend: BTech 65k per month, MTech 70k per month.
Benefits: Intern housing, Airfare and Insurance.
Eligible Courses and Branches:
B.Tech, M.Tech: CS, EC, EI, EE
No URs and Backlogs
CGPA >= 7
Number of Rounds: 2
Round 1: Coding Round
There were two medium-level problems: the first was based on hashing, and the second on hashing + strings + sorting, which I solved in 10 minutes. The coding round took place on HackerRank.
15 were shortlisted after Coding Round.
Round 2: Technical + HR
Two questions were asked in the technical round.
First one: Array Partition
Click here to Practice
Submit Problem to OJ
For the question above, I explained the exact logic behind it. There are a lot of edge cases in this question, so I described all the logic and edge cases, but I was not able to code it.
The other question was a medium-level one, which I solved very quickly with complete code, including header files, input handling, and every other minute detail.
Then they focused on my project and asked a lot of questions about it, such as how I built it, the logic behind it, and what my role was. I discussed my project in detail,
describing all the APIs I had built and how they work.
At the end some basic HR questions were also asked. And that is how the interview concluded.
Verdict: Selected
Tips: Don't get too comfortable during the interview.
Bro, I have some doubts. Can I get your number/contact details?
Can you please clarify why all occurrences of a particular element are clubbed into the same array?
Actually, we have to put all occurrences of a particular element into either A or B, because if we don't do that, A intersection B won't be the empty set. That's why I clubbed them.
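The clubbing logic from the answer above can be sketched in Python (the exact problem statement isn't reproduced in this post, so the function below, which splits the array into two parts that share no common value, is a guessed interpretation; `partition_disjoint` is our own name):

```python
from collections import defaultdict

def partition_disjoint(arr):
    """Split arr into lists A and B such that set(A) and set(B) are disjoint.

    All occurrences of a value are grouped ("clubbed") first, then whole
    groups are assigned to A and B alternately, so neither part ever
    shares a value with the other.
    """
    groups = defaultdict(list)
    for x in arr:
        groups[x].append(x)
    A, B = [], []
    for i, occurrences in enumerate(groups.values()):
        (A if i % 2 == 0 else B).extend(occurrences)
    return A, B

A, B = partition_disjoint([1, 2, 1, 3, 2, 4])
assert set(A).isdisjoint(set(B))
print(A, B)  # [1, 1, 3] [2, 2, 4]
```

Splitting a single group across both parts is what would break the empty-intersection requirement, which is the edge case discussed in the answer.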
How much is an ounce of gold worth?
How much is an ounce of gold worth
1912.60 USD
How much does 6 oz of gold weigh
Equivalent: 186.62 g. Six troy ounces of gold weigh about 186.62 grams (6 × 31.1035 g per troy ounce).
How much is an ounce of gold worth
The troy ounce belongs to a traditional system of measurement for precious metals known as troy weight. One troy ounce of gold is equal to 31.1035 grams (a standard avoirdupois ounce is 28.35 grams). How has the price of gold historically
compared? The price of gold has risen by about 4,750% since 1935, after President Franklin D. Roosevelt set the price of one ounce of gold at $35.
What is 6K gold price per ounce
Generally, karat gold comes in purities such as 24K, 22K, 18K, 14K, 10K and 6K, with 24K being pure (refined) gold. The price of 6-karat gold per ounce shown here is updated every
minute. Bid price: $457.45; ask price: $457.70; daily range: $453.60–$457.75. The karat is an ancient unit that indicates the purity of gold, a precious metal.
How much is 12 troy ounces in a pound of gold
At the time of this writing, the price of gold per ounce is $1,866. Since there are 12 troy ounces in a troy pound, the yellow metal sells for approximately $22,392 per troy pound ($1,866 ×
12). Why are troy ounces important?
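The troy-weight arithmetic above is easy to check; a small Python sketch (the $1,866/oz spot price is simply the figure quoted in this article, not a live value):

```python
GRAMS_PER_TROY_OZ = 31.1035      # one troy ounce in grams
TROY_OZ_PER_TROY_LB = 12         # twelve troy ounces per troy pound
SPOT_USD_PER_TROY_OZ = 1866      # price quoted above, not a live quote

# Weight of 6 troy ounces of gold in grams.
six_oz_grams = 6 * GRAMS_PER_TROY_OZ
print(f"{six_oz_grams:.2f} g")   # 186.62 g

# Value of one troy pound at the quoted spot price.
troy_lb_value = SPOT_USD_PER_TROY_OZ * TROY_OZ_PER_TROY_LB
print(f"${troy_lb_value:,}")     # $22,392
```

Note that a troy pound (12 troy oz) is lighter than an avoirdupois pound (16 oz), which is a common source of confusion when pricing bullion.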
Can you convert fluid ounces to ounces
To convert a fluid-ounce measurement to an ounce (weight) measurement, multiply the volume by 1.043176 times the density of the ingredient or material (relative to water). So the number of ounces equals the fluid ounces times 1.043176 times the density
of the ingredient or material.
How do you convert dry ounces to fluid ounces
To convert an ounce (weight) measurement to a fluid-ounce measurement, divide the weight by 1.043176 times the density of the ingredient or material.
Is 4 fluid ounces the same as 4 ounces
Think about a cup of flour and a cup of tomato sauce; both occupy the same volume (i.e. 8 fl oz), but they have very different weights (about 4 oz for the flour and
7.9 oz for the tomato sauce). So no, fluid ounces and ounces are not interchangeable.
How do you convert fluid ounces to ounces
How to convert fluid ounces to ounces: multiply the volume by 1.043176 times the density of the ingredient or material. Therefore, the weight in
ounces is equal to the fluid ounces times 1.043176 times the density of the ingredient or material.
Are liquid ounces the same as dry ounces
This is because dry and liquid ingredients are measured differently: liquids in fluid ounces, which measure volume, and dry ingredients in ounces, which measure weight.
How many ounces of gold are in a $10 gold piece
The $10 quarter-ounce coin has a diameter of 0.866 inches (22.00 mm), contains 0.2500 troy ounces of gold, and weighs 0.2727 troy ounces
(8.483 g). The $5 tenth-ounce coin is 0.650 inches (16.50 mm) in diameter, contains 0.1000 troy ounces of gold, and weighs 0.1091 troy ounces (3.393 g).
Ordinary Differential Equations - (Mathematical Physics) - Vocab, Definition, Explanations | Fiveable
Ordinary Differential Equations
from class:
Mathematical Physics
Ordinary differential equations (ODEs) are equations that relate a function to its derivatives, providing a mathematical framework to model how a quantity changes over time or space. They are
fundamental in understanding systems where changes depend on the current state of the system, such as in physics, engineering, and other applied fields. ODEs can represent simple relationships but
can also form complex systems that require advanced techniques for analysis and solutions.
congrats on reading the definition of Ordinary Differential Equations. now let's actually learn it.
5 Must Know Facts For Your Next Test
1. Ordinary differential equations can be classified into several types, including linear and nonlinear ODEs, with linear ODEs generally being easier to solve.
2. Systems of ODEs involve multiple equations that describe the relationships between different variables and their derivatives, allowing for more complex modeling.
3. Phase plane analysis is a powerful tool in studying systems of ODEs, as it provides insights into the system's behavior by visualizing trajectories and equilibrium points.
4. The existence and uniqueness theorem guarantees that under certain conditions, there exists a unique solution to an initial value problem for ODEs.
5. Numerical methods, such as Euler's method and Runge-Kutta methods, are often employed to approximate solutions for ODEs when analytical solutions are difficult or impossible to obtain.
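Euler's method mentioned in fact 5 is a one-line recurrence; a minimal sketch (the `euler` helper is our own name) for the initial value problem y' = -y, y(0) = 1, whose exact solution is e^(-t):

```python
import math

def euler(f, y0, t0, t1, n):
    """Approximate y(t1) for y' = f(t, y), y(t0) = y0, using n Euler steps."""
    h = (t1 - t0) / n
    t, y = t0, y0
    for _ in range(n):
        y += h * f(t, y)   # step along the tangent line at (t, y)
        t += h
    return y

# y' = -y with y(0) = 1; exact solution y(t) = exp(-t).
approx = euler(lambda t, y: -y, 1.0, 0.0, 1.0, 1000)
exact = math.exp(-1.0)
print(approx, exact)  # the error shrinks roughly like 1/n
```

Runge-Kutta methods refine the same idea by sampling the slope at several points per step, trading extra function evaluations for much faster error decay.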
Review Questions
• How do ordinary differential equations help us understand the dynamics of systems, particularly in terms of their time evolution?
□ Ordinary differential equations provide a mathematical framework to describe how systems change over time by relating quantities to their rates of change. This relationship allows for
modeling the dynamics of various phenomena, such as motion, population growth, and heat transfer. By solving these equations, we can predict future states of the system based on its current
conditions and understand stability and oscillatory behaviors.
• Discuss how phase plane analysis can be utilized to study the behavior of systems of ordinary differential equations.
□ Phase plane analysis is a graphical method used to visualize the behavior of systems described by ordinary differential equations by plotting variables against each other. This technique
enables us to identify equilibrium points and analyze their stability by observing how trajectories evolve around these points. It helps in understanding complex interactions between multiple
variables within a system and provides insight into phenomena such as limit cycles and bifurcations.
• Evaluate the significance of numerical methods in solving ordinary differential equations and their implications for real-world applications.
□ Numerical methods are crucial for solving ordinary differential equations when analytical solutions are not feasible due to complexity or nonlinearity. These methods provide approximate
solutions that can be calculated using computational tools, making it possible to analyze real-world systems such as fluid dynamics, population models, and mechanical vibrations. The ability
to generate numerical approximations allows researchers and engineers to model scenarios and make predictions based on mathematical principles, which is essential in fields like physics and engineering.
What is produced when you slice a cone with a plane that passes through only one nappe of the cone but is not parallel to an edge of the cone? - Answers
What is produced when you slice a cone with a plane that passes through only one nappe of the cone but that is not parallel to an edge of the cone?
The "conic section" that is produced when you slice a cone with a plane that passes through only one nappe of the cone but that is not parallel to an edge of the cone is known as an ellipse. In the
case where the plane is perpendicular to the axis of the cone, the ellipse becomes a circle. | {"url":"https://math.answers.com/math-and-arithmetic/What_is_produced_when_you_slice_a_cone_with_a_plane_that_passes_through_only_one_name_of_the_cone_but_is_not_parallel_to_in_edge_of_the_cone","timestamp":"2024-11-13T18:28:21Z","content_type":"text/html","content_length":"167267","record_id":"<urn:uuid:78dac5f8-9675-4c06-84fa-ffecc188ae1c>","cc-path":"CC-MAIN-2024-46/segments/1730477028387.69/warc/CC-MAIN-20241113171551-20241113201551-00102.warc.gz"} |
BZOJ 3926 [ZJOI2015] The Gods' Favoured Fantasy Township (SAM)
Order Of Operations In Mathematics Free Worksheets | Order of Operation Worksheets
Order Of Operations In Mathematics Free Worksheets
PEMDAS Rule Worksheets
Order Of Operations In Mathematics Free Worksheets – You may have heard of an Order of Operations Worksheet, but what exactly is it? Worksheets are a great way for
students to practice new skills and review old ones.
What is the Order Of Operations Worksheet?
An order of operations worksheet is a type of math worksheet that requires students to perform math operations. These worksheets are divided into three main sections: multiplication, addition,
and subtraction. They also include the evaluation of parentheses and exponents. Students who are still learning how to perform these tasks will find this type of worksheet useful.
The main purpose of an order of operations worksheet is to help students learn the correct way to solve math equations. If a student doesn't yet understand the concept of order of operations,
they can review it by referring to an explanation page. In addition, an order of operations worksheet can be divided into several categories based on its difficulty.
Another important function of an order of operations worksheet is to teach students how to perform PEMDAS operations. These worksheets start with simple problems covering the basic rules
and build up to more complex problems involving all of the rules. These worksheets are a great way to introduce young learners to the satisfaction of solving algebraic equations.
Why is Order of Operations Important?
One of the most important things you can learn in math is the order of operations. The order of operations ensures that the math problems you solve are consistent. This is important for exams
and real-life calculations. When solving a math problem, the order must begin with parentheses and exponents, followed by multiplication and division, and finally addition and subtraction.
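The precedence just described is exactly what Python's expression grammar implements; a quick sketch comparing PEMDAS evaluation with naive left-to-right evaluation (`left_to_right` is our own helper name):

```python
# PEMDAS: parentheses, exponents, multiplication/division, addition/subtraction.
# Python applies these rules natively when evaluating an expression.

pemdas_result = 2 + 3 * 4 ** 2    # exponent first: 2 + 3 * 16 = 50

def left_to_right(tokens):
    """Evaluate a flat token list strictly left to right, ignoring precedence."""
    ops = {"+": lambda a, b: a + b,
           "*": lambda a, b: a * b,
           "**": lambda a, b: a ** b}
    acc = tokens[0]
    for op, val in zip(tokens[1::2], tokens[2::2]):
        acc = ops[op](acc, val)
    return acc

# The same tokens without precedence give a very different answer:
naive_result = left_to_right([2, "+", 3, "*", 4, "**", 2])  # ((2+3)*4)**2 = 400
print(pemdas_result, naive_result)  # 50 400
```

The gap between 50 and 400 is the kind of inconsistency the order of operations exists to prevent.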
An order of operations worksheet is a great way to teach students the correct way to solve math equations. Before students begin using this worksheet, they may need to review
concepts related to the order of operations. To do this, they should review the concept page for the order of operations, which will give them an overview of the basic idea.
An order of operations worksheet can help students develop their skills in addition and subtraction. Teachers can use Prodigy as an easy way to differentiate practice and provide
engaging content. Prodigy's worksheets are an ideal way to help students learn about the order of operations. Teachers can start with the basic concepts of addition, division, and
multiplication to help students build their understanding of parentheses.
Order Of Operations In Mathematics Free Worksheets offer an excellent resource for young learners. These worksheets can be easily tailored to specific needs and come in three levels of
difficulty. The first level is easy, requiring students to practice the DMAS method on expressions containing four or more integers or three operators. The second level requires students to use the
PEMDAS method to simplify expressions using inner and outer parentheses, brackets, and curly braces.
The Order Of Operations In Mathematics Free Worksheets can be downloaded for free and printed out. They can then be worked through using addition, multiplication, division, and
subtraction. Students can also use these worksheets to review the order of operations and the use of exponents.
Related For Order Of Operations In Mathematics Free Worksheets | {"url":"https://orderofoperationsworksheet.com/order-of-operations-in-mathematics-free-worksheets/","timestamp":"2024-11-11T14:48:36Z","content_type":"text/html","content_length":"44312","record_id":"<urn:uuid:26f7ac6f-d7e8-4a17-8861-c231b5e7f521>","cc-path":"CC-MAIN-2024-46/segments/1730477028230.68/warc/CC-MAIN-20241111123424-20241111153424-00411.warc.gz"} |
Basal hydrofractures near sticky patches | Journal of Glaciology | Cambridge Core
1. Introduction
Crevasses are deep cracks in glaciers, ice sheets and ice shelves. Based on their position in the ice, they can be classified into surface crevasses and basal crevasses. On ice shelves, the
propagation of surface and basal crevasses is a precursor to calving and rifting, and thus affects the stability of the ice shelf (Bassis and Ma, 2015; Lai and others, 2020; Lipovsky, 2020). In
grounded ice, basal crevasses may have important influences on subglacial hydrology and glacier dynamics (Walter and others, 2013). Crevasses are also a potential explanation for seismic
activity detected in glaciers and ice sheets (Hudson and others, 2020). Clustered, deep icequakes with non-double-couple sources have been detected in alpine glaciers (Walter and others, 2013;
Helmstetter and others, 2015). Because these icequakes have a ‘substantial, if not dominant, isotropic component in the moment tensor’, they cannot be explained by shearing in stick-slip
motion along the ice–bed interface. Instead, Walter and others (2013) attributed these icequakes to opening-mode fractures deep within the ice. Basal crevasses have been studied less than
surface crevasses because observations of basal crevasses are few, and complicated, heterogeneous conditions characterise the ice–bed interface. The propagation of basal crevasses can be very
sensitive to basal conditions, including subglacial hydrological conditions and bedrock rigidity (Jimenez and Duddu, 2018).
Various observations document the existence of basal crevasses. At Bench Glacier, Alaska, crevasses with connections to the bed were detected by both drilling experiments and radar imaging
(Harper and others, 2010). Such crevasses, filled by high-pressure water, serve as important components of the subglacial hydrology system. In West Antarctica, Wearing and Kingslake (2019)
detected relic basal crevasses in grounded regions of the Henry Ice Rise in the Ronne Ice Shelf using ice-penetrating radar. To consistently explain the formation of these buried relic crevasses
and the ice rise, they proposed a conceptual model where the floating ice shelf re-grounds on high points of the bedrock, leading to upstream thickening and downstream crevassing. Here, the
ice–bed contacts are areas where the basal shear stress is higher than the nominally stress-free ice–sea water interface.
The complexity of basal crevasses arises from the heterogeneous condition of the ice–bed interface. Basal conditions of grounded ice sheets such as subglacial water pressure, basal shear stress
and temperature vary spatially and temporally. For some ice streams of the Antarctic Ice Sheet, rib-like patterns of high basal shear stress are inferred on the basis of inversions of surface data
(Sergienko and Hindmarsh, 2013). Other studies have argued that spatial and temporal variation of ice flow and surface velocity are a consequence of basal ‘sticky spots’ (referred to here as
sticky patches) that have higher basal shear stress than their surroundings (Stokes and others, 2007). The existence of sticky patches has a significant influence on the nonuniformity of
ice-flow dynamics (Wolovick and others, 2014).
Here we argue that for grounded regions of glaciers and ice sheets, variation of basal-stress conditions can promote basal crevassing, while high basal water pressure remains the prerequisite. We
quantify and explore the fracturing process induced by a combination of water pressure and sticky patches. This arrangement is different from the process of pure hydrofracturing, as considered by
Smith (1976) and Van der Veen (1998). At long timescales, glacial ice deforms as a viscous fluid, with a viscosity modelled by Glen's law (Glen, 1955), which states that the viscosity has a
power-law relation to the shear stress. The fracturing process, however, typically occurs on a shorter timescale over which ice behaves elastically. Therefore, linear elastic fracture mechanics
(LEFM) theory is widely used to explore the existence and growth of cracks in ice. Van der Veen (1998) applied LEFM to develop elastic models of surface and basal crevassing, with the assumption
that basal crevasses are mode-I (i.e. opening) cracks propagating vertically in ice. More recently, Jimenez and Duddu (2018) reassessed a key component of previous LEFM models of crevassing,
finding that the basal boundary conditions influence the crevasses. For grounded ice sheets, Jimenez and Duddu (2018) recommended a LEFM weight function from Tada and others (2000) to study
opening-mode fractures analytically.
The application of LEFM is not limited to opening-mode fractures, however. In several studies, LEFM is applied to calculating mixed-mode stress intensity factors (SIFs) in rift propagation on ice
shelves in two dimensions (Hulbe and others, 2010) and three dimensions (Lipovsky, 2020). Here, we use LEFM theory in the context of two-dimensional elasticity to explore the mixed-mode
(in-plane opening mode and shearing modes) basal crevasses arising from sticky patches.
The paper is organised as follows. In the model section we introduce the mathematical theory and its numerical implementation. The results section illustrates the essential mechanics and shows the
dependence of crack propagation on physical parameters. We first consider crack propagation under the mode-I fracture assumption followed by a consideration of curved fracture trajectories produced
by mixed-mode fracturing. We find that the propagation of basal crevasses is controlled by basal water pressure, the size of the sticky patch and the stress variation between the sticky patch and the
neighbouring bed. We then evaluate how these parameters affect the trajectories of basal crevasses. The discussion section explores the thermal consequences of basal crevasses, potential applications
of the model to real ice sheets and limitations of the model.
2. The model
We model a grounded ice sheet sliding due to gravity, influenced by a sticky patch. Under the assumption of plane strain (i.e. there is no cross-stream elastic strain), we model the ice sheet with a
two-dimensional, infinite elastic strip sliding down a slope. Figure 1 shows a schematic diagram of the mathematical model, which considers a finite length of the strip that extends across a sticky
patch. The total system is decomposed into two different sub-problems, based on the two components of the gravity vector in the tilted coordinate system. The first sub-problem addresses the sliding
state, where steady sliding is driven by the down-slope component of gravity $g_x$ and resisted by uniform basal stress (Fig. 1a). The second sub-problem is the sticky state, where the sticky patch
provides additional basal drag, leading to tensile stress that partially offsets the static compression due to $g_z$ (Fig. 1b).
In the first sub-problem, the ice strip with height $H$ and length $2L$ ($L \gg H$) is sliding at a constant speed. A uniform basal shear stress $\tau_0$, representing background basal drag, is imposed on $(-L,
L)$. The uniform basal stress $\tau_0$ exactly balances the down-slope component of gravity $g_x$, leading to steady sliding with internal deformation (to maintain torque balance). This is consistent with
the fact that ice-sheet motion can be decomposed into basal sliding and internal deformation (Van der Veen, 2013).
In the sticky state of panel (b), we focus on the influence of the sticky patch by treating it in superposition to the steady, sliding state. To model the stickiness of the sticky patch, an excess
shear stress $\Delta\tau$ is imposed on $x \in [-W, W]$ at $z = 0$. Since the down-slope component of gravity is already balanced by $\tau_0$ in panel (a), this excess traction needs to be balanced. We add a uniform
increment of traction $W\Delta\tau/L$ in the opposite direction on $x \in [-L, L]$ at $z = 0$ – along the whole bottom boundary of the finite ice strip. This extra increment of uniform traction vanishes for $W/L
\to 0$; for $W/L \ll 1$ it has little effect on basal stress estimates around the sticky patch.
In panel (b), the excess traction on $-W$ to $W$ at $z = 0$ creates tension on the downstream side of the sticky patch. If that tension, combined with basal water pressure, is large enough to overcome the
ice overburden pressure plus the fracture toughness of ice, basal crevassing should occur. In natural ice sheets, the basal drag on the sticky patch $\Delta\tau$ can be much larger than the background basal
drag $\tau_0$ (Sergienko and Hindmarsh, 2013). Therefore, in regions close to the sticky patch, the elastic deformation caused by the uniform basal drag is negligible compared with the effect of the
patch. From this point forward, we will only focus on the model in panel (b), which accounts for the effect of the sticky patch.
Based on LEFM theory, the SIFs are calculated and used to predict the maximum penetration of the crack (Broek, 1982). SIFs are parameters that describe the stress distribution and
magnitude close to the crack tip. A more detailed discussion of SIFs can be found in the following subsections.
2.1 2D elastic model of basal hydrofracture
Focusing on sub-problem (b) shown in Figure 1(b), the equation governing the conservation of momentum in ice is
(1)$${\boldsymbol \nabla}\cdot{\boldsymbol \sigma}-\rho_{\rm i} g_{z}\hat{{\boldsymbol z}} = {\bf 0},\; $$
where $\boldsymbol{\sigma}$ is the Cauchy stress tensor (tensile stress is positive), $\rho_{\rm i}$ is the density of ice, $g_z = g \cos\alpha$ is the $z$-component of gravity and $\hat{\boldsymbol{z}}$ is the unit vector in
the positive-$z$ direction. With the assumption that slope $\alpha \ll 1$, we take $g_z \approx g$ in the following calculations. Both side boundaries of the ice strip are loaded by a depth-dependent compression $\rho_{\rm i}g(z
- H)$ that represents the overburden stress due to the weight of the overlying ice,
(2)$${\boldsymbol \sigma}\cdot{\boldsymbol n} = \rho_{{\rm i}}g \left(z-H\right){\boldsymbol n}\quad \rm{\,for}\ x = \pm L,\; $$
where n is the outward-pointing unit normal vector of the domain. The top boundary is assumed to be traction free, neglecting the atmospheric pressure and other traction on the surface,
(3)$${\boldsymbol \sigma}\cdot{\boldsymbol n} = {\bf 0} \quad \rm{\,for}\ z = H.$$
On the ice–bed interface ($z = 0$), we impose a zero-displacement boundary condition in the $z$-direction and the excess traction $\Delta\tau$ in the $x$-direction,
(4)$${\boldsymbol u}\cdot{\boldsymbol n} = 0,\; $$
(5)$$\boldsymbol{t}\cdot\boldsymbol{\sigma}\cdot\boldsymbol{n} = \tau\left(x, 0\right) = \begin{cases} -\dfrac{W}{L}\Delta\tau & \left\vert x\right\vert \geq W,\\[4pt] \left(1-\dfrac{W}{L}\right)\Delta\tau & \left\vert x\right\vert < W, \end{cases}$$
where $\Delta\tau$ represents the stress variation caused by a sticky patch, $\boldsymbol{u}$ is the displacement and $\boldsymbol{t} = -\hat{\boldsymbol{x}}$ is the unit tangent vector to the boundary. On the bottom, $\boldsymbol{t}$
is in the negative $x$-direction. The extra term $-W\Delta\tau/L$ is for force balance, as discussed above. Physically, we balance the excess shear stress on the sticky patch using a uniform stress on a much
larger area. In our computation, the ratio $W/L$ is set to be 0.1, which is small enough to make the effect of the extra term unimportant.
Crack walls are loaded by water pressure that is, in general, dependent on the subglacial hydrology. In this study, we assume static basal water pressure and represent subglacial hydrology in terms
of the flotation fraction, which is the ratio of basal water pressure to basal overburden pressure,
(6)$$f = \frac{p_{\rm w}}{\rho_{\rm i}g H} = \frac{\rho_{\rm w}g H_{\rm w}}{\rho_{\rm i}g H},$$
where $H_{\rm w}$ is the piezometric head measured in borehole experiments as $p_{\rm w} = \rho_{\rm w}gH_{\rm w}$ (Harper and others, 2010). The fracture is assumed to
initiate at $x = W$, where tensile stress is maximum, and reach a height $Z_C$ that is to be determined. Using Eqn (6), the boundary conditions on the crack walls ($x = W$, $0 \le z \le Z_C$) can be written
in terms of $f$ and $z$,
(7)$${\boldsymbol \sigma}\cdot{\boldsymbol n} = \left\{\matrix{ \displaystyle \rho_{{\rm i}}g H\left({\rho_{{\rm w}}\over \rho_{{\rm i}}}{z\over H} - f\right){\boldsymbol n} & \rm{at}\ {\it z} < {\it
H}_{\rm w},\; \cr {\bf 0}\hfill & \rm{at}\ {\it z} \ge {\it H}_{\rm w}. }\right.$$
The transition at $z = H_{\rm w}$ occurs because water rises to that height in the crevasse.
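As a concrete reading of Eqns (6)–(7), the crack-wall loading can be evaluated directly from the flotation fraction. The following is a minimal Python sketch (not the authors' code); the density and gravity values are assumed:

```python
# Sketch of the crack-wall loading in Eqn (7); constants are assumed values.
RHO_I = 917.0    # ice density, kg m^-3
RHO_W = 1000.0   # water density, kg m^-3
G = 9.81         # gravitational acceleration, m s^-2

def wall_traction(z, H, f):
    """Normal traction (sigma.n, Pa) on the crack wall at height z.

    Below the water level H_w = f*(rho_i/rho_w)*H the wall is loaded by
    water pressure (negative, i.e. compressive); above it, traction free.
    """
    H_w = f * (RHO_I / RHO_W) * H   # water level implied by Eqn (6)
    if z < H_w:
        return RHO_I * G * H * ((RHO_W / RHO_I) * (z / H) - f)
    return 0.0
```

Note that the piecewise expression is continuous: the traction vanishes exactly at the water level $z = H_{\rm w}$.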
In formulating the plane-strain elastic constitutive relation, we assume that strain occurs in response to deviations from the overburden stress, as suggested by Cathles (Reference Cathles2015). This
approach was used by Lipovsky (Reference Lipovsky2020) to model rift propagation in floating ice. A perturbation stress tensor T (sometimes referred to as resistive stress in glaciology), is
introduced as the total Cauchy stress tensor σ minus the ice overburden stress,
(8)$${\boldsymbol T} = {\boldsymbol \sigma} + p_{{\rm i}}{\boldsymbol I},\; $$
in which $p_{\rm i} = \rho_{\rm i}g(H - z)$ accounts for the ice overburden pressure and $-p_{\rm i}{\boldsymbol I}$ is the ice overburden stress. The elastic constitutive law linearly relates the strain to the perturbation stress
(9)$${\boldsymbol T} = {E\nu\over \left(1 + \nu\right)\left(1-2\nu\right)}\,{\rm tr}\left({\boldsymbol \epsilon}\right)\,{\boldsymbol I} + {E\over 1 + \nu}{\boldsymbol \epsilon},\; $$
(10)$${\boldsymbol \epsilon}( \boldsymbol u) = {1\over 2}\left(\nabla \boldsymbol u + \nabla \boldsymbol u^{T} \right).$$
To define the constitutive law, two parameters are needed to account for ice properties. Here we use a Young's modulus E = 10 GPa and a Poisson's ratio ν = 0.33 (Van der Veen, Reference Van der Veen1998). Further details associated with the problem description in terms of the perturbation stress are provided in Appendix A.
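The constitutive relation in Eqns (9)–(10) maps a displacement gradient to the perturbation stress. A minimal sketch of that mapping, using the stated material constants (function and variable names are illustrative):

```python
import numpy as np

# Plane-strain constitutive law, Eqns (9)-(10); a sketch, not the model code.
E, NU = 10e9, 0.33   # Young's modulus (Pa) and Poisson's ratio, as in the text

def perturbation_stress(grad_u):
    """Perturbation stress T (2x2, Pa) from a 2x2 displacement gradient."""
    eps = 0.5 * (grad_u + grad_u.T)               # strain tensor, Eqn (10)
    lam = E * NU / ((1 + NU) * (1 - 2 * NU))      # first Lame parameter
    return lam * np.trace(eps) * np.eye(2) + E / (1 + NU) * eps   # Eqn (9)
```

For a uniaxial horizontal strain, the sketch reproduces the familiar plane-strain result that $T_{xx} = (\lambda + 2\mu)\,\epsilon_{xx}$ while $T_{zz} = \lambda\,\epsilon_{xx}$ remains nonzero.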
2.2 Stress intensity factors
In LEFM theory, the SIFs $K_{\rm I}$, $K_{\rm II}$ are used to describe the additional stress near the tip of a crack that is due to the presence of the crack (Broek, Reference Broek1982),
(11)$$\sigma_{i j}^{{I}}( r,\; \theta) = {K_{{I}}\over \sqrt{2\pi r}}f_{i j}^{{I}}\left(\theta\right) + {O}\left(r^{{1\over 2}}\right),\; $$
(12)$$\sigma_{i j}^{{II}}( r,\; \theta) = {K_{{II}}\over \sqrt{2\pi r}}f_{i j}^{{II}}\left(\theta\right) + {O}\left(r^{{1\over 2}}\right),\; $$
where I, II represent the two in-plane crack modes (opening and shearing, respectively), r and θ are coordinates in a local plane-polar coordinate system with origin at the crack tip, and $f_{ij}^{{I}}$, $f_{ij}^{{II}}$ are tensor-valued functions describing the angular dependence of the excess stress. The $O( r^{1/2})$ terms are related to the far-field stress and are omitted when r is small.
The predicted extent of the crack that is in equilibrium with the total stress field, including its direction (if not constrained to be vertical), is determined by the criterion of crack propagation.
The broadly accepted G-criterion states that cracks grow in the direction along which the maximum potential energy is released (Broek, Reference Broek1982). In the context of this study, the
potential energy is the strain energy stored in the elastic deformation. The elastic energy released when a crack extends is measured by the release rate G, defined as the energy released per unit of
crack extension. The crack extends when G is greater than or equal to $G_{\rm C}$, a threshold value for crack growth. Equilibrium is attained when this condition is no longer met and the crack ceases to grow.
G is related to the SIFs by
(13)$$G = G_{{I}} + G_{{II}} = {1-\nu^{2}\over E}\left(K_{{I}}^{2} + K_{{II}}^{2}\right),\; $$
where $G_{\rm I}$ and $G_{\rm II}$ denote the energy release rates for the mode-I and mode-II components, with $G_{\rm C}$ given by
(14)$$G_{{\rm C}} = {1-\nu^{2}\over E}K_{{I}, {\rm C}}^{2},\; $$
where $K_{\rm I,C}$ is the fracture toughness, a material property measured in experiments. The G-criterion ($G > G_{\rm C}$) can be expressed in terms of the SIFs as $K = \sqrt {K_{{I}}^{2} + K_{{II}}^{2}}> K_{{I}, {\rm C}}$. Therefore, extension of a crack occurs when the total SIF exceeds the fracture toughness $K_{\rm I,C}$. We assume that $K_{\rm I,C}$ = 100 kPa m$^{1/2}$ (Rist and others, Reference Rist1996), an estimate widely used in ice-fracture problems.
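The mixed-mode criterion built from Eqns (13)–(14) reduces to comparing the total SIF with the fracture toughness. A hedged sketch, with the parameter values used in this study as defaults:

```python
# Mixed-mode G-criterion, Eqns (13)-(14); a sketch with the paper's defaults.
def crack_grows(K_I, K_II, K_IC=100e3, E=10e9, nu=0.33):
    """True if the energy release rate G exceeds the threshold G_C."""
    G = (1 - nu**2) / E * (K_I**2 + K_II**2)   # Eqn (13)
    G_C = (1 - nu**2) / E * K_IC**2            # Eqn (14)
    return G > G_C   # equivalent to sqrt(K_I^2 + K_II^2) > K_IC
```

Because the plane-strain prefactor $(1-\nu^2)/E$ cancels, the comparison is independent of the elastic constants, as the text's SIF form of the criterion makes explicit.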
In the results section, we first assume a vertical, pure mode-I crack and employ a simple criterion for its propagation. In particular, we neglect the mode-II component by assuming that K [II] = 0.
Thus the criterion for crack extension reduces to K [I] > K [I,C]; if satisfied, the crack can grow vertically in length. Later we reconsider mixed-mode crack growth with K [II] ≠ 0 and assess the
difference associated with the different criteria.
2.3 Non-dimensionalisation
In the results section we focus on solutions of non-dimensional problems; we denote the non-dimensional version of a variable using the same symbol appended with a prime ′. We choose to scale length
(and displacement) by H and stress by ρ [i]gH. Hence, the conservation of momentum (Eqns (1)–(7)) and stress intensity factors are non-dimensionalised by the following scales:
(15)$$\boldsymbol x^{\prime} = {\boldsymbol x\over H},\; \quad \boldsymbol u^{\prime} = {\boldsymbol u\over H},\; \quad {\boldsymbol \sigma}^{\prime} = {{\boldsymbol \sigma}\over \rho_{{\rm i}}g H},
\; \quad \Delta\tau^{\prime} = {\Delta\tau\over \rho_{\rm i} g H},\; \quad K_{{I}, {II}, {\rm C}}^{\prime} = {K_{{I}, {II}, {\rm C}}\over \rho_{\rm i} g H^{3/2}}.$$
Besides the flotation fraction representing basal water pressure, there are two important non-dimensional parameters that control fracturing, W′ = W/H and Δτ′, which represent the size and excess
shear stress of the sticky patch, respectively. The non-dimensional SIFs and non-dimensional fracture toughness are used in the fracture criterion. Note that although the dimensional fracture toughness is a material property measured by experiments, the non-dimensional fracture toughness is scaled by $\rho_{\rm i}gH^{3/2}$, giving it a dependence on the ice-sheet thickness.
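The thickness dependence noted here follows directly from the scaling in Eqn (15); a two-line sketch with assumed constants illustrates it:

```python
# Non-dimensional fracture toughness from Eqn (15); constants are assumed.
RHO_I, G, K_IC = 917.0, 9.81, 100e3   # kg m^-3, m s^-2, Pa m^{1/2}

def K_IC_prime(H):
    """Non-dimensional fracture toughness for ice thickness H (m)."""
    return K_IC / (RHO_I * G * H**1.5)
```

A tenfold increase in thickness reduces the non-dimensional toughness by a factor of $10^{3/2} \approx 31.6$, so thick ice is comparatively easier to fracture at a given non-dimensional stress.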
2.4 Numerical implementation
For the vertical-line-crack problem, the governing equations are solved using an open-source finite element (FE) software library, FEniCS (Logg and Wells, Reference Logg and Wells2010; Logg and
others, Reference Logg, Mardal and Wells2012; Langtangen and Logg, Reference Langtangen and Logg2017), with meshes generated by Gmsh (Geuzaine and Remacle, Reference Geuzaine and Remacle2009). The
basal crevasse is represented by a straight, triangular notch in the mesh, perpendicular to the bottom boundary. The mesh is locally refined near the tip of the notch and the bottom boundary. The
mesh is composed of triangular elements, with element size varying from 5 × 10^−3 times the crack length (near the tip) to 0.2 ice thicknesses (away from the tip). For a 1000 m-thick ice sheet with
a 500 m-high basal crevasse, the element sizes are 2.5 m near the tip and 200 m away from the tip. SIFs are calculated by the displacement correlation method (DCM) (Chan and others, Reference Chan,
Tuba and Wilson1970; Banks-Sills and Sherman, Reference Banks-Sills and Sherman1986) with Richardson extrapolation (Guinea and others, Reference Guinea, Planas and Elices2000). The code is
benchmarked by comparison with a weight function from Jimenez and Duddu (Reference Jimenez and Duddu2018) (see Appendix B).
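As a rough illustration of the DCM-with-extrapolation idea (not the benchmarked implementation), one can estimate $K_{\rm I}$ from the near-tip crack opening and cancel the leading-order error with a first-order Richardson step. Here `crack_opening` is a hypothetical callback returning the wall-to-wall opening a distance r behind the tip:

```python
import math

# Sketch of SIF recovery by displacement correlation (DCM) with first-order
# Richardson extrapolation. Plane-strain near-tip opening is assumed to
# follow du = (8 K_I / E') * sqrt(r / 2pi), with E' = E / (1 - nu^2).
E, NU = 10e9, 0.33
E_PLANE = E / (1 - NU**2)   # plane-strain effective modulus

def K_I_dcm(crack_opening, r):
    """Point estimate of K_I from the opening a distance r behind the tip."""
    return E_PLANE / 8.0 * math.sqrt(2 * math.pi / r) * crack_opening(r)

def K_I_richardson(crack_opening, r):
    """Cancel the O(r) error of two DCM estimates, extrapolating r -> 0."""
    return 2 * K_I_dcm(crack_opening, r / 2) - K_I_dcm(crack_opening, r)
```

In a finite-element setting the opening would come from nodal displacements on the crack faces; the extrapolation step assumes the DCM error is linear in r, which is the usual premise of Richardson extrapolation.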
The FEniCS code relies on meshing the interior of the elastic domain, and requires local mesh refinement at the fracture tip to resolve the stress singularity there. To perform simulations of
fracture growth where the path is not specified a priori, this method would require repeated remeshing. Dynamic remeshing is feasible with specialised software, but it can be unreliable and incurs a
significant computational cost. To avoid such issues and to provide an additional means to verify the numerical results, we use a separate code for consideration of curved fracture trajectories. This
code employs the displacement discontinuity method (DDM) (Crouch and Starfield, Reference Crouch and Starfield1983), a scheme that is based on the boundary element method (BEM).
In the DDM approach, only boundaries subject to given conditions must be meshed. The interior of the domain is assumed to be a uniform, linear elastic material that deforms according to Green's
functions forced by the facets of the boundary mesh. This method avoids reliance on meshing software. To extend the fracture at a given step, a new straight-line segment is connected to the former
tip of the fracture. The orientation of this new segment is defined by the optimal direction of fracture growth. The new segment must have an appropriately small length, the choice of which defines
the resolution of the model. At each growth step, we test for multiple orientations of growth and choose that with the highest strain-energy release rate, as defined by the G-criterion (Dahm,
Reference Dahm2000). The BEM code used here is developed by Davis (Reference Davis2017). As with the FE models, in the DDM models we subject the base of the glacier domain to the condition of no
vertical displacement.
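The growth step just described can be sketched as a one-dimensional search over trial tip orientations. Here `energy_release_rate` stands in for a call to the elastic (BEM) solver and is hypothetical:

```python
import math

# Sketch of the growth step: trial segments at several orientations
# (relative to the current tip segment), keep the one maximising G.
def best_growth_angle(energy_release_rate, n_trials=19, spread=math.pi / 2):
    """Scan candidate tip angles in [-spread, spread] and return the
    (angle, G) pair for the direction maximising the release rate G."""
    angles = [-spread + 2 * spread * i / (n_trials - 1) for i in range(n_trials)]
    return max(((a, energy_release_rate(a)) for a in angles), key=lambda t: t[1])
```

The angular resolution (`n_trials`) trades solver calls against accuracy of the growth direction, in the same way the segment length sets the spatial resolution of the path.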
3. Results
Results are presented in two parts. The first is for vertical fractures and uses the FE model; the second is for mixed-mode, curving fractures and uses the DDM model.
3.1 Mode-I fracture growth
We consider a vertical, mode-I fracture as part of a reference case where the sticky patch has a width of twice the ice thickness (W′ = 1). The non-dimensional excess shear stress is Δτ′ = 0.3,
which means the dimensional excess shear stress is 0.3ρ [i]gH. The reference flotation fraction is f = 0.7, which we think is a reasonable estimate (Engelhardt and others, Reference Engelhardt,
Humphrey, Kamb and Fahnestock1990; Harper and others, Reference Harper, Bradford, Humphrey and Meierbachtol2010; Andrews and others, Reference Andrews2014). Cases with larger f (0.7 ≤ f ≤ 1) are
also included in the following results. Figure 2 shows the three components of the perturbation stress that arise due to the sticky-patch excess stress and an existing basal crevasse. Among these, we
are most interested in $T_{xx}^{\prime }$ and $T_{xz}^{\prime }$, which are related to the fracture propagation. As shown in panel (a), there is horizontal tension concentrated on the downstream
(right) end of the sticky patch and compression concentrated on the upstream (left) end. In panel (b), the vertical normal stress $T_{zz}^{\prime }$ is also concentrated near the ends of the sticky
patch and the crack tip. This is a consequence of the no-vertical-displacement condition Eqn (4) imposed on the bottom boundary. Panel (c) shows the pattern of shear stress, which is localised to a
region around the sticky patch. In this case, the stress intensity is $K_{\rm I}/K_{\rm I,C} = 8.75$. Thus the crack is unstable and will continue to grow vertically.
We are interested in the maximum stable crack length permitted by the fracture criterion, and how this depends on ice-sheet thickness and sticky-patch width. To investigate, we extend our calculation
to cases with varying crack length $Z_{{\rm C}}^{\prime }$ and assume that the criterion for crack growth is $K_{{I}}^{\prime }> K_{{I}, {\rm C}}^{\prime }$ (Van der Veen, Reference Van der Veen1998
). Figure 3 shows the normalised SIF K [I]/K [I,C] as a function of $Z_{{\rm C}}^{\prime }$ near the sticky patch for the reference case (W′ = 1, Δτ′ = 0.3) and another case with smaller Δτ′ = 0.2.
Comparison of the two cases shows that increasing Δτ′ leads to larger K [I]. In each case, with the crack length $Z_{{\rm C}}^{\prime }$ increasing, $K_{{I}}^{\prime }$ increases to its maximum due
to tension near the sticky patch, then begins to drop and eventually reaches negative values due to overburden pressure. The maximum crack length is the value of $Z_{{\rm C}}^{\prime }$ such that $K_
{{I}}^{\prime } = K_{{I}, {\rm C}}^{\prime }$ (i.e. the intersection of the $K_{{I}}^{\prime }$ curve and vertical $K_{{I}, {\rm C}}^{\prime }$ line in Fig. 3). For the crack with this value of $Z_
{{\rm C}}^{\prime }$ to be in a stable equilibrium, the crack should be resistive to perturbations. If we add a positive perturbation $\Delta Z_{{\rm C}}^{\prime }> 0$ to the crack length $Z_{{\rm
C}}^{\prime }$, the perturbed crack length should return to the stable state, which means $K_{{I}}^{\prime }( Z_{{\rm C}}^{\prime } + \Delta Z_{{\rm C}}^{\prime }) < K_{{I}, {\rm C}}^{\prime }$.
Thus, the condition for a stable crack is ${{\rm d} K_{{I}}}^{\prime }/{{\rm d} Z_{{\rm C}}^{\prime }}< 0$. Note that $K_{\rm I} < K_{\rm I,C}$ at small crack lengths. To propagate a basal crevasse, an initial flaw or crack of adequate length and orientation is required (Van der Veen, Reference Van der Veen1998). Here we simply assume a pre-existing vertical line crack with $K_{\rm I} > K_{\rm I,C}$.
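Numerically, the maximum stable length is the descending-branch crossing of the $K_{\rm I}^{\prime}(Z_{\rm C}^{\prime})$ curve with $K_{\rm I,C}^{\prime}$. A small sketch of locating that crossing on a tabulated curve (the data in the usage note are made up, not model output):

```python
# Sketch: find the maximum stable crack length from a tabulated K_I(Z_C)
# curve, i.e. the crossing where K_I falls back through K_IC (dK_I/dZ < 0).
def max_stable_length(Z, K, K_IC):
    """Return the last Z where K crosses below K_IC from above, or None."""
    for i in range(len(Z) - 1, 0, -1):
        if K[i - 1] >= K_IC > K[i]:            # descending crossing found
            # linear interpolation between the bracketing samples
            t = (K[i - 1] - K_IC) / (K[i - 1] - K[i])
            return Z[i - 1] + t * (Z[i] - Z[i - 1])
    return None
```

For example, with the made-up samples `Z = [0.1, 0.2, 0.3, 0.4, 0.5]` and `K = [0.5, 1.5, 2.0, 1.2, 0.5]` (normalised so that `K_IC = 1.0`), the crossing lies between the last two samples, near $Z \approx 0.43$.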
Figure 4 shows the maximum stable crack length $Z_{{\rm C, max}}^{\prime }$ as a function of dimensionless problem parameters Δτ′ and W′ for an ice thickness of 100 m. In panel (a), flotation
fraction is f = 0.7. The maximum crack length increases monotonically with W′ or Δτ′. For sticky patches with W′ = 1, a minimum Δτ′ ≈ 0.2 is required to have a crack with length $Z_{{\rm C}}^{\prime
}\sim 0.2$. Panel (b) shows another case where the ice is closer to flotation (i.e. f = 0.9). Here, fracture can be propagated by smaller values of W′ and Δτ′, compared with panel (a). The sticky
patch creates a localised horizontal tension that promotes the growth of the vertical line crack, as shown by the reference case in Figure 2. Evidently, this effect is sensitive to the
non-dimensional parameters W′ and Δτ′.
3.2 Mixed-mode fracture growth
A limitation of the models above is that they assume basal crevasses are mode-I cracks that only propagate vertically and, to simplify the calculation, neglect the mode-II component. This approach is
consistent with previous LEFM models of basal crevasses that are pure hydrofractures (Van der Veen, Reference Van der Veen1998). However, with excess shear stress arising from the sticky patch, basal
crevasses are expected to have both mode-I and mode-II contributions. Relaxing our assumption that the crack propagates vertically, we now consider the curved, quasi-static path of fractures using
the BEM implementation. Note that for curved fracture paths, we use the G-criterion for fracture growth: the fracture stops when either G < G [C] or when there is closure of fracture walls just
behind the fracture's tip (K [I] ≤ 0).
For an ice thickness H = 1000 m, we impose an initial vertical line crack with length 30 m and propagate the crack in its curved trajectories determined by the above criterion. Figure 5 shows the
mixed-mode fracture paths with different values of dimensionless excess shear stress Δτ′ and flotation fraction f. The three panels represent three sizes of the sticky patch: W′ = 0.1 for panel (a),
W′ = 1.0 for panel (b) and W′ = 10.0 for panel (c).
In panel (a) we consider a sticky patch whose width is one-fifth of the ice thickness (W′ = 0.1). There is a zoom-in plot of the fracture paths in the black box. The four curves are quasi-static
paths with four combinations of flotation fraction f and dimensionless excess shear stress. The paths deviate from the vertical path assumed in models above. In particular, they incline upstream
under the influence of the shear stress. Comparing the four curves in panel (a), we find that Δτ′ and f have little effect on the direction of the paths.
In panel (b), the horizontal extent shown is the same as in panel (a), and we keep the same combinations of f and Δτ′. With W′ = 1 (sticky-patch width twice the ice thickness; an order of magnitude larger than in panel (a)), the fractures align closer to the vertical and extend further into the ice sheet.
Panel (c) shows the fracture paths when W′ = 10. In this case the sticky patch is 20 times wider than the ice thickness. The magnitude of the excess shear stress Δτ′ required for fracture propagation
is reduced to about 0.035. Meanwhile, since $T_{xx}^{\prime }\gg T_{zz}^{\prime }$, the principal stress trajectories are nearly vertical lines, as indicated by the background stress field. Thus,
fracture paths can be approximated by vertical lines, as we did previously. The crack length becomes very sensitive to the magnitude of Δτ′, since a small perturbation to Δτ′ (from 0.033 to 0.036)
causes a large variation of the crack length.
It is possible to predict the trajectory of the curved cracks without solving the BEM model, using only stresses computed in an uncracked domain. This may be computationally convenient in combination
with Stokes-flow models of ice sheets (Krug and others, Reference Krug, Weiss, Gagliardini and Durand2014; Yu and others, Reference Yu, Rignot, Morlighem and Seroussi2017). As in the models discussed
above, the excess shear stress due to the sticky patch is imposed as a boundary condition on the uncracked domain. We then calculate the perturbation stress in the uncracked domain and plot the
principal stress trajectories as the dashed curves in Figure 5. We use the perturbation stress due to the excess shear only, because the overburden stress and water pressure are isotropic and do not
contribute to the orientation of the principal stresses. Comparison between these trajectories and the fracture paths predicted by the BEM shows that the former are accurate approximations of
fracture paths under the conditions considered here.
If there is no deviatoric stress from sources other than the sticky patch, the magnitude of Δτ′ does not contribute to the direction of the fracture path. Instead, the direction of trajectories
depends only on the ratio W′. This is consistent with the results predicted by the BEM and indicates that for real basal crevasses affected by sticky patches, the direction of the fracture is
predominantly controlled by the relative size of the sticky patch. Since the area of sticky patches can be of the order of 1–100 km^2 (Stokes and others, Reference Stokes, Clark, Lian and Tulaczyk
2007), we believe the panels in Figure 5 are representative cases that reveal the influence of patch size. If sticky patches typically have W′ ≫ 1 in real ice sheets, then crevasses are mostly vertical and the BEM would not be necessary for future studies.
4. Discussion
We have developed and analysed a model of basal crevassing associated with sticky patches at the bed of an elastic glacier or ice sheet. Our model, based on LEFM theory, evaluates the role of
shear-stress variations and makes predictions of crack lengths and trajectories. As shown above, the growth of such basal cracks depends on the flotation fraction f, the non-dimensional size of the
sticky patch W′ and the non-dimensional stress variation Δτ′.
4.1 Initiation of basal crevasses
When modelling both vertical and curved basal crevasses in the results section, we assumed pre-existing small cracks with K [I] > K [I,C]. They are a prerequisite for further fracture propagation. By
using a simple Griffith fracture model (Tada and others, Reference Tada, Paris and Irwin2000)
(16)$$K_{{I}} = K_{{I}, {\rm C}} = \sigma_{xx}\sqrt{\pi Z_{{\rm C}}},\; $$
we estimate the minimum crack length $Z_{\rm C}$ required for growth, given a local effective stress $\sigma_{xx}$ across the fracture. In the BEM simulations we assumed a 30 m-long initial crack, which by Eqn (16) requires a local stress of roughly 10 kPa to satisfy $K_{\rm I} \ge K_{\rm I,C}$. For basal fractures to initiate instead at the boundaries of grains 3 cm in diameter, the tensile stress would need to reach 326 kPa, which seems unlikely under normal circumstances.
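The two estimates quoted above follow from inverting Eqn (16); a quick check, with the toughness value used throughout the paper:

```python
import math

# Griffith-type initiation estimates from Eqn (16): K_I = sigma*sqrt(pi*Z_C).
K_IC = 100e3   # fracture toughness, Pa m^{1/2}, as assumed in the text

def min_flaw_length(sigma):
    """Minimum flaw length Z_C (m) for growth at tensile stress sigma (Pa)."""
    return (K_IC / sigma)**2 / math.pi

def stress_for_flaw(Z_C):
    """Tensile stress (Pa) needed to activate a flaw of length Z_C (m)."""
    return K_IC / math.sqrt(math.pi * Z_C)
```

Inverting either way reproduces the numbers in the text: a 3 cm grain-scale flaw needs roughly 326 kPa, while a stress of order 10 kPa corresponds to a flaw of roughly 30 m.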
4.2 Potential applications to real ice sheets
For real ice sheets, basal shear stress patterns are difficult to observe directly. In several cases, they have been estimated by inversion of surface data under specific assumptions. Sergienko and
Hindmarsh (Reference Sergienko and Hindmarsh2013) showed that for a region of the Antarctic ice sheet with large ice thickness (H ≈ 10^3 m), the basal shear stress has rib-like patterns of variation.
The width of such sticky patches varies from one ice thickness (W′ = 0.5) to 10 ice thicknesses (W′ = 5). The excess shear stress (200–300 kPa) estimated for these locations is small compared with
the overburden pressure (Δτ′ ~ 0.03). In this case, the effect of the sticky patch depends on its size and local water pressure, as discussed in the Results section. For small patches (W′ ~ 1), such
excess shear stress only affects the direction of fracture propagation, as shown in Figure 5. Basal crevassing occurs when the flotation fraction f reaches 1, which is consistent with the conclusions
of Van der Veen (Reference Van der Veen1998). For larger patches with W′ ~ 10, basal crevasses would initiate on the downstream end at a lower water pressure f ~ 0.7.
For alpine glaciers with an ice thickness of order 100 m, basal shear stress was found to be in the range of 0–200 kPa (Brædstrup and others, Reference Brædstrup, Egholm, Ugelvig and Pedersen2016).
In the non-dimensional parameter space, Δτ′ is of order 0.1, larger than in Antarctic settings. Moreover, W′ is of the order of 1–10, which is similar to that of sticky patches in Antarctic ice
sheets. Thus, we predict that sticky patches play a more important role in determining the length and direction of basal crevasses in alpine glaciers. For further investigations of the basal
hydrofractures around sticky patches, we need to go beyond the inversion results, understand the specific causes of the sticky patches and include the essential physics in our model.
In natural glaciers and ice sheets, there are many factors that might create sticky patches, including bedrock bumps, till-free areas, well-drained tills and basal freeze-on. All of these would lead
to localised, high basal friction (Stokes and others, Reference Stokes, Clark, Lian and Tulaczyk2007). An important friction phenomenon is the stick-slip motion of ice, detected in Whillans Ice
Stream (WIS) in West Antarctica. Wiens and others (Reference Wiens, Anandakrishnan, Winberry and King2008) investigated the WIS stick-slip motion and related it to a sticky patch on the bed.
Furthermore, Sergienko and others (Reference Sergienko, MacAyeal and Bindschadler2009) argued that WIS can be considered as a typical stick-slip system controlled by basal friction, where the sticky
spot nucleates the stick-slip cycle. Therefore, for sticky spots associated with stick-slip ice motion, we can estimate the stress variation from parameters in relevant friction experiments (McCarthy
and others, Reference McCarthy, Savage and Nettles2017; Lipovsky and others, Reference Lipovsky2019; Zoet and others, Reference Zoet2020), rather than from inversion from surface data based on
viscous rheology. If we interpret Δτ in terms of friction coefficient variation Δμ, we find that
(17)$$\Delta\tau^{\prime} = \Delta\mu N^{\prime},\; $$
where N′ = 1 − f is the dimensionless effective normal stress. The basal stress variation is controlled by both friction coefficient variation Δμ and the flotation fraction f. The magnitude of Δμ
during the stick-slip motion of ice can be measured experimentally. By keeping a constant normal stress of N = 500 kPa between the ice sample and the bedrock asperity, Zoet and others (Reference Zoet
2020) found that the friction-coefficient decrease during stick-slip motion is between 0.1 and 0.4. We assume that the measurements of Δμ also apply to the sticky patch discussed in our model.
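Eqn (17) is a one-line relation between friction variation and non-dimensional excess shear stress; a minimal sketch:

```python
# Eqn (17): non-dimensional excess shear stress from a friction-coefficient
# variation d_mu, with effective normal stress N' = 1 - f.
def excess_shear(d_mu, f):
    """Non-dimensional excess shear stress Dtau' = d_mu * (1 - f)."""
    return d_mu * (1.0 - f)
```

For instance, the laboratory value Δμ = 0.4 with f = 0.7 gives Δτ′ = 0.12, of the same order as the Δτ′ ~ 0.1 discussed in the text.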
Well-drained till could serve as a sticky patch. For a well-drained till surrounded by a water-saturated layer (Stokes and others, Reference Stokes, Clark, Lian and Tulaczyk2007), the water pressure on the till would be lower than in the surroundings. For such a till, we assume that the excess friction coefficient Δμ is 0.4 and the flotation fraction f is 0.7. Then, according to Eqn (17), the excess basal shear stress is Δτ′ ~ 0.1, which would lead to a tensile-stress concentration on the downstream end of the patch. Meanwhile, in the surroundings, the local subglacial water pressure is expected
to be higher (f ≥ 0.7). For the case considered above, when f reaches 0.9, basal crevassing is likely to occur.
4.3 Thermal implications of basal crevasses
The temperature structure of ice has important effects on ice dynamics. Using high-vertical-resolution sensing, Law and others (Reference Law2021) reported spatial heterogeneity of englacial ice
temperature and deformation. They found a basal temperate-ice zone with thickness that varies from 5 to 73 m at two locations separated by only 9 km at Store Glacier, an outlet glacier of the
Greenland Ice Sheet. Their study indicates spatially varying basal thermal conditions over distances of a few ice thicknesses. Injection of water-filled basal crevasses can locally modify the thermal
profile of the ice sheet (Luckman and others, Reference Luckman2012) and is a potential explanation of the spatially-varying temperate ice layer. The thermal structure of basal crevassing, which is
similar to that of dykes in rock (Daniels and others, Reference Daniels, Bastow, Keir, Sparks and Menand2014), has been modelled in several studies (Jarvis and Clarke, Reference Jarvis and Clarke1974
; McDowell and others, Reference McDowell, Humphrey, Harper and Meierbachtol2021). Their approach recognises that water-filled basal crevasses in sub-temperate ice propagate on a short timescale,
followed by rapid refreezing of water inside the crack. McDowell and others (Reference McDowell, Humphrey, Harper and Meierbachtol2021) modelled the refreezing of a basal crevasse as an instantaneous
heat source in a one-dimensional heat-conduction system, using the analytical solution of Carslaw and Jaeger (Reference Carslaw and Jaeger1959). We use the same analytical solutions for a
two-dimensional thermal structure of a basal crevasse. Details of this calculation are provided in Appendix C.
To estimate the heat released by refreezing, we return to the dimensional problem and assume a static, water-filled vertical crack of length $Z_{\rm C}$ = 50 m in a 100 m-thick, subtemperate ice sheet. The crack width w is assumed to be 10 cm, uniform along the crack. The water inside the crack refreezes instantaneously at t = 0, releasing an amount of heat $q_{\rm i}$ per unit area of the crack plane,
(18)$$q_{{\rm i}} = \rho_{{\rm i}} w L = 3\times 10^{7}\ \rm{J}\, \rm{m}^{-2},\; $$
where $\rho_{\rm i}w$ is the mass of refrozen water per unit area of the fracture and L is the latent heat of solidification. Mathematically, the refreezing process is treated as an instantaneous heat source at t = 0.
The temperature rise caused by refreezing, ΔT, is held at 0 K on the surface boundary with the atmosphere, the basal boundary and at the limit of x → ±∞. The temperature rise will asymptotically
decay to zero after a long cooling process.
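The heat estimate in Eqn (18) and the subsequent conductive decay can be sketched with the classical instantaneous plane-source solution, a 1-D building block of the 2-D solution developed in Appendix C. The thermal properties below are assumed, illustrative values:

```python
import math

# Back-of-envelope sketch of Eqn (18) and the 1-D instantaneous plane-source
# decay (Carslaw-and-Jaeger type solution); constants are assumed values.
RHO_I = 917.0       # ice density, kg m^-3
L_FUS = 3.34e5      # latent heat of solidification, J kg^-1
C_I = 2050.0        # specific heat of ice, J kg^-1 K^-1
KAPPA = 1.2e-6      # thermal diffusivity of ice, m^2 s^-1

def refreezing_heat(w):
    """Heat per unit crack area (J m^-2) for a crack of width w (m)."""
    return RHO_I * w * L_FUS

def delta_T(x, t, w=0.1):
    """Temperature rise (K) a distance x (m) from the crack plane,
    a time t (s) after instantaneous refreezing; 1-D approximation."""
    q = refreezing_heat(w)
    return q / (RHO_I * C_I) * math.exp(-x**2 / (4 * KAPPA * t)) \
        / math.sqrt(4 * math.pi * KAPPA * t)
```

With w = 0.1 m the heat per unit area is approximately 3 × 10^7 J m^-2, matching Eqn (18); the 1-D profile decays both with distance from the crack and with time, qualitatively as in Figure 6, though the full 2-D solution with its boundary conditions (Appendix C) decays faster.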
Figure 6 shows the perturbation in temperature due to the refreezing of water in a single basal crevasse over 20 years. The surrounding ice undergoes a rapid warming at t = 0, followed by a long
cooling period until the temperature drops back to the background state ΔT = 0 (McDowell and others, Reference McDowell, Humphrey, Harper and Meierbachtol2021). Panel (a) shows that after 5 years of
diffusion, the temperature perturbation due to refreezing is localised around the crevasse and decreases sharply within ± 30 m. Panels (b) and (c) show the temperature perturbation after 10, 15 and
20 years. After 20 years, ΔT drops back below 0.05 K and the system is again close to the background state.
If a sticky patch is fixed on the bedrock beneath a sliding ice sheet, the patch could generate a series of basal crevasses, as shown in Figure 9 (Appendix C). Thus, we next consider a case in which
the spacing between these crevasses is equal to the half-width of the sticky patch W = 100 m. For these equally spaced basal crevasses, the temperature field is a linear superposition of the effect
of each crevasse. In the mathematical model, the basal crevasse is initiated on the downstream end of the sticky patch and subsequently advected by sliding. Details are provided in Appendix C.
The thermal effect of a series of basal crevasses is shown in Figure 7. The downstream perturbation will gradually smooth out after decades of cooling. The stable pattern of temperature rise depends
on the volume of water inside the crack. The BEM simulations show that the thickness of basal crevasses is of order 0.1 m, which is much smaller than the upper limit of basal crevasse thickness
observed by Harper and others (Reference Harper, Bradford, Humphrey and Meierbachtol2010). Here we simply assume that the crack width is 0.1 m. The initial, localised temperature rise will decay
rapidly as it smooths out. Refreezing in basal crevasses is a possible factor influencing the temperature profile of basal ice, leading to localised heating around the relic basal crevasses, followed
by cooling back toward a steady state (McDowell and others, Reference McDowell, Humphrey, Harper and Meierbachtol2021).
Alternatively, if the basal fracture is embedded in temperate ice and hence does not freeze rapidly, it becomes a persistent mechanical perturbation to the ice sheet. A series of such basal crevasses
will be carried downstream to the grounding line. If an ice shelf extends seaward from the grounding line, the basal crevasses will weaken the shelf to lateral shear stresses associated with
buttressing and vertical shear associated with calving (Bassis and Ma, Reference Bassis and Ma2015). The recent numerical model developed by Berg and Bassis (Reference Berg and Bassis2022) suggests
that the advection of crevasses could increase the calving rate and promote glacier retreat.
4.4 Limitations of the model
The model presented has three significant limitations. First, it is based on equilibrium equations of elasticity. The ice sheet is assumed to be an isotropic, elastic body without any internal
viscous deformation that is commonly computed by a Stokes model. Therefore, our model only accounts for the physics of fracturing on a short timescale in which ice is dominated by elastic rheology.
This approach might miss important interactions between the sticky patch and viscous deformation that would modify the stress field. Basal crevasses can be deformed by the internal deformation caused
by viscous flow. For an ice sheet with a thickness H = 1000 m, the velocity difference between the surface and the bed can be up to several kilometres per year, which can significantly deform the
basal crevasses whose length is of the same order as the ice thickness. Thus, a viscoelastic rheology is required to study basal crevasses on a longer timescale. Moreover, it is important to include
the spatial and temporal variation of subglacial hydrology (Harper and others, Reference Harper, Humphrey, Pfeffer, Fudge and O'Neel2005; Hewitt, Reference Hewitt2013), which is simplified to a
static water pressure in the current model.
The second main limitation is that the model does not account for the three-dimensional effects of sticky patches. In a real ice sheet, some sticky patches may have a round shape instead of a long,
rib-like pattern. In that case, it is more appropriate to study them in a 3-D domain, which includes both vertical and lateral extension of cracks, rather than as a plane-strain problem in the x–z plane.
The third limitation relates to the fracture toughness. In applying the LEFM approach, we implicitly assume a brittle medium. However, a brittle-to-ductile transition occurs at a critical grain size
(Schulson and others, Reference Schulson, Lim and Lee1984). Regions and layers of coarse-grained ice exist in ice sheets (Gow and others, Reference Gow1997). Thus, in some glaciers and ice sheets
there may exist ductile ice that resists brittle fracture. This heterogeneity would modify the predicted crack lengths. Furthermore, Sinha (Reference Sinha1978) argued that the Young's modulus of ice
E depends on temperature, even on a short loading timescale. Further investigation is therefore required to determine whether the thermal profile is important for ice elasticity and fracture.
A more complete understanding of how basal crevasses grow and interact with viscous flow will require a three-dimensional, viscoelastic model including variations in fracture toughness.
5. Conclusions
Besides basal water pressure (Van der Veen, Reference Van der Veen1998), stress variations of sticky patches at the ice–bed interface can promote the propagation of basal crevasses. In previous
studies, basal crevasses were assumed to be mode-I hydrofractures under pure horizontal tension (Van der Veen, Reference Van der Veen1998; Krug and others, Reference Krug, Weiss, Gagliardini and
Durand2014; Jimenez and Duddu, Reference Jimenez and Duddu2018). Assuming water pressure smaller than the flotation condition, we examined the effect of sticky patches on basal crevassing in a
grounded glacier or ice sheet. We found that sticky patches can provide stress required for crack extension. Alongside the flotation fraction, such sticky-patch-assisted crevassing depends on two
non-dimensional parameters: (1) the ratio W′ of the sticky-patch half-width to the ice thickness, and (2) the ratio Δτ′ of excess shear stress to the basal ice overburden pressure.
With a sufficient variation of basal shear stress, the direction of basal fracture is controlled by the relative size of the sticky patch. When the width of the sticky patch is much larger than the
ice thickness, basal crevasses grow nearly vertically and are essentially mode-I fractures. When the width of the sticky patch is smaller than the ice thickness, however, curved basal crevasses grow
with trajectories inclined upstream. In this case, principal stresses can be used to approximate crack trajectories.
For real glaciers or ice sheets with complicated geometries, our model can be combined with the basal stress pattern to investigate how stress variation promotes basal crevassing. Compared to stress
estimates from inversion calculations (Sergienko and Hindmarsh, Reference Sergienko and Hindmarsh2013; Brædstrup and others, Reference Brædstrup, Egholm, Ugelvig and Pedersen2016), qualitative
analysis shows that for a flotation fraction of about 0.9, the basal-stress pattern in alpine glaciers and ice sheets may play an important role in determining the growth of basal crevasses. To
better understand the basal crevasses controlled by sticky patches, future research could incorporate viscous ice flow, spatially resolved subglacial hydrology and detailed fracturing properties of ice.
We thank B. Lipovsky and C.-Y. Lai for their guidance on the application of LEFM to ice, R. Greve and A. Rempel for their insightful editorial comments, and two anonymous reviewers for helpful
suggestions. The general idea that we explore arose during a conversation with P. Christoffersen and R. Law. This research received funding from the European Research Council under Horizon 2020
research and innovation programme grant agreement number 772255.
Appendix A. 2-D elasticity in terms of perturbation stress
The domain is a notched ice strip with length 2L as shown in Figure 1. We set L ≫ W to make sure the crack is far from the boundaries to avoid any edge effects. Substituting T = σ + p_i I into the governing equation and boundary conditions, we can express the stress equilibrium equation in terms of the perturbation stress as
(A.1)$${\boldsymbol \nabla}\cdot{\boldsymbol T} = \bf 0.$$
For the top boundary, a traction-free boundary condition is imposed as
(A.2)$${\boldsymbol T}\cdot{\boldsymbol n} = {\bf 0} \quad \text{for } z = H,$$
where n is the outward-pointing unit normal vector of the domain. On both sides, we impose traction-free boundary condition for the perturbation stress,
(A.3)$${\boldsymbol T}\cdot{\boldsymbol n} = {\bf 0} \quad \text{for } x = \pm L.$$
On the crack walls (x = W, 0 ≤ z ≤ Z_C), compression caused by static water pressure is imposed as
(A.4)$${\boldsymbol T}\cdot{\boldsymbol n} = \begin{cases} \rho_{\rm i} g H \left[\left(\dfrac{\rho_{\rm w}}{\rho_{\rm i}} - 1\right)\dfrac{z}{H} + 1 - f\right] {\boldsymbol n} & \text{for } z < H_{\rm w},\\[6pt] \rho_{\rm i} g H \left(1 - \dfrac{z}{H}\right) {\boldsymbol n} & \text{for } z \ge H_{\rm w}, \end{cases}$$
where ρ_i f H/ρ_w = H_w is the hydraulic head. The bottom boundary condition remains the same as (5) and (4), and since the overburden stress contributes neither to the shear stress nor to the elastic deformation,
(A.5)$${\boldsymbol t}\cdot{\boldsymbol T}\cdot{\boldsymbol n} = \tau(x, 0) = \begin{cases} -\dfrac{W}{L}\,\Delta\tau & \text{for } \lvert x \rvert \ge W,\\[6pt] \left(1 - \dfrac{W}{L}\right)\Delta\tau & \text{for } \lvert x \rvert < W, \end{cases}$$
(A.6)$${\boldsymbol u}\cdot{\boldsymbol n} = 0.$$
Here ${\boldsymbol t} = -\hat {{\boldsymbol x}}$ is the unit tangent vector to the bottom boundary. Note that in Eqn (A.5) an extra term − WΔτ/L is added on the bottom boundary in order to maintain
the total force balance of the ice strip. Because L ≫ W, we can neglect the near-field effect of this force-balance term.
In contrast to standard elasticity, we set up a constitutive relation between the perturbation stress tensor T and the strain tensor ε, as in Eqn (9).
Appendix B. Benchmark
The weight function method has proved to be a useful tool to calculate stress intensity factors in ice sheets under certain basal boundary conditions (Tada and others, Reference Tada, Paris and Irwin
2000; Jimenez and Duddu, Reference Jimenez and Duddu2018). In essence, this involves calculating stress intensity factors by integrating the stress along a hypothesised crack in an uncracked domain
with an appropriate weight function. The advantage of the weight function method is that the computation is performed in the uncracked domain with no discontinuity and singularity caused by cracks.
For certain simple domain and crack geometry and boundary conditions, the weight function method can give SIFs with high accuracy.
Jimenez and Duddu (Reference Jimenez and Duddu2018) provided the appropriate weight function from Tada and others (Reference Tada, Paris and Irwin2000) to be used for basal cracks in a grounded
ice sheet,
(B.1)$$K_{\rm I} = \int_{0}^{Z_{\rm C}} \sigma_{xx}(z)\, G_1(\lambda, \gamma)\, {\rm d}z,$$
(B.2)$$G_1(\lambda, \gamma) = \frac{2}{\sqrt{2H}} \sqrt{\frac{\tan\left(\pi\lambda/2\right)}{1 - \cos\left(\pi\lambda/2\right)/\cos\left(\pi\lambda\gamma/2\right)}} \left\{1 + 0.297\sqrt{1 - \gamma^{2}}\left[1 - \cos\left(\frac{\pi\lambda}{2}\right)\right]\right\},$$
where λ = Z_C/H is the non-dimensional crack length and γ = z/Z_C is the z coordinate normalised by the crack length. This weight function can be used to predict the mode-I stress intensity factor K_I in grounded ice. A similar weight function in Tada and others (Reference Tada, Paris and Irwin2000) can also be applied to calculate the mode-II stress intensity factor,
(B.3)$$K_{\rm II} = \int_{0}^{Z_{\rm C}} \sigma_{xz}(z)\, G_2(\lambda, \gamma)\, {\rm d}z,$$
(B.4)$$G_2(\lambda, \gamma) = \frac{2}{\sqrt{2H}} \sqrt{\frac{\tan\left(\pi\lambda/2\right)}{1 - \cos\left(\pi\lambda/2\right)/\cos\left(\pi\lambda\gamma/2\right)}} \left\{1 + 0.297\sqrt{1 - \gamma^{2}}\left[1 - \cos\left(\frac{\pi\lambda}{2}\right)\right]\right\} \frac{\sin\left(\pi\lambda\gamma/2\right)}{\sin\left(\pi\lambda/2\right)}.$$
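For illustration only (this sketch is not part of the paper), the weight-function integral of Eqns (B.1)–(B.2) can be evaluated by simple midpoint quadrature; the ice thickness, crack length and stress profile below are placeholder values:

```python
import math

def G1(lam, gamma, H):
    """Weight function of Eqn (B.2): lam = Zc/H, gamma = z/Zc."""
    half = math.pi * lam / 2
    denom = 1 - math.cos(half) / math.cos(half * gamma)
    shape = 1 + 0.297 * math.sqrt(1 - gamma**2) * (1 - math.cos(half))
    return 2 / math.sqrt(2 * H) * math.sqrt(math.tan(half) / denom) * shape

def K_I(sigma_xx, Zc, H, n=20000):
    """Midpoint quadrature of Eqn (B.1); midpoints avoid the integrable
    square-root singularity of G1 at the crack tip (gamma = 1)."""
    dz = Zc / n
    lam = Zc / H
    return sum(sigma_xx((i + 0.5) * dz) * G1(lam, (i + 0.5) * dz / Zc, H) * dz
               for i in range(n))

H = 1000.0                  # placeholder ice thickness, m
sigma = lambda z: 1.0e5     # placeholder uniform 100 kPa opening stress, Pa
K = K_I(sigma, 0.2 * H, H)  # mode-I stress intensity factor, Pa m^0.5
```

A production calculation would replace the plain midpoint rule with a quadrature tailored to the end-point behaviour of the weight function near the tip.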
To verify the implementation of the DCM method we consider a problem where there is a mixed-mode (mode-I and mode-II) water-filled crack. SIFs are calculated by both the DCM method and the weight
function method in Figure 8. In the DCM calculation, the element size is set to be 0.005 times the crack length near the crack tip and 0.2 ice thicknesses away from the tip.
For K_I, these two methods agree over most crack lengths. For K_II, however, there is a large deviation at large crack lengths.
Appendix C. Thermal structure of basal crevasses
C.1 Thermal structure of a single basal crevasse
Refreezing of water inside basal crevasses is a factor potentially affecting the thermal profile of ice. Using analytical solutions of Carslaw and Jaeger (Reference Carslaw and Jaeger1959), this
section shows how a single basal crevasse or a series of basal crevasses affects the thermal structure of ice.
In order to simplify the computation, we neglect advection and other englacial heat sources and focus on heat diffusion after refreezing (Luckman and others, Reference Luckman2012). The background temperature profile is assumed to have no effect on the heat conduction caused by refreezing; that is, this simple analytical model accounts only for the effect of refreezing and does not predict the net temperature profile. Let ΔT be the temperature perturbation induced by a basal crevasse. The ice sheet is simplified to an isotropic infinite strip 0 < z < H. On the boundaries we assume zero temperature perturbation (ΔT = 0) at z = 0, z = H and x = ±∞. The governing equation of thermal conduction in terms of ΔT is
(C.1)$${\partial^{2} \Delta T\over \partial x^{2}} + {\partial^{2} \Delta T\over \partial z^{2}} = {1\over \kappa}{\partial \Delta T\over \partial t}.$$
The boundary conditions are
(C.2)$$\Delta T\left(\pm \infty,\; z\right) = 0,\; $$
(C.3)$$\Delta T\left(x,\; 0\right) = \Delta T\left(x,\; H\right) = 0.$$
Table 1 shows the values of parameters used in the calculation. At time t = 0, an amount of heat q_i per unit length per unit depth into the page is released by a line heat source from (0, 0) to (0, Z_C). In order to get an estimated value of that released heat, we need to estimate the volume of water that refreezes at t = 0. Based on the BEM simulations, the width of the crack w is approximately 0.1 m and hence q_i is estimated as
(C.4)$$q_{{\rm i}} = \rho_{{\rm i}} L w = 3\times 10^{7}\, \rm{J}\, \rm{m}^{-2},\; $$
where L is the latent heat of melting.
Thus we have the full thermal problem in an infinite strip with an initial line heat source representing refreezing in real basal crevasses.
The solutions are given by Carslaw and Jaeger (Reference Carslaw and Jaeger1959) as
(C.5)$$\Delta T = \frac{q_{\rm i}}{\pi\rho_{\rm i} C_{\rm p}\sqrt{\kappa \pi t}} \exp\left(-\frac{\left(x - x^{\prime}\right)^{2}}{4\kappa t}\right) \sum_{n = 1}^{\infty} \frac{1}{n}\left[1 - \cos\left(\frac{n\pi Z_{\rm C}}{H}\right)\right] \sin\left(\frac{n\pi z}{H}\right) \exp\left(-\frac{\kappa n^{2} \pi^{2} t}{H^{2}}\right).$$
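As an illustrative sketch (not part of the paper), the series in Eqn (C.5) can be truncated and evaluated numerically; the physical parameters below are assumed values standing in for Table 1:

```python
import math

# Assumed parameters standing in for Table 1 (illustrative values only):
H = 1000.0         # ice thickness, m
Zc = 500.0         # crack height, m
kappa = 1.09e-6    # thermal diffusivity of ice, m^2 s^-1
rho_i = 917.0      # ice density, kg m^-3
Cp = 2100.0        # specific heat capacity of ice, J kg^-1 K^-1
q_i = 3.0e7        # heat released on refreezing, J m^-2 (Eqn C.4)

def delta_T(x, z, t, xp=0.0, nmax=2000):
    """Truncated evaluation of the series solution, Eqn (C.5)."""
    pref = q_i / (math.pi * rho_i * Cp * math.sqrt(kappa * math.pi * t))
    gauss = math.exp(-(x - xp) ** 2 / (4 * kappa * t))
    series = sum(
        (1 - math.cos(n * math.pi * Zc / H)) / n
        * math.sin(n * math.pi * z / H)
        * math.exp(-kappa * n**2 * math.pi**2 * t / H**2)
        for n in range(1, nmax + 1)
    )
    return pref * gauss * series

year = 3.15e7                      # seconds in a year (approx.)
dT = delta_T(0.0, 0.25 * H, year)  # warming at mid-crack height after one year
```

The exponential factor suppresses high-order terms, so the truncated sum converges quickly for times of order a year or more.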
C.2 Thermal structure of a series of equally spaced basal crevasses
A stable sticky patch can cause a local stress variation and potentially generate a series of basal crevasses. The thermal effect of a single basal crevasse, which is shown above, can be advected
downstream and superposed with the effect of other basal crevasses, resulting in temperature variation on a longer timescale and larger area. Mathematically, based on ΔT (x, z, t) that we have
obtained above, the net effect of a series of basal crevasses can be obtained by a simple linear superposition of the ΔT of each crevasse.
Assuming these crevasses are generated at a fixed spacing w_s = W = 100 m in a 100 m-thick ice sheet, starting from t = 0, we fix the coordinate system on the sticky patch and move the ice sheet at v_s = 100 m yr⁻¹. Thus the crevasses are also generated at a fixed time interval Δt = W/v_s = 1 yr. The net temperature perturbation ΔT_net is
(C.6)$$\Delta T_{{\rm net}} = \sum_{n = 0}^{\infty} \Delta T\left(x-x_{n}^{\prime},\; z,\; t-t_{n}^{\prime}\right),\; $$
where n is the number of the crack, $x_{n}^{\prime } = n w_{s}$ is the position of crack n, and $t_{n}^{\prime } = -n\Delta t$ is the time at which crack n formed and refroze.
Get Online Differential Topology Tutors
Learn Differential Topology Online with Best Differential Topology Tutors
Differential topology is challenging, but it doesn’t have to be.
Our experienced differential topology tutors will work with you one-on-one to help you understand the concepts, solve problems, and ace your exams.
Sign up for our differential topology tutoring program starting at just $28/hour.
What sets Wiingy apart
Expert verified tutors
Free Trial Lesson
No subscriptions
Sign up with 1 lesson
Transparent refunds
No questions asked
Starting at $28/hr
Affordable 1-on-1 Learning
Top Differential Topology tutors available online
2055 Differential Topology tutors available
Responds in 9 min
Star Tutor
Math Tutor
4+ years experience
Excel in various Math subjects with expert tutoring from a Master’s degree holder and 4 years of experience. Build a solid foundation and develop strong analytical skills.
Responds in 5 min
Star Tutor
Math Tutor
4+ years experience
Top-tier Math tutoring with 4+ years of experience for high school to university students. Holds a Master's Degree in Mathematics Education; assists with test prep and clarifies doubts.
Responds in 14 min
Star Tutor
Math Tutor
2+ years experience
Top Math tutor with 2+ years of tutoring experience. Offers assignment support and exam prep sessions. Provides personalized 1-on-1 lessons.
Responds in 7 min
Star Tutor
Math Tutor
1+ years experience
Practice Math with the Data Maven. Professional in Calculus, Algebra, Trigonometry, Geometry, and Statistics. Tutors adult learners from the USA, CA, AU and more.
Responds in 11 min
Student Favourite
Math Tutor
12+ years experience
Qualified math tutor with 12+ years of experience mentoring high school students. Supports students in preparing for tests. Holds a bachelor's degree.
Responds in 9 min
Star Tutor
Math Tutor
4+ years experience
Excel in Math with expert tutoring from a Master’s degree holder with 4 years of experience. Build a strong grasp of mathematical concepts and achieve your academic goals.
Responds in 10 min
Star Tutor
Math Tutor
2+ years experience
Math tutor with over 2 years of teaching experience, providing personalized classes and tailored lessons for high school and university students from various regions. Possesses a Bachelor's Degree in
Responds in 3 min
Star Tutor
Math Tutor
6+ years experience
An experienced math tutor, holding a Masters degree in Mathematics, also offers personalized lessons and provides assignment help on time.
Responds in 27 min
Student Favourite
Math Tutor
2+ years experience
Expert Math tutor with 2+ years of tutoring experience. Provides customized lessons, test prep, and assignment help in Algebra, Calculus, Geometry, Trigonometry, Statistics, and more.
Responds in 18 min
Student Favourite
Math Tutor
10+ years experience
Skilled Math Tutor with 10 years of experience. Personalized instruction and exam preparation tailored to individual needs. Holds a Master's degree in Education, equipped with advanced teaching strategies to enhance proficiency and maximize scores.
Responds in 12 min
Star Tutor
Math Tutor
3+ years experience
Proficient math tutor online for K-12. MTech, with 3+ years of tutoring experience; also teaches Python. Provides customized 1-on-1 lessons, exam prep, and assignment help.
Responds in 9 min
Star Tutor
Math Tutor
5+ years experience
Skilled math tutor online with over 5 years of extensive teaching experience to school and college students. Provides 1-on-1 lessons, test prep, and assignment help to US, UK, AU and CA students.
Responds in 2 min
Star Tutor
Math Tutor
3+ years experience
Expert math tutor having 3+ years of 1-on-1 online teaching experience with US school students. Provides exam prep strategies and assignment help.
Responds in 15 min
Student Favourite
Math Tutor
8+ years experience
Advance your understanding of Math with a tutor who has a Bachelor's degree and 8+ years of experience. Receive focused help in diverse Math topics to achieve your learning goals.
Responds in 2 min
Star Tutor
Math Tutor
3+ years experience
Boost your Math skills with targeted guidance from a Master’s degree holder with 3 years of experience. Improve your understanding of mathematical concepts and excel in your studies.
Responds in 8 min
Star Tutor
Math Tutor
7+ years experience
Mathematics enthusiast with a master's degree in mathematics and 7 years of experience coaching students in the intricacies of the subject. Well equipped to teach students of any learning level.
Responds in 15 min
Student Favourite
Math Tutor
9+ years experience
Highly skilled Maths tutor with 9+ years of teaching experience. Provides interactive lessons for concept clarification, homework assistance, and test preparation to high school to university students.
Responds in 26 min
Student Favourite
Math Tutor
6+ years experience
Certified Maths tutor with 6 years of tutoring experience, aiding high school to university students in test preparation and homework assistance. Holds a Master's degree in Business Administration.
Math Tutor
5+ years experience
Outstanding Math Tutor with 5+ years of teaching expertise. Delivers tailored instruction and clears doubts for students globally. Holds a Master's Degree.
Responds in 7 min
Star Tutor
Math Tutor
2+ years experience
Qualified Math, Chemistry and Coding Tutor (Front-end Development and C Programming) with 2+ Years of Experience
Responds in 7 min
Star Tutor
Math Tutor
3+ years experience
Certified Math tutor. Completed M.Sc. and B.Ed. Possesses 3+ years of teaching experience. Provides in-depth lessons, test prep, and homework help in Math, Physics, and related subjects.
Responds in 5 min
Star Tutor
Math Tutor
4+ years experience
Expert Math teacher holding a Bachelor's degree in Mathematics and having 4+ years of experience with school students. Offers personalized sessions, engaging classes and homework help.
Responds in 53 min
Student Favourite
Math Tutor
4+ years experience
Math tutor with 4+ years of experience delivering effective and personalized instruction; the approach focuses on building strong foundational skills. Holds a Bachelor's Degree.
Responds in 29 min
Student Favourite
Math Tutor
4+ years experience
Qualified Online Math tutor with 4+ years of tutoring experience. Provides customized lessons, test prep, and assignment help in Algebra, Calculus, Geometry, Trigonometry, Statistics, and more.
Responds in 14 min
Star Tutor
Math Tutor
6+ years experience
Qualified Math Tutor with 6+ years of online tutoring experience
Responds in 4 min
Star Tutor
Math Tutor
1+ years experience
Top-notch math tutor with 1 year of experience tutoring high school students across the UK, US, CA, and AU. Supports and helps with problem-solving and critical thinking. Currently pursuing a
Master's degree.
Responds in 3 min
Star Tutor
Math Tutor
3+ years experience
Top Math instructor with rich experience in academic math; BTech from IIT Kharagpur; Coached AU & UK students in 3+ years of 1-on-1 teaching experience; Provides home work help.
Responds in 3 min
Star Tutor
Math Tutor
2+ years experience
Expert Math tutor having 2+ years of online tutoring experience with MS in Computer Science. Provides homework and assignment help and test prep for AP, SAT and ACT.
Responds in 3 min
Star Tutor
Math Tutor
4+ years experience
Outstanding Maths Tutor Cultivating Proficiency and Exam Success with 4 Years of Expertise. Specializing in Hands-on Practice and Exam Preparation, backed by a Ph.D. in Statistics.
Differential Topology topics you can learn
• Definition
• Smooth manifolds
• Differentiable structures
• Examples - Euclidean spaces, spheres, tori
Try our affordable private lessons risk-free
• Our free trial lets you experience a real session with an expert tutor.
• We find the perfect tutor for you based on your learning needs.
• Sign up for as few or as many lessons as you want. No minimum commitment or subscriptions.
In case you are not satisfied with the tutor after your first session, let us know, and we will replace the tutor for free under our Perfect Match Guarantee program.
What is differential topology?
Differential topology is a branch of mathematics that studies the properties of smooth manifolds. Smooth manifolds are topological spaces that locally resemble Euclidean space: spaces that are locally flat, even though they may be globally curved.
It is used to study a wide variety of objects, including surfaces, knots, and links. It has many applications in physics, engineering, and computer science.
Uses of differential topology
1. In physics: Differential topology is used to study the geometry of spacetime. This is important for understanding topics such as general relativity and quantum mechanics.
2. In engineering: Differential topology is used to design and analyze complex objects, such as aircraft and spacecraft.
3. In computer science: Differential topology is used in some areas of computer graphics and robotics.
Why study differential topology?
Studying differential topology gives a strong foundation in mathematics and physics, which can be useful in the careers of physicists, aerospace engineers, and computer scientists.
In addition to these specific careers, studying differential topology helps develop problem-solving and analytical skills.
Essential information about your
Differential Topology lessons
Average lesson cost: $28/hr
Free trial offered: Yes
Tutors available: 1,000+
Average tutor rating: 4.8/5
Lesson format: One-on-One Online
Riemann Sum Calculator
Find the left or right Riemann sum with a complete procedure of computation. The Riemann sum calculator allows the input of the subintervals and aids the inputting of the functions with a built-in
How to use this Calculator?
The Riemann sum calculator requires the following steps to be completed.
1. Enter the function and inbound limits.
2. Select the side of the sum and variable.
3. Input the subinterval and click Calculate.
What is the Riemann Sum?
A Riemann sum is a method used to approximate the definite integral of a function over a certain interval.
To understand the idea behind the Riemann Sum, think of a curve on a graph. If we want to find the area under the curve over a certain interval, we can approximate it using rectangles.
The idea is simple: split the interval into smaller subintervals, and for each of them, construct a rectangle. The sum of the areas of these rectangles gives an approximation of the area under the
Essentially, you're dividing the region under a curve into smaller rectangles (or sometimes other shapes) and adding up the areas of these shapes. The more rectangles you use, and the narrower they
become, the closer the approximation becomes to the actual value of the integral.
Why do we need it?
The Riemann Sum serves as the foundational idea for integration. The integral symbol ∫ actually represents an elongated "S," symbolizing a "sum." When we integrate, we are essentially taking the
limit of the Riemann Sum as n approaches infinity — i.e., as the rectangles get infinitely narrow and numerous. This concept is essential in calculus, especially in dealing with the Fundamental
Theorem of Calculus.
Formula for the Riemann sum:
If the interval [a,b] is divided into n equal subintervals of width Δx, and xi*is a point in the ith subinterval, then the Riemann sum is:
Sn = Σᵢ₌₁ⁿ f(xi*) Δx
• Δx = (b − a)/n
• xi* is any point in the ith subinterval.
Types of the Riemann sum:
There are three main types of Riemann sums:
• Left Riemann Sum: Choose xi* as the left endpoint of each subinterval.
• Right Riemann Sum: Choose xi* as the right endpoint for each subinterval.
• Midpoint Riemann Sum: Choose xi* as the midpoint of each subinterval.
How to find the Riemann sum?
To compute a Riemann Sum, begin by selecting an interval [a,b] over which you want to approximate the area under a function f(x). Divide this interval into n equal subintervals, each with a width of Δx = (b − a)/n.
Next, for each subinterval, pick a representative sample point xi*. The choice of this sample point determines the type of Riemann Sum: if you pick the left endpoint, it's a Left Riemann Sum; the
right endpoint, a Right Riemann Sum; and the midpoint results in a Midpoint Riemann Sum.
Evaluate the function at each chosen sample point to determine the height of the corresponding rectangle. The area of each rectangle is then f(xi*) × Δx.
Sum up the areas of all these rectangles to obtain the Riemann Sum, which provides an approximation of the area under the curve of the function over the chosen interval. The accuracy of this
approximation typically improves as the number of subintervals n increases.
Solved Example:
Consider f(x) = x2 on the interval [0, 2] with n = 4.
Calculate Δx: This represents the width of each subinterval.
Δx = (b − a)/n = (2 − 0)/4 = 0.5
Determine the Sample Points: For the Left Riemann Sum, we choose the leftmost points of each subinterval.
x1*= 0, x2*= 0.5, x3*= 1, x4*= 1.5
Evaluate the Function: We get the height of each rectangle by evaluating the function at the sample points.
f(0) = 0, f(0.5) = 0.25, f(1) =1, f(1.5) = 2.25
Compute the Area of Each Rectangle: For each rectangle, its area is the height (given by f(xi*)) multiplied by the width (Δx). Add up the areas of all rectangles.
Sn = 0(0.5) + 0.25(0.5) + 1(0.5) + 2.25(0.5) = 1.75
Thus, the Left Riemann Sum approximation for this interval and function with 4 subintervals is 1.75. As we increase n, our approximation will get closer to the actual area under the curve.
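The same computation is easy to script. The sketch below (illustrative code, not part of the original page) implements all three sample-point choices and reproduces the worked example:

```python
def riemann_sum(f, a, b, n, method="left"):
    """Approximate the definite integral of f on [a, b] with n rectangles."""
    dx = (b - a) / n
    # The offset picks the sample point inside each subinterval.
    offset = {"left": 0.0, "right": 1.0, "midpoint": 0.5}[method]
    return sum(f(a + (i + offset) * dx) for i in range(n)) * dx

f = lambda x: x ** 2
left = riemann_sum(f, 0, 2, 4, "left")        # 1.75, matching the example above
right = riemann_sum(f, 0, 2, 4, "right")      # 3.75
mid = riemann_sum(f, 0, 2, 4, "midpoint")     # 2.625
```

Increasing n drives all three estimates toward the exact value of the integral, 8/3 ≈ 2.667.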
Real-Life Applications:
Physics: Calculating work done by a variable force, or the center of mass of objects with varying densities.
Engineering: Evaluating stresses and strains in materials with non-uniform density.
Economics: Finding total cost from a marginal cost curve or total profit from a marginal profit curve.
Environmental Science: Computing the total amount of materials required over time given a rate of use or the total pollutant produced given a rate of emission.
For the equation, write the value or values of the variable that make a denominator zero. These are the restrictions on the variable. Keeping the restrictions in mind, solve the equation. - The Story of Mathematics - A History of Mathematical Thought from Ancient Times to the Modern Day
This question aims to find the solution to the given equation by taking into consideration the restrictions on the given function.
The fraction of two polynomials is called a rational expression. Such an expression can be written as $\dfrac{a}{b}$, in which $a$ and $b$ are both polynomials. The product, sum, difference, and quotient of rational expressions can be computed just as they are for polynomials. Rational expressions have the useful property of being closed under these arithmetic operations: applying them always yields another rational expression. In practice, it is simple to find the product or quotient of two or more rational expressions, but addition and subtraction are trickier than for polynomials.
Expert Answer
A function is said to be rational if there is at least one variable in the denominator of the rational expression. Let $h(y)$ and $k(y)$ be two functions in $y$ and let $\dfrac{h(y)}{k(y)}$ be the rational function. A restriction on such a function is any value of the variable that makes the denominator zero. Imposing a restriction produces another function defined on a smaller domain than that of the original rational function.
The restrictions on the domain can be found by equating the denominator to zero. The values of variables for which the denominator becomes zero and the function becomes undefined are said to be
singularity and are excluded from the domain of the function.
Numerical Results
For restrictions:
Let $x+5=0$, $x-5=0$ and $x^2-25=0$
$x=-5$, $x=5$ and $x=\pm 5$
So, the restrictions are $x=\pm 5$.
Now solve the given equation as:
Example 1
Given below is a rational function with a non-linear denominator. Find the restrictions on the variable.
Now, to find the restrictions, equate the denominator to zero as:
Since $x=-2$ makes the denominator zero and the given function undefined, this is the restriction on the variable.
Example 2
Given below is a rational function with a linear denominator. Find the restrictions on the variable.
First, simplify the given expression as:
Now, to find the restrictions, equate the denominator to zero as:
Since $x=3$ makes the denominator zero and the given function undefined, this is the restriction on the variable.
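The restriction-finding step can also be checked numerically. The sketch below (hypothetical helper code, not from the original article) finds the roots of the three denominators used in the main problem above with NumPy:

```python
import numpy as np

# Coefficient lists (highest power first) for the denominators above:
# x + 5, x - 5, and x^2 - 25
denominators = [[1, 5], [1, -5], [1, 0, -25]]

restrictions = set()
for coeffs in denominators:
    roots = np.roots(coeffs)  # numeric roots of the polynomial
    restrictions.update(float(round(r.real, 10)) for r in roots)

print(sorted(restrictions))   # the restricted values: [-5.0, 5.0]
```

Any value in this set must be excluded from the domain of the rational function.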
HeatMap with Numeric and Discrete Variables
Heat maps are a great way to visualize the bi-variate distribution of data. Traditionally, a heat map has two numeric variables, placed along the X and Y dimensions.
Each variable range is subdivided into equal-size bins to create a rectangular grid of bins. The number of observations that fall into each bin is computed, and the grid is displayed by coloring each bin with a shade of color computed from a color gradient, as shown on the right.
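As an aside for readers who want to experiment outside SAS, the binning step itself can be sketched in a few lines of NumPy; the data below are synthetic stand-ins for the heart-study variables:

```python
import numpy as np

rng = np.random.default_rng(1)
systolic = rng.normal(130, 20, size=5000)  # stand-in for Systolic
age = rng.normal(45, 8, size=5000)         # stand-in for AgeAtStart

# 30 x 30 grid of equal-size bins spanning the data range
counts, xedges, yedges = np.histogram2d(systolic, age, bins=30)
# counts[i, j] holds the number of observations in bin (i, j);
# these counts are what drive the fill color of each heat-map cell.
```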
GTL supports a HeatMapParm statement, which can draw a heat map if provided the X-Y grid of bins, along with a count of observations in each bin. Actually, the value can be count, or anything else.
So, it comes down to computing the values in each bin.
For the above graph, I used the KDE procedure to compute the frequency of observations in each grid cell using the "BIVAR" statement for two interval variables. The binned data are written out to the KDEData data set using the ODS Output statement.
ods output bivariatehistogram=KDEData;
proc kde data=sashelp.heart;
bivar systolic ageatstart / plots=all ng=100;
Once the data is extracted, I keep the non-missing observations and feed the X, Y and Count data to the HeatMapParm statement using the GTL code shown below.
proc template;
define statgraph HeatMapNumNum;
dynamic _x _y _n;
entrytitle 'Distribution of Age by Systolic Blood Pressue';
layout overlay;
heatmapparm x=_x y=_y colorresponse=_n / colormodel=(white yellow red)
display=(fill outline) outlineattrs=(color=cxf7f7f7)
xbinaxis=false ybinaxis=false name='h';
continuouslegend 'h';
proc sgrender data=KDEData template=HeatMapNumNum;
dynamic _x='binx' _y='biny' _n='bincount';
Each bin is drawn using a fill color whose shade is computed from the three color map I have specified in the GTL code and also a light gray outline. It can be seen from the outlines that all bins
are drawn and the KDE procedure computes bins with zero frequencies.
Another way to compute the bins is to use the SURVEYREG procedure, as shown in the code below for two interval variables. This procedure can plot heat maps directly, but for our purposes, we will
get the data to draw our own heat map.
ods output fitplot=SurveyRegData;
proc surveyreg data=sashelp.heart plot=fit(shape=rec nbins=30);
model AgeAtStart = Systolic;
run;
We can use the data written out by this procedure to draw our heat map just as before. Note that the SurveyReg procedure allows us to set the number of bins in each direction. So, here we have used 30 bins in each direction to get a fine-grained heat map.
If you click on the graph on the right, you will notice that the map does not have all bins drawn. This means that the SurveyReg procedure only defines bins that contain non zero counts. Bins with
zero counts are not generated at all, resulting in the empty bins (no outline).
In many cases, we may want to create a Heatmap for a combination of one discrete variable and one interval variable. The HeatmapParm GTL statement can take either discrete or interval variables, but
how can we compute the bins in this case?
One easy way is using the new GTL or SGPLOT Histogram statement with the GROUP option released with SAS 9.4. Using the GROUP option, the Histogram statement computes a set number of bins for the
interval variable for each unique value of the discrete variable. The histogram does the work to make the interval bins the same for all the discrete levels, giving us exactly what we want.
Now, we can take this data, and use the HeatMapParm GTL statement with one discrete and one interval variable as shown on the right. I used a four color ramp just for some variety. The code is
shown below.
proc template;
define statgraph HeatMapCatNum;
dynamic _title _x _y _n;
begingraph;
entrytitle _title;
layout overlay / yaxisopts=(display=(ticks tickvalues));
heatmapparm x=_x y=_y colorresponse=_n / colormodel=(white green yellow red)
display=(fill outline) outlineattrs=(color=cxf7f7f7) name='h';
continuouslegend 'h';
endlayout;
endgraph;
end;
run;
One can also draw a Heatmap with two discrete variables. The data is easily computed using the MEANS or FREQ procedures. The value for each bin can be a response value as shown in this article.
Full SAS 9.4 GTL Code: HeatMap
4 Comments
Your readers might also be interested in these articles about creating heat maps in SAS:
1) Creating a basic heat map in SAS
2) Creating heat maps in SAS/IML
3) How to choose colors for maps and heat maps
4) How to create a hexagonal heat map in SAS
Many thanks for sharing this interesting code.
I am trying to use this code in SAS but after the define statement (in proc template) every other statement is marked in red (!) as if there is an error somewhere.
Any advice how to overcome that?
Many thanks,
It is hard to say without any code. I suggest you post your questions to the Communities page for SAS/Graph and ODS Graphics (with full code and data). There are many experts that can suggest solutions.
CylindricalTank Class
class rocketpy.CylindricalTank
Class to define the geometry of a cylindrical tank. The cylinder has its zero reference point at its center (i.e. half of its height). This class inherits from the TankGeometry class. See the
TankGeometry class for more information on its attributes and methods.
__init__(radius, height, spherical_caps=False, geometry_dict=None)
Initialize CylindricalTank class. The zero reference point of the cylinder is its center (i.e. half of its height). Therefore, its height coordinate spans (-height/2, height/2).
○ radius (float) – Radius of the cylindrical tank, in meters.
○ height (float) – Height of the cylindrical tank, in meters.
○ spherical_caps (bool, optional) – If True, the tank will have spherical caps at the top and bottom with the same radius as the cylindrical part. If False, the tank will have flat caps
at the top and bottom. Defaults to False.
○ geometry_dict (Union[dict, None], optional) – Dictionary containing the geometry of the tank. See TankGeometry.
Adds spherical caps to the tank. The caps are added at the bottom and at the top of the tank with the same radius as the cylindrical part. The height is not modified, meaning that the total
volume of the tank will decrease.
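The volume change described above can be verified with a short stand-alone sketch (this is not RocketPy code; `cylinder_volume` is a hypothetical helper that reproduces the geometry the docstring describes):

```python
import math

def cylinder_volume(radius: float, height: float, spherical_caps: bool = False) -> float:
    """Total tank volume [m^3] for the geometry described above.

    With spherical_caps=True the overall height is kept fixed, so the
    cylindrical section shrinks to (height - 2*radius) and two hemispherical
    caps of the same radius are appended -- hence the total volume decreases.
    """
    if spherical_caps:
        if height < 2 * radius:
            raise ValueError("height must be at least 2 * radius for spherical caps")
        cylindrical_part = math.pi * radius**2 * (height - 2 * radius)
        caps = (4.0 / 3.0) * math.pi * radius**3  # two hemispheres = one full sphere
        return cylindrical_part + caps
    return math.pi * radius**2 * height

# A tank 1.2 m tall with a 0.1 m radius:
flat = cylinder_volume(0.1, 1.2)                         # ~0.0377 m^3
capped = cylinder_volume(0.1, 1.2, spherical_caps=True)  # ~0.0356 m^3, smaller as documented
```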
Introducing SlideforMAP: a probabilistic finite slope approach for modelling shallow-landslide probability in forested situations
Articles | Volume 22, issue 8
© Author(s) 2022. This work is distributed under the Creative Commons Attribution 4.0 License.
Shallow landslides pose a risk to infrastructure and residential areas. Therefore, we developed SlideforMAP, a probabilistic model that allows for a regional assessment of shallow-landslide
probability while considering the effect of different scenarios of forest cover, forest management and rainfall intensity. SlideforMAP uses a probabilistic approach by distributing hypothetical
landslides to uniformly randomized coordinates in a 2D space. The surface areas for these hypothetical landslides are derived from a distribution function calibrated on observed events. For each
generated landslide, SlideforMAP calculates a factor of safety using the limit equilibrium approach. Relevant soil parameters are assigned to the generated landslides from log-normal distributions
based on mean and standard deviation values representative of the study area. The computation of the degree of soil saturation is implemented using a stationary flow approach and the topographic
wetness index. The root reinforcement is computed by root proximity and root strength derived from single-tree-detection data. The ratio of unstable landslides to the number of generated landslides,
per raster cell, is calculated and used as an index for landslide probability. We performed a calibration of SlideforMAP for three test areas in Switzerland with a reliable landslide inventory by
randomly generating 1000 combinations of model parameters and then maximizing the area under the curve (AUC) of the receiver operating characteristic curve. The test areas are located in mountainous terrain, range from 0.5 to 7.5 km^2 and have mean slope gradients from 18 to 28^∘. The density of inventoried historical landslides varies from 5 to 59 slides km^−2. AUC values between 0.64 and 0.93 with the implementation of single-tree detection indicated a good model performance. A qualitative sensitivity analysis indicated that the most relevant parameters for accurate modelling of shallow-landslide probability are the soil thickness, soil cohesion and the precipitation intensity / transmissivity ratio. Furthermore, we show that the inclusion of single-tree detection improves overall model performance
compared to assumptions of uniform vegetation. In conclusion, our study shows that the approach used in SlideforMAP can reproduce observed shallow-landslide occurrence at a catchment scale.
Received: 19 May 2021 – Discussion started: 25 May 2021 – Revised: 13 Jun 2022 – Accepted: 26 Jun 2022 – Published: 15 Aug 2022
Landslides pose serious threats to inhabited areas worldwide. They were the cause of 17% of the fatalities due to natural hazards in the period of 1994–2013 (Kjekstad and Highland, 2009). Average
annual monetary losses over the period of 2010–2019 were approximately USD 25 billion (Munich RE, 2018). In addition, Swiss Re Institute (2019) notes a significant increase in damage by
hydrologically related natural hazards over the past 5 years, including hydrologically triggered shallow landslides. This has been attributed to increased urbanization in risk-prone areas and to an
increase in heavy-rainfall events. Furthermore, Swiss Re Institute (2019) notes that the modelling of shallow landslides is underdeveloped compared to the severity of the danger they pose. In
mountainous regions, landsliding is a prominent natural hazard. For instance, in the Alpine parts of Switzerland, 74 people died as a result of landslide events between 1946 and 2015 (Badoux et al.,
2016). The annual cost of landslide protective measures alone is approximately CHF 15 million (Dorren and Sandri, 2009). No distinction is made between deep-seated and shallow landslides in
these numbers. Rain-induced shallow landslides are one of the most important and dangerous types of mass movement in mountainous regions (Varnes, 1978). Shallow landslides are defined as
translational mass movement with a maximum soil thickness of 2 m and are the main focus in this paper. Fortunately, improvements in hazard assessment have significantly decreased the number of
shallow-landslide-related deaths over the past decades (Badoux et al., 2016). This general trend is also supported by long-term data (Munich RE, 2018). The fatality decrease is related to better
organizational measures regarding hazards, such as warning-based evacuations and road closures. Biological measures, such as management of protection forests, also play a role in mitigation of
natural hazards. The latter role is especially important for (shallow) landslides, rockfall, snow avalanches and debris flows (Corominas et al., 2014).
Modelling of shallow-landslide triggering has been an ongoing process. Shallow-landslide probability has been modelled mostly using a deterministic approach (Corominas et al., 2014). The
deterministic approach is defined by using average values of risk components, resulting in a univariate result (Corominas et al., 2014). An example of a deterministic approach in this sense is the
SHALSTAB model of Dietrich and Montgomery (1998). Other contemporary examples are TRIGRS (Baum et al., 2002) and SLIP (Montrasio et al., 2011), the latter showing good results in assessing soil
saturation in a spatially heterogeneous way. In a comparative piece of research it was noted that the SHALSTAB approach was not representative of the spatial variability in the parameters at a small
scale (Cervi et al., 2010). In recent decades, the development of probabilistic models and statistical methods has improved model performance for quantifying landslide probability and the
interpretation of its results (Corominas et al., 2014). In statistical methods (e.g. Baeza and Corominas, 2001), there is no explicit accounting for physical processes. Probabilistic methods could
take physical processes into account and additionally quantify the reliability of the results considering the probability distribution of values of one or more input parameters (Salvatici et al.,
2018). The output is a probability rather than a univariate result. A prime example of a probabilistic model is SINMAP (Pack et al., 1998). Generally, these models perform better than deterministic
ones (Park et al., 2013; Zhang et al., 2018), likely due to natural landslides having a mode of movement significantly controlled by internal inhomogeneities and discontinuities in the soil (Varnes,
1978). These control mechanisms are unpredictable at small scales, making it hard for deterministic models to identify exact locations of instabilities and adjust the heterogeneous parametrization
accordingly. Below we go into more detail on the initiation of shallow landslides.
Initiation of instability is a process that combines mechanical and hydrological processes on different spatial and temporal scales and can thereby be very localized, with successive movement
increasing the magnitude of the event (Varnes, 1978). In alpine environments, instabilities are typically triggered by rainfall, leading to soil wetting and ensuing increase in pore pressure, which
destabilizes the soil and can then initiate soil movement. An increase in pore pressure can build up in minutes to months following a rainfall event (Bordoni et al., 2015; Lehmann et al., 2013), and
rapid pore pressure changes are attributed to the macropore flow and slow pore pressure changes to the matrix water flow. The higher the horizontal hydraulic conductivity of the soil, the faster pore
pressure changes can develop (Iverson, 2000). The reaction of pore pressure to rainfall is variable and highly dependent on soil type. A key experimental study is the work of Bordoni et al. (2015) in
which in situ measurements were taken on a slope with clayey–sandy silt and clayey–silty sand soils that experienced a shallow landslide. It showed that intense rainfall and a rapid increase in pore
pressure were the triggering factors of the landslide. Over the duration of the measurements, comparable saturation degrees were reached during both prolonged-rainfall and intense-rainfall events.
Prolonged rainfall did not result in the pore pressure required to trigger a shallow landslide. Similar behaviour has been observed in an artificially triggered landslide in Switzerland (Askarinejad
et al., 2012; Lehmann et al., 2013; Askarinejad et al., 2018). In the first wetting phase (year 2008), homogeneously induced rainfall with a duration of 3 d, accumulated rainfall of 1700 mm and an intensity of 35 mm h^−1 induced a maximum pore water pressure of 2 kPa at 1.2 m soil depth, resulting in no landslide. In the second phase of the experiment (year 2009), the rainfall was heterogeneous, with a maximum intensity of 50 mm h^−1 in the upper part of the slope that induced an increase in pore water pressure of up to 5 kPa at 1.2 m soil depth, resulting in the triggering of a shallow landslide. The triggering was reached after 15 h with cumulative rainfall of 150 mm. In addition, a computational study by Li et al. (2013) showed that at a high rainfall intensity (80 mm h^−1), the pore water pressure at a depth of 1 m reached a constant value within 1 h. For a lower intensity of 20 mm h^−1, this took approximately 3 h. This shows that landslide triggering is related
to a fast build-up of pore water pressure proportional to rainfall intensity. The work of Wiekenkamp et al. (2016) suggests that preferential flow dominates the runoff in a heterogeneous catchment
during extreme precipitation events. Water can move downslope very rapidly through macropores (in experimental conditions) under both saturated and unsaturated conditions (Mosley, 1982). The role of
macropores can be important in a closed soil structure or in the presence of a shallow impermeable bedrock, where they control the soil hydrological behaviour. Further examples of the influence of
macropores on hillslope hydrology in various soil types are presented in the work of Weiler and Naef (2003) and Bodner et al. (2014). Additionally, Torres et al. (1998) demonstrate the strong role of
macropore in preferential flow paths for landslide triggering in an artificial-rain experiment in a loamy sandy soil. Montgomery et al. (2002) and Montgomery and Dietrich (2004) also underline the
importance of macropore flow but state that the vertical flow governs response time and build-up of pore pressure rather than the lateral flow in their study areas.
The mechanical aspect of shallow-landslide initiation usually results from local instabilities that could extend indefinitely in an infinite constant slope if the shear resistance is low (Varnes,
1978). In complex topography, however, the passive earth pressure at the bottom of the triggering zone reacts with a resisting force, contributing thereby to landslide stabilization (Schwarz et al.,
2015; Cislaghi et al., 2018). It is important to note here that the passive earth pressure is activated in a later phase of the triggering of a shallow landslide and should not be added to active
earth pressure or tensile forces acting along the upper half of the shallow landslide (Cohen and Schwarz, 2017).
Besides hydrology, slope and soil characteristics, vegetation plays a key role in landslide triggering (Salvatici et al., 2018; Corominas et al., 2014; Greenway, 1987; González-Ollauri and Mickovski
, 2014). The role of vegetation can be subdivided into hydrological and mechanical effects. Vegetation influences the effective soil moisture by interception, increased evapotranspiration and
increased infiltration (Greenway, 1987; Masi et al., 2021). Over the short timescale with intense rainfall, these hydrological effects are negligible but do play an important role in pre-event
disposition of slope instability (Feng et al., 2020). Among the mechanical effects, root reinforcement, mobilized during soil movement, is an essential component (Greenway, 1987; Schwarz et al., 2010
). It is a leading factor in the failure criterion for many vegetated slopes (Dazio et al., 2018). In modelling studies, the influence of root reinforcement on slope stability is often quantified as
an apparent added cohesion (Wu et al., 1978; Borga et al., 2002). This apparent cohesion in turn can be added in the limit equilibrium computation of a safety factor (SF). Using a Monte Carlo
approach to this method (Zhu et al., 2017), it was found that the SF can gain up to 37% stability when including vegetation root reinforcement. In another study in New Zealand, trees had an effect
on soil stability up to 11 m away from their position and had the ability to prevent 70% of instability events (Hawley and Dymond, 1988). Computational research furthermore shows that root
reinforcement by the larger roots is dominant over the smaller roots, even though they are far less numerous (Vergani et al., 2014). The planting pattern and management of the vegetation can have a
profound effect on root reinforcement and thus on slope stability (Sidle, 1992). Therefore a detailed approach to calculate the spatial distribution of root reinforcement is important for slope
stability calculations. Root reinforcement can be subdivided into two major components: basal root reinforcement and lateral root reinforcement. Basal root reinforcement is the anchoring of tree
roots through the sliding plane into the deeper soil. Lateral root reinforcement is the reinforcement from roots on the edges of the potential slide that stick into the soil outside of the potential
slide (Schwarz et al., 2010). In contrast, the mechanical influence of vegetation weight on slope stability is often considered negligible (Reinhold et al., 2009). In current shallow-landslide
probability modelling, whether deterministic or probabilistic, root reinforcement is generally modelled in a simplified way, for example by including homogeneous root reinforcement (Montgomery et al.
, 2000). These methods limit the evaluation of the effects of different forest spatial properties such as forest structure and the contribution of different root reinforcement mechanisms to slope
stabilization (Schwarz et al., 2012). In order to overcome this limitation, we develop a shallow-landslide probability model, named SlideforMAP. To ensure wide applicability, SlideforMAP is designed
for a regional scale. In concrete terms this means SlideforMAP should be applied to study areas of 1–1000 km^2. The main objectives of this work are to
• present the SlideforMAP model as a tool for shallow-landslide probability assessment,
• show a calibration of SlideforMAP through a performance indicator over three study areas with 78 field-recorded shallow-landslide events in Switzerland,
• analyse the expected improvement in the performance of SlideforMAP with a detailed inclusion of vegetation,
• provide a qualitative sensitivity analysis and identify the parameters that are of greatest influence on the slope stability.
Strong emphasis within the SlideforMAP framework and this paper is put on the quantification of root reinforcement on a regional scale. We will show the effect that an accurate, quantitative representation of root reinforcement has on slope stability over three study areas. Simplifications, lack of a temporal component and calibration constraints make it impossible to use SlideforMAP as
an exact forecast tool. The main application for SlideforMAP is as a tool to quantify the effects of vegetation planting, growth and/or management for land managers in relation to shallow landslides.
2.1Probabilistic modelling concept
SlideforMAP is a probabilistic model that generates a 2D raster of shallow-landslide probability (P[ls]). It is an extension of the approach of Schwarz et al. (2010, 2015). It generates a large
number of hypothetical landslides (HLs, singular – HL) within the limits of a pre-defined region of interest. These HLs are assumed to have an elliptic shape and are characterized by a mix of
deterministic and probabilistic parameters, from which the landslide stability is computed following the limit equilibrium approach (Sect. 2.2). The probabilistic parameters are the HL location, its
surface area and its soil cohesion, the internal friction angle, and soil thickness parameters (drawn from appropriate random distributions). The location and surface area are approached in a
probabilistic way to compute a spatial probability distribution. The soil parameters are probabilistic because we assume their variation is high and important in mountainous environments. The
deterministic parameters include several vegetation parameters and hydrological soil parameters. A key original feature of the approach stems from the fact that the vegetation parameters can be
derived from single-tree-scale information (Sect. 2.5). The number of generated landslides is high enough such that each point in a region of interest is overlain by multiple HLs from which a
relative P[ls] can be estimated by considering the ratio of unstable HLs. A general flow chart of SlideforMAP is given in Fig. 1. More details on the modules follow in the subsequent sections.
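The per-cell probability index of Sect. 2.1 can be sketched as follows (an illustrative stand-alone version, not the SlideforMAP code: the circular footprint, the landslide density and the stand-in stability rule are all placeholders for the full model of Sects. 2.2–2.4):

```python
import random

def landslide_probability_raster(nx, ny, cell=10.0, density=0.05, seed=1):
    """Per-cell shallow-landslide probability index (Sect. 2.1 sketch).

    Hypothetical landslides (HLs) are dropped at uniform random positions
    over the study area with a placeholder circular footprint; each HL is
    flagged unstable with a toy rule standing in for the SF < 1 test of
    Sect. 2.2. The index per raster cell is the ratio of unstable to total
    HLs overlapping that cell. All numbers are illustrative.
    """
    rng = random.Random(seed)
    n_hl = int(density * nx * ny)  # landslide density times area (in cells)
    total = [[0] * nx for _ in range(ny)]
    unstable = [[0] * nx for _ in range(ny)]
    for _ in range(n_hl):
        cx, cy = rng.uniform(0, nx * cell), rng.uniform(0, ny * cell)
        radius = rng.uniform(5.0, 30.0)   # placeholder HL footprint radius [m]
        is_unstable = rng.random() < 0.2  # stand-in for the SF < 1 criterion
        for j in range(ny):
            for i in range(nx):
                x, y = (i + 0.5) * cell, (j + 0.5) * cell
                if (x - cx) ** 2 + (y - cy) ** 2 <= radius ** 2:
                    total[j][i] += 1
                    unstable[j][i] += is_unstable
    return [[unstable[j][i] / total[j][i] if total[j][i] else 0.0
             for i in range(nx)] for j in range(ny)]
```

The density parameter plays the role of ρ[ls]: it must be high enough that every cell is overlain by several HLs, otherwise the ratio is a poor estimate.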
2.2Stability estimation
The estimate of the stability of each HL is calculated following the limit equilibrium approach (described well in the work of Day, 1997). In this method, a landslide is assumed to be stable if its
safety factor (SF) is greater than 1.0. The SF is computed as the ratio of the parallel to slope-stabilizing forces to the destabilizing ones:
$$\mathrm{SF}=\frac{F_{\mathrm{res}}}{F_{\mathrm{par}}},\tag{1}$$
where F[par] [N] is the force parallel to the slope, F[res] [N] is the maximum mobilized resistance force. The assumed forces that act upon a hypothetical landslide are schematically shown in Fig. 2.
As seen in Fig. 2, all landslides are assumed to be elliptical (Rickli and Graf, 2009) with a ratio between length and width of l[wr]=2. The forces assumed in SlideforMAP are typical of the second
stage of the activation phase: the displacement at which lateral root reinforcement is maximized under tension along the tension crack and at which passive earth pressure and lateral root compression
are assumed to not be fully mobilized (Cohen and Schwarz, 2017). The magnitude of the stabilization's effects under compression considerably changes depending on the stiffness of the landslide
material and the dimensions of the landslide. The quantification of those effects is still a challenge for slope stability calculation at large scales. In order to develop a conservative approach, we
neglect those effects in the stability calculations of SlideforMAP. The tension crack is assumed to span the entire upper half of the circumference of the HL and has an assumed length in the range of
0.01–0.1 m (Schwarz et al., 2015) depending on the root distribution. This behaviour of progressive shallow-landslide failure with a tension crack opening up in the upper half of a shallow landslide
is described in detail in Cohen et al. (2009) and Askarinejad et al. (2012). This is different from the assumptions taken in most landslide models involving root reinforcement (e.g. Montgomery et al.
, 2000; Schmidt et al., 2001) that assume lateral root reinforcement is activated at the same time along the entire landslide perimeter. Quantification of the forces in the safety factor calculation
follows the limit equilibrium assumptions. This method is outlined in Eqs. (2) to (5) below:
$$F_{\mathrm{par}}=g\,(m_{\mathrm{soil}}+m_{\mathrm{w}}+m_{\mathrm{veg}})\cdot\sin(s),\tag{2}$$
$$F_{\mathrm{res}}=\frac{c_{\mathrm{ls}}}{2}\cdot R_{\mathrm{lat}}+F_{\mathrm{res,bas}},\tag{3}$$
$$F_{\mathrm{res,bas}}=A_{\mathrm{ls}}\cdot C_{\mathrm{soil}}+A_{\mathrm{ls}}\cdot R_{\mathrm{bas}}+F_{\mathrm{per,eff}}\cdot\tan(\varphi),\tag{4}$$
$$F_{\mathrm{per,eff}}=g\cdot(m_{\mathrm{soil}}+m_{\mathrm{w}}+m_{\mathrm{veg}})\cdot\cos(s)-P_{\mathrm{water}}.\tag{5}$$
In these equations, m[soil] is the soil mass [kg], m[w] is the mass of the water [kg], m[veg] is the vegetation mass [kg], g is the gravitational acceleration, assumed to be 9.81 m s^−2, s is the slope [^∘], c[ls] is the circumference of the landslide [m], R[lat] is the lateral root reinforcement [N m^−1], F[res,bas] is the basal resisting force [N], A[ls] is the area of the landslide [m^2], C[soil] is the soil cohesion [Pa], R[bas] is the basal root reinforcement [Pa], F[per,eff] is the effective perpendicular resisting force [N], ϕ is the angle of internal friction [^∘] and P[water] is the water pressure [Pa].
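The safety-factor computation of Eqs. (1)–(5) can be sketched in a few lines of Python (a minimal illustration, not the SlideforMAP implementation; all parameter values in the example are placeholders):

```python
import math

G = 9.81  # gravitational acceleration [m s^-2]

def safety_factor(m_soil, m_w, m_veg, slope_rad, c_ls, r_lat,
                  a_ls, c_soil, r_bas, phi_rad, p_water):
    """Limit equilibrium safety factor of Eqs. (1)-(5).

    Masses in kg, slope and friction angle in radians, c_ls in m,
    r_lat in N m^-1, a_ls in m^2, c_soil and r_bas in Pa; p_water
    enters as a force term exactly as written in Eq. (5).
    """
    f_par = G * (m_soil + m_w + m_veg) * math.sin(slope_rad)                  # Eq. (2)
    f_per_eff = G * (m_soil + m_w + m_veg) * math.cos(slope_rad) - p_water    # Eq. (5)
    f_res_bas = a_ls * c_soil + a_ls * r_bas + f_per_eff * math.tan(phi_rad)  # Eq. (4)
    f_res = 0.5 * c_ls * r_lat + f_res_bas                                    # Eq. (3)
    return f_res / f_par                                                      # Eq. (1)

# Example: a 100 m^2 HL on a 35 degree slope; SF > 1 means stable.
sf = safety_factor(m_soil=5.0e4, m_w=1.0e4, m_veg=2.0e3,
                   slope_rad=math.radians(35.0), c_ls=60.0, r_lat=3.0e3,
                   a_ls=100.0, c_soil=2.0e3, r_bas=500.0,
                   phi_rad=math.radians(30.0), p_water=1.0e4)
```

Removing soil cohesion and root reinforcement from this example drops SF below 1.0, which is the kind of comparison SlideforMAP uses to quantify the stabilizing effect of vegetation.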
2.3Placement and extent
The location of the centre of mass of the HLs is generated from two uniform distributions covering the latitudinal and longitudinal extent of the study area. HLs on the edge of the study area are
taken into account as well, though cut to the extent of the study area in the later spatial processes of SlideforMAP. The total number of HLs is determined by multiplying the landslide density
parameter (ρ[ls]) with the total surface area of the study area. This number of centre coordinates is then uniformly sampled with replacement from the latitudinal and longitudinal distributions. The value of ρ[ls] should
be high enough such that each raster cell of the study domain is covered by several HLs. The HL surface area is sampled from an inverse gamma distribution following the work of Malamud et al. (2004),
which showed that the probability distribution of shallow-landslide surface areas follows an inverse gamma distribution (Johnson and Kotz, 1970). The parametrization of a three-parameter inverse
gamma distribution is shown in Eq. (6) below.
$$P_{A_{\mathrm{ls}}}=\frac{1}{a\,\Gamma(\rho)}\left(\frac{a}{A_{\mathrm{ls}}-s}\right)^{\rho+1}\exp\!\left(-\frac{a}{A_{\mathrm{ls}}-s}\right),\tag{6}$$
where A[ls] is the area of the landslide; $P_{A_{\mathrm{ls}}}$ is the probability density of A[ls]; Γ is the gamma function; and a, ρ and s are parameters. These distributional parameters are estimated
using the landslide surface area data of the inventory (Sect. 3). The estimation is based on minimizing the root mean square error (RMSE) between the histogram counts (size of histogram bins is 10)
of the surface areas from the inventory and the distribution of Eq. (6). Users can follow this approach with an inventory or use a custom parametrization. The maximum HL surface area is set for all
case studies based on the maximum surface area observed in the landslide inventory. This maximum is set to 3000 m^2, based on the rounded-up maximum value of a well-distributed landslide inventory in
Switzerland (Sect. 3.3), but users can vary this parameter.
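Sampling HL surface areas from the three-parameter inverse gamma of Eq. (6) can be sketched as follows (an illustrative stand-alone version; the default a, ρ and s are approximately the values reported by Malamud et al. (2004), converted to m^2, and should be refitted to the local inventory as described above):

```python
import random

def sample_landslide_areas(n, a=1280.0, rho=1.4, s=-132.0,
                           max_area=3000.0, seed=0):
    """Draw landslide surface areas [m^2] from the three-parameter
    inverse-gamma distribution of Eq. (6).

    If G ~ Gamma(shape=rho, scale=1), then s + a/G follows the
    inverse-gamma density of Eq. (6), so sampling reduces to one gamma
    draw per area. Samples outside (0, max_area] are rejected, mirroring
    the 3000 m^2 cap used in SlideforMAP.
    """
    rng = random.Random(seed)
    areas = []
    while len(areas) < n:
        g = rng.gammavariate(rho, 1.0)  # shape rho, scale 1
        area = s + a / g
        if 0.0 < area <= max_area:
            areas.append(area)
    return areas
```

With these parameters the mode of the sampled areas sits near a/(ρ+1) + s ≈ 400 m^2, i.e. most HLs are small, with a heavy tail of larger events.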
2.4Soil parameters
Steeply sloped mountainous areas are prone to extreme and unpredictable heterogeneity in soil parameters (Cohen et al., 2009). This makes a heterogeneous deterministic parametrization inaccurate,
even if based on observations. To overcome this limitation, a probabilistic approach to the parametrization of soil parameters of the model is applied. Values of soil cohesion and the internal
friction angle of each HL are randomly generated from independent probability distributions. This is an approach similar to the one taken in Griffiths et al. (2009), who use the log-normal
distribution for soil cohesion only, and Pack et al. (1998), who use a uniform distribution for soil cohesion and the friction angle. We choose the log-normal distributions in our parametrization
because it has been shown to give a good fit (Fig. A1 with a comparison to a normal distribution in the Appendix; corresponding code in the Supplement), it ensures generating positive values only and
its accuracy has been shown in Griffiths et al. (2009). The distribution is parametrized by the mean and the standard deviation of observed samples. The mean and the standard deviation are based on
different information such as field soil classification or a geotechnical analysis. The soil cohesion in our computations is assumed to be representative of saturated, drained and unconsolidated
conditions. Soil thickness is parametrized following a different approach to account for the shallow soils found on steep slopes. An initial soil thickness (h[soil]) is derived from a log-normal
distribution. This is then multiplied by a correction factor which is a function of slope inclination as shown in Eq. (7). Soil thickness is defined here perpendicularly to the slope as opposed to
soil depth, which is measured in the vertical direction.
$$H_{\mathrm{soil}}=h_{\mathrm{soil}}\left(1-P_{\mathcal{N}}(S\le s\mid\mu_{1},\sigma_{1})\right),\tag{7}$$
where H[soil] [m] is the soil thickness and s is the observed slope, extracted for the HL. $P_{\mathcal{N}}(S\le s\mid\mu_{1},\sigma_{1})$ is the cumulative normal distribution of the slope S, with $\mu_{1}=a\cdot m_{h}$ and $\sigma_{1}=b\cdot\sigma_{h}$. m[h] and σ[h] are the mean and standard deviation of the slope angle of shallow landslides from an inventory or a best guess. a and b are estimated by fitting data from a landslide inventory containing the slope angle and soil thickness. Relations other than those used by SlideforMAP to correct the soil thickness to the slope (e.g. Prancevic et al., 2020) are possible as well.
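The slope correction of Eq. (7) can be sketched as follows (a stand-alone illustration; the values of m_h, σ_h, a and b are placeholders, not calibrated parameters):

```python
import math

def normal_cdf(x: float, mu: float, sigma: float) -> float:
    """Cumulative normal distribution via the error function."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def corrected_soil_thickness(h_soil: float, slope_deg: float,
                             m_h: float = 35.0, sigma_h: float = 6.0,
                             a: float = 1.0, b: float = 1.0) -> float:
    """Slope-corrected soil thickness, Eq. (7).

    h_soil is a draw from the log-normal thickness distribution; the
    correction factor 1 - P_N(S <= s | a*m_h, b*sigma_h) thins the soil
    on steeper slopes, reflecting the shallow soils found there.
    """
    mu1 = a * m_h
    sigma1 = b * sigma_h
    return h_soil * (1.0 - normal_cdf(slope_deg, mu1, sigma1))
```

At a slope equal to μ1 the correction factor is exactly 0.5, and it decreases monotonically towards zero on steeper slopes.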
2.5Mechanical effects of vegetation
Three properties of vegetation are included in the model. These are vegetation weight, lateral root reinforcement and basal root reinforcement. SlideforMAP only incorporates trees and ignores
possible effects by shrubs, grasses and other vegetation. This choice is due to the fact that trees are predominant in influencing slope stability (Greenway, 1987). Single-tree detection (Korpela
et al., 2007; Menk et al., 2017) serves as a basis to estimate these properties. Single-tree position and dimensions are derived from a canopy height model (CHM), which is the difference between the
digital surface model (DSM) and the digital elevation model (DEM), using a local maxima detection (LMD) method described in the work of Eysn et al. (2015) and Menk et al. (2017). First, the trees are
rasterized. The resolution of this raster has to exceed the effective radial dimension of the trees in order to calculate representative vegetation parameter values at the stand scale. The weight of
the tree is calculated by using the tree height and the diameter at breast height (DBH), assuming that the trees are cone shaped. The tree mass, m[veg], used in Eqs. (2) and (5), is calculated
assuming a mean tree density (ρ[tree]) of 850kgm^−3. Root reinforcement is added in the model using the method proposed by Schwarz et al. (2012), which relates the root reinforcement to the
distance to a tree, the size of the tree and the tree species. Two rasters are computed. A raster with the shortest distance to a tree (D[trees]) and a raster with the average DBH of all trees within
an assumed maximum distance of root influence (D[trees,max]), set at 15m. We compute actual lateral root reinforcement for a given grid cell as a function of maximum lateral root reinforcement and
soil thickness, which reduces maximum lateral root reinforcement. The maximum lateral root reinforcement, RR[max] [Nm^−1], is computed as a function of D[trees] and DBH (Moos et al., 2016; Gehring
et al., 2019) according to Eq. (8) below:
$\begin{array}{}\text{(8)}& R{R}_{\mathrm{max}}=\left(c\cdot \mathrm{DBH}\right)\cdot {\mathrm{\Gamma }}_{\mathrm{PDF}}\left(\frac{{D}_{\mathrm{trees}}}{\mathrm{DBH}\cdot \mathrm{18.5}}|{\mathit{\alpha }}_{\mathrm{1}},{\mathit{\beta }}_{\mathrm{1}}\right).\end{array}$
In Eq. (8), c is a fitting parameter in Nm^−2 based on the work of Schwarz et al. (2010). DBH is in metres [m]. ${\mathrm{\Gamma }}_{\mathrm{PDF}}\left(x|{\mathit{\alpha }}_{\mathrm{1}},{\mathit{\beta }}_{\mathrm{1}}\right)$ is the gamma probability density function (Γ[PDF]) evaluated as a function of x with shape parameter α[1] and rate parameter β[1]. Both α[1] and β[1] are dimensionless.
The parameters should ideally reflect any knowledge about how root reinforcement decreases with distance for specific tree species. The general Γ[PDF] is written as
$\begin{array}{}\text{(9)}& {\mathrm{\Gamma }}_{\mathrm{PDF}}\left(x|\mathit{\alpha },\mathit{\sigma }\right)=\frac{{x}^{\mathit{\alpha }-\mathrm{1}}{e}^{-x/\mathit{\sigma }}}{{\mathit{\sigma }}^{\mathit{\alpha }}\mathrm{\Gamma }\left(\mathit{\alpha }\right)},\left(x,\mathit{\alpha },\mathit{\sigma }>\mathrm{0}\right).\end{array}$
In this equation α and σ are the shape and scale parameters. The rate parameter, β, as used in this research, is defined as 1/scale. Soil thickness reduces the effects of lateral root
reinforcement that contribute to stabilizing a shallow landslide. This decrease in lateral root reinforcement with soil thickness is obtained as follows:
$\begin{array}{}\text{(10)}& {R}_{\mathrm{lat}}=R{R}_{\mathrm{max}}\cdot \underset{\mathrm{0}}{\overset{{H}_{\mathrm{soil}}}{\int }}{\mathrm{\Gamma }}_{\mathrm{PDF}}\left(H|{\mathit{\alpha }}_{\mathrm{2}},{\mathit{\beta }}_{\mathrm{2}}\right)\mathrm{d}H.\end{array}$
In this equation ${\mathrm{\Gamma }}_{\mathrm{PDF}}\left(H|{\mathit{\alpha }}_{\mathrm{2}},{\mathit{\beta }}_{\mathrm{2}}\right)$ is the Γ[PDF] for the normalized root distribution over the soil
thickness with shape parameter α[2] and rate parameter β[2]. β[2] has the unit of inverse metres [m^−1] in order to make the integral of the Γ[PDF] dimensionless. SlideforMAP computes this integral by numerical
approximation. This method computes the root reinforcement where only one tree can influence a cell. A spatially representative minimum root reinforcement value is calculated in a stand assuming a
triangular lattice (Giadrossich et al., 2020). Under this assumption, three root systems interact additively. Basal root reinforcement, R[bas], is assumed to be proportional to lateral root
reinforcement and dependent on soil thickness according to the relation shown in Eq. (11):
$\begin{array}{}\text{(11)}& {R}_{\mathrm{bas}}=R{R}_{\mathrm{max}}\cdot {\mathrm{\Gamma }}_{\mathrm{PDF}}\left({H}_{\mathrm{soil}}|{\mathit{\alpha }}_{\mathrm{2}},{\mathit{\beta }}_{\mathrm{2}}\right),\end{array}$
where ${\mathrm{\Gamma }}_{\mathrm{PDF}}\left({H}_{\mathrm{soil}}|{\mathit{\alpha }}_{\mathrm{2}},{\mathit{\beta }}_{\mathrm{2}}\right)$ is the normalized root distribution in the vertical direction.
The Γ[PDF] in this application has the unit of inverse metres [m^−1], which leads to a unit of pascals [Pa] for the term R[bas], under the assumption of isotropic conditions.
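Equations (8)–(11) can be illustrated with a short Python sketch. The parameter defaults below are the calibrated values of Gehring et al. (2019) quoted in Sect. 3.2; the trapezoidal integration mirrors the numerical approximation of Eq. (10) mentioned above, and the example tree (DBH and distance) is a hypothetical input.

```python
import math

def gamma_pdf(x, alpha, beta):
    """Gamma probability density (Eq. 9) with shape alpha and rate beta
    (rate = 1/scale, as defined in the text)."""
    if x <= 0.0:
        return 0.0
    return beta ** alpha * x ** (alpha - 1.0) * math.exp(-beta * x) / math.gamma(alpha)

def rr_max(d_trees, dbh, c=25068.54, alpha1=0.862, beta1=3.225):
    """Maximum lateral root reinforcement RR_max [N m^-1] (Eq. 8) for a tree
    at distance d_trees [m] with diameter at breast height dbh [m]."""
    x = d_trees / (dbh * 18.5)
    return (c * dbh) * gamma_pdf(x, alpha1, beta1)

def gamma_cdf(x, alpha, beta, n=2000):
    """Gamma CDF by trapezoidal rule, mirroring the numerical integration
    used for Eq. (10). Only valid here for alpha > 1 (pdf -> 0 at x = 0)."""
    if x <= 0.0:
        return 0.0
    h = x / n
    s = 0.5 * (gamma_pdf(1e-12, alpha, beta) + gamma_pdf(x, alpha, beta))
    s += sum(gamma_pdf(i * h, alpha, beta) for i in range(1, n))
    return s * h

def r_lat(h_soil, rrmax, alpha2=1.284, beta2=3.688):
    """Lateral root reinforcement (Eq. 10): RR_max scaled by the root
    fraction contained in a soil column of thickness h_soil [m]."""
    return rrmax * gamma_cdf(h_soil, alpha2, beta2)

def r_bas(h_soil, rrmax, alpha2=1.284, beta2=3.688):
    """Basal root reinforcement [Pa] (Eq. 11), assuming isotropy."""
    return rrmax * gamma_pdf(h_soil, alpha2, beta2)

# hypothetical example: a tree of DBH 0.3 m, 3 m away, soil 1.0 m thick
rrmax = rr_max(3.0, 0.3)
lateral, basal = r_lat(1.0, rrmax), r_bas(1.0, rrmax)
```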
2.6 Hydrology
The hydrological module in SlideforMAP is based on the TOPOG model (O'Loughlin, 1986), which uses a topographic index inspired by Kirkby (1975). In this framework we specifically
assume macropore flow dominates hillslope hydrology. An identical model is used in the SHALSTAB stability model (Montgomery and Dietrich, 1994) and SINMAP (Pack et al., 1998). It is assumed that the
saturated soil fraction of each cell is a function of its specific catchment area, its slope angle, a constant precipitation intensity and the soil transmissivity (Eq. 12). This is in close
correspondence with the parametrization used in the widely applied TOPMODEL (Beven and Kirkby, 1979). Limitations of this approach are the assumptions of uniform soil transmissivity, steady-state flow and soil moisture patterns governed by lateral flow, as well as the neglect of initial conditions. These limitations and generalizations prevent the model from capturing detailed hydrological patterns, especially in the mountainous regions modelled by SlideforMAP. Despite this, we assume the approach to be suitable for reproducing the general pattern of the saturated fraction and the resulting pore pressure.
In addition, we ignore the apparent hydrological cohesion (Chae et al., 2017), which is prominent in unsaturated fine-grained and clayey soils but plays a minor role in other conditions (Montrasio and Valentino, 2008). The saturated soil fraction, ${h}_{\mathrm{sat}}^{*}$ [−], of a soil column is defined in Eq. (12) below:
$\begin{array}{}\text{(12)}& {h}_{\mathrm{sat}}^{*}=\frac{I\cdot a}{T\cdot b\cdot \mathrm{sin}\left(s\right)},\end{array}$
where I [ms^−1] is the constant precipitation intensity, T [m^2s^−1] is the transmissivity, a is the contributing catchment area [m^2], s is the slope inclination [^∘], and b is the contour length
[m] that in our model corresponds to the cell size (see Sect. 3.2 for details on its computation). We assume dominant macropore flow, which has the ability to quickly drain a catchment and
potentially reach a state of stationary flow. Using this estimated ${h}_{\mathrm{sat}}^{*}$, pore water pressure is computed as
$\begin{array}{}\text{(13)}& {P}_{\mathrm{water}}={H}_{\mathrm{soil}}\cdot \mathrm{cos}\left(s\right)\cdot {h}_{\mathrm{sat}}^{*}\cdot g\cdot {\mathit{\rho }}_{\mathrm{water}},\end{array}$
where P[water] [Pa] is the pore water pressure (used in Eq. 5), H[soil] [m] is the soil thickness, s is the slope angle, g=9.81ms^−2 is the gravitational acceleration and ρ[water] is the density of
water assumed equal to 998kgm^−3. The same value for water density is used in the computation of the water mass in the HL.
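Equations (12) and (13) can be combined in a small sketch. The precipitation, transmissivity, catchment area and cell size in the example are hypothetical inputs; capping the saturated fraction at 1 is our addition for the sketch, since the text does not state how supersaturation is handled.

```python
import math

RHO_WATER = 998.0   # [kg m^-3], as assumed in the model
G = 9.81            # [m s^-2], gravitational acceleration

def saturated_fraction(i_precip, trans, area, contour_len, slope_deg):
    """Saturated soil fraction h*_sat (Eq. 12).

    i_precip    [m s^-1]   constant precipitation intensity I
    trans       [m^2 s^-1] soil transmissivity T
    area        [m^2]      contributing catchment area a
    contour_len [m]        contour length b (the cell size in SlideforMAP)
    slope_deg   [deg]      slope inclination s
    """
    h_star = (i_precip * area) / (trans * contour_len
                                  * math.sin(math.radians(slope_deg)))
    return min(h_star, 1.0)  # cap at full saturation (our assumption)

def pore_pressure(h_soil, slope_deg, h_star):
    """Pore water pressure P_water [Pa] at the base of the soil (Eq. 13)."""
    return (h_soil * math.cos(math.radians(slope_deg))
            * h_star * G * RHO_WATER)

# hypothetical example: 1.2 mm h^-1 rain, 30 deg cell, 500 m^2 upslope area
i = 1.2e-3 / 3600.0  # [m s^-1]
h_star = saturated_fraction(i, trans=1e-3, area=500.0,
                            contour_len=2.0, slope_deg=30.0)
p = pore_pressure(1.0, 30.0, h_star)
```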
2.7 Model initialization
The model has a total of 3 probabilistic parameters and 15 deterministic parameters (Table 1). The deterministic parameters as well as the distributional parameters for the probabilistic parameters
are determined from in situ data or from the literature (Sect. 3). In a first step of the workflow for the application of SlideforMAP, after assigning the deterministic parameter values and sampling
a value for each probabilistic parameter, a minimum value of soil cohesion is computed for each HL to obtain stable conditions (safety factor SF≥1.0) under a uniform precipitation intensity of
28.3mmd^−1 or 1.2mmh^−1. This threshold of precipitation intensity is chosen according to Leonarduzzi et al. (2017), who statistically analysed over 2000 landslides in Switzerland over the period
1972–2012 and found this as a triggering threshold. The minimum value of soil cohesion is obtained by equating F[par] (Eq. 2) and F[res] (Eq. 3). If the minimum value of soil cohesion is larger than
the sampled soil cohesion, the soil cohesion is updated to the minimum value. This procedure can be altered by users when another threshold or no threshold at all applies.
2.8 Landslide probability computation
After model initialization, the SF (Eq. 1) is computed for each of the generated HLs. Based on the SF for all generated HLs, landslide probability per raster cell (with the resolution of the original
DEM), P[ls], is computed as
$\begin{array}{}\text{(14)}& {P}_{\mathrm{ls}}=\frac{{n}_{\mathrm{us}}}{{n}_{\mathrm{HL}}},\end{array}$
where n[us] is the number of unstable HLs, i.e. of HLs with SF<1.0, and n[HL] is the total number of generated HLs (the HLs are overlapping). Both are per raster cell. Finally, this results in a
raster of shallow-landslide probability at a resolution of the input DEM.
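The per-cell aggregation of Eq. (14) can be sketched as follows. The data layout (one array of flat raster-cell indices per HL, plus its safety factor) is a hypothetical choice for illustration, not the layout used by SlideforMAP.

```python
import numpy as np

def landslide_probability(cells, sf_values, shape):
    """Landslide probability per raster cell (Eq. 14): the fraction of
    overlapping hypothetical landslides (HLs) with SF < 1.0.

    cells     : list of index arrays, one per HL, giving the flat raster
                cells covered by that HL (hypothetical layout)
    sf_values : safety factor of each HL
    shape     : (rows, cols) of the output raster
    """
    n_hl = np.zeros(shape[0] * shape[1])   # total HLs touching each cell
    n_us = np.zeros_like(n_hl)             # unstable HLs (SF < 1.0)
    for idx, sf in zip(cells, sf_values):
        n_hl[idx] += 1
        if sf < 1.0:
            n_us[idx] += 1
    # cells never touched by an HL get probability 0
    with np.errstate(invalid="ignore", divide="ignore"):
        p_ls = np.where(n_hl > 0, n_us / n_hl, 0.0)
    return p_ls.reshape(shape)
```

For instance, a cell covered by two HLs of which one is unstable receives P[ls] = 0.5.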
3.1 Study areas
Three study areas were chosen to test SlideforMAP based on the availability of elevation data and detailed records of historical shallow-landslide events (Fig. 3), each varying in size and location
to test the robustness and the general applicability of the model.
The geological formations in the Eriz study area vary with Oligocene freshwater molasse in the lower northern part, morainic material in the central part and Cretaceous limestone in the highest
parts. Forests are dominated by spruce (Picea abies), except for the lower regions where broad-leaved trees are dominant. In the Trub study area, the dominant geological formation is Miocene marine
molasse and forests are dominated by spruce. In the St. Antönien (from here abbreviated to “StA”) study area, the dominant geological formation is flysch (Prättigauer flysch), partially covered by
till (Moos et al., 2016). The forest in this study area is also dominated by spruce (Moos et al., 2016). Further characteristics of the study areas are given in Table 2.
3.2 Input data
To compute P[ls] for each study area, the following data are required:

1. a digital surface model (DSM) and a digital elevation model (DEM);
2. average and standard deviation values for soil cohesion, soil thickness and the friction angle;
3. a representative landslide inventory containing at least
   a. the average landslide soil thickness and
   b. the landslide surface area.
In addition to the DEM, the DSM is applied in the vegetation module of SlideforMAP. The DEM and the DSM are both acquired from the swissALTI3D database (Swisstopo, 2018), which makes use of aerial
laser scanning (ALS). Both the DSM and the DEM are available at a resolution of 0.5m. As an alternative to the use of a landslide inventory and the DSM for single-tree identification, users can also
use synthesized values for the parameters derived from these data. After pit filling, the DEM is used to compute a slope map following the method of Zevenbergen and Thorne (1987). The topographic
wetness index θ shown in Fig. 4 is computed on a raster cell basis from the 2m DEM using Eq. (15):
$\begin{array}{}\text{(15)}& \mathit{\theta }=\frac{a}{b\cdot \mathrm{sin}\left(s\right)},\end{array}$
where a is the specific upslope catchment area, b is the contour length and s is the slope angle. To avoid numerical problems for elongated catchments, θ is computed using a 2km buffer around the
catchment. The large buffer size is chosen arbitrarily but can be reduced by other users. The standard D8 method is applied for the computation of the upslope catchment area from the DEM (O'Callaghan
and Mark, 1984). For single-tree detection, the FINT algorithm (Menk et al., 2017) is used. Since the results of such detection methods are strongly influenced by the resolution and smoothness of the
input data (Eysn et al., 2015), we applied the LMD method to the canopy height model (CHM). This canopy height model is computed by subtracting the DEM from the DSM and is resampled to a resolution
of 1, 1.5 and 2m. In addition, three different Gaussian filters were applied to the 1m resolution CHM. These three filters have a radius of 3, 5 and 7 cells and a standard deviation of 2m. To
identify the input data that lead to LMD results with the highest accuracy, we evaluated the identified trees in three randomly selected forest inventory plots with an area of 20m×20m for each
study site. In these plots, we visually identified all recognizable tree crowns, on the basis of aerial photos (Swisstopo, 2017) and the CHM. The identified trees were then compared to the LMD
result, using the difference in the number of detected trees. The input data leading to the most accurate results in all three study sites were the 1m resolution CHM with a Gaussian filter of a
3-cell radius and with the fixed standard deviation of 2m. This combination has been applied to the entire area of the three study sites. To estimate the DBH from the tree heights of all detected
trees, the following empirical equation (Dorren, 2017) was used:
$\begin{array}{}\text{(16)}& {\mathrm{DBH}}_{\mathrm{tree}}=\frac{{H}_{\mathrm{tree}}^{\mathrm{1.25}}}{\mathrm{100}},\end{array}$
where DBH[tree] [m] is the diameter at breast height of a given tree and H[tree] [m] its height. Details resulting from the LMD method for the three study areas are shown in Table 3.
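Equation (16) and the cone-shaped tree mass of Sect. 2.5 can be sketched together. Note that the assumption that the cone's basal diameter equals the DBH is ours; the text only states that trees are treated as cones with a density of 850 kg m^−3.

```python
import math

def dbh_from_height(h_tree):
    """Empirical DBH [m] from tree height [m] (Eq. 16, Dorren, 2017)."""
    return h_tree ** 1.25 / 100.0

def tree_mass(h_tree, rho_tree=850.0):
    """Tree mass m_veg [kg], treating the tree as a cone of height h_tree
    and density rho_tree (Sect. 2.5). Using the DBH as the cone's basal
    diameter is our assumption for this sketch."""
    dbh = dbh_from_height(h_tree)
    return rho_tree * math.pi * (dbh / 2.0) ** 2 * h_tree / 3.0

# a 20 m tree: DBH of roughly 0.42 m and a mass on the order of 800 kg
dbh_20 = dbh_from_height(20.0)
mass_20 = tree_mass(20.0)
```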
The lateral root reinforcement and the basal root reinforcement (Eqs. 10 and 11) are parametrized using the values from Gehring et al. (2019) (α[1]=0.862, β[1]=3.225, c=25068.54, α[2]=1.284, β[2]=
3.688). In their work, the calibration was performed on beech (Fagus sylvatica) stands over varying elevations. Our study areas, however, are predominantly vegetated by spruce trees. Therefore a
discrepancy in the estimated root reinforcement will likely arise. Unfortunately, this is the only published set of calibrated values.
3.3 Landslide inventory
A landslide inventory is required to quantify a distribution for slope, surface area and soil thickness for the HLs. This inventory does not necessarily have to be well distributed in the study area
or even be present in the area. However, it should be representative of the conditions in the area of interest as much as possible. A dataset of 668 shallow landslides that occurred between 1997 and
2012 in Switzerland has been created by the Swiss Federal Office for the Environment (BAFU; Rickli et al., 2019). Statistical information on the landslides can be seen in Fig. 4. We assume the
properties in this inventory to be representative of shallow landslides in Switzerland. All landslides are triggered by rainfall, and the majority of the landslides are shallower than 1.5m (Fig. 4).
The landslides in the StA and Trub area took place in 2005 during or shortly after heavy rainfall in August. The landslides in the Eriz area from 2012 are related to heavy rainfall in July. Exact
precipitation amounts and intensities are unknown. The data are formatted with centre points and the surface area of the shallow-landslide initiation area. In our analysis we assume the areas have an
elliptical shape.
The inventory is used to estimate the parameters for the surface area distribution used in SlideforMAP (Eq. 6), via minimization of the RMSE between observed frequencies and theoretical frequencies.
The estimated values of the parameters are as follows: a=1.40, $\mathit{\rho }=\mathrm{1.5}\times {\mathrm{10}}^{-\mathrm{4}}$m^2 and $s=\mathrm{4.28}\times {\mathrm{10}}^{-\mathrm{8}}$m^2. In addition, the inventory is used to
calibrate the a and b parameters for the soil thickness correction factor as used in Eq. (7). For the fitting (Appendix, Fig. A2) of the correction factor we use classes of inclination of 2.5^∘ and
the soil thickness values corresponding to the 95th percentile. This best fit for Eq. (7) was obtained with the values of a=1.47 and b=0.50.
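The RMSE minimization used for the surface-area distribution can be sketched generically. Since Eq. (6) itself is defined outside this excerpt, the `pdf` argument below is a placeholder for it; the exponential distribution in the example is purely a toy stand-in, not the distribution used by SlideforMAP.

```python
import math

def rmse(obs_freq, model_freq):
    """Root-mean-square error between observed and theoretical frequencies."""
    return math.sqrt(sum((o - m) ** 2 for o, m in zip(obs_freq, model_freq))
                     / len(obs_freq))

def fit_by_rmse(bin_centres, obs_freq, pdf, param_grid):
    """Brute-force RMSE minimization over a grid of candidate parameter
    tuples. `pdf(x, *params)` stands in for the surface-area distribution
    of Eq. (6), which is defined outside this excerpt."""
    best, best_err = None, float("inf")
    for params in param_grid:
        err = rmse(obs_freq, [pdf(x, *params) for x in bin_centres])
        if err < best_err:
            best, best_err = params, err
    return best, best_err

# toy example: recover the rate of an exponential pdf (hypothetical data)
pdf = lambda x, lam: lam * math.exp(-lam * x)
bins = [0.5, 1.5, 2.5]
obs = [pdf(x, 1.0) for x in bins]
best, err = fit_by_rmse(bins, obs, pdf, [(0.5,), (1.0,), (2.0,)])
```

In practice a continuous optimizer would replace the grid, but the objective (observed minus theoretical frequencies) is the same.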
3.4 Model calibration and sensitivity analysis
Of the model's 21 parameters, those derived from observed data, taken from the literature or set to default values are not further varied in the model behaviour analysis due to their assumed low variance (Table 1). These parameters are I[min], l[wr], c, α[1], β[1], α[2], β[2], D[trees,max], ρ[tree] and ρ[water]. The remaining 12 parameters, which can potentially influence the landslide probability given their variation as observed in nature, are calibrated by Monte Carlo simulation, drawing a high number
of parameter samples for all calibration parameters and evaluating the corresponding model performance based on the area under the curve (AUC) method (Metz, 1978; Fawcett, 2006). We hereafter first
present the performance evaluation method used, followed by the parameter sampling method used for the calibration as well as for the sensitivity analysis. In addition, we present four vegetation
parameter scenarios that are developed to test the potential influence of vegetation. Due to the limited size of the landslide inventory, we do not include an independent validation of SlideforMAP.
3.4.1 Model performance evaluation
The basis of the application of the AUC method is a spatial representation of the landslide inventory in a Boolean raster (0 is no past landslide present, 1 is past landslide present). For each
randomly generated parameter set, the simulated P[ls] (Sect. 2.8) is also converted to a Boolean raster, by selecting a threshold to assign 0 or 1. Overlaying the inventory raster onto the modelled
raster results in a confusion matrix with four possible combinations, as shown in Table 4.
A so-called receiver operator curve (ROC) can be obtained by computing the values of the confusion matrix for all unique values in the simulated raster as threshold values and for each plotting the
sensitivity, TP/(TP+FN), against the specificity, TN/(TN+FP). The area under the ROC is the AUC and defines the accuracy of the model on a scale of 0.5–1.0, where 0.5 is no better than a
random guess and 1.0 is a perfect prediction.
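The AUC described above can be sketched compactly. Instead of sweeping all thresholds explicitly, the sketch uses the rank-statistic (Mann–Whitney U) equivalence, which yields the same area as integrating the thresholded ROC; the input rasters in the example are hypothetical.

```python
import numpy as np

def auc(p_ls, observed):
    """AUC of the ROC obtained by thresholding the simulated landslide
    probability raster p_ls against the Boolean inventory raster
    (Sect. 3.4.1). Computed via the Mann-Whitney rank equivalence: the
    probability that a randomly chosen landslide cell has a higher
    simulated P_ls than a randomly chosen non-landslide cell.
    """
    p = np.asarray(p_ls, dtype=float).ravel()
    y = np.asarray(observed).ravel().astype(bool)
    pos, neg = p[y], p[~y]
    # pairwise comparison; O(n_pos * n_neg), fine for a sketch
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

# perfect separation gives 1.0, an uninformative raster gives 0.5
auc_perfect = auc([0.9, 0.8, 0.1, 0.2], [1, 1, 0, 0])
auc_random = auc([0.5, 0.5, 0.5, 0.5], [1, 1, 0, 0])
```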
3.4.2 Parameter sampling and qualitative sensitivity
The parameter samples for the Monte Carlo-based model calibration and the subsequent sensitivity analysis are generated using the Latin hypercube sampling (LHS) technique (McKay et al., 1979). This
makes use of semi-random samples of variables over pre-defined ranges. The outcome of a Monte Carlo-based calibration is highly influenced by the ranges chosen for the parameters. For this reason,
parameter ranges were chosen as realistically as possible. To estimate the parameter ranges for soil properties, soil types in USCS (Unified Soil Classification System) classes are taken from the
shallow-landslide inventory (a total of 377 had their soil type listed). Soil types present more than 10 times are taken into account and aggregated into a hybrid table of soil cohesion and angle of
internal friction values per soil type based on the values given in the work of Dysli and Rybisar (1992) and VSS-Kommission (1998) (see Appendix, Table A1). In order to obtain a realistic range for
the soil cohesion, first the mean soil cohesion (weighted on USCS soil type occurrence) is computed and then the weighted standard deviation is subtracted and added twice to the weighted mean. This
is to account for 95% of the variation in the observed soil cohesion (assuming a normal distribution). The same procedure is performed for the angle of internal friction. The range of transmissivity
values is obtained by taking the saturated hydraulic conductivity from the work of Freeze and Cherry (1979) for the respective soil classes and by multiplying these saturated hydraulic conductivities
with the minimum and maximum soil thickness of the soil class. From the resulting list of possible transmissivity values per soil class, the minimum and maximum are taken for the LHS range. For the
precipitation intensity, four depth duration values are defined. These correspond to a duration of 1 and 24h with subsequent return periods of 10 and 100 years. The duration of 1 to 24h is in line
with the SlideforMAP assumption of quick macropore-flow-dominated lateral groundwater flow. The return periods of 10 and 100 years were chosen arbitrarily in line with forest management timescales.
Precipitation intensities are computed using data from the work of Jensen et al. (1997) and the methodology as described in the work of HADES (2020). An overview of the intensity–return period
rainfall values is given in Table 5.
The R script implementing the sampling methodology and a description are included in the Supplement. The minimum and maximum values from Table 5 are used as the range in the sensitivity analysis
(Table 6). The maximum value for vegetation weight is taken from a biomass study in Switzerland by Price et al. (2017). For the other parameters, realistic ranges have been assumed. In Table 6 an
overview is given of the tested parameters and the ranges used to generate the parameter samples. The precipitation intensity and transmissivity together determine the saturation degree of the soil
(Eq. 12) and are therefore prone to equifinality. We grouped them as an additional parameter, the $I/T$ ratio.
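The paper's Supplement provides an R script for this sampling; the Python sketch below only illustrates the LHS technique itself (McKay et al., 1979). The parameter names and ranges in the example are hypothetical, not the calibration ranges of Table 6.

```python
import numpy as np

rng = np.random.default_rng(42)

def latin_hypercube(n_samples, ranges):
    """Latin hypercube sampling: each parameter range is split into
    n_samples equal-probability strata, one uniform draw is taken per
    stratum, and the strata are shuffled independently per parameter,
    so every marginal is evenly covered.

    ranges : dict {name: (low, high)}, e.g. the ranges of Table 6.
    """
    out = {}
    for name, (lo, hi) in ranges.items():
        # one point inside each of the n strata of [0, 1), then shuffle
        strata = (np.arange(n_samples) + rng.random(n_samples)) / n_samples
        rng.shuffle(strata)
        out[name] = lo + strata * (hi - lo)
    return out

# e.g. 1000 parameter sets over hypothetical ranges for mean cohesion [Pa]
# and transmissivity [m^2 s^-1]
sets = latin_hypercube(1000, {"m_C": (0.0, 10e3), "T": (1e-5, 1e-2)})
```

Because of the stratification, any equal-width partition of a parameter range receives the same number of samples, unlike plain Monte Carlo sampling.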
For the model calibration and qualitative sensitivity analysis, 1000 LHS parameter sets were generated per study area by drawing samples from the ranges in Table 6. The number 1000 was chosen
arbitrarily for computational constraints. The vegetation is set to a global uniform vegetation, which results in constant root reinforcement and vegetation weight in space. This is necessary because
the same runs are used for model calibration and for model sensitivity analysis, where we need such uniform vegetation to ensure that the sensitivity of the (hypothetical) vegetation has an effect on
all raster cells of the whole study area (and not only on the actually vegetated cells). The parameter set with the highest AUC value is retained for model calibration. In addition, all
1000 parameter sets are used for a qualitative sensitivity analysis. The response variables are the AUC as a measure for accuracy and the ratio of unstable landslides as a measure for instability.
The AUC is chosen for the sensitivity analysis as the main response variable since it expresses the performance relative to the independent landslide inventory. We then consider the AUC as a
generalized measure of parameter likelihood (Beven and Binley, 1992) and assess how the selected best parameter sets (e.g. the best 10% out of the 1000 sampled sets) are distributed (parameter subsampling).
3.4.3 Vegetation parameter scenario analysis
SlideforMAP has potential in testing the effect of different vegetation scenarios on the landslide probability. For this research, besides the reference scenario for model calibration and sensitivity
analysis (global uniform vegetation), three additional scenarios are tested: (i) without vegetation, (ii) with uniform vegetation in forested areas and (iii) with a fully diverse vegetation based on
single-tree detection. The single-tree version uses the input data as mentioned in Sect. 3.2. The forested areas are defined as areas where the single-tree-detection method leads to a lateral root
reinforcement (Fig. 9) which is not equal to zero.
4.1 Sensitivity analysis
We use the 1000 model simulations corresponding to the 1000 generated parameter sets per study area for a sensitivity analysis of the model. The objective of this analysis is to quantify how the
distribution of AUC values and of the landslide probability vary as a function of the parameters. Applying the parameter subsampling technique (see Sect. 3.4.2), we see that for some parameters, the
histogram shape (i.e. their marginal distribution) does not significantly deviate from the initial uniform distribution (from which we sampled), even if we retain only the best 10% (in terms of the
AUC) of all parameter sets (Fig. 5). This apparent lack of sensitivity does not necessarily mean that the model is not sensitive to this parameter; in fact, the sensitivity could be hidden by strong
parameter correlation (see Bárdossy, 2007, for a discussion of how uniform marginal distributions can result from strong parameter correlation). Our addition of the $I/T$ ratio gives a hint of such
behaviour. From Fig. 5 it appears that the sensitivity of the AUC to the $I/T$ ratio is slightly stronger than to either the precipitation or the transmissivity independently. Some parameters, in contrast, show very strong sensitivity of their marginal distributions if only the best (in terms of the AUC) parameter sets are retained. For the Trub case study (Fig. 5), we see that the mean
thickness m[d], the mean cohesion m[C], the $I/T$ ratio and the transmissivity show a well-defined maximum around the parameter values retained for calibration (the best-performing ones). This
suggests a good sensitivity of the model to these parameters in terms of model performance. Two of these parameters also show a clear sensitivity if we retain subsamples that lead to
successively higher unstable landslide ratio (Fig. 6): high unstable ratios are obtained for high m[d] values or for low m[C]. Also for RR[max], the highest ratios are clearly obtained for low
lateral root reinforcement values (for all three case studies, Figs. 6 and S1 and S4 in the Supplement). For transmissivity, while it shows a clear effect on model performance, the relation between
its marginal distribution and the ratio of unstable landslides is less visible.
4.2 Model calibration
Based on the generated 1000 parameter sets, we identified the parameter set that resulted in the highest AUC value and assumed this to be an optimal calibration of the model. These calibrated
parameter sets for each study area and their AUC values are shown in Table 7 together with the ratio of generated HLs that are unstable.
Parameter consistency between the study areas appears to be visible in ρ[soil], m[d], m[C], σ[C], m[ϕ], σ[ϕ], T and W[veg]. Other parameters show stronger variation, relative to their LHS range,
between case studies. A realization of the shallow-landslide probability computed with SlideforMAP for the three areas with their calibrated parameter set is given in Fig. 7.
In general, the model represents well the spatial distribution of the shallow landslides from the inventory. A cumulative plot of the shallow-landslide probability for the study areas based on Fig. 7
is given in Fig. 8.
4.3 Mechanical effects of vegetation
To test the impact of vegetation on the model behaviour, we compare the different vegetation scenarios. The spatial distribution of lateral root reinforcement, resulting from single-tree detection
and SlideforMAP, is given in Fig. 9.
The selected vegetation scenarios (no vegetation, global uniform vegetation, forest area uniform vegetation, single-tree detection) affect the computation of the vegetation weight, the lateral root
reinforcement and the basal root reinforcement. The latter is due to its dependence on lateral root reinforcement (Eq. 11). Accordingly, the vegetation scenario has a direct impact on the SF (Eqs. 1,
3, 4) and on P[ls] (Eq. 14). For the analysis, we use the optimal parameter set from Table 7, obtained for a global uniform vegetation cover. The model runs are repeated 10 times to produce an
average result and to show the variation from the probabilistic approach. Due to sampling from distributions, every realization produces a (slightly) different result. The resulting influence of the
selected vegetation scenarios on the AUC and on the ratio of unstable landslides is given in Table 8. The results from Tables 8 and 9 show that the model is sensitive to the vegetation scenarios and that it predicts lower unstable-landslide ratios for the vegetated scenarios than for the unvegetated scenario. This underlines the value of the model for future-scenario analyses.
ROCs corresponding to the scenarios with repetitions as presented in Table 8 are given in Fig. 10. Significance of the differences between vegetation scenarios from Table 8 is given in Table 9.
5 Discussion
It is important to point out that the inventory to which the model performance is calibrated plays a key role in all the results discussed below. The inventory was obtained after triggering rainfall
events, for which the precipitation intensity, duration and the spatial distribution are not known precisely. Despite this shortcoming, the inventory represents a unique source of information, and
the spatial localization of the landslides can be assumed to be of high quality. Below, we discuss the model behaviour as a function of the different model parameter groups and the performance of the
model and give directions for future research.
5.1 Soil parameters
The best-performing parameter sets show high values for the soil thickness for all study areas (by comparing the values of Tables 7 and 6). The qualitative sensitivity analysis (Fig. 6) also shows
that the highest unstable ratios are obtained for the highest soil thicknesses; this indicates that a certain minimum soil thickness is required for landslide triggering, which is in line with
previous findings by D'Odorico and Fagherazzi (2003) and by Iida (1999). In these studies, soil thickness is noted as the conditional factor for landslide triggering along with precipitation
intensity and duration. The best-performing parameter sets display cohesion values with a clear tendency towards low values for all three study areas (Figs. 5 and S1 and S3 in the Supplement), which
suggests that the observed landslides can only be reproduced with low soil cohesion for all case studies. The mean angle of internal friction appears to show consistency for a low value (Table 7).
The sensitivity of the AUC and unstable ratio to the angle of internal friction, however, appears to be small (Figs. 6 and 5).
5.2 Hydrological parameters
The AUC showed considerable sensitivity to soil transmissivity (Fig. 5), and the calibrated values are consistently high within the parameter range for all three case studies (Fig. 7), which hints that a
correct estimation of soil transmissivity is paramount for a reliable estimate of shallow-landslide occurrence. Regarding precipitation intensity, we see variability between the best values for the
three case studies and minor univariate sensitivity of the model performance or the model output (ratio of unstable landslides). The application of the TOPOG approach has the major shortcoming that
it assumes a groundwater gradient parallel to the surface gradient. It has been shown in the past that this assumption decreases the accuracy of water content simulations as compared to distributed
dynamic hydrological models (Grabs et al., 2009). However, as discussed earlier, it has also been shown in the past that macropore flow is omnipresent in landslide triggering, and SlideforMAP has
been parametrized assuming an important role of macropore flow. In macropore-driven systems, steady-state groundwater flow can be reached (see Introduction), which implies that the TOPOG assumption
holds well in this case. Due to the lack of detailed meteorological data, the precipitation intensity and duration are unknown. This makes computation of the exact pore pressure during the landslide event impossible. The precipitation intensity/transmissivity ratio ($I/T$) is assumed to capture the sensitivity to both precipitation intensity and transmissivity. This is reflected in Figs. 5 and 6.
The calibrated values for the $I/T$ ratio and subsequent pore pressure computation should be regarded as a measure for landslide propensity. In the landslide inventory underlying the study here, the
dominant soil types are GM (silty gravel), GC (clayey gravel) and CL (low-plasticity clayey silt). Due to large pore sizes, we can assume that the TOPOG assumptions are valid for a wide
range of the domain (for GM and GC soil types), even if this probably holds less well for the CL soil types.
5.3 Vegetation parameters
A key aspect of the model is the use of single-tree detection to parametrize vegetation, a method that was previously found by Menk et al. (2017) to be reliable for detecting single trees and deriving
their DBHs from the detected tree heights for sloped forests. As mentioned in Sect. 3.2, we found for the selected case studies that single-tree detection provides the best results in terms of the
correct number of trees counted if applied to a 1m resolution DSM with a three-cell kernel Gaussian filter. This is in line with the results of Menk et al. (2017), who found in a similar
scenario-testing approach that a 1m resolution DSM with no Gaussian correction provided the most accurate results, noting, however, that the difference in performance between these two methods (with
and without Gaussian filter) is small. In SlideforMAP, we consider not only basal but also lateral root reinforcement. This is unique for shallow-landslide probability models. As shown in the
sensitivity analysis (Fig. 6), $RR_{max}$ has a clear effect on the ratio of unstable landslides, with low values leading to high ratios. In the SlideforMAP workflow and calibration, a fixed relationship between the lateral and the basal root reinforcement is assumed; accordingly, the model sensitivity cannot be attributed to $R_{lat}$ or $R_{bas}$ individually. Mobilization of the lateral root reinforcement in the SlideforMAP workflow is independent of time and is not countered by passive earth pressure. A shortcoming in this parametrization of the effect of vegetation is the assumption of
uniform forest structure and a uniform tree species (beech) within a landslide area. The field recordings in the StA area of Moos et al. (2016) show that the forest consists mainly of Norway spruce.
For the Trub and Eriz area, visual interpretation of aerial photos allowed us to identify mixed forests with Norway spruce and beech. The latter is known for having a high root reinforcement, and
therefore the beech assumption will overestimate both the lateral and the basal root reinforcement (Gehring et al., 2019). Vegetation weight shows no clear relation to either the AUC or the unstable ratio (Figs. 5, 6). However, this does not mean that vegetation weight does not influence the response variables; the relationship could depend on other parameters and therefore be obscured (Bárdossy, 2007). In contrast to the soil and hydrological parameters, vegetation determines both the magnitude and the spatial pattern of the probability. Vegetation can be modified by land management practices with relative ease (Amishev et al., 2014) and is therefore of prime importance in shallow-landslide mitigation.
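The single-tree-detection chain described above — smooth the 1 m DSM with a small Gaussian kernel, pick local maxima as tree tops, then derive DBH from tree height — can be sketched as follows. The kernel size, height threshold and allometric coefficients here are illustrative assumptions, not the values used by FINT or SlideforMAP:

```python
import numpy as np

def gaussian_kernel3(sigma=1.0):
    """3x3 Gaussian kernel, normalised to sum to 1."""
    ax = np.array([-1.0, 0.0, 1.0])
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))
    return k / k.sum()

def detect_trees(dsm, min_height=2.0, sigma=1.0):
    """Return (row, col, height) of cells that are strict local maxima of the
    Gaussian-smoothed DSM and taller than min_height. dsm: 2-D array, 1 m cells."""
    k = gaussian_kernel3(sigma)
    pad = np.pad(dsm, 1, mode="edge")
    sm = np.zeros_like(dsm, dtype=float)
    for r in range(dsm.shape[0]):
        for c in range(dsm.shape[1]):
            sm[r, c] = (pad[r:r + 3, c:c + 3] * k).sum()
    trees = []
    for r in range(1, dsm.shape[0] - 1):
        for c in range(1, dsm.shape[1] - 1):
            win = sm[r - 1:r + 2, c - 1:c + 2]
            # strict, unique local maximum above the height threshold
            if sm[r, c] >= min_height and sm[r, c] == win.max() and (win == sm[r, c]).sum() == 1:
                trees.append((r, c, float(dsm[r, c])))
    return trees

def dbh_from_height(h_m, a=0.01, b=1.25):
    """Hypothetical allometric DBH [m] from tree height [m]: DBH = a * h^b."""
    return a * h_m ** b

# Toy 7x7 DSM (1 m cells) with a single 20 m tree:
dsm = np.zeros((7, 7))
dsm[3, 3] = 20.0
trees = detect_trees(dsm)
```

The Gaussian smoothing suppresses spurious local maxima within a single crown, which is why the filtered DSM counts trees more reliably than the raw one.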
5.4 Implementation of the mechanical effects of vegetation
Table 8 shows that the vegetation scenario has a considerable impact on the modelled unstable ratio in all study areas. The unstable ratio is lowest under the single-tree-detection scenario for the Trub study area, while in the StA and Eriz study areas it is lowest under uniform vegetation. We assume this is caused by the low calibrated uniform root reinforcement in Trub and the higher values in the other study areas (Table 7). Both single-tree detection and uniform vegetation are thus found to decrease instability. From a practical perspective, vegetating parts of a study area is more realistic than uniformly vegetating the whole area. The vegetation scenario also influences the AUC, with absolute mean increases of 0.023, 0.030 and 0.046 AUC points between single-tree detection and uniform vegetation, forest uniform vegetation and no vegetation respectively (Table 8). Additionally, the performance improvement can be described in relative terms, as the percentage of extra AUC gained (the AUC ranging from 0.5 to 1.0) between two vegetation scenarios. For single-tree detection compared to uniform vegetation, forest uniform vegetation and no vegetation, this is 8%, 10% and 16% respectively. Results in Table 9 show that the differences are significant for the uniform scenario in all study areas at both the 90% and the 99% confidence level. The difference between single-tree detection and no vegetation is significant at all confidence levels and in all study areas except for the StA study area at 99% confidence. The difference between single-tree detection and forest uniform is more ambiguous, with notably a significant difference at the 90% confidence level in the Trub and Eriz study areas. This is likely related to the forest uniform scenario being the closest to single-tree detection in the distribution of root reinforcement of all scenarios.
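The significance statements above are based on Welch's t test, which compares two means without assuming equal variances. A minimal sketch with made-up AUC samples (the degrees of freedom follow the Welch–Satterthwaite formula):

```python
import math
from statistics import mean, variance

def welch_t(sample_a, sample_b):
    """Welch's t statistic and Welch–Satterthwaite degrees of freedom for two
    samples with unequal variances, as used here to compare AUC values
    between vegetation scenarios."""
    na, nb = len(sample_a), len(sample_b)
    va, vb = variance(sample_a), variance(sample_b)  # sample variances (n-1)
    se2 = va / na + vb / nb
    t = (mean(sample_a) - mean(sample_b)) / math.sqrt(se2)
    df = se2 ** 2 / ((va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1))
    return t, df

# Illustrative (made-up) AUC values from repeated model runs under two scenarios:
auc_single_tree = [0.82, 0.84, 0.81, 0.83, 0.85]
auc_uniform = [0.79, 0.80, 0.78, 0.81, 0.79]
t, df = welch_t(auc_single_tree, auc_uniform)
```

The resulting t statistic and degrees of freedom are then compared against the t distribution at the chosen (90% or 99%) confidence level.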
In both Eriz and Trub, the single-tree detection is the best-performing scenario. Our overall finding is that the model output is sensitive to the vegetation scenario, with single-tree detection giving the second-lowest values of the unstable ratio and the highest values of the AUC. We argue that, even though the model is calibrated on a global uniform-vegetation scenario (Table 7), single-tree detection gives a significantly better overall performance and is therefore more accurate in assessing shallow-landslide susceptibility (Tables 8 and 9). Adding to this explanation is that in these study areas, where slope angle is a highly predictive factor, even marginal gains in the AUC due to vegetation are important and are the result of extensive parametrization. Our analysis is in line with the findings of Roering et al. (2003), who state that single-tree-based modelling, including the tree dimensions, has the highest accuracy in the prediction of shallow landslides. Moreover, Vergani et al. (2014) state that a site-specific estimation of vegetation and root extent is essential for a correct estimation of root reinforcement.
5.5 Model performance
As pointed out by Corominas et al. (2014), the absolute values of the AUC are dependent on the characteristics of the study area. In larger areas, with low overall landslide activity, the AUC will
overestimate the predictive performance. This most likely explains why the StA study area has a low overall AUC compared to Eriz and Trub (Table 8). In particular, the StA study area shows a higher
prevalence of steep slopes. The Trub and the Eriz study areas both show relatively high and very similar AUC values, indicating high model performance; this is in agreement with a similar occurrence of steep and gentle slopes in these areas. Another explanation for the discrepancy in model performance between the study areas could be the assumption that all trees are beech trees, which does not hold equally well for all three study areas. Based on visual inspection and on elevation, the mismatch between the actual vegetation and this assumption is probably most pronounced in the StA area, where the dominant tree species appears to be spruce. Though no published data are available, it can be estimated from the work of Moos et al. (2016) that the root reinforcement of a spruce forest is lower than that of a beech forest, but this cannot be confirmed by our parameter analysis at this stage.
A comparison between the shallow-landslide density (Table 2) and the calibrated unstable ratio (Table 7) shows moderate consistency. The Eriz and Trub study areas have a low unstable area
corresponding to a low shallow-landslide density. StA has both a higher landslide density and a higher unstable ratio. From the consistency in Table 7 and the sensitivity analysis results of Fig. 5,
it can be concluded that the model is mainly configured by the parametrization of the mean soil thickness, the mean cohesion and the $I/T$ ratio. In addition, the vegetation scenario strongly influences the model performance and the calculated shallow-landslide probability (Table 8). Equifinality between the parameters in the qualitative sensitivity analysis is likely, as it is very common in similar multi-parameter modelling (Beven and Binley, 1992). However, we believe the sensitivity as observed in Fig. 5 is valid and a qualitative indicator of the important parameters in SlideforMAP. The calibrated optimal parameter set (Table 7) is still within realistic bounds, as are the ranges used for the sensitivity analysis. In addition, the calibrated combination of the mean friction angle (26–34°) and mean soil cohesion (1.75–4.29 kPa) is plausible according to Table A1 in the Appendix. Finally, we would like to add that the case study dependence of the model performance measure used is a limitation that typically occurs for all performance measures that compare the model behaviour to some reference model (Schaefli and Gupta, 2007); the reference model for the AUC is a random process. Accordingly, we cannot compare the performance of SlideforMAP to other published AUC values, despite the fact that values above 0.8 are generally considered to indicate good performance (e.g. Xu et al., 2012).
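The AUC's reference model being a random process has a concrete rank-statistic interpretation: the AUC equals the probability that a randomly chosen landslide cell receives a higher modelled probability than a randomly chosen non-landslide cell, with 0.5 for a random model. A minimal sketch with toy scores (not model output):

```python
def auc(scores_pos, scores_neg):
    """Area under the ROC curve computed as the probability that a randomly
    chosen positive (landslide) cell scores higher than a randomly chosen
    negative (non-landslide) cell; ties count one half. A random model
    gives 0.5, a perfect model 1.0."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))
```

For example, `auc([0.9, 0.8, 0.4], [0.3, 0.5, 0.2])` evaluates 9 positive–negative pairs, of which 8 are correctly ordered. The quadratic pair loop is for clarity only; on raster-scale data one would use a rank-based formulation instead.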
5.6 Comparison to other slope stability models
The main advantage of SlideforMAP compared to other models is the more realistic approach to implementing root reinforcement. This includes a spatial distribution in both the basal and the lateral
root reinforcement and a focus on the second stage of the activation phase in accordance with the Root Bundle Model as described in Gehring et al. (2019). Compared to previous slope stability models
that include the effect of root reinforcement, SlideforMAP uses a more realistic implementation of root reinforcement based on recent knowledge of shallow-landslide triggering mechanisms and root reinforcement activation (Schwarz et al., 2012, 2013; Cohen and Schwarz, 2017). In particular, only part of the lateral root reinforcement under tension is considered in the force balance calculation. Moreover, the spatial distribution of root reinforcement as a function of forest structure is included. The assumptions made in SlideforMAP allow a probabilistic calculation at a regional scale that is not possible with more complex models such as SOSlope (Cohen and Schwarz, 2017). In comparison to simpler models based on infinite-slope calculations, such as SINMAP (Pack et al., 1998) and SHALSTAB (Montgomery and Dietrich, 1994), SlideforMAP considers the effect of lateral root reinforcement on landslides of different sizes. SINMAP with a homogeneous root reinforcement is comparable to our global uniform-vegetation scenario (Table 8), and a version of SINMAP with no root strength is comparable to our no-vegetation scenario. When no vegetation data are available or complexity is not desired, these are valid options to assess shallow-landslide susceptibility in a probabilistic way.
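For comparison, the infinite-slope family (SINMAP, SHALSTAB) reduces to a one-dimensional factor-of-safety calculation in which root reinforcement, if present, enters as a single basal cohesion term. A generic sketch with illustrative parameter values follows; the lateral root reinforcement that distinguishes SlideforMAP is deliberately omitted here:

```python
import math

def infinite_slope_fs(slope_rad, soil_thickness, cohesion_kpa, root_cohesion_kpa,
                      friction_angle_rad, pore_pressure_kpa, gamma_soil=19.0):
    """Factor of safety of the classic infinite-slope model with an added basal
    root cohesion term (the SINMAP/SHALSTAB family). gamma_soil in kN/m^3,
    soil_thickness in m, stresses in kPa."""
    # normal and shear stress on the slope-parallel slip surface
    sigma_n = gamma_soil * soil_thickness * math.cos(slope_rad) ** 2
    tau = gamma_soil * soil_thickness * math.sin(slope_rad) * math.cos(slope_rad)
    resist = cohesion_kpa + root_cohesion_kpa \
        + (sigma_n - pore_pressure_kpa) * math.tan(friction_angle_rad)
    return resist / tau

# 30 degree slope, 1 m soil, 2 kPa soil cohesion, 5 kPa root cohesion,
# 30 degree friction angle, no pore pressure:
fs = infinite_slope_fs(math.radians(30), 1.0, 2.0, 5.0, math.radians(30), 0.0)
```

In this example the 5 kPa root cohesion raises the factor of safety from about 1.24 to about 1.85, illustrating why even a homogeneous root reinforcement (the SINMAP-like uniform-vegetation scenario) already stabilizes many cells.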
A hydrological and slope stability model very similar to SlideforMAP is applied in Montgomery et al. (2000) to estimate the sediment yield resulting from forest clearing; this is comparable to our global uniform-vegetation scenario as well. Their result of a high significance of root reinforcement is in line with our findings. Differences in the model approach are the assumption of fixed landslide dimensions, including soil thickness, and the assumption that root reinforcement acts around the full perimeter of the landslide. In its approach, SlideforMAP shares many similarities with PRIMULA, developed by Cislaghi et al. (2018), which also applies a probabilistic approach and a spatially distributed root reinforcement. The PRIMULA root reinforcement is, however, based on a stand-scale approach rather than single-tree detection. The AUC values in the present paper are higher, but that could be the result of the different characteristics of the study areas and of our parameter optimization through the qualitative sensitivity analysis. Other differences compared to PRIMULA are its assumption of lateral root reinforcement along the entire landslide perimeter, its inclusion of lateral soil cohesion simultaneously with lateral root cohesion, its assumption of rectangular rather than elliptical landslides and a different landslide surface area distribution. The model 3DTLE (Hess et al., 2017) is a deterministic landslide susceptibility model with a similarly detailed, spatially heterogeneous inclusion of root reinforcement; differences are its deterministic approach and its assumption of a simultaneous maximum of tension and compression forces.
5.7 Future research
SlideforMAP uses a relatively simple hydrological module to estimate soil saturation. The TOPOG approach used could be improved, and multiple papers have presented simple to more advanced reformulations (e.g. Beven and Freer, 2001; Blazkova et al., 2002). A common denominator is the inclusion of time dependency, since the stationary flow assumption rarely, if ever, holds in nature. This time dependency would make it possible to simulate a different response to a precipitation event at different locations within a study area. Future work could also focus on improving the vegetation module by including different tree species (those often used in protection forests) in the parametrization of lateral root reinforcement (Eq. 10). For the practical application of SlideforMAP, we have not found a specific lower boundary in landslide density that still generates reliable results; more specific testing on this would be useful for future applications. A comparison between SlideforMAP and SHALSTAB and/or SINMAP would also be interesting, as it could validate whether the uniform-vegetation scenario in SlideforMAP produces similar results to these models in terms of shallow-landslide probability. Finally, performing a validation over study areas with a larger shallow-landslide inventory would be a vital step to further analyse the SlideforMAP model.
6 Conclusions
In this paper, we present a probabilistic model to assess the probability of shallow landslides (landslides with a scar thickness < 2 m). The main motivation to develop yet another model is to provide a detailed inclusion of the influence of root reinforcement. Its application is illustrated based on three mid-elevation case studies from Switzerland, for which a detailed landslide inventory is available. The model has a total of 21 parameters, of which 12 are calibrated using the area under the receiver operating characteristic curve (AUC) as a performance measure to identify the best parameter set among a large number of sets generated using Latin hypercube sampling. The maximum AUC values for the three study areas vary between 0.64 and 0.93 under a single-tree-detection vegetation scenario, which reflects an overall good model performance. Our model parameter analysis has shown that soil thickness, the precipitation intensity / transmissivity ratio and soil cohesion are the key parameters to predict
slope stability in the studied mountainous regions. A major focus of the presented work was the assessment of the model's ability to study scenarios of vegetation distribution. Comparison of
different scenarios ranging from uniform to single-tree-detection-based vegetation clearly showed that the model output, in terms of shallow-landslide probability, is sensitive to the spatial
distribution of vegetation. Additionally, in two of our three study areas, the single-tree-detection scenario provides significantly (Welch's t test confidence >99%) higher AUC values. Accordingly,
the model is fit for future-scenario analysis, including for example different protection forest management scenarios. In fact, a single-tree-scale model parametrization provides the opportunity to
run hypothetical vegetation scenarios reflecting small-scale management strategies or disturbances. Future improvements in the hydrological approach, concerning a more catchment-based approach to
compute the saturation degree, could likely further improve the performance of SlideforMAP.
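The Latin hypercube sampling used for calibration can be sketched as follows: each parameter range is divided into n equal-probability strata, each stratum is sampled exactly once, and the strata are shuffled independently per parameter, so that n samples cover every marginal quantile band. The parameter bounds below are hypothetical and are not the calibrated SlideforMAP ranges:

```python
import random

def latin_hypercube(n_samples, bounds, rng=None):
    """Latin hypercube sample over uniform marginals.
    bounds: {parameter_name: (low, high)}; returns a list of parameter dicts."""
    rng = rng or random.Random(42)
    samples = [dict() for _ in range(n_samples)]
    for name, (lo, hi) in bounds.items():
        strata = list(range(n_samples))
        rng.shuffle(strata)  # independent shuffle per parameter
        for i, s in enumerate(strata):
            u = (s + rng.random()) / n_samples  # uniform draw within stratum s
            samples[i][name] = lo + u * (hi - lo)
    return samples

# Hypothetical ranges for three of the calibrated parameters:
sets = latin_hypercube(100, {
    "mean_soil_thickness_m": (0.3, 2.0),
    "mean_cohesion_kpa": (0.0, 6.0),
    "i_over_t": (1e-6, 1e-3),
})
```

Each generated set would then be run through the model and scored with the AUC, and the best-scoring set retained, which is the calibration loop summarized above.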
All data used in this research are open data. The topographical data and the landslide inventory as used in this research are published on Zenodo: https://doi.org/10.5281/zenodo.6793533 (van
Zadelhoff, 2022).
AA collected the landslide inventory and made it ready for use. DC and MS developed the basic concept of SlideforMAP. LD contributed to further development. FBvZ executed further development, the
sensitivity analysis and testing. FBvZ is the main writer. BS, LD, CP, AA and MS revised the text. CP and MS organized funds.
The contact author has declared that none of the authors has any competing interests.
The shallow-landslide probability maps generated by SlideforMAP are a guideline and should be interpreted by an expert before application.
Publisher’s note: Copernicus Publications remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
We thank the STEC (Smarter Targeting of Erosion Control) project by the Ministry of Business, Innovation and Employment of New Zealand for financial support. In addition we would like to thank the
two anonymous reviewers and David Milledge, who contributed a community review. Their contribution greatly improved the quality of this paper.
This research has been supported by the Ministry of Business, Innovation and Employment, New Zealand (grant no. C09X1804).
This paper was edited by David J. Peres and reviewed by Dave Milledge and two anonymous referees.
Amishev, D., Basher, L., Phillips, C. J., Hill, S., Marden, M., Bloomberg, M., and Moore, J. R.: New forest management approaches to steep hills, Ministry for Primary Industries, ISBN 9780478437867, 2014.
Askarinejad, A., Casini, F., Bischof, P., Beck, A., and Springman, S. M.: Rainfall induced instabilities: a field experiment on a silty sand slope in northern Switzerland, Rivista Italiana di Geotecnica, 12, 50–71, http://www.associazionegeotecnica.it/rig/archivio (last access: 20 April 2021), 2012.
Askarinejad, A., Akca, D., and Springman, S. M.: Precursors of instability in a natural slope due to rainfall: a full-scale experiment, Landslides, 15, 1745–1759, https://doi.org/10.1007/s10346-018-0994-0, 2018.
Badoux, A., Andres, N., Techel, F., and Hegg, C.: Natural hazard fatalities in Switzerland from 1946 to 2015, Nat. Hazards Earth Syst. Sci., 16, 2747–2768, https://doi.org/10.5194/nhess-16-2747-2016, 2016.
Baeza, C. and Corominas, J.: Assessment of shallow landslide susceptibility by means of multivariate statistical techniques, Earth Surf. Processes, 26, 1251–1263, https://doi.org/10.1002/esp.263.
Bárdossy, A.: Calibration of hydrological model parameters for ungauged catchments, Hydrol. Earth Syst. Sci., 11, 703–710, https://doi.org/10.5194/hess-11-703-2007, 2007.
Baum, R. L., Savage, W. Z., and Godt, J. W.: TRIGRS – a Fortran program for transient rainfall infiltration and grid-based regional slope-stability analysis, US Geological Survey Open-File Report, 424, 38, https://doi.org/10.3133/ofr02424, 2002.
Beven, K. and Binley, A.: The future of distributed models: Model calibration and uncertainty prediction, Hydrol. Process., 6, 279–298, https://doi.org/10.1002/hyp.3360060305, 1992.
Beven, K. and Freer, J.: A dynamic topmodel, Hydrol. Process., 15, 1993–2011, https://doi.org/10.1002/hyp.252, 2001.
Beven, K. J. and Kirkby, M. J.: A physically based, variable contributing area model of basin hydrology, Hydrol. Sci. B., 24, 43–69, https://doi.org/10.1080/02626667909491834, 1979.
Blazkova, S., Beven, K., Tacheci, P., and Kulasova, A.: Testing the distributed water table predictions of TOPMODEL (allowing for uncertainty in model calibration): The death of TOPMODEL?, Water Resour. Res., 38, 39-1, https://doi.org/10.1029/2001wr000912, 2002.
Bodner, G., Leitner, D., and Kaul, H. P.: Coarse and fine root plants affect pore size distributions differently, Plant Soil, 380, 133–151, https://doi.org/10.1007/s11104-014-2079-8, 2014.
Bordoni, M., Meisina, C., Valentino, R., Lu, N., Bittelli, M., and Chersich, S.: Hydrological factors affecting rainfall-induced shallow landslides: From the field monitoring to a simplified slope stability analysis, Eng. Geol., 193, 19–37, https://doi.org/10.1016/j.enggeo.2015.04.006, 2015.
Borga, M., Dalla Fontana, G., Gregoretti, C., and Marchi, L.: Assessment of shallow landsliding by using a physically based model of hillslope stability, Hydrol. Process., 16, 2833–2851, https://doi.org/10.1002/hyp.1074, 2002.
Cervi, F., Berti, M., Borgatti, L., Ronchetti, F., Manenti, F., and Corsini, A.: Comparing predictive capability of statistical and deterministic methods for landslide susceptibility mapping: A case study in the northern Apennines (Reggio Emilia Province, Italy), Landslides, 7, 433–444, https://doi.org/10.1007/s10346-010-0207-y, 2010.
Chae, B. G., Park, H. J., Catani, F., Simoni, A., and Berti, M.: Landslide prediction, monitoring and early warning: a concise review of state-of-the-art, Geosci. J., 21, 1033–1070, https://doi.org/10.1007/s12303-017-0034-4, 2017.
Cislaghi, A., Rigon, E., Lenzi, M. A., and Bischetti, G. B.: A probabilistic multidimensional approach to quantify large wood recruitment from hillslopes in mountainous-forested catchments, Geomorphology, 306, 108–127, https://doi.org/10.1016/j.geomorph.2018.01.009, 2018.
Cohen, D. and Schwarz, M.: Tree-root control of shallow landslides, Earth Surf. Dynam., 5, 451–477, https://doi.org/10.5194/esurf-5-451-2017, 2017.
Cohen, D., Lehmann, P., and Or, D.: Fiber bundle model for multiscale modeling of hydromechanical triggering of shallow landslides, Water Resour. Res., 45, 1–20, https://doi.org/10.1029/2009WR007889, 2009.
Corominas, J., van Westen, C., Frattini, P., Cascini, L., Malet, J. P., Fotopoulou, S., Catani, F., Van Den Eeckhaut, M., Mavrouli, O., Agliardi, F., Pitilakis, K., Winter, M. G., Pastor, M., Ferlisi, S., Tofani, V., Hervás, J., and Smith, J. T.: Recommendations for the quantitative analysis of landslide risk, B. Eng. Geol. Environ., 73, 209–263, https://doi.org/10.1007/s10064-013-0538-8, 2014.
Day, R. W.: State of the art: Limit equilibrium and finite-element analysis of slopes, J. Geotech. Geoenviron., 123, 894, https://doi.org/10.1061/(ASCE)1090-0241(1997)123:9(894), 1997.
Dazio, E. P. R., Conedera, M., and Schwarz, M.: Impact of different chestnut coppice managements on root reinforcement and shallow landslide susceptibility, Forest Ecol. Manag., 417, 63–76, https://doi.org/10.1016/j.foreco.2018.02.031, 2018.
Dietrich, W. E. and Montgomery, D. R.: SHALSTAB: a digital terrain model for mapping shallow landslide potential, Tech. rep., NCASI (National Council of the Paper Industry for Air and Stream Improvement), http://calm.geo.berkeley.edu/geomorph/shalstab/index.htm (last access: 25 August 2021), 1998.
D'Odorico, P. and Fagherazzi, S.: A probabilistic model of rainfall-triggered shallow landslides in hollows: A long-term analysis, Water Resour. Res., 39, 1–14, https://doi.org/10.1029/2002WR001595.
Dorren, L.: FINT – Find individual trees, User manual, ecorisQ paper, 5 pp., https://www.ecorisq.org/ (last access: 10 December 2017), 2017.
Dorren, L. and Sandri, A.: Landslide risk mapping for the entire Swiss national road network, in: Proceedings of the International Conference “Landslide Processes”, 6–7 February 2009, 2009.
Dysli, M. and Rybisar, J.: Statistique sur les caractéristiques des sols suisses – Statistische Behandlung der Kennwerte der Schweizer Böden, Bundesamt für Strassenbau, Institut Français des Sciences et Technologies des Transports, de l'Aménagement et des Réseaux (IFSTTAR), 128 pp., accession no. 01233497, 1992.
Eysn, L., Hollaus, M., Lindberg, E., Berger, F., Monnet, J. M., Dalponte, M., Kobal, M., Pellegrini, M., Lingua, E., Mongus, D., and Pfeifer, N.: A benchmark of lidar-based single tree detection methods using heterogeneous forest data from the Alpine Space, Forests, 6, 1721–1747, https://doi.org/10.3390/f6051721, 2015.
Fawcett, T.: An introduction to ROC analysis, Pattern Recogn. Lett., 27, 861–874, https://doi.org/10.1016/j.patrec.2005.10.010, 2006.
Feng, S., Liu, H. W., and Ng, C. W.: Analytical analysis of the mechanical and hydrological effects of vegetation on shallow slope stability, Comput. Geotech., 118, 103335, https://doi.org/10.1016/j.compgeo.2019.103335, 2020.
Freeze, R. A. and Cherry, J. A.: Groundwater, No. 629.1 F7, ISBN 0133653129, 1979.
Frei, C., Isotta, F., and Schwanbeck, J.: Mean Precipitation 1981–2010, in: Hydrological Atlas of Switzerland, Geographisches Institut der Universität Bern, https://hydromaps.ch/#de/8/46.830/8.190/bl_hds--b01_b0100_rnormy8110v1_0$0/NULL, last access: 26 October 2020.
Gehring, E., Conedera, M., Maringer, J., Giadrossich, F., Guastini, E., and Schwarz, M.: Shallow landslide disposition in burnt European beech (Fagus sylvatica L.) forests, Scientific Reports, 9, 1–11, https://doi.org/10.1038/s41598-019-45073-7, 2019.
Giadrossich, F., Schwarz, M., Marden, M., Marrosu, R., and Phillips, C.: Minimum representative root distribution sampling for calculating slope stability in Pinus radiata D.Don plantations in New Zealand, New Zeal. J. For. Sci., 50, 1–12, https://doi.org/10.33494/nzjfs502020x68x, 2020.
González-Ollauri, A. and Mickovski, S. B.: Integrated Model for the Hydro-Mechanical Effects of Vegetation Against Shallow Landslides, EQA – International Journal of Environmental Quality, 13, 37–59, https://doi.org/10.6092/issn.2281-4485/4535, 2014.
Grabs, T., Seibert, J., Bishop, K., and Laudon, H.: Modeling spatial patterns of saturated areas: A comparison of the topographic wetness index and a dynamic distributed model, J. Hydrol., 373, 15–23, https://doi.org/10.1016/j.jhydrol.2009.03.031, 2009.
Greenway, D. R.: Vegetation and slope stability, in: Slope Stability: Geotechnical Engineering and Geomorphology, edited by: Anderson, M. G. and Richards, K. S., Wiley, Chichester, West Sussex, 187–230, 1987.
Griffiths, D. V., Huang, J., and Fenton, G. A.: Influence of Spatial Variability on Slope Reliability Using 2-D Random Fields, J. Geotech. Geoenviron., 135, 1367–1378, https://doi.org/10.1061/(asce)gt.1943-5606.0000099, 2009.
HADES: https://hydrologischeratlas.ch/downloads/01/content/Text_Tafel24.de.pdf (last access: 30 September 2020), 2020.
Hawley, J. and Dymond, J.: How much do trees reduce landsliding?, J. Soil Water Conserv., 43, 495–498, 1988.
Hess, D. M., Leshchinsky, B. A., Bunn, M., Benjamin Mason, H., and Olsen, M. J.: A simplified three-dimensional shallow landslide susceptibility framework considering topography and seismicity, Landslides, 14, 1677–1697, https://doi.org/10.1007/s10346-017-0810-2, 2017.
Iida, T.: A stochastic hydro-geomorphological model for shallow landsliding due to rainstorm, Catena, 34, 293–313, https://doi.org/10.1016/S0341-8162(98)00093-9, 1999.
Iverson, R. M.: Landslide triggering by rain infiltration, Water Resour. Res., 36, 1897–1910, https://doi.org/10.1029/2000WR900090, 2000.
Jensen, H., Lang, H., and Rinderknecht, J.: Extreme Punktregen unterschiedlicher Dauer und Wiederkehrperioden 1901–1970, Tafel 2.4, in: Hydrologischer Atlas der Schweiz, Geographisches Institut der Universität Bern, https://hydromaps.ch/#de/8/46.830/8.190/bl_hds--b04_b0401_precip_60m_2a_0_5v2_0$4/NULL (last access: 31 January 2020), 1997.
Johnson, N. L. and Kotz, S.: Continuous univariate distributions, Houghton Mifflin, Boston, 1, 70018030, https://books.google.ch/books?id=-wPvAAAAMAAJ (last access: 20 February 2021), 1970.
Kirkby, M.: Hydrograph modelling strategies, in: Processes in Physical and Human Geography, edited by: Peel, R., Chisholm, M., and Haggert, P., Heinemann, London, 69–90, 1975.
Kjekstad, O. and Highland, L.: Economic and Social Impacts of Landslides, in: Landslides – Disaster Risk Reduction, edited by: Sassa, K. and Canuti, P., Springer, Berlin, Heidelberg, 573–587, https://doi.org/10.1007/978-3-540-69970-5_30, 2009.
Korpela, I., Dahlin, B., Schäfer, H., Bruun, E., Haapaniemi, F., Honkasalo, J., Ilvesniemi, S., Kuutti, V., Linkosalmi, M., Mustonen, J., Salo, M., Suomi, O., and Virtanen, H.: Single-tree forest inventory using lidar and aerial images for 3D treetop positioning, species recognition, height and crown width estimation, International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, 36, 227–233, 2007.
Lehmann, P., Gambazzi, F., Suski, B., Baron, L., Askarinejad, A., Springman, S. M., Holliger, K., and Or, D.: Evolution of soil wetting patterns preceding a hydrologically induced landslide inferred from electrical resistivity survey and point measurements of volumetric water content and pore water pressure, Water Resour. Res., 49, 7992–8004, https://doi.org/10.1002/2013WR014560, 2013.
Leonarduzzi, E., Molnar, P., and McArdell, B. W.: Predictive performance of rainfall thresholds for shallow landslides in Switzerland from gridded daily data, Water Resour. Res., 53, 6612–6625, https://doi.org/10.1002/2017WR021044, 2017.
Li, W. C., Lee, L. M., Cai, H., Li, H. J., Dai, F. C., and Wang, M. L.: Combined roles of saturated permeability and rainfall characteristics on surficial failure of homogeneous soil slope, Eng. Geol., 153, 105–113, https://doi.org/10.1016/j.enggeo.2012.11.017, 2013.
Malamud, B., Turcotte, D., Guzzetti, F., and Reichenbach, P.: Landslide inventories and their statistical properties, Earth Surf. Processes, 29, 687–711, https://doi.org/10.1002/esp.1064, 2004.
Masi, E. B., Segoni, S., and Tofani, V.: Root reinforcement in slope stability models: A review, Geosciences (Switzerland), 11, 212, https://doi.org/10.3390/geosciences11050212, 2021.
McKay, M. D., Beckman, R. J., and Conover, W. J.: Comparison of three methods for selecting values of input variables in the analysis of output from a computer code, Technometrics, 21, 239–245, https://doi.org/10.1080/00401706.1979.10489755, 1979.
Menk, J., Dorren, L., Heinzel, J., Marty, M., and Huber, M.: Evaluation automatischer Einzelbaumerkennung aus luftgestützten Laserscanning-Daten, Schweizerische Zeitschrift für Forstwesen, 168, 151–159, https://doi.org/10.3188/szf.2017.0151, 2017.
Metz, C. E.: Basic principles of ROC analysis, Semin. Nucl. Med., 8, 283–298, https://doi.org/10.1016/S0001-2998(78)80014-2, 1978.
Montgomery, D. R. and Dietrich, W. E.: A physically based model for the topographic control on shallow landsliding, Water Resour. Res., 30, 1153–1171, https://doi.org/10.1029/93WR02979, 1994.
Montgomery, D. R. and Dietrich, W. E.: Reply to comment by Richard M. Iverson on “Piezometric response in shallow bedrock at CB1: Implications for runoff generation and landsliding”, Water Resour. Res., 40, W03802, https://doi.org/10.1029/2003WR002815, 2004.
Montgomery, D. R., Schmidt, K. M., Greenberg, H. M., and Dietrich, W. E.: Forest clearing and regional landsliding, Geology, 28, 311–314, https://doi.org/10.1130/0091-7613(2000)28<311:FCARL>2.0.CO;2, 2000.
Montgomery, D. R., Dietrich, W. E., and Heffner, J. T.: Piezometric response in shallow bedrock at CB1: Implications for runoff generation and landsliding, Water Resour. Res., 38, 1274, https://doi.org/10.1029/2002wr001429, 2002.
Montrasio, L. and Valentino, R.: A model for triggering mechanisms of shallow landslides, Nat. Hazards Earth Syst. Sci., 8, 1149–1159, https://doi.org/10.5194/nhess-8-1149-2008, 2008.
Montrasio, L., Valentino, R., and Losi, G. L.: Towards a real-time susceptibility assessment of rainfall-induced shallow landslides on a regional scale, Nat. Hazards Earth Syst. Sci., 11, 1927–1947, https://doi.org/10.5194/nhess-11-1927-2011, 2011.
Moos, C., Bebi, P., Graf, F., Mattli, J., Rickli, C., and Schwarz, M.: How does forest structure affect root reinforcement and susceptibility to shallow landslides?, Earth Surf. Process., 41, 951–960, https://doi.org/10.1002/esp.3887, 2016.
Mosley, M. P.: Subsurface flow velocities through selected forest soils, South Island, New Zealand, J. Hydrol., 55, 65–92, https://doi.org/10.1016/0022-1694(82)90121-4, 1982.
Munich RE: Relevant hydrological events worldwide 1980–2018, Münchener Rückversicherungs-Gesellschaft, NatCatService, https://www.munichre.com/en/solutions/for-industry-clients/natcatservice.html, last access: 2 July 2020.
O'Callaghan, J. and Mark, D.: The Extraction of Drainage Networks from Digital Elevation Data, Comput. Vision Grap., 28, 323–344, https://doi.org/10.1016/0734-189X(89)90053-4, 1984.
O'Loughlin, E. M.: Prediction of Surface Saturation Zones in Natural Catchments by Topographic Analysis, Water Resour. Res., 22, 794–804, 1986.
Pack, R. T., Tarboton, D. G., and Goodwin, C. N.: The SINMAP Approach to Terrain Stability Mapping, in: 8th Congress of the International Association of Engineering Geology, Vancouver, British Columbia, Canada, 21–25 September 1998, edited by: Moore, D. and Hungr, O., Vol. 2: Engineering Geology and Natural Hazards, A. A. Balkema, 1157–1166, 1998.
Park, H. J., Lee, J. H., and Woo, I.: Assessment of rainfall-induced shallow landslide susceptibility using a GIS-based probabilistic approach, Eng. Geol., 161, 1–15, https://doi.org/10.1016/j.enggeo.2013.04.011, 2013.
Prancevic, J. P., Lamb, M. P., McArdell, B. W., Rickli, C., and Kirchner, J. W.: Decreasing Landslide Erosion on Steeper Slopes in Soil-Mantled Landscapes, Geophys. Res. Lett., 47, 1–9, https://doi.org/10.1029/2020GL087505, 2020.
Price, B., Gomez, A., Mathys, L., Gardi, O., Schellenberger, A., Ginzler, C., and Thürig, E.: Tree biomass in the Swiss landscape: nationwide modelling for improved accounting for forest and non-forest trees, Environ. Monit. Assess., 189, 1–14, https://doi.org/10.1007/s10661-017-5816-7, 2017.
Reinhold, S., Medicus, G., Fellin, W., and Zangerl, C.: The influence of deforestation on slope (in-)stability, Austrian J. Earth Sci., 102, 90–99, https://doi.org/10.1139/t01-031, 2009.
Rickli, C. and Graf, F.: Effects of forests on shallow landslides – case studies in Switzerland, Forest Snow and Landscape Research, 44, 33–44, 2009.
Rickli, C., Graf, F., Bebi, P., Bast, A., Loupt, B., and McArdell, B.: Schützt der Wald vor Rutschungen? Hinweise aus der WSL-Rutschungsdatenbank, Schweizerische Zeitschrift für Forstwesen, 170, 310–317, https://doi.org/10.3188/szf.2019.0310, 2019.
Roering, J., Schmidt, K. M., Stock, J. D., Dietrich, W. E., and Montgomery, D. R.: Shallow landsliding, root reinforcement, and the spatial distribution of trees in the Oregon Coast Range, Can. Geotech. J., 40, 237–253, 2003.
Salvatici, T., Tofani, V., Rossi, G., D'Ambrosio, M., Tacconi Stefanelli, C., Masi, E. B., Rosi, A., Pazzi, V., Vannocci, P., Petrolo, M., Catani, F., Ratto, S., Stevenin, H., and Casagli, N.: Application of a physically based model to forecast shallow landslides at a regional scale, Nat. Hazards Earth Syst. Sci., 18, 1919–1935, https://doi.org/10.5194/nhess-18-1919-2018, 2018.
Schaefli, B. and Gupta, H.: Do Nash values have value, Hydrol. Process., 21, 2075–2080, https://doi.org/10.1002/hyp.6825, 2007.a
Schmidt, K. M., Roering, J. J., Stock, J. D., Dietrich, W. E., Montgomery, D. R., and Schaub, T.: The variability of root cohesion as an influence on shallow landslide susceptibility in the Oregon
Coast Range, Can. Geotech. J., 38, 995–1024, https://doi.org/10.1139/cgj-38-5-995, 2001.a
Schwarz, M., Preti, F., Giadrossich, F., Lehmann, P., and Or, D.: Quantifying the role of vegetation in slope stability: A case study in Tuscany (Italy), Ecol. Eng., 36, 285–291, https://doi.org/
10.1016/j.ecoleng.2009.06.014, 2010.a, b, c, d
Schwarz, M., Cohen, D., and Or, D.: Spatial characterization of root reinforcement at stand scale: theory and case study, Geomorphology, 171, 190–200, 2012.a, b, c
Schwarz, M., Giadrossich, F., and Cohen, D.: Modeling root reinforcement using a root-failure Weibull survival function, Hydrol. Earth Syst. Sci., 17, 4367–4377, https://doi.org/10.5194/
hess-17-4367-2013, 2013.a
Schwarz, M., Rist, A., Cohen, D., Giadrossich, F., Egorov, P., Büttner, D., Stolz, M., and Thormann, J. J.: Root reinforcement of soils under compression, J. Geophys. Res.-Earth, 120, 2103–2120,
https://doi.org/10.1002/2015JF003632, 2015.a, b, c
Sidle, R. C.: A Theoretical Model of the Effects of Timber harvesting on Slope Stability, Water Resour. Res., 28, 1897–1910, 1992.a
Swiss Re Institute: Natural catastrophes and man-made disasters in 2018: “secondary” perils on the frontline, Sigma, 2, 1–36, 2019.a, b
Swisstopo: SWISSIMAGE, Luftbilder Level 2 (25cm) Wabern: Bern, 2014–2016, Bundesamt für Landestopografie swisstopo, Wabern, 2017.a
Swisstopo: SwissALTI3D Das hoch auf-gelöste Terrainmodell der Schweiz, LIDAR based Digital Terrain Model, Bundesamt für Landestopografie swisstopo, Wabern, 2018.a, b, c, d
Swisstopo: Switzerland forest cover map; https://www.swisstopo.admin.ch/de/geodata/landscape/tlm3d.html (last access: 29 September 2015), 2020.a, b
Torres, R., Dietrich, W. E., Montgomery, D. R., Anderson, S. P., and Loague, K.: Unsaturated zone processes and the hydrologic response of a steep, unchanneled catchment, Water Resour. Res., 34,
1865–1879, https://doi.org/10.1029/98WR01140, 1998.a
van Zadelhoff, F. B., Albaba, A., Cohen, D., Philips, C., Schaefli, B., Dorren, L., and Schwarz, M.: Introducing SlideforMAP; a probabilistic finite slope approach for modelling shallow landslide
probability in forested situations, Zenodo [data set], https://doi.org/10.5281/zenodo.6793533, 2022.a
Varnes, D. J.: Slope Movement Types and Processes, Special Report, 176, 11–33, https://doi.org/10.1016/j.mser.2018.11.001, 1978.a, b, c, d
Vergani, C., Schwarz, M., Cohen, D., Thormann, J., and Bischetti, G.: Effects of root tensile force and diameter distribution variability on root reinforcement in the Swiss and Italian Alps, Can. J.
Forest Res., 44, 1426–1440, https://doi.org/10.1139/cjfr-2014-0095, 2014.a, b
VSS-Kommission: Schweizer Norm, 670 010b, Tech. rep., Schweizer Norm, Characteristic Coefficients of soils, Association of Swiss Road and Traffic Engineers, 670 010b, 1998.a, b
Weiler, M. and Naef, F.: An experimental tracer study of the role of macropores in infiltration in grassland soils, Hydrol. Process., 17, 477–493, https://doi.org/10.1002/hyp.1136, 2003.a
Welch, B. L.: The generalisation of student's problems when several different population variances are involved, Biometrika, 34, 28–35, https://doi.org/10.1093/biomet/34.1-2.28, 1947.a
Wiekenkamp, I., Huisman, J. A., Bogena, H. R., Lin, H. S., and Vereecken, H.: Spatial and temporal occurrence of preferential flow in a forested headwater catchment, J. Hydrol., 534, 139–149, https:/
/doi.org/10.1016/j.jhydrol.2015.12.050, 2016.a
Wu, T., McKinnel, W. P., and Swanston, D. N.: Strength of tree roots and landslides on Prince of Wales Island, Alaska, Can. Geotech. J., 16.1, 19–33, 1978.a
Xu, C., Xu, X., Dai, F., and Saraf, A. K.: Comparison of different models for susceptibility mapping of earthquake triggered landslides related with the 2008 Wenchuan earthquake in China, Comput.
Geosci., 46, 317–329, https://doi.org/10.1016/j.cageo.2012.01.002, 2012.a
Zevenbergen, L. and Thorne, C.: Quantitative analysis of land surface topography, Earth Surf. Proc. Land., 12, 47–56, 1987.a
Zhang, S., Zhao, L., Delgado-Tellez, R., and Bao, H.: A physics-based probabilistic forecasting model for rainfall-induced shallow landslides at regional scale, Nat. Hazards Earth Syst. Sci., 18,
969–982, https://doi.org/10.5194/nhess-18-969-2018, 2018. a
Zhu, H., Zhang, L. M., Xiao, T., and Li, X. Y.: Enhancement of slope stability by vegetation considering uncertainties in root distribution, Comput. Geotech., 85, 84–89, https://doi.org/10.1016/
j.compgeo.2016.12.027, 2017.a | {"url":"https://nhess.copernicus.org/articles/22/2611/2022/","timestamp":"2024-11-09T11:16:13Z","content_type":"text/html","content_length":"484511","record_id":"<urn:uuid:a6b661bf-2cf4-483a-bb08-0b2ed947361c>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.75/warc/CC-MAIN-20241109085148-20241109115148-00604.warc.gz"} |
Gigaflop to Flop Converter (Gflop to flop) | Kody Tools
1 Gigaflop = 1000000000 Flops
One Gigaflop is Equal to How Many Flops?
One Gigaflop is equal to 1000000000 Flops, which means we can also write it as 1 Gigaflop = 1000000000 Flops. Feel free to use our online unit conversion calculator to convert from Gigaflop to Flop: simply enter the value 1 in the Gigaflop field and see the result in Flops.
Manually converting Gigaflop to Flop can be time-consuming, especially when you are not familiar with Computer Speed unit conversions. Because of this learning curve, most users rely on an online Gigaflop to Flop converter tool to get the job done as quickly as possible.
Many online tools can convert Gigaflop to Flop, but not every tool gives an accurate result, which is why we created this online Gigaflop to Flop converter. It is simple, easy to use, and beginner-friendly.
How to Convert Gigaflop to Flop (Gflop to flop)
By using our Gigaflop to Flop conversion tool, you know that one Gigaflop is equivalent to 1000000000 Flops. Hence, to convert Gigaflop to Flop, we just need to multiply the number by 1000000000. We are going to use a very simple Gigaflop to Flop conversion formula for that. Please see the calculation example given below.
\(\text{1 Gigaflop} = 1 \times 1000000000 = \text{1000000000 Flops}\)
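As a minimal sketch of the rule above (the function name `gigaflop_to_flop` is ours, not part of the tool), the whole conversion is a single multiplication:

```python
def gigaflop_to_flop(gflop):
    """Convert gigaflops to flops: 1 Gflop = 1,000,000,000 flop."""
    return gflop * 1_000_000_000

print(gigaflop_to_flop(1))  # 1000000000
```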
What Unit of Measure is Gigaflop?
Gigaflop is a unit of measurement for computer performance. Gigaflop is a multiple of the computer performance unit flop. One gigaflop is equal to 1e9 flops.
What is the Symbol of Gigaflop?
The symbol of Gigaflop is Gflop. This means you can also write one Gigaflop as 1 Gflop.
What Unit of Measure is Flop?
Flop is a unit of measurement for computer performance. Flop, also written flops, stands for floating-point operations per second.
What is the Symbol of Flop?
The symbol of Flop is flop. This means you can also write one Flop as 1 flop.
How to Use Gigaflop to Flop Converter Tool
• As you can see, we have 2 input fields and 2 dropdowns.
• From the first dropdown, select Gigaflop and in the first input field, enter a value.
• From the second dropdown, select Flop.
• Instantly, the tool will convert the value from Gigaflop to Flop and display the result in the second input field.
Example of Gigaflop to Flop Converter Tool
Gigaflop to Flop Conversion Table
Gigaflop [Gflop] Flop [flop] Description
1 Gigaflop 1000000000 Flop 1 Gigaflop = 1000000000 Flop
2 Gigaflop 2000000000 Flop 2 Gigaflop = 2000000000 Flop
3 Gigaflop 3000000000 Flop 3 Gigaflop = 3000000000 Flop
4 Gigaflop 4000000000 Flop 4 Gigaflop = 4000000000 Flop
5 Gigaflop 5000000000 Flop 5 Gigaflop = 5000000000 Flop
6 Gigaflop 6000000000 Flop 6 Gigaflop = 6000000000 Flop
7 Gigaflop 7000000000 Flop 7 Gigaflop = 7000000000 Flop
8 Gigaflop 8000000000 Flop 8 Gigaflop = 8000000000 Flop
9 Gigaflop 9000000000 Flop 9 Gigaflop = 9000000000 Flop
10 Gigaflop 10000000000 Flop 10 Gigaflop = 10000000000 Flop
100 Gigaflop 100000000000 Flop 100 Gigaflop = 100000000000 Flop
1000 Gigaflop 1000000000000 Flop 1000 Gigaflop = 1000000000000 Flop
Gigaflop to Other Units Conversion Table
Conversion Description
1 Gigaflop = 1000000000 Flop 1 Gigaflop in Flop is equal to 1000000000
1 Gigaflop = 1000000 Kiloflop 1 Gigaflop in Kiloflop is equal to 1000000
1 Gigaflop = 1000 Megaflop 1 Gigaflop in Megaflop is equal to 1000
1 Gigaflop = 0.001 Teraflop 1 Gigaflop in Teraflop is equal to 0.001
1 Gigaflop = 0.000001 Petaflop 1 Gigaflop in Petaflop is equal to 0.000001
1 Gigaflop = 1e-9 Exaflop 1 Gigaflop in Exaflop is equal to 1e-9
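The whole table above can be reproduced with a small factor map (a sketch; the unit keys and the `convert` helper are ours, not an API of the site): each unit is expressed in flops, and any conversion is a ratio of two factors.

```python
# Flops per unit, following the SI prefixes used in the table above.
FLOPS_PER_UNIT = {
    "flop": 1,
    "kiloflop": 10**3,
    "megaflop": 10**6,
    "gigaflop": 10**9,
    "teraflop": 10**12,
    "petaflop": 10**15,
    "exaflop": 10**18,
}

def convert(value, src, dst):
    """Convert `value` from unit `src` to unit `dst` via flops."""
    return value * FLOPS_PER_UNIT[src] / FLOPS_PER_UNIT[dst]

print(convert(1, "gigaflop", "flop"))      # 1000000000.0
print(convert(1, "gigaflop", "teraflop"))  # 0.001
```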
Antonio Saggion, Rossella Faraldo, Matteo Pierno: Thermodynamics. Fundamental Principles and Applications (UNITEXT for Physics, Springer, 2019)
UNITEXT for Physics
Antonio Saggion
Rossella Faraldo
Matteo Pierno
Fundamental Principles and Applications
UNITEXT for Physics
Series Editors
Michele Cini, University of Rome Tor Vergata, Roma, Italy
Attilio Ferrari, University of Turin, Turin, Italy
Stefano Forte, University of Milan, Milan, Italy
Guido Montagna, University of Pavia, Pavia, Italy
Oreste Nicrosini, University of Pavia, Pavia, Italy
Luca Peliti, University of Napoli, Naples, Italy
Alberto Rotondi, Pavia, Italy
Paolo Biscari, Politecnico di Milano, Milan, Italy
Nicola Manini, University of Milan, Milan, Italy
Morten Hjorth-Jensen, University of Oslo, Oslo, Norway
UNITEXT for Physics series, formerly UNITEXT Collana di Fisica e Astronomia,
publishes textbooks and monographs in Physics and Astronomy, mainly in English, characterized by a didactic style and comprehensiveness. The books
published in UNITEXT for Physics series are addressed to graduate and advanced
graduate students, but also to scientists and researchers as important resources for
their education, knowledge and teaching.
More information about this series at http://www.springer.com/series/13351
Antonio Saggion · Rossella Faraldo · Matteo Pierno
Thermodynamics
Fundamental Principles and Applications
Antonio Saggion
Dipartimento di Fisica e Astronomia
“Galileo Galilei”
Università di Padova
Padova, Italy
Rossella Faraldo
Liceo Statale “Celio-Roccati”
Via G. Carducci, 8
45100 Rovigo, Italy
Matteo Pierno
Dipartimento di Fisica e Astronomia
“Galileo Galilei”
Università di Padova
Padova, Italy
ISSN 2198-7882
ISSN 2198-7890 (electronic)
UNITEXT for Physics
ISBN 978-3-030-26975-3
ISBN 978-3-030-26976-0 (eBook)
© Springer Nature Switzerland AG 2019
This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part
of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations,
recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission
or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar
methodology now known or hereafter developed.
The use of general descriptive names, registered names, trademarks, service marks, etc. in this
publication does not imply, even in the absence of a specific statement, that such names are exempt from
the relevant protective laws and regulations and therefore free for general use.
The publisher, the authors and the editors are safe to assume that the advice and information in this
book are believed to be true and accurate at the date of publication. Neither the publisher nor the
authors or the editors give a warranty, expressed or implied, with respect to the material contained
herein or for any errors or omissions that may have been made. The publisher remains neutral with regard
to jurisdictional claims in published maps and institutional affiliations.
This Springer imprint is published by the registered company Springer Nature Switzerland AG
The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland
Antonio Saggion dedicates his contribution to
the memory of Nicolò Dallaporta, former
Professor of Theoretical Physics of the
University of Padova and renowned scientist.
In order to explain what pushed us to write this book on thermodynamics, and to
help the reader understand how to approach it, it seems appropriate to begin with
the following quote:
In November 1919, the London Times published an article by A. Einstein
entitled “My Theory.” We have copied here a section of the article:
There are several kinds of theory in physics. Most of them are constructive. These attempt
to build a picture of complex phenomena out of some relatively simple proposition. The
kinetic theory of gases, for instance, attempts to refer to molecular movement the
mechanical, thermal, and diffusional properties of gases. When we say that we understand
a group of natural phenomena, we mean that we have found a constructive theory which
embraces them.
But in addition to this most weighty group of theories, there is another group consisting of
what I call theories of principle. These employ the analytic, not the synthetic method. Their
starting point and foundation are not hypothetical constituents, but empirically observed
general properties of phenomena, principles from which mathematical formulae are
deduced of such a kind that they apply to every case which presents itself.
Thermodynamics, for instance, starting from the fact that perpetual motion never occurs in
ordinary experience, attempts to deduce from this, by analytic processes, a theory which
will apply in every case. The merit of constructive theories is their comprehensiveness,
adaptability, and clarity; that of the theories of principle, their logical perfection, and the
security of their foundation.
Along with special relativity, thermodynamics is a theory of principle, according to
Einstein’s categorization. This means that it is a theory based upon a universally
accepted assumption, starting from which the analytic method is used to deduce
constraints or correlations within all natural phenomena. As with relativity, its
fundamental property consists in generality.
Other macroscopic theories such as mechanics or electromagnetism bring
together many experimental observations in an abstract mathematical formulation.
This allows for precise predictions, which, however, are constrained to the field of
phenomena within which they were synthesized.
Thermodynamics is also based on an abstract mathematical formulation, but this
produces two types of results: inequalities and general relations between quantities
which appear independent of each other at first sight. This is why thermodynamics is so widely used by engineers and so important for biologists.
Another very important aspect, which symbolizes the cultural choices made in
this work, regards the connection with statistical mechanics. It is often believed
that, for example, the concept of temperature is, essentially, absorbed within
“preexisting” concepts in mechanics, like that of kinetic energy in its various forms.
In these cases, one may be convinced that, after the “discovery of atoms,” mechanics, once equipped with a suitable probabilistic language, can make thermodynamics obsolete, as happened for optics, which had its “own laws” and ended up becoming a chapter of electromagnetism.
In the previous quote, Einstein brings the problem clearly into focus: Kinetic
theory (and statistical mechanics) represents an attempt to attribute some so-called
thermal phenomena to the movement of molecules. This reductionist attitude can
provide results which may agree, more or less, with observations but cannot represent a “true” law in the sense to which Einstein refers:
The example I saw before me was thermodynamics. The general principle was there given
in the statement: the laws of nature are such that it is impossible to construct a perpetuum mobile.
For these reasons, thinking about our students, it seemed necessary to us to highlight the distinction between the two areas: that of thermodynamics and that of
statistical mechanics. Differently from the choices made in many of the more recent
treatises, in which the two aspects are mixed, partly for reasons of didactical
completeness and partly to show how the two aspects support each other reciprocally, in this work we have avoided referring, even if only to make analogies, to
kinetic theories (or to statistical mechanics in general). The one exception is the work on thermomechanical effects in Part III. The relations obtained with the
general methods of thermodynamics of irreversible processes have been covered
again in the Knudsen gas case using a classic mechanical-statistical modeling tool.
This is a very informative comparison which provides a “hands-on approach” to
understanding the number of particular hypotheses that need to be satisfied in order
to obtain the same result.
Thermodynamics nowadays is based upon a set of concepts and on the choice of
a mathematical structure which has undergone significant evolutions, starting from
the first theorizations of the second half of the nineteenth century but despite this,
the conceptual and formal original structure is often maintained. Examples of this
are the persistence of the calorimetric definition of “heat” and the establishment
of the second law according to the original formulation of Lord Kelvin or
R. Clausius. The first example (which risks keeping alive the old idea of “caloric,”
meaning a fluid that flows from one body to another) faces the student with two
substantial difficulties bordering on real conceptual errors. On the one hand, the
concept of heat is persistently associated with variations in temperature1; on the
other hand, it creates the necessity to “demonstrate” the heat–work equivalence
(which further reinforces the idea of caloric) in order to then “demonstrate” that
energy is a function of state.
Commenting on the second example (the Second Principle) is more demanding.
One arrives at the concept of entropy, which is the theorization, meaning the
translation into a mathematical form, of the fundamental postulate on the impossibility of perpetual motion, after a long, strenuous and mathematically questionable path.
Whatever the solution of the first and second principles, their formulation is
always dependent on the definition of the amount of heat transferred in an interaction. This represents a limitation that has to be overcome, and this can only be
obtained through further postulation. The problem lies in the fact that the concept
of the amount of heat transferred only makes sense for closed systems, meaning
systems which do not exchange mass, therefore leaving out all of those systems
(open systems) which are of primary interest to biology and to applications in
engineering. This further step is carried out by generalizing the relation between
energy and entropy, that the first and second principles indicated for closed systems.
The final postulation, which brings everything together, consists in establishing,
for every macroscopic system, the existence of the so-called Fundamental Relation.
This contains all of the properties of a macroscopic system: the conditions of
stability (i.e., of observability) of states of equilibrium and the responses that the
system provides to any external perturbation. All the thermomechanical properties of
materials can be expressed with partial derivatives of the Fundamental Relation of
that material. The fact that these relations are verified by experimental observation
allows us to understand that the existence of this relation expresses that value of
generality as Einstein sees it, and therefore has that characteristic of “true,” meaning
not tied to particular models.
In conclusion, together with the formulation of the first law, which defines energy
and postulates its conservation, the existence and the geometric properties of the
Fundamental Relation constitute the complete and all-inclusive formulation of the
postulate “according to which perpetual motion is impossible.” The Fundamental
Relation and its differential expression (which is sometimes called the Fundamental
Equation) also constitute the starting point for the study of the dynamics of irreversible processes. It is from this differential expression that the “forms” of the flows
describing the dynamics of the processes in action, and of the forces which put these
processes into motion, are identified.
Temperature (absolute) should be correctly defined together with entropy, i.e., within the Second Principle.
Outline of This Work
• Part I is dedicated to the fundamental principles that, as in Feynman’s
methodological approach, we can call the “Toolbox.” In conformity with
Einstein’s statement, the student will focus on “… the principles that give rise to
mathematically formulated criteria” which will be used to expand their
knowledge of natural phenomena. Knowledge of nature will be reflected in
complete and clear descriptions based on the certainty of foundations.
• The First Principle is expressed according to M. Born’s formulation given in
1921 in an article published in Physikalische Zeitschrift 22(21), 224. The most
relevant part of this approach is his definition of amount of heat as an
all-inclusive amount which sums up the energy transferred to the system by all
the interactions not controlled by the observer. The Second Principle consists in
the definition of entropy. There are various excellent formulations, but we have
chosen that adopted by E. A. Guggenheim in his celebrated treatise on thermodynamics first published in 1949. In this formulation, entropy and temperature are defined as two fundamental, complementary quantities (as in other
formulations), but the core point consists in the separation of the variation in
entropy, in any infinitesimal process, into the sum of two contributions: one due
to the processes which occur within the system and one due to the interaction
with the outside. This subdivision into two contributions, which has to be done
for the variation of any extensive quantity, is of fundamental importance
because it clearly shows the role of the interactions in the transfer of entropy
from the outside world. This choice will prove to have been very advantageous
when studying irreversible processes: The contributions from interactions with
the outside will make up the flows whose dynamics constitute our description
of the processes underway. In Part I, we have also included a chapter on
Maxwell’s relations. The reason for situating this topic, which would appear to
be more technical, in the part dedicated to fundamental principles lies in the
need to show how knowledge of the Fundamental Relation is equivalent to the
knowledge (in general, it will be a knowledge gained from experimental measures) of the coefficients of compressibility, of thermal expansion, and of a
specific heat of the system under consideration.
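In symbols (standard notation, ours rather than a quotation from the book), Born's definition of the amount of heat and Guggenheim's split of the entropy change can be condensed as:

```latex
% Born: heat is the energy change not accounted for by the work W
% controlled by the observer (closed system):
Q \;=\; \Delta E - W
% Guggenheim: split of an infinitesimal entropy change into an external
% (interaction) contribution and an internal (process) contribution,
% the latter being non-negative for natural processes:
\mathrm{d}S \;=\; \mathrm{d}_{e}S + \mathrm{d}_{i}S ,
\qquad \mathrm{d}_{i}S \;\ge\; 0
```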
• In Part II, named “Applications,” we apply the fundamental principles to a series
of generalized situations which are frequently to be found. The various topics
are listed in the index, but two of them may be highlighted. One concerns the
particular attention devoted to the law of corresponding states, a topic which, in
our opinion, should always form part of the specific training. The other concerns
the relation between radiation, relativity and thermodynamics; it contains a
summary of the route by which Planck was able to extract the correct expression
of spectral density of radiation in thermal equilibrium and which peaked in his
famous article at the end of 1900. The reason for this choice is that we consider
it necessary to remember the fundamental role thermodynamics had in obtaining
the result which provided him with a Nobel Prize and all of us with a fundamental step in the advancement of knowledge.
• Part III is dedicated to the study of configurations outside equilibrium and of the
dynamics of natural processes. We have kept separate in this work the case of
discontinuous systems, that is the case of those configurations that are formed by
systems each in internal equilibrium but not in equilibrium among themselves,
from the case of continuous systems, meaning those configurations in which the
intensive state variables can be considered continuous functions of the coordinates. While for the discontinuous systems the starting point will be the fundamental equation valid for states of equilibrium, the case of continuous systems
requires, to some extent, a reformulation of the thermodynamic equations within
this new light. Throughout all this Part, the centrality of the production of
entropy (which measures the speed with which the processes make entropy
increase) emerges as the physical quantity which governs the dynamics of the
processes. It is invariant with respect to the different descriptions the observer
may opt for, and it enables the identification of the generalized forces which
determine the establishment of the flows and the study of the interference
between different processes. Due attention is reserved for the study of stationary
states and the role the production of entropy has in establishing the evolution
criteria toward stationary states. In this context, the principles due to Le
Chatelier and to Le Chatelier–Braun, still used both in physics and in chemistry,
find their explanation.
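The centrality of entropy production described above is usually formalized as follows (standard notation of the Onsager theory the text refers to, not quoted from the book): the entropy production is a bilinear form in the flows and their conjugate generalized forces, and near equilibrium the flows depend linearly on the forces with symmetric coefficients:

```latex
% Entropy production rate as a bilinear form in flows J_k and
% generalized forces X_k; non-negative for natural processes:
\sigma \;=\; \frac{\mathrm{d}_{i}S}{\mathrm{d}t}
\;=\; \sum_{k} J_k X_k \;\ge\; 0
% Linear phenomenological laws and Onsager reciprocal relations:
J_i \;=\; \sum_{j} L_{ij} X_j , \qquad L_{ij} = L_{ji}
```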
• In Part IV, L. Szilard’s version of Maxwell’s paradox is discussed together with the solution proposed by Bennett in order “to save” the Second Principle. This
establishes a profound link between thermodynamics and the theory of information, giving rise to a new branch of science where the two disciplines are
deeply merged. It changes our point of view on the relation between observer
and observed and opens the way to proposals of new paradigms.
• Two appendices have been added at the end. The first is of technical nature in
the sense that we provide some mathematical formulae useful throughout the
book and a clear discussion concerning the notations we adopted to describe
small and infinitesimal processes. The other is devoted to the mechanical
interpretation of pressure. In it we obtain some general relations between
pressure and energy density for isotropically distributed molecules.
Concluding Remark
This book contains, in an extended and in-depth version, the topics that have been
treated in the course of thermodynamics held, for several years, at the Department
of Physics and Astronomy of the University of Padua. The choice of specific topics
may vary from year to year, but the conceptual structure, consisting of half the
course devoted to the thermodynamics of equilibrium states and half to the thermodynamics of irreversible processes, is kept constant.
For the part concerning the thermodynamics of equilibrium states, the chosen
approach was the one well defined by E. A. Guggenheim in his famous treatise, and
this has already been previously commented in this introduction.
Regarding the study of irreversible processes, there may be two levels of
approach. A complete approach, intended for advanced students, must include the development of the formalism for continuous systems. Another approach,
simpler but more intuitive, concerns the study of the irreversible processes that
develop between systems in internal equilibrium (we call it discontinuous systems
approximation). This part has always been carried out by adopting, as a reference
text, the excellent little book Thermodynamics of Irreversible Processes by I. Prigogine, published in 1961 by John Wiley & Sons but no longer easily available in libraries.
The great merit of this short monograph consists in providing the undergraduate
student with a complete picture of the so-called Onsager Theory on the definition
and the interference of irreversible processes.
Both Guggenheim’s and Prigogine’s treatises have in common the fundamental
starting point of dividing the variation of any extensive quantity in the two contributions related, respectively, to interactions with the external world and to processes occurring internally to the system.
For this reason, they formed a coherent and solid didactical tool.
Padua, Italy
May 2019
Antonio Saggion
Rossella Faraldo
Matteo Pierno
Acknowledgements

We are indebted to the students for their stimulating questions and active participation in the classes. We also thank many colleagues in the Department of Physics
and Astronomy of the University of Padova for useful discussions. Special thanks
to the colleagues and students in the Laboratory of Surfaces and Interfaces (LaFSI)
for discussions, support for typesetting, and “regenerating” parties.
Part I
Formulation of the Theory
1 Macroscopic Systems and Empirical Temperature
1.1 Macroscopic Systems
1.2 Macroscopic Observer
1.3 Thermodynamic State
1.4 The Concept of Empirical Temperature
1.5 The Perception of Hotness and Coldness
1.6 The Empirical Temperature and the Zeroth Principle of Thermodynamics
1.6.1 Equilibrium State
1.6.2 The Zeroth Principle
2 The First Principle of Thermodynamics
2.1 Introduction
2.2 Closed Systems
2.3 Adiabatic Walls and Adiabatic Transformations
2.4 The Definition of Energy
2.4.1 Energy of Familiar Adiabatic Systems
2.5 Definition of Heat (Quantity of)
2.6 Infinitesimal Transformations
2.7 Formulation of the First Principle of Thermodynamics
3 The Second Principle of Thermodynamics
3.1 Introduction
3.2 Natural and Unnatural Processes
3.3 Quasi-static Processes
3.4 Reversible Processes
3.5 Formulation of the Second Principle: Definition of S and T
3.5.1 State Functions
3.5.2 Extensive and Intensive Quantities
3.5.3 Measuring S
3.5.4 The Absolute, or Thermodynamic, Temperature T
3.6 Discontinuous Systems Approximation
3.6.1 Resume
3.7 On the Predictive Power of Thermodynamics
3.8 Efficiency of Thermal Engines
3.9 Carnot Cycles
3.10 On the Determination of the New Scale of Temperature T
3.11 The Carnot Engine and Endoreversible Engines
3.12 Coefficient of Performance (COP)
3.12.1 Refrigerator
3.12.2 Heat Pump
3.13 Availability and Maximum Work
Problems
4 The Fundamental Relation and the Thermodynamic
Potentials . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
4.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
4.2 The Equilibrium State Postulate for Closed Systems
with No Chemical Reactions . . . . . . . . . . . . . . . . . . . . . . .
4.2.1 Simple Systems . . . . . . . . . . . . . . . . . . . . . . . . . .
4.3 The Fundamental Relation . . . . . . . . . . . . . . . . . . . . . . . . .
4.3.1 The General Case for Open Systems with Variable
Composition: The Chemical Potential . . . . . . . . . .
4.3.2 Other Thermodynamic Potentials . . . . . . . . . . . . . .
4.3.3 The Free Energy and Isothermal Processes
in Closed Systems . . . . . . . . . . . . . . . . . . . . . . . .
4.3.4 The Enthalpy and Isobaric Processes . . . . . . . . . . .
4.3.5 The Gibbs Potential and Isothermal and Isobaric
Processes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
4.3.6 The Stability Problem in a Thermodynamical
System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
4.3.7 Adiabatic Systems . . . . . . . . . . . . . . . . . . . . . . . .
4.3.8 Systems at Constant Temperature . . . . . . . . . . . . .
4.3.9 Systems at Constant Entropy . . . . . . . . . . . . . . . . .
4.3.10 The Isothermal Compressibility . . . . . . . . . . . . . . .
4.3.11 The Dependence of Entropy on Temperature . . . . .
4.3.12 Other Consequences from the Stability Conditions .
5 Maxwell Relations . . . . . . . . . . . . . . . . . . . . . . . . . . . .
5.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . .
5.2 Some Properties of Materials . . . . . . . . . . . . . . . .
5.3 The Volume and Pressure Dependence of Entropy
5.4 The Heat Capacities and the Temperature Dependence
of Entropy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
5.4.1 The Heat Capacity at Constant Pressure . . . . . . .
5.4.2 The Heat Capacity at Constant Volume . . . . . . .
5.4.3 The Relation Between Cp and CV . . . . . . . . . . .
5.4.4 The Adiabatic Compressibility Coefficient . . . . .
5.4.5 The Equations of the Adiabatic Transformations
5.5 Concluding Remarks and the Role of Cp, α, κT . . . . . . .
5.5.1 Isothermal Processes . . . . . . . . . . . . . . . . . . . . .
5.5.2 Free Expansion . . . . . . . . . . . . . . . . . . . . . . . .
5.5.3 Pressure Drop in Free Expansion . . . . . . . . . . .
5.5.4 Temperature–Pressure Variations in Adiabatic
Transformations . . . . . . . . . . . . . . . . . . . . . . . .
5.5.5 Temperature–Volume Variations in Adiabatic
Transformations . . . . . . . . . . . . . . . . . . . . . . . .
Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Part II
6 General Properties of Gaseous Systems . . . . . . . . . . . . . . . . . .
6.1 Isothermal Behavior of Gases . . . . . . . . . . . . . . . . . . . . .
6.2 The First Virial Coefficient for Gases . . . . . . . . . . . . . . . .
6.2.1 The Joule–Thomson Experiment . . . . . . . . . . . . .
6.2.2 Some Thermodynamic Potentials for Gases . . . . .
6.2.3 Calorimetric Measurements for the Determination
of the First Virial Coefficient . . . . . . . . . . . . . . .
6.3 Definition of the Temperature Scale by Means of Gases . .
6.3.1 Other Determinations of the Temperature Scale . .
6.4 The Universal Constant of Gases . . . . . . . . . . . . . . . . . . .
6.5 The Joule–Thomson Coefficient . . . . . . . . . . . . . . . . . . . .
6.6 The Inversion Curve . . . . . . . . . . . . . . . . . . . . . . . . . . . .
6.6.1 Liquefaction of Gases and the Attainability
of Low Temperatures . . . . . . . . . . . . . . . . . . . . .
6.7 A Simple Approximation of the Isothermal Behavior
of Gases . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
6.8 The Chemical Potential in Diluted Gases . . . . . . . . . . . . .
6.9 Molar Heat at Constant Volume for Dilute Gases . . . . . . .
6.9.1 Microscopic Degrees of Freedom . . . . . . . . . . . .
6.9.2 Energy Equipartition . . . . . . . . . . . . . . . . . . . . . .
6.9.3 On the Temperature Dependence of Molar Heats .
7 Phase Transitions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
7.1 Phases Equilibrium . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
7.2 Latent Heat . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
7.2.1 Liquid–Vapor Equilibrium . . . . . . . . . . . . .
7.2.2 Equilibrium Between Condensed Phases:
Solid–Liquid . . . . . . . . . . . . . . . . . . . . . . . .
7.2.3 Solid–Vapor Equilibrium . . . . . . . . . . . . . . .
7.3 Triple Point . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
7.4 Phase Diagrams . . . . . . . . . . . . . . . . . . . . . . . . . . . .
7.4.1 (p; V) Diagrams . . . . . . . . . . . . . . . . . . . . . .
7.4.2 Molar Heat at Equilibrium . . . . . . . . . . . . . .
7.4.3 Temperature Dependence of the Latent Heats .
7.5 Continuity of States . . . . . . . . . . . . . . . . . . . . . . . . .
7.6 Continuous-Phase Transitions . . . . . . . . . . . . . . . . . .
7.6.1 Differences Between Continuous- and Discontinuous-Phase Transitions . . . . . . .
Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
8 van der Waals Equation . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
8.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
8.2 A Simple Modification to the Equation of State for Ideal
Gases . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
8.3 Successes and Failures of the van der Waals Equation . . . . .
8.3.1 van der Waals Equation and the Boyle
Temperature . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
8.3.2 The Critical Point . . . . . . . . . . . . . . . . . . . . . . . . . .
8.3.3 The Dependence of the Energy of a van der Waals
Gas on Volume . . . . . . . . . . . . . . . . . . . . . . . . . . .
8.3.4 The Coefficient of Thermal Expansion for a van der
Waals Gas . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
8.3.5 The Molar Heats at Constant Volume
and at Constant Pressure in a van der Waals Gas . . .
8.3.6 The Joule–Thomson Coefficient and the Inversion
Curve for a van der Waals Gas . . . . . . . . . . . . . . . .
8.3.7 Determination of Vapor Pressure from
the van der Waals Equation . . . . . . . . . . . . . . . . . .
8.3.8 Free Energy in a van der Waals Gas . . . . . . . . . . . .
8.4 The Law of Corresponding States . . . . . . . . . . . . . . . . . . . .
8.4.1 Corresponding States for the Second Virial
Coefficient . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
8.4.2 The Compressibility Factor and the Generalized
Compressibility Chart . . . . . . . . . . . . . . . . . . . . . . .
8.4.3 Vapor Pressure and Latent Heat of Vaporization . . .
8.4.4 Triple Point and the Law of Corresponding States . .
8.4.5 The Inversion Curve and the Law of Corresponding
States . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
8.4.6 The Law of Corresponding States and the van der
Waals Equation . . . . . . . . . . . . . . . . . . . . . . . . . . .
8.5 Power Laws at the Critical Point in a van der Waals Gas . . .
Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
9 Surface Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
9.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
9.2 Surface Tension . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
9.3 Properties of Surface Layers . . . . . . . . . . . . . . . . . . . . .
9.3.1 Stability of Equilibrium States . . . . . . . . . . . . . .
9.4 Interfaces at the Contact Between Two Phases in
Equilibrium . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
9.5 Curvature Effect on Vapor Pressure: Kelvin’s Relation . .
9.6 Nucleation Processes and Metastability in Supersaturated
Vapor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
9.6.1 Spinodal Decomposition . . . . . . . . . . . . . . . . . .
9.6.2 Temperature Dependence . . . . . . . . . . . . . . . . .
9.6.3 Surface Tension and the Law of Corresponding
States . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
9.6.4 Interfaces at Contact Between Three Phases
in Equilibrium . . . . . . . . . . . . . . . . . . . . . . . . .
10 Electrostatic Field . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
10.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
10.2 The Response of Matter . . . . . . . . . . . . . . . . . . . . . . . .
10.3 The Dielectric Constant . . . . . . . . . . . . . . . . . . . . . . . . .
10.4 Thermodynamic Potentials for Linear Dielectrics . . . . . .
10.4.1 Thermodynamic Potentials for Linear Dielectrics
Without Electrostriction . . . . . . . . . . . . . . . . . .
10.5 Dielectric Constant for Ideal Gases . . . . . . . . . . . . . . . .
11 Magnetic Field . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
11.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
11.2 Electric Work, Magnetic Work, and Radiation . . . . . . .
11.3 Constitutive Relations . . . . . . . . . . . . . . . . . . . . . . . . .
11.3.1 Uniform Medium . . . . . . . . . . . . . . . . . . . . . .
11.4 Diamagnetic Materials . . . . . . . . . . . . . . . . . . . . . . . . .
11.5 Paramagnetic Materials . . . . . . . . . . . . . . . . . . . . . . . .
11.5.1 Long, Rectilinear, and Homogeneous Solenoid
11.6 Thermodynamic Potentials in the Presence of Magnetostatic
Fields . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
11.6.1 Expression of the Thermodynamic Potentials . . . . .
11.6.2 Linear Media . . . . . . . . . . . . . . . . . . . . . . . . . . . .
11.7 Adiabatic Demagnetization . . . . . . . . . . . . . . . . . . . . . . . .
11.8 Ferromagnetic Materials . . . . . . . . . . . . . . . . . . . . . . . . . .
12 Thermodynamics of Radiation . . . . . . . . . . . . . . . . . . . . . . . .
12.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
12.2 Kirchhoff’s Law . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
12.2.1 Absorptivity of Material Bodies . . . . . . . . . . . . .
12.2.2 Emissivity of Material Bodies . . . . . . . . . . . . . . .
12.2.3 Black Body . . . . . . . . . . . . . . . . . . . . . . . . . . . .
12.2.4 Kirchhoff’s Law for the Emissivity of a Black
Body . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
12.2.5 One Fundamental Consequence of Kirchhoff’s
Law . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
12.2.6 Extended Form of Kirchhoff’s Law . . . . . . . .
12.2.7 Emittance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
12.2.8 Radiation Energy Density and Emissivity . . . . . .
12.3 Wien’s Law . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
12.3.1 Wien’s Law According to Wien . . . . . . . . . . . . .
12.3.2 Wien’s Law and Relativity . . . . . . . . . . . . . . . . .
12.3.3 Some Consequences of Wien’s Law . . . . . . .
12.4 Thermodynamic Potentials for Radiation . . . . . . . . . . . . .
12.5 Thermodynamical Processes for Radiation . . . . . . . . . . . .
12.5.1 Isothermal Processes . . . . . . . . . . . . . . . . . . . . . .
12.5.2 Adiabatic Processes . . . . . . . . . . . . . . . . . . . . . .
12.5.3 Isochoric Transformations (Constant Volume) . . .
12.5.4 Free Expansion . . . . . . . . . . . . . . . . . . . . . . . . .
12.6 Planck and the Problem of Black-Body Radiation . . . . . . .
12.6.1 The Situation at the End of the Nineteenth
Century and the Black-Body Radiation . . . . . . . .
12.6.2 Planck and the Problem of Matter–Radiation
Interaction . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
12.6.3 The Planck Solution (Through Thermodynamics) .
12.6.4 The Dawn of Quantum Physics . . . . . . . . . . . . . .
12.7 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
13 Third Law of Thermodynamics . . . . . . . . . . . . . . . . . . . .
13.1 The Third Law of Thermodynamics . . . . . . . . . . . . . .
13.1.1 Formulation According to Nernst and Planck .
13.1.2 Some Observational Consequences . . . . . . . .
Part III
Irreversible Processes
14 Irreversible Processes: Fundamentals . . . . . . . . . . . . . . . . . .
14.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
14.1.1 Rephrasing the First Principle . . . . . . . . . . . . . .
14.2 Heat Exchange . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
14.3 Chemical Reactions . . . . . . . . . . . . . . . . . . . . . . . . . . . .
14.3.1 The Rate of Reaction . . . . . . . . . . . . . . . . . . . .
14.3.2 Entropy Production and the Chemical Affinity . .
14.4 Open Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
14.5 Electrochemical Reactions . . . . . . . . . . . . . . . . . . . . . . .
14.6 Generalized Fluxes and Forces . . . . . . . . . . . . . . . . . . .
14.6.1 Determination of Generalized Fluxes and Forces
14.7 Onsager Relations . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
14.7.1 The Curie Symmetry Principle . . . . . . . . . . . . .
14.8 The Approximation of Linearity . . . . . . . . . . . . . . . . . .
14.8.1 Chemical Affinity . . . . . . . . . . . . . . . . . . . . . . .
14.8.2 Reaction Rate . . . . . . . . . . . . . . . . . . . . . . . . . .
14.8.3 Linear Relations Between Rates and Affinities . .
14.8.4 Relaxation Time for a Chemical Reaction . . . . .
15 Irreversible Processes: Applications . . . . . . . . . . . . . . . . . . . .
15.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
15.1.1 Thermomechanical Effects . . . . . . . . . . . . . . . . .
15.1.2 Knudsen Gases . . . . . . . . . . . . . . . . . . . . . . . . .
15.1.3 Electrokinetic Effects . . . . . . . . . . . . . . . . . . . . .
15.2 Stationary States . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
15.2.1 Configurations of Minimal Entropy Production . .
15.2.2 Determination of the Stationary State . . . . . . . . .
15.2.3 Stability of Stationary States and the Principles
of Le Chatelier and of Le Chatelier–Braun . . . . .
15.3 Fluctuations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
15.3.1 Theory of Fluctuations in an Isolated System . . . .
15.3.2 Fluctuations Distribution Function . . . . . . . . . . . .
15.3.3 Mean Values and Correlations . . . . . . . . . . . . . .
15.3.4 Onsager Relations and the Decay of Fluctuations
in Isolated Systems . . . . . . . . . . . . . . . . . . . . . . .
16 Thermodynamics of Continua . . . . . . . . . . . . . . . . . . . . . . . .
16.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
16.2 Definition of System . . . . . . . . . . . . . . . . . . . . . . . . . . .
16.3 Mass Conservation . . . . . . . . . . . . . . . . . . . . . . . . . . . .
16.4 Equation of Motion . . . . . . . . . . . . . . . . . . . . . . . . . . . .
16.5 The Equation for Energy . . . . . . . . . . . . . . . . . . . . . . . .
16.6 The Equation for Entropy . . . . . . . . . . . . . . . . . . . . . .
16.6.1 Entropy Balance in Continuous Systems . . . . .
16.6.2 The Entropy Production . . . . . . . . . . . . . . . . .
16.6.3 Mechanical Equilibrium . . . . . . . . . . . . . . . . .
16.6.4 The Einstein Relation Between Mobility
and Diffusion Coefficient . . . . . . . . . . . . . . . .
16.7 Thermoelectric Phenomena . . . . . . . . . . . . . . . . . . . . .
16.7.1 Seebeck Effect—Thermoelectric Power . . . . . .
16.7.2 Peltier Coefficient—Phenomenology . . . . . . . .
16.7.3 Thomson Effect—Phenomenology . . . . . . . . . .
16.7.4 Peltier Effect—Explanation . . . . . . . . . . . . . . .
16.7.5 Thomson Effect . . . . . . . . . . . . . . . . . . . . . . .
16.7.6 Galvanomagnetic and Thermomagnetic Effects .
16.8 Thermodiffusion Processes . . . . . . . . . . . . . . . . . . . . .
16.8.1 Binary Systems . . . . . . . . . . . . . . . . . . . . . . .
16.8.2 Thermodiffusion . . . . . . . . . . . . . . . . . . . . . . .
16.8.3 Dufour Effect . . . . . . . . . . . . . . . . . . . . . . . . .
16.9 Appendix—The Gibbs–Duhem Relation . . . . . . . . . . . .
Part IV
Thermodynamics and Information
17 Introduction to the Role of Information in Physics . . . . . . . . . . .
17.1 The Maxwell Paradox . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
17.2 Leo Szilard’s Article of 1929 . . . . . . . . . . . . . . . . . . . . . . .
17.3 The Observer Creates Information . . . . . . . . . . . . . . . . . . . .
17.4 The Solution of the Maxwell–Szilard Paradox . . . . . . . . . . . .
17.5 Landauer Principle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
17.6 On the Separation Observer–Observed . . . . . . . . . . . . . . . . .
17.6.1 Information as a Physical Quantity Which Acquires
a Physical Reality . . . . . . . . . . . . . . . . . . . . . . . . . .
17.6.2 New Perspectives: The Physical Entropy According
to Zurek . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Appendix A: Math Tools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Appendix B: Pressure Exerted by a Particle Gas . . . . . . . . . . . . . . . . . . .
Solutions to the Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
E; E
d^ i
d^ e
Empirical temperature
Thermodynamic or absolute temperature
Amount of work done on the thermodynamic system by the external world
Energy of a system
Force (mechanical)
Rest mass
Lorentz factor
Kinetic energy
Electrostatic potential
Electric charge
Electrostatic field (vector, modulus)
Amount of heat given to a system in a finite transformation
Finite difference of a quantity
Differential of a function
Infinitesimal amount, not an exact differential
Infinitesimal variation of an extensive quantity, due to internal processes
Infinitesimal variation of an extensive quantity, due to the interaction with
the external world
Generic extensive quantity
Generic intensive quantity
Efficiency of a thermal engine
Available energy
Available work
Number of moles
Number of moles of component c
A; B; C
S eq
Chemical potential of component c
Free energy or Helmholtz potential
Enthalpy or heat function
Gibbs potential or free enthalpy
Internal degree of freedom of a constrained system. Degree of advancement of a chemical reaction. In a generalized sense, degree of
advancement of a generic process
Coefficient of isothermal compressibility
Coefficient of adiabatic compressibility
Heat capacity at constant pressure
Heat capacity at constant volume
Coefficient of thermal expansion
Ratio between the heat capacities at constant pressure and at constant volume
Molar Gibbs potential
Molar energy
Molar volume
Molar entropy
Molar enthalpy
First, second, and third virial coefficient
Joule–Thomson coefficient
Critical temperature
Critical pressure
Critical volume
Reduced temperature
Reduced pressure
Reduced volume
Number of microscopic degrees of freedom
Surface tension
Spreading parameter
Equilibrium contact angle
Free charge
Free charge density
Electric permittivity of a material
Electric susceptibility of a material
Magnetic permeability of a material
Magnetic susceptibility
Electric displacement
Electric polarization
Magnetic field
Magnetizing field
Magnetization vector
Curie–Weiss Temperature
j ch
Emittance of a black body
Emissivity of a black body
Absorptivity of a black body
Spectral emissivity of entropy for a black body
Spectral energy density
Energy of a particle or, in general, of an elementary constituent.
Infinitesimal energy transferred to an open system
Production of entropy
Generalized force
Generalized flux
Stoichiometric coefficient of the component c
Stoichiometric coefficient of the component c in the q-th chemical reaction
Velocity of the reaction or reaction rate
Degree of advancement of the q-th chemical reaction
Reaction rate of the q-th chemical reaction
Affinity of the chemical reaction
Affinity of the q-th chemical reaction
Generalized force of q-th chemical reaction
Flux of energy in discontinuous systems
Generalized force relative to the flux of energy in discontinuous systems
Generalized flux of component c in discontinuous systems
Generalized force relative to the flux of component c in discontinuous systems
Electric valence or electrovalency of component c
Electrochemical affinity of component c
Electrochemical potential of component c
Thermal flux
Generalized force relative to the thermal flux
Generalized flux of the q-th process in discontinuous systems
Linear phenomenological coefficient describing the interference between
process q and q′
Rate per unit volume of the chemical reaction
Production of entropy per unit volume
Equilibrium constant of a chemical reaction
Forward reaction rate
Backward reaction rate
Forward kinetic constant of a chemical reaction
Backward kinetic constant of a chemical reaction
Relaxation time of a process or of a system
Generalized flux of matter in discontinuous systems
Generalized force relative to the flux of matter in discontinuous systems
Heat of transfer
Energy of a single particle
Generalized flux in the forward direction in discontinuous systems
Generalized flux in the backward direction in discontinuous systems
Energy flux in the forward direction in discontinuous systems
Energy Flux in the backward direction in discontinuous systems
Energy of transfer
Generalized force relative to the electric current, in discontinuous systems
Fluctuation of the q-th state parameter
Mass density
Mass density of the component c
Molar concentration of the component c
Mass concentration of the component c
Mole fraction of the component c
Density of a generic extensive quantity
Specific value (per unit mass) of a generic extensive quantity
Stoichiometric number times the molecular weight, m′c = mc Mc
Molecular weight of the component c
Local velocity of the component c
Local velocity of the center of mass of the fluid
Diffusion flux density of the component c
Force per unit mass acting on the component c
Specific (per unit mass) energy
Energy flux density in continuous systems
Heat flux density in continuous systems
Specific (per unit mass) entropy
Specific (per unit mass) volume
Specific chemical potential of component c
Entropy flux density deprived of the convective term
Total entropy flux density (including the convective term)
Diffusion flux density of the component c with respect to a generic velocity
Mobility or mobility coefficient
Diffusion coefficient
Heat conductivity of a material
Electric conductivity of a material
Absolute thermoelectric power of metal A
Absolute thermoelectric power of metal B
Absolute thermoelectric power of metal AB
Peltier coefficient of a thermocouple AB
Thomson coefficient of metal A
Charge per unit mass of the electron
Diffusion flux of electrons (momentum density)
Electron's mass
Specific electrochemical potential of the electron
Thermodiffusion coefficient
Soret’s coefficient
Dufour’s coefficient
Number of microscopic states which correspond to the same macroscopic state
Amount of Information possessed by a message in the case of
equiprobable symbols
Amount of information possessed by a message in the general case
Entropy for 1 bit of Information
Average information per symbol
Statistical entropy
Density matrix
Algorithmic entropy
Momentum density of electromagnetic fields
Density of energy flux
Accurate macroscopic observer
Coefficient of performance
Information gathering and using system
Low-accuracy macroscopic observer
Local thermodynamic equilibrium
List of Some Useful Constants
Speed of light in vacuum, c = 2.998 × 10⁸ m s⁻¹
Electric permittivity or electric constant in vacuum,
ε₀ = 8.854 × 10⁻¹² C m⁻¹ V⁻¹
Avogadro number, NA = 6.022 × 10²³ mol⁻¹
Boltzmann constant, kB = 1.381 × 10⁻²³ J K⁻¹
Gas constant, R = 8.314 J mol⁻¹ K⁻¹ = 1.987 cal mol⁻¹ K⁻¹
Electric charge, e = 1.602 × 10⁻¹⁹ C
Faraday constant, F = NA e = 0.965 × 10⁵ C mol⁻¹
Magnetic permeability in vacuum, μ₀ = 4π × 10⁻⁷ N A⁻²
Stefan–Boltzmann constant, a = 7.56 × 10⁻¹⁶ J m⁻³ K⁻⁴
Radiation constant, σ = 5.67 × 10⁻⁸ J s⁻¹ m⁻² K⁻⁴
Solar constant, S = 1366 W m⁻²
Planck constant, h = 6.626 × 10⁻³⁴ m² kg s⁻¹
Part I
Formulation of the Theory
Chapter 1
Macroscopic Systems and Empirical Temperature
Abstract The definitions of the macroscopic system and of empirical temperature
are discussed. At the first step of the theory, a definition of the state of equilibrium
must be given for isolated systems and the definition of mutual equilibrium between
closed systems put in contact comes as a consequence. The property of being in
mutual equilibrium is, by definition, denoted by saying that the two systems have the
same temperature. Empirical observation shows that the property of having the same
temperature, i.e., of being in mutual equilibrium, is transitive and this statement
is assumed to be true in general. This is the content of the Zeroth Principle of
Thermodynamics, and it is the necessary postulation which allows us to define the
concept of empirical temperature as a physical quantity.
Keywords Macroscopic systems · Macroscopic observer · Thermodynamic
state · Empirical temperature · Zeroth principle
1.1 Macroscopic Systems
The need to use the adjective macroscopic came about at the beginning of the twentieth Century, with the discovery of Quantum Physics and the development of the
atomic-molecular theory of matter. Since the desire to extend the laws of Classical
Physics to the atomic world proved to be destined to failure, it was necessary to
create a new theoretical context for the phenomena which take place in the atomic-scale world; these phenomena define what we normally call the microscopic world.
Conversely, those phenomena that we can study within the theoretical context of
Classical Physics are normally called the macroscopic world.
Since we know that the size of an atom is very small, we could define as a macroscopic system a physical system which is made up of a large number of elementary
systems (e.g., atoms or molecules). This could be considered an acceptable definition
of a macroscopic system and this is precisely what we currently do when we want to
apply kinetic theories to the description of macroscopic systems. However, when it
comes to the physics of continuous systems this leaves us empty-handed. In this case,
we do not have the alternative of the microscopic system but we continue to use the
theoretical context of Classical Physics (pre-quantum). It is preferable, therefore, to
change the point of view and decide that being macroscopic or microscopic is not a
property of the object being observed but indeed a way of observing and processing
on the part of the observer. A macroscopic system is, therefore, any physical system
which is observed, and studied, by a macroscopic observer.
© Springer Nature Switzerland AG 2019
A. Saggion et al., Thermodynamics, UNITEXT for Physics,
1.2 Macroscopic Observer
A macroscopic observer is an observer who works with tools having timescales and
spatial resolutions that we define as macroscopic. In the first instance, these spatial
and temporal resolutions are determined by our senses, meaning that they are linked
to our physiology. Starting from this first means of contact with the outside world,
the observer has created other research tools which, even though they work on space
and timescales that are very distant from those accessible to our sensory apparatus
(both much smaller or much larger), are, however, based on the Physics devised
from sensory observations. In this way we have created a theoretical context, i.e.,
a collection of theories and of observation tools that we now define as classic or
macroscopic: it is for this reason and only in these cases, that we may correctly make
reference to an extension (or sometimes a strengthening) of our sensory apparatus.
The need to characterize the macroscopic observer arises from the failure, as we
have seen, of the attempt to extend the classical theoretical context to atomic phenomena. This made it necessary to establish a new mechanics, a new electromagnetism,
and new observation tools, that is, a new theoretical context.
We know that a mole of gas in normal conditions occupies a volume of 22.4 L,
and we know that it is composed of ∼6 × 10²³ molecules. We can associate with
each molecule a share of the occupied volume and from this extract the order of
magnitude of the average distance between two molecules. We can also refer to the
first experiments in the diffraction of X-rays in a crystal to get a measure of the order
of magnitude of the interatomic distance in solids. In the various cases we obtain
that the typical dimension of the so-called “microscopic world” is
dmicro ∼ 10⁻⁹ – 10⁻¹⁰ m .
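As a quick numerical check (a sketch added here, not part of the original argument), the estimate above can be reproduced directly from the molar volume and Avogadro's number quoted in the text:

```python
# Order-of-magnitude estimate of the mean distance between molecules
# in a gas at normal conditions, following the argument in the text.
N_A = 6.022e23      # Avogadro's number (molecules per mole)
V_molar = 22.4e-3   # molar volume at normal conditions, in m^3 (22.4 L)

v_share = V_molar / N_A           # share of volume per molecule, m^3
d_micro = v_share ** (1.0 / 3.0)  # mean intermolecular distance ~ cube root, m

print(f"d_micro ~ {d_micro:.1e} m")
```

The result, a few nanometers, falls within the 10⁻⁹–10⁻¹⁰ m range quoted for the microscopic scale.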
Likewise, the elementary models of the atomic structure suggest that the movement
of electrons on the atomic scale is carried out with a periodicity whose timescale is
of the order of magnitude of the inverse of the frequency of the radiation emitted in
the transitions and therefore of the order:
τmicro ∼ 10⁻¹⁵ s .
The same can be said for the oscillation of atoms in crystal structures. These are of
the orders of magnitude that characterize the world we call microscopic.
1.2 Macroscopic Observer
Things are very different for the macroscopic world. We know that the resolving
power, d, of an optical instrument, that is, the minimum distance at which we can
distinguish two point sources, is of the order of the wavelength used in the instrument.
If we use visible light, we have
dmacro ∼ 5 × 10⁻⁷ m ,
so approximately a factor 1000 greater than the interatomic scale. This means that in
a volume that we can (with a certain amount of effort) distinctly see as macroscopic
observers, there are ∼10⁹ elementary objects characterizing the microscopic world.
Likewise, for the time duration of the measurements, we can compare the order
of magnitude of the duration of the macroscopic observations that are of the order
of $\tau_{\text{macro}} \sim 10^{-2}\text{--}10^{-3}\ \mathrm{s}$, with the timescale of the motions on the atomic scale as
provided by our theories.
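These order-of-magnitude comparisons can be summarized in a few lines, using the representative values quoted above (a sketch):

```python
# Ratios between macroscopic observation scales and microscopic scales,
# using the representative values quoted in the text.
d_micro, d_macro = 5.0e-10, 5.0e-7       # m: interatomic vs optical resolution
tau_micro, tau_macro = 1.0e-15, 1.0e-2   # s: atomic period vs observation time
n_objects = (d_macro / d_micro) ** 3     # atoms per smallest resolvable volume
n_oscillations = tau_macro / tau_micro   # oscillations during one observation
```

The smallest volume an optical instrument can resolve holds about $10^9$ atoms, and during one observation each of them completes about $10^{13}$ elementary oscillations.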
We can see that in the time taken to carry out a macroscopic observation, the
elementary constituents of the microscopic world complete an enormous number of
elementary oscillations. That is, in the time interval of our observation, we can only
take in configurations that on the atomic scale are static configurations (i.e., averages).
An inexperienced reader, reading in a hurry, might say: with the development of
atomic physics we discovered that the macroscopic way of observing provides, in
reality, a very loose vision of the real world; why do we continue with it now that we
have much more refined observation tools? Behind the question lies the non-stated
conviction that from a detailed knowledge of the microscopic world we can, sooner
or later, become aware of the behavior on the larger scale of the macroscopic world.
This conviction, which is called reductionism, is flawed, and the failure is not due
to difficulties with calculations, which one day could be slowly overcome, but is
of a conceptual nature. We can agree that there is one reality but that it appears in
different forms through the different instruments we use to observe it.
Naturally, a very interesting problem arises: finding the link between macroscopic appearance and microscopic appearance. For example, interpreting pressure
in terms of the average value of the momentum transferred to the surface per unit of
time or temperature in terms of average kinetic energy of the molecules can certainly
be functional in a certain (very limited) number of situations. As we will see, pressure
and temperature are quantities which express the speed with which energy changes
with a change of volume or of entropy and they have, therefore, a much more general
meaning than that which can appear from the examples that we can extract from
kinetic theories.
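To anticipate in formulas the more general meaning referred to here (the relations below are only derived later, once entropy has been introduced): for a simple system whose energy is expressed as $U(S, V)$,

```latex
T = \left(\frac{\partial U}{\partial S}\right)_{V}\,, \qquad
p = -\left(\frac{\partial U}{\partial V}\right)_{S}\,.
```

Temperature and pressure thus measure how fast the energy changes with entropy and with volume, respectively; the kinetic-theory interpretations are special cases of these general relations.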
1 Macroscopic Systems and Empirical Temperature
1.3 Thermodynamic State
With the term thermodynamic state we mean the set of the values of all the macroscopic properties of the system under consideration. Mass, volume, coefficient of
viscosity, index of refraction, density, chemical composition, energy, electrical resistivity, etc., are examples of state parameters. The number of properties adds up to
several dozens, but experimental investigation has revealed many laws describing
their mutual dependencies. A smaller number of properties is therefore sufficient
to determine all the others. We will call the set of all the properties useful for describing
the system’s response to any stimulus, thermodynamic state. The minimum number
of appropriate properties required to define the state completely is a question that
cannot be answered now but that must be postponed until the theory has developed
sufficiently. This will be defined in Sect. 4.2. As we will see later, all of the properties
of the system, which we will also call state parameters, are defined in situations that
we will call states of equilibrium. For further details on this definition, we refer the
reader to Sect. 1.6.1.
1.4 The Concept of Empirical Temperature
The need to introduce a new state variable, the empirical temperature, derives from
the tactile perception of hotness and coldness and the finding that some physical
properties change accordingly. The introduction of an empirical temperature scale
makes it possible to determine whether two systems put in contact are or are not in a
condition of mutual equilibrium. In order for this condition to be objectively certified,
it is necessary that the relationship of mutual equilibrium between various bodies
enjoys the property of being transitive. This is confirmed in empirical observations
but must be assumed to be of general validity. This postulate is called the Zeroth
Principle of Thermodynamics. The choice of this classification is due to the fact that
this Principle should logically precede the First Principle. We will shortly refer to the Celsius scale
as the most familiar example of an empirical temperature scale.
1.5 The Perception of Hotness and Coldness
Among our primitive sensory experiences, shared by all individuals, are the experiences of hotness and coldness. They are experiences that every person, from childhood, obtains through tactile contact with the object under observation. During this
contact, the observer registers a change in their psychological–physical being and
attributes this change to an interaction with the observed object. The classification in
terms of “hot” or “cold” reveals itself to be, in some cases, completely personal and
dependent upon the circumstances that the observer learns to consider as external
to the observer–observed object relationship. In certain cases, on the other hand,
there is a general consensus between different individuals and an almost complete
independence from the external circumstances (in the sense that the evaluation is
identically repeated whatever change there might be to the external circumstances).
All of this drives us to investigate the laws which regulate such phenomena and to set
out new fields of research in the area of human physiology and psychophysiology.
However, this is not the aspect we want to deal with here because, as a first step, it is
useful to separate the observer from the observed, that is to attribute to the observed
object its own, as it were, objectivity. In the end, we will mention briefly the most
recent developments in Thermodynamics in which we will see that this separation
needs to be seriously reconsidered.
It is a common experience that different properties (physical quantities, state
parameters...) of an object change their value depending on whether it is hot or cold.
Let us look at some examples.
In studying the laws according to which some solid bodies change shape when
subject to outside stimuli or disturbances, we developed the theory of elasticity.
With weak disturbances, we see that the entity of the deformations is proportional
to the intensity of the force applied: a simple and often-mentioned case is that of the
dynamometer which, when suitably calibrated, can be used to measure the intensity of
forces applied in static situations. Remaining within the area of linear deformations,
we have defined a very important state parameter named elastic constant which sets
out, indeed, the law of proportionality between the intensity $F$ of the applied force
and the extension $x$ of the deformation:
$$F = -k\,x\,.$$
The value of the elastic constant depends on the type of material the dynamometer
was built with and on its size. It can easily be seen in the laboratory that the value of
the elastic constant changes visibly depending on whether the spring (which basically
makes up the dynamometer) is hot or cold.
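The use of the dynamometer can be made concrete with a toy calculation; all numbers below are hypothetical, and only the proportionality between force and extension matters (magnitudes are used, without the sign convention):

```python
# Toy dynamometer based on the linear law F = k*x (magnitudes only).
# Calibration and measurement values are hypothetical.
F_known = 2.0                 # N, known calibration force
x_known = 0.04                # m, extension observed under F_known
k = F_known / x_known         # elastic constant, N/m
x_unknown = 0.07              # m, extension observed under an unknown force
F_unknown = k * x_unknown     # the dynamometer "measures" the force
```

Since $k$ changes visibly depending on whether the spring is hot or cold, a calibration like this is strictly valid only at the temperature at which it was performed.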
Likewise, let us consider the phenomenon of the conduction of electric current in
metals. If we consider wires (so that the problem can be treated as one dimensional)
we see that, within wide limits, the intensity of current I is proportional to the
difference in applied potential:
$$\psi = R\,I\,.$$
This is the well-known Ohm's Law. We remark that it refers to dynamic equilibrium configurations. Specifically, when the potential difference $\psi$ is applied to a
conductor, its electrons in the conduction energy band experience an electric field,
and therefore a force. As a consequence they are accelerated but, as their average
velocity increases, the collisions with the ionic structure of the conductor manifest
themselves, on average, as a dissipative viscous force whose intensity increases as
the (average) velocity increases. When the two forces, the one due to the electric field
and the one due to dissipation, balance, the average velocity will be constant, and it
will be called drift velocity. This is the situation named as “dynamic equilibrium”1
and this is the case in which Ohm’s law applies. We also know that R, which is called
electric resistance of that wire, depends on the material chosen, and that, for the same
material, it is proportional to the length l and inversely proportional to the area A of
the cross-section of the wire:
$$R = \rho\,\frac{l}{A}\,,$$
where $\rho$ is called the resistivity of the conductor and depends on the material used and
not on its geometric form. We can easily observe that $\rho$ takes on a different value
depending on whether the metal is hot or cold. In the same way, we can verify
that many other properties of bodies change depending on whether they are hot or
cold, such as, for example, the volume of a body, the refractive index of a material,
the coefficient of viscosity, etc. Therefore, being hot or cold does not only concern
the physiology of the human body or, at the most, the tactile human body–object
interaction, but in general, we should expect some properties of the objects external
to us to have different values. In other words, the thermodynamic state of the system
is different depending on whether it is hot or cold. Therefore, being hot or cold reveals
the presence of an internal extra degree of freedom.2
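The temperature dependence of the resistivity can be made concrete with a small sketch. The linear model and the values for copper are approximate handbook figures, and the wire geometry is illustrative:

```python
# Resistance of a metal wire, R = rho * l / A, with a linear model for
# the temperature dependence of the resistivity:
#   rho(theta) = rho0 * (1 + alpha * (theta - 20))   [theta in Celsius]
# rho0 and alpha are approximate handbook values for copper.
rho0 = 1.68e-8                # ohm*m, copper resistivity near 20 C
alpha = 3.9e-3                # 1/K, temperature coefficient for copper
l = 10.0                      # m, wire length
A = 1.0e-6                    # m^2, cross-sectional area

def R(theta_c):
    """Wire resistance at Celsius temperature theta_c."""
    rho = rho0 * (1.0 + alpha * (theta_c - 20.0))
    return rho * l / A

R_cold, R_hot = R(0.0), R(100.0)   # the hot wire resists noticeably more
```

The same wire changes its resistance by some tens of percent between a "cold" and a "hot" state, which is exactly the kind of temperature-sensitive property a thermometer can exploit.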
1.6 The Empirical Temperature and the Zeroth Principle
of Thermodynamics
This section is devoted to the definition of empirical temperature. The concept of
temperature will find its complete formulation in Chap. 4 where it will be defined as
a physical quantity complementary to entropy but it is necessary to proceed step by
step. In this way, not only is the historical process respected, but also the conceptual
difference between empirical temperature and temperature will appear more clearly.
1.6.1 Equilibrium State
When two bodies are put into contact we notice, in general, that both of them undergo
some sort of modification; we see, that is, that some properties (or state parameters) of
each body vary. Suppose the two bodies form an isolated system, meaning that they interact
amongst themselves but not with other bodies. This can be reasonably certified by
considering that we do not see any changes in the surrounding objects ascribable to
1 A similar situation will be encountered when the Einstein relation between the coefficient of
diffusion and the coefficient of mobility in an ionic solution is demonstrated in Sect. 16.6.4.
2 The problem of the degrees of freedom for a macroscopic system will be extensively detailed in
Sect. 4.3.
an interaction with the first two.3 We can observe that the amplitude of these changes
decreases in time until it becomes imperceptible to measuring instruments. We will
say, then, that the two systems have reached a state of mutual equilibrium. To denote
this relational situation between the two systems we will also use the expression:
the two given systems have the same temperature. Be aware though that this simply
denotes a relationship (that is if they are put into contact nothing happens) between
the two systems but the temperature, as a physical quantity, has not yet been defined.
This definition would not be possible without the formulation of a fundamental
principle, called the Zeroth Principle of Thermodynamics.
1.6.2 The Zeroth Principle
This principle is suggested by empirical observations, whose results are then generalized. It confirms the transitive property of being in mutual equilibrium, that is, of
the property of having the same temperature. The principle states the following:
(1) Let us suppose that body B1 is in mutual equilibrium with body B2 (that is, it
has the same temperature as body B2 ). This means that when B1 and B2 are
put into contact, no change occurs;
(2) Let us suppose that the same body B1 is in mutual equilibrium with body B3
(has the same temperature as body B3 ). This means that when B1 and B3 are
put into contact, no change occurs;
(3) Then body B2 is in mutual equilibrium with body B3 (that is, it has the same
temperature as body B3 ): if we put B2 and B3 into contact, no change occurs.
This principle is necessary to be able to define the function of the thermometer and
therefore an empirical temperature scale as a physical quantity.
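The role of transitivity can be illustrated with a toy sketch: treating "mutual equilibrium" as an equivalence relation, observed pairwise equilibria partition the bodies into classes of equal temperature (body names and observations below are hypothetical):

```python
# The Zeroth Principle makes "mutual equilibrium" an equivalence
# relation, so bodies split into classes of equal temperature.
# Tiny union-find sketch: grow the classes from observed pairwise equilibria.
def classes(bodies, equilibria):
    parent = {b: b for b in bodies}
    def find(b):
        while parent[b] != b:
            b = parent[b]
        return b
    for a, b in equilibria:        # observed: a and b in mutual equilibrium
        parent[find(a)] = find(b)
    groups = {}
    for b in bodies:
        groups.setdefault(find(b), set()).add(b)
    return list(groups.values())

# B1~B2 and B1~B3 are observed; transitivity then forces B2~B3 as well:
result = classes(["B1", "B2", "B3"], [("B1", "B2"), ("B1", "B3")])
```

All three bodies end up in a single class, which is exactly what allows a thermometer (the body B1) to certify equilibrium between bodies that were never put into contact with each other.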
To define one empirical temperature scale one proceeds in this way: a body is
chosen as a sample and one of its properties, whose value (on the scale on which
it has been defined) changes when the body goes from being hot to being cold, is
selected. The changes in this property are recorded on a new scale, which is arbitrarily graduated.
For example, one can observe the change in volume of a certain mass of mercury.
To make even a small variation in volume easily appreciable we will limit the free
surface of the mercury to the section of a capillary, that is, a small tube of constant
section with a very small area; in that way, a small variation in volume will manifest
itself through a noticeable change in the position of the surface of the mercury within
the capillary.
We will put this body in contact with a body B2 , we will wait for mutual equilibrium to be established and we will observe the position of the meniscus of the
mercury. Then, we will put the sample body in contact with body B3 and will carry
3 For a discussion on the concept of causality see [1].
out the same procedure. If the position, in the capillary, of the free surface of the
mercury coincides with the position previously observed, we will say that bodies B2
and B3 have the same temperature, that is, that if they are put into contact no change
will occur.
It is clear that the value of the volume (that is, its measurement in cm³, for example)
in that state is not important but serves only to certify a relationship of equilibrium,
therefore, the different positions are noted by a number which orders them according
to an arbitrarily chosen convention. In this way we attribute, for the sake of convenience, a numerical value to the different positions of the meniscus and we define
an empirical temperature scale. We thought it appropriate to order the different positions with numbers increasing from cold toward hot. The instrument created and
employed in this way is to be called a thermometer.4
As is well known, the most common empirical temperature scale is the Celsius
scale. In the example briefly described earlier, the capillary is fixed carefully to a
rigid, rectilinear support and then the procedure is as follows: this object is put in
contact with boiling water at the pressure of one atmosphere; it will be seen that the
position of the meniscus moves. When the position of the meniscus does not move
anymore, meaning that the volume of the mercury does not change anymore, we can
say that the mercury and the boiling water are in mutual equilibrium, so have the
same temperature, and therefore we mark the position of the meniscus on the support
and we write the number 100. Then we bring the same object into contact with ice in
equilibrium with water in a liquid state at the pressure of one atmosphere. We wait
for the position of the meniscus to become stable and we mark its position on the
same support. In that position, we write the number 0 (zero) and then we divide the
straight segment between the two positions into 100 intervals of equal length and we
number them. We will call the instrument created in this way a thermometer and the
numerical scale the Celsius scale.
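The two-fixed-points construction amounts to a linear interpolation between the marked positions of the meniscus. A minimal sketch, with hypothetical positions in millimeters:

```python
# Celsius scale from two fixed points: mark the meniscus position at the
# ice point (labelled 0) and at the steam point (labelled 100), then read
# any other position by linear interpolation. Positions are hypothetical
# readings along the capillary support, in millimeters.
x_ice, x_steam = 12.0, 212.0   # mm, marked positions of the meniscus

def celsius(x):
    """Empirical temperature assigned to meniscus position x."""
    return 100.0 * (x - x_ice) / (x_steam - x_ice)

theta = celsius(62.0)          # a reading a quarter of the way up the scale
```

Dividing the segment between the two marks into 100 equal intervals is exactly this interpolation; any other monotonic numbering of the positions would define an equally legitimate empirical scale.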
Obviously, we can create a large number of different empirical scales and the
property observed in order to define the empirical temperature scale can be a volume,
a length, an electrical resistance, a potential difference etc.
Naturally, all of these possible empirical temperature scales are different from
each other and equally acceptable. Some are more convenient than others depending
on the different conditions in which they work; for example, in very cold or very hot
situations mercury is impossible to use, so something else will need to be invented and
then the different segments linked up. Furthermore, the choice will need to be made
with respect to practicality (costs, cumbersomeness, etc.), to functionality (speed of
readiness, extent of the scale, etc.) and to reliability (durability and relative lack of
sensitivity to changes in external environment).
From now on, when we put the thermometer in contact with any body we will see
that after a certain amount of time the level of the meniscus will no longer change.
We will read the number that corresponds to its position and we will say that, in that
4 In order that any instrument can be considered a thermometer, it is necessary that the object
observed has a much greater mass than the instrument itself.
moment, the thermal state of the body, understood as the "heat level"5 of the body
corresponds to that of the thermometer (mutual equilibrium) and this is recorded
by the number read. We can say that, in that state, the temperature of the body in
question has the numerical value read on the scale of the thermometer. In this way,
the property of being hot or cold, that changes the value of some properties of the
bodies, has been placed outside of our sensory apparatus and been made independent
of it. In other words, objective.
We will indicate the value of the empirical temperature, measured on an arbitrary
scale, with the letter θ while we will keep the denomination of temperature, and it
will be indicated with the symbol T , for the scale of temperatures which will be
defined when formulating the Second Principle. By pointing out this detail, we wish
to reiterate a fact: any θ scale serves only to establish whether two systems (let us say
S1 and S2 ) are in mutual equilibrium, that is, whether $\theta_{S_1} = \theta_{S_2}$. If $\theta_{S_1} \neq \theta_{S_2}$ then we
should expect that, once an interaction has been created (for example, putting them
into contact with each other), changes in the states are produced but the numerical
values of θ cannot give us any other information, except in the few cases where a
purely empirical cataloging has already occurred.
5 These expressions in speech marks are true heresies from a scientific point of view but they can
sometimes be found in regular language and in some (rare) texts.
Chapter 2
The First Principle of Thermodynamics
Abstract The first Principle defines, for every system, a property named energy
which is a conserved quantity and this means that its variations in a process, can
be due only to the interaction with the external world. The latter interactions are
divided into two groups. On one side, we consider all the interactions which are
described within some theoretical contexts developed up to now. We say that these
are the interactions controlled by the observer. In the second group, we place all
those interactions which are unknown to the observer or which are treated as such.
The cumulative effect of these unknown interactions gives rise to one term which
is currently called quantity of heat. After having defined the meaning of adiabatic
transformation, experimental evidence shows that in the latter case the amount of
work delivered by the interactions controlled by the observer in a change of state
depends only on the initial and final states and does not depend on the transformation
used. This defines energy and, as a consequence, the contribution of the unknown
interactions in a generic transformation, that is the amount of heat, is defined by the
difference between the variation of energy and the amount of work carried out in
that transformation. All this needs to be formulated for closed (with respect to mass) systems.

Keywords First principle · Adiabatic systems · Energy · Heat (quantity of)
2.1 Introduction
The first Principle of Thermodynamics consists of the definition of two quantities,
fundamental for Physics: (i) energy, which will be indicated using the symbol U and
the (ii) quantity of heat, which will be indicated by d̂ Q or Q for infinitesimal or
finite transformations, respectively. While the energy is, by definition, a function of
the state of the system and therefore contributes to defining the state of the system,
the quantity of heat describes, in part, the type of interaction with the external world
to which the system is submitted and therefore, in general, its value depends on the
particular transformation carried out.
© Springer Nature Switzerland AG 2019
A. Saggion et al., Thermodynamics, UNITEXT for Physics,
2.2 Closed Systems
We define closed systems as those systems that do not exchange mass with the outside
world. Unless stated otherwise, we will take into consideration only closed systems.
Let us consider a closed system that is initially in a state of equilibrium indicated
with A. To induce a change of state on this, we (the observer) have to carry out
actions on it, meaning that we have to interact with it. By interacting we will destroy
the state of equilibrium in which the system finds itself and, when the interaction is
over, the system will settle itself into a new state of equilibrium, which we will call
B. We can say that we have carried out the transformation from the initial state A to
the final state B.
There are some types of interaction for which we have a sufficient theoretical
context, meaning that we have theories within which we are able to completely
describe the interactions that we carry out.
We will say that these interactions are controlled by the observer.
We can apply forces and do this, basically, in two ways: by applying forces
distributed over the surface of the boundary, or forces distributed in the volume of
the system. We can also modify the state of the system creating or changing external
fields that might be electrical fields, magnetic fields, or gravitational fields. In these
latter cases, to carry out these actions, we have to move electric charges, switch on
or change electrical currents or move masses.
To quantify our actions, we have theories available that we consider adequate and
so we can say that these are the actions that we control. We can generalize what
we have learned studying mechanics and calculate, or measure, the amount of work
required to move the charges, change the intensities of the currents, move masses or
do work with other types of forces such as friction forces or surface forces, pressure
forces in particular.
2.3 Adiabatic Walls and Adiabatic Transformations
Let us consider the following example. Let us suppose, on a normal summer day,
we have a receptacle containing water and some ice cubes. We can put a whisk with
suitable blades into the water in the receptacle to mix the liquid in a very efficient
way. Note that if we mix the water energetically ice will melt fairly quickly. In this
situation, we can even find a correlation between the amount of work carried out on
the blades to make them turn and the corresponding quantity of ice which will have melted.
We can measure this quantity of work by measuring the moment of the forces applied
to the whisk, the angular velocity with which it turns, and the time interval during
which we made the blades turn, multiplying these three numbers. Therefore, we
attribute the change of state of the system to the action carried out and we quantify the
latter by measuring the amount of work carried out. This is an example of interactions
controlled by the observer.
We can also not do anything and leave the receptacle on the table and come back
an hour later. We will see that a certain amount of the ice has melted just the same.
The observer will say that other interactions of the system with the outside world
exist that he does not control, i.e., that are not within any of the theoretical contexts
he created.
The attempt to establish a stable connection between the actions carried
out by the observer and the changes in state seems destined to fail.
We can observe, however, that changing the material we made the receptacle with,
we can slow down the speed with which these uncontrolled interactions occur. If we
substitute a receptacle made of thin metal with one made of glass the time required for
a certain amount of ice to melt becomes significantly longer. If we then use a Dewar
flask (or even a normal thermos flask), the melting time becomes much longer. The
development of appropriate technologies has reduced the percentage of ice melted
after some days to a negligible amount.
Therefore, these materials have the property of slowing down, in an ever more
efficient way, the intensity of these unknown interactions. Imagining that we take
these empirical observations to the limit, we will call those (ideal) materials that are
completely impervious to these new interactions, adiabatic.
If a system is surrounded by adiabatic walls, the only way the external world
can interact and cause changes in state of the system (processes), is through using
electromagnetic or gravitational fields (volume and long-range forces) or by changing
the boundary with forces distributed over the surface, that is, by interactions controlled
by the experimenter (observer).
2.4 The Definition of Energy
Let us consider a closed and adiabatic system in a state of equilibrium A. We carry
out certain operations on the system and we see that, once these are completed, the
system is taken to another state of equilibrium, B. Obviously, once one way has been
found to reach state B, an infinite number of ways can be found to reach the same
result. Experience shows that once A and B are fixed, the total quantity of work that
the observer carries out, operating in different ways, always has the same value. This
quantity is indicated by the symbol $W_{A,B}$. We will assume that this result has
universal value.
The formal expression which fixes this statement is the following: for every closed
and adiabatic system, a function exists which depends only on the state, called Energy,
that we will indicate with the symbol U , and that is defined by the following relation:
$$U(B) - U(A) = W_{A,B}\,.$$
From the definition we can see that energy, as a function of state, is defined except
for an arbitrary additive constant or, in other words, only the variations in energy
between two states of equilibrium are defined. In Sect. 2.4.1, we review the definition
of energy in situations familiar to any undergraduate in Physics.
2.4.1 Energy of Familiar Adiabatic Systems
The following examples are well known from Mechanics and Electrostatics. Here
we want to pick them up again using the language which characterizes the thermodynamic context, in which the concept of state and of the actions that the observer
(i.e., the “external world”) carries out on the system to induce a change in state are
brought to the forefront.
Energy of a Point Mass
It is the simplest system. It consists of what is normally called “point mass” and that,
in our language, can be described as follows: a point mass is a physical system which
is completely determined solely by the value of its mass m 0 and by the values of its
position and of its instantaneous velocity, respectively, r, v. Changes in state will be
determined by changes in some of the (six) state coordinates r, v. Let us suppose
that it is initially in the state A ≡ (rA , vA ) and that the desire is to take the system
to the predetermined final state B ≡ (rB , vB ), as shown in Fig. 2.1.
To obtain this change of state, we will have to apply appropriate forces on the
point whose resultant, in every instant, we shall indicate with F. By definition, the
total amount of work we will have to carry out will be given by
Fig. 2.1 Change of state for a point mass, from an initial state A ≡ (rA , vA ) to a final state
B ≡ (rB , vB ) along the path.
$$W_{A,B} = \int_{\text{path}} \mathbf{F}\cdot\mathbf{v}\;dt\,,$$
where the integral is a curvilinear integral calculated along the specific trajectory described by the system under the action of the forces we applied.
(a) Newtonian case.
In the Newtonian context, that is, if the law of motion is
$$\mathbf{F} = \frac{d\mathbf{P}}{dt} = m_0\,\frac{d\mathbf{v}}{dt}\,,$$
P being the momentum of the point of mass m 0 , it can be shown that
$$W_{A,B} = \frac{1}{2}\,m_0 v_2^2 - \frac{1}{2}\,m_0 v_1^2\,.$$
We can carry out this change of state in an infinite number of ways by appropriately modifying the dependence of the value of the applied force F on the
position occupied instantaneously by the point mass. As can be seen, the total
amount of work carried out by us always takes on the same value, and this depends solely on the values of the state variables in A and in B. Or rather, this
depends only on the moduli of the initial and final velocities, therefore on one
state variable out of six. In the Newtonian context, the total amount of work
carried out is given by the variation, between the final state and the initial state,
of the function U :
$$U = \frac{1}{2}\,m_0 v^2 + \text{constant}\,,$$
which is defined for any arbitrary constant. This means that there are infinite
functions U , which have the property of providing, with their variation, the
right value for W . These infinite functions differ among themselves due to the
differing value of the constant, but this is irrelevant because we are only interested
in variations in U . The quantity in Eq. (2.5) is called the energy of the point mass.
Commonly, this is also called kinetic energy but in this case the adjective sends us
off track: it makes sense to use adjectives when you want to distinguish between
different forms of energy but in this particularly simple case, the energy of the
system is completely described by that term in itself. Generally, the value of
the constant is set to zero, since it is natural to think that the kinetic energy is
null when the velocity is zero (point mass at rest). This is analogous to
choosing to set the potential energy equal to zero at infinity.
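The statement that the work depends only on the initial and final speeds can be checked numerically: integrating $\mathbf{F}\cdot\mathbf{v}\,dt$ along the trajectory reproduces the change of $\frac{1}{2}m_0v^2$. A minimal sketch with a constant force and illustrative values:

```python
# Numerical check: the work integral along the trajectory,
# W = integral of F*v dt, equals the change of (1/2) m0 v^2 between the
# initial and final states. Constant force; midpoint evaluation of v
# keeps the quadrature exact in this case.
m0, F = 2.0, 3.0              # kg, N (illustrative values)
v = 1.0                       # m/s, initial speed (state A)
n_steps, T = 10000, 1.0       # integration steps, total time in s
dt = T / n_steps
a = F / m0                    # constant acceleration
W = 0.0
for _ in range(n_steps):
    W += F * (v + 0.5 * a * dt) * dt   # F*v at the midpoint of the step
    v += a * dt
delta_kinetic = 0.5 * m0 * v**2 - 0.5 * m0 * 1.0**2
```

Repeating the integration with a different force protocol that connects the same two speeds would give the same W, which is the content of the path-independence claim.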
(b) Relativistic case.
In this case, the law of motion is a little different because it has to be expressed
in a four-dimensional form. It can be seen that
$$\mathbf{F}\cdot\mathbf{v} = m_0 c^2\,\frac{d\gamma_L}{dt}\,,$$
$c$ being the speed of light in vacuum, $\gamma_L = (1 - v^2/c^2)^{-1/2}$ the Lorentz factor1 and
$m_0$ the rest (Newtonian) mass. Therefore
$$W_{A,B} = \int_{\text{path}} (\mathbf{F}\cdot\mathbf{v})\,dt = m_0 c^2 \gamma_{L2} - m_0 c^2 \gamma_{L1}\,,$$
$\gamma_{L2}$ and $\gamma_{L1}$ being the values that the Lorentz factor of the point mass takes on
in the final state and in the initial state, respectively. In the relativistic context,
the same can be said in an almost identical fashion for the variation in energy:
the work is independent of the trajectory and is equal to the difference of the
function U :
$$U = m_0 \gamma_L c^2 + \text{constant}\,.$$
Even in the relativistic context, the energy is defined up to an arbitrary constant.
In the last work of the annus mirabilis 1905, Einstein overcomes the uncertainty
in the expression of energy due to the arbitrary additive constant and fixes an
absolute value of energy in the form:
$$U = m_0 \gamma_L c^2\,.$$
In doing this, he sets the energy of a body as equal to the sum of the energy at
rest m 0 c2 and of the kinetic energy U K = U − m 0 c2 . Obviously, the variations
in this last quantity (that now deserves the adjective kinetic) coincide with the
values of the quantity of work carried out, but Einstein’s postulation has many
other implications that we do not wish to discuss in this context.
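That the relativistic kinetic energy $U_K = m_0c^2(\gamma_L - 1)$ reduces to the Newtonian $\frac{1}{2}m_0v^2$ at small speeds can be verified numerically (illustrative values):

```python
import math

# Relativistic energy U = m0 * gamma_L * c^2: at low speed the kinetic
# part U - m0*c^2 approaches the Newtonian (1/2) m0 v^2.
c = 2.998e8                   # m/s, speed of light in vacuum
m0 = 1.0                      # kg, rest mass (illustrative)

def gamma_L(v):
    """Lorentz factor (1 - v^2/c^2)^(-1/2)."""
    return 1.0 / math.sqrt(1.0 - (v / c) ** 2)

v = 3.0e5                     # m/s, about 0.1% of c
U_K = m0 * c**2 * (gamma_L(v) - 1.0)   # relativistic kinetic energy
newtonian = 0.5 * m0 * v**2
```

At this speed the two expressions agree to better than a part per million; the discrepancy grows with $v^2/c^2$, which is where the relativistic corrections enter.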
Energy of a Condenser in Vacuum
Let us consider a plane, parallel-plate capacitor in vacuum, with electric permittivity $\varepsilon_0$. We will indicate with $\Sigma$ the area of the plates, with $h$ their distance
and with q the amount of charge deposited on them in a certain moment. We will
operate in order to change the state, assuming that the plates are non-deformable
1 The particular form of the Lorentz factor derives from the postulate on the invariance of the
speed of light. This postulate affirms that a light signal is seen to propagate (in vacuum) with
the same speed when observed from any inertial frame of reference. This postulate, together with
the adoption of the Principle of Relativity, leads to a well-defined form of the laws of coordinate
transformations between two different frames of reference called Lorentz transformations. Two
popular consequences of these transformations are known as the length contraction and the time
dilatation. Both these effects scale with the Lorentz factor.
and, specifically, at a constant distance from each other. In that case, the state is
solely determined by the value of the charge or, equivalently, by the intensity of the
electric field between the two plates. Let us suppose that the capacitor is in the initial
state qA and that we want to take it to the final state qB . In order to calculate the
amount of work the observer will have to carry out to move an appropriate amount
of charge from one plate to the other, we have to proceed via infinitesimal transformations. The infinitesimal amount of work that the outside world carries out to move
an infinitesimal quantity of charge dq is
d̂W = ψ dq ,    (2.11)

where ψ = q h/(ε0 Σ) is the potential difference when the charge is q. The transition from qA to qB can
be carried out following many different procedures but the total work that we will
have to carry out will always have the value
WA,B = (h/(2 ε0 Σ)) (qB² − qA²) = (ε0 /2) h Σ (EB² − EA²) .
In calculating this result, we made some assumptions: (a) that the charge on the plates
is distributed uniformly, which is equivalent to disregarding the so-called edge effects;
(b) that inside is a vacuum; (c) that the capacitor is not deformed during the process;
(d) that the different charging processes are slow enough.
Having expressed the amount of work as a function of the intensity of the electric
field allows us to describe the same result in different terms: if we consider as our
physical system the region of space, of volume VC = h Σ, internal to the capacitor,
and consider as initial and final states the intensity of the electric field E A and E B ,
we can state that that region of space contains a certain quantity of energy Ues (E)
associated only with the creation of the electrostatic field E which, therefore, we will
call energy of the electric field. Its expression is
Ues (E) = (ε0 /2) (h Σ) E² + constant ,

within an arbitrary additive constant (which is generally set null, like for kinetic energy).
Furthermore, since the field is perfectly homogeneous within the volume under
consideration, the energy too will be considered uniformly distributed in it with
u = (ε0 /2) E² .
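The equality of the two expressions for the charging work, one in terms of the charges and one in terms of the fields, can be verified numerically. In this sketch the plate area, separation and charges carry hypothetical values of our choosing:

```python
# Sketch with illustrative, hypothetical values (not from the text).
eps0 = 8.8541878128e-12   # vacuum permittivity, F/m
Sigma = 0.01              # plate area, m^2
h = 1e-3                  # plate separation, m

def field(q):
    """Uniform field between the plates, E = q / (eps0 * Sigma)."""
    return q / (eps0 * Sigma)

qA, qB = 1e-9, 3e-9       # initial and final charge, C

# Work computed from the charges: W = (h / (2 eps0 Sigma)) (qB^2 - qA^2)
W_charge = h * (qB**2 - qA**2) / (2 * eps0 * Sigma)

# Same work from the fields: W = (eps0/2) h Sigma (EB^2 - EA^2)
EA, EB = field(qA), field(qB)
W_field = 0.5 * eps0 * h * Sigma * (EB**2 - EA**2)

# Difference of the field energy u V_C = (eps0 E^2 / 2)(h Sigma)
U_density = 0.5 * eps0 * (EB**2 - EA**2) * (h * Sigma)

print(W_charge, W_field, U_density)  # all three agree
```

The agreement of the three numbers reflects the fact that the work done on the capacitor is entirely stored as energy of the electric field, with uniform density u = (ε0/2) E².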
Energy of Point Charges at Rest
This example, at first glance similar to the previous one, will be dealt with in quite a
different way. Before, our system was a region of space within which we created an
electric field; the state variable was the intensity of the field itself and so we defined
the energy of the field. In this case, the system is made up of a specific distribution of
point charges qi at rest, and the state of the system is defined by their positions ri . To
modify the space distributions of the charges, we have to apply forces and calculate
the total amount of work we have to carry out. The positions of the various charges
in the initial configuration A are indicated by riA and in the final configuration B by
riB . We can demonstrate that the total amount of work carried out by the external
world to change the configuration from A to B equals

WA,B = (1/2) (1/(4π ε0)) [ Σ_{i≠j} qi qj / r_{i,j}^B − Σ_{i≠j} qi qj / r_{i,j}^A ] ,

where r_{i,j}^A = | riA − rjA | are the distances between the charges in the initial
configuration A and r_{i,j}^B = | riB − rjB | are the same for the final configuration B.
In a similar way to the previous examples,
we can see that the amount of work carried out by us is given by the difference
between the final and the initial configurations of the function:
U = (1/2) (1/(4π ε0)) Σ_{i≠j} qi qj / r_{i,j} + constant .
The function U is called energy of the configuration of charges. Since this function
depends only on the positions of the different charges, it is normally called potential
energy. Likewise, the same reasoning can be used for other frequently encountered
cases such as spring force and the energy in the Earth’s gravitational field. However,
getting back to the case of electrostatics, an important consideration needs to be
made. One particular, and very familiar, case of Eq. (2.17) occurs when, as the
starting configuration A, you take the one where all the charges are at infinity and
a value for the energy U (∞) = 0 is attributed. With the arbitrary additive constant
thus determined, the expression for the energy of a given configuration becomes
U = (1/2) (1/(4π ε0)) Σ_{i≠j} qi qj / r_{i,j} .
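The configuration energy with the convention U(∞) = 0 can be computed directly from the double sum. The following sketch (with hypothetical charges and positions of our choosing) implements it; the factor 1/2 compensates for the double counting of each pair:

```python
import math

EPS0 = 8.8541878128e-12  # vacuum permittivity, F/m

def configuration_energy(charges, positions):
    """U = (1/2) (1/(4 pi eps0)) sum_{i != j} q_i q_j / r_ij,
    with the convention U(infinity) = 0."""
    k = 1.0 / (4.0 * math.pi * EPS0)
    total = 0.0
    n = len(charges)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue  # a charge does not interact with itself
            r = math.dist(positions[i], positions[j])
            total += charges[i] * charges[j] / r
    return 0.5 * k * total

# Hypothetical example: two equal 1 nC charges 1 m apart.
U = configuration_energy([1e-9, 1e-9],
                         [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)])
print(U)  # equals k q1 q2 / r for a single pair
```

For two charges the sum reduces to the familiar pair energy q1 q2 /(4π ε0 r), as the printed value confirms.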
To demonstrate this result, it is assumed that the force that we apply in every moment
to take each charge from infinity to the prescribed position is practically equal and
opposite to the electrostatic force carried out by the other charges already settled
into their final positions. Only in this condition will the integral which provides
the total amount of the work carried out by us (and which will have an equal and
opposite value to the work carried out by the electrostatic forces) be independent of
the trajectories traveled by the charges on their paths from infinity. It is obviously an
asymptotic procedure, which is carried out only ideally with practically zero velocity
and which, therefore, takes place over an infinite lapse of time. In any procedure
conceived as realistic, in order to create the configuration in a finite period of time,
we would have to start from a system of charges set, at rest, far apart and submit them
to accelerations to take them to their final positions at rest. In a realistic situation,
therefore, we would not be able to ignore radiative phenomena and these would make
the amount of work carried out by us certainly dependent on the trajectory traveled
by the charges and on the way in which this trajectory is traveled. The same could
be said for the example of charging the capacitor: the process of charging has to take
place very slowly to be able to consider the electromagnetic field and the distribution
of the charges in an electrostatic configuration in every moment. As we well know,
in all real processes in which movements of charges are involved, magnetic fields
are formed, radiation is generated, and the electrostatic approach is seen for what
it is, that is a model conceived with a passage to the limit. We are reflecting on an
abstract situation that we consider the asymptotic situation towards which
real phenomena tend. This happens in all fields of Physics; however, this procedure is
correct because the theoretical context enables us to estimate how much the imagined
process differs from the real process.
2.5 Definition of Heat (Quantity of)
As always, we consider a closed system in a state of equilibrium A and we operate
upon it in order to take it to a new state of equilibrium B. We will now operate in a
more general way, without a restriction of adiabatic walls. Experience shows that the
amount of work carried out by us depends on the particular transformation that we
have brought forth. We define as the amount of heat provided by the outside world
to the system, in that particular transformation, the difference between the change
of energy and the amount of work carried out by us in that particular transformation.
Formally, indicating this new quantity with the symbol Q A,B we have
Q A,B = [U (B) − U (A)] − WA,B = ΔUA,B − WA,B .
From now on, the symbol Δ applied to a state function will always indicate the
difference of the values of that function between the final and the initial state, so Eq.
(2.19) can usefully be written as

ΔUA,B = WA,B + Q A,B .
In this expression, we can see that the variation of energy of a closed system in
a certain process is given by the sum of two contributions, quite distinct among
themselves, due to two distinct modalities of the interaction of the outside world with
the system. The first term (WA,B ) describes the interactions controlled by the observer,
therefore those which fall within a known theoretical context, referring to
which the observer is able to determine the amount of work carried out. The second
term, Q A,B , brings together all of those modes of interaction of the outside world on the
system that are not controlled by the observer. It is useful to read Eq. (2.20) in this way:
in a transformation from a state A to a state B, both of equilibrium, a certain quantity
of energy is transferred from the outside world to the system and this happens in the
two ways we described. While the first one represents known ways, i.e., connected
to known theoretical contexts, the second includes all the other types of interactions
and therefore describes our deficit, or in other words our ignorance, in understanding
interactions. It should be noted that this definition of quantity of heat includes the
already known definition which comes from simple calorimetry but represents the
maximum generalization of it. In particular, it separates the concept of quantity
of heat transferred to the system from the concept of change of temperature. This
connection, on the other hand, is a constituent part of the calorimetric definition and
often causes much ambiguity and even errors in the study of successive developments
in Thermodynamics. This definition of quantity of heat is due to [2].
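The definition above amounts to simple bookkeeping: once ΔUA,B is known (for instance from an adiabatic measurement), the heat received in any other transformation between the same states is the residual ΔUA,B − WA,B. A minimal sketch with hypothetical numbers of our choosing:

```python
# Minimal bookkeeping sketch of Q_AB = Delta U_AB - W_AB,
# with hypothetical values (not from the text).
def heat_received(delta_U, work_received):
    """Heat provided by the outside world in a given transformation."""
    return delta_U - work_received

# Suppose an adiabatic measurement gave Delta U = 150 J between A and B.
delta_U = 150.0
# In a second, non-adiabatic transformation A -> B we perform 90 J of work:
Q = heat_received(delta_U, 90.0)
print(Q)  # 60.0 J entered the system as heat in that transformation
```

Note that Q depends on the particular transformation through WA,B, while ΔUA,B depends only on the end states.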
2.6 Infinitesimal Transformations
The definitions of energy Eq. (2.1) and of quantity of heat Eq. (2.19) were given for
finite transformations in which the initial and final states are states of equilibrium. We
can get these two states closer and closer together and so the quantities at play will
be smaller and smaller. We will call infinitesimal transformations the transformations
between very close states of equilibrium, seen from the point of view of a
passage to the limit for zero amplitude of the transformation. In this sense, we will
consider the states of equilibrium of a macroscopic system as defined by continuous
variables and the relations between these quantities will define the functions to which
we will apply the theory of continuous and differentiable functions.
Both the First and the Second Principles will be formulated for infinitesimal
transformations and therefore the fundamental equations for Thermodynamics are
differential equations, as occurs for the fundamental equations of both Mechanics and
Electromagnetism. This procedure is necessary because the definition of work, as we
have seen from the study of Mechanics, is given for infinitesimal transformations.
The main consequence of adopting this approach is that considering the balance
equation between variations in energy, amount of work, and amount of heat in a very
small change of state, is equivalent to considering this equation as the projection of
a differential equation. In that case, the property of energy as a function of state will
be formally expressed, requiring that the differential of the function which expresses
energy as a function of the other state variables is an exact differential. As we will see
throughout this whole work, it is from this property and from the analogous property
for entropy that a large quantity of results of absolute generality will be obtained.
On the other hand, the work and the amount of heat are just infinitesimal
quantities but they are not the differentials of a state function.
It is clear that the infinitesimal quantities which will appear in the differential
equations must be represented by different symbols according to whether they are
exact differentials or not. In Appendix A.5 we will summarize the different cases in
which different symbols will be necessary.
2.7 Formulation of the First Principle of Thermodynamics
For infinitesimal transformations, we will indicate the infinitesimal quantities of
work and of heat provided by the external world to the system with the symbols
d̂W and d̂ Q, respectively. We will use the symbol d to indicate an infinitesimal
quantity, which represents the variation of a function of state and therefore an exact
differential. For the First Principle of Thermodynamics, we can write the following
conclusive statement:
dU = d̂W + d̂ Q .
One last, but very important, comment. The definition of energy given in Eq. (2.1)
has to guarantee that it is defined for every possible state of equilibrium. The value
of energy in state X is defined as follows: you choose a reference state O and assign
an arbitrary value for energy to this state, for example, the value 0 (zero). In this way,
the value of the energy in state X will be defined by the measure (or calculation) of
the quantity of work that we carry out to take the system from the reference state
O to the generic state X in an adiabatic way. For this to be a good definition of
energy, it is necessary that any state X can be reached from the reference state O in
an adiabatic way. A careful reader might observe that it would be enough to be sure
that the two states O and X can be linked, in one direction or the other, by an adiabatic
transformation. In this case, by measuring the quantity of work carried out we could
either measure
[U (X) − U (O)] or [U (O) − U (X)] .
As it stands now, we cannot guarantee this and therefore this definition of
energy is not solid. The Second Principle of Thermodynamics will settle this question.
Chapter 3
The Second Principle
of Thermodynamics
Abstract The impossibility of realizing the perpetual motion is universally accepted
as a fundamental principle of Physics. This postulate defines two main categories
into which physical processes can be placed: natural processes and unnatural processes.
The former includes all the observed processes and the hypothesized processes that do
not violate the fundamental principle and are, therefore, possible. The latter includes
all the hypothesized processes that violate the principle of the impossibility of perpetual motion, and therefore they cannot occur. This is the starting point for formulating
an evolutionary criterion for all natural processes. To achieve this, a suitable mathematical tool must be developed. The fundamental step is the definition of entropy
and of absolute temperature. These are complementary quantities and constitute the
basis of the Second Principle, which must be formulated, first, in the frame of closed
systems. Before the extension of the fundamental equations of Thermodynamics to
continuous systems, the approximation of discontinuous systems and the problem
of the conversion of heat into work both for reversible and irreversible engines are
discussed. The coefficients of performance of refrigerators and of heat pumps are
defined and the problem of the maximum work obtainable from a given configuration
immersed in a given environment is briefly treated.
Keywords Natural processes · Unnatural processes · Quasi-static processes ·
Reversible processes · Second principle · Entropy · Temperature · Absolute
temperature · Thermal engines · Efficiency of an engine · Carnot cycles ·
Endoreversible engines · Coefficient of performance · Heat pumps ·
Refrigerators · Maximum work · Availability
3.1 Introduction
The Second Principle, which constitutes a fundamental principle of Physics, is often
referred to as the principle which states “the impossibility of realizing the perpetual
motion.”1 Let us comment briefly on the origin of this denomination.
1 See, for example, the article by A. Einstein quoted in the Introduction to the present book.
© Springer Nature Switzerland AG 2019
A. Saggion et al., Thermodynamics, UNITEXT for Physics,
Realizing perpetual motion means to build a device that, once set in motion, would continue
forever. In the past centuries, but also in our days, various attempts to realize perpetual motion have been made. In purely mechanical systems, one has to
compensate for the energy dissipated by friction forces and then some energy must be
supplied. Therefore, either one does not believe any longer in the principle of energy
conservation or, if energy conservation is maintained, the device would stop running
when the energy source has been exhausted as it happens, for instance, with familiar
pendulum clocks. Such a formulation of the impossibility of perpetual motion would
not lead, then, to any element of novelty with respect to the principle of energy conservation already established. In other words, the acceptance that in real physical
world friction cannot be eliminated might, at most, suggest some metaphysical meditation but does not constitute any sort of introduction to a new fundamental principle
of physics.
The debate about the impossibility of creating perpetual motion changed profoundly in the nineteenth century when the issue of converting heat into mechanical
motion became of the utmost importance, at least initially, for practical reasons. It was
proved that we can extract heat from a source, use it to make a body expand, and
use the expansion to produce mechanical work. Evidently, this opened the way to a
new perspective: use heat transfer as a positive work producer instead of considering
it as a negative energy-wasting interaction (friction is one kind of heat interaction).
Immediately “the economic question” arose: how much work can we produce per
joule extracted from a heat source?
It is understood that this question refers to a repetitive, i.e., cyclical, way of
producing work. Indeed it is well known, for example, that by expanding isothermally
an ideal gas we can obtain one joule of work for each joule of extracted heat but this
could not last forever because the expansion could not continue indefinitely. It was
soon realized that an entire conversion of heat into work was not obtainable in a
repetitive (cyclical) way even for an ideal, frictionless engine. This statement was
assumed to constitute a basic law, which regulates the heat–work transformation and
was formulated in two different but equivalent forms one by Rudolf Clausius and the
other by Lord Kelvin (William Thomson).
The former statement, by R. Clausius, is: Heat can never pass from a colder to a
warmer body without some other change, connected therewith. The latter, by Lord
Kelvin, is: It is impossible to construct an engine (cyclically working device), which
can produce work by absorbing energy in the form of heat from a single thermal
source.
It is easy to prove that if one of the two formulations is falsified then the other is
falsified too; hence, the two formulations are equivalent.
In the new technical-scientific context that the Industrial Revolution determined,
the question of the impossibility of perpetual motion can be put in this way:
If we could construct an engine (cyclic transformation) with 100% efficiency,
i.e., in which the amount of produced work equals the amount of heat supplied to
the engine, we could return to the heat source that we used to operate the engine,
exactly the same amount of energy consumed to make the machine work and so it
could be kept running for eternity. Someone could observe, with reason, that this
device still needs the absence of friction but now the argument is quite different. The
conservation of energy is not under discussion but we wonder whether the existence
of an ideal, frictionless device, which could realize perpetual motion, as proposed
above and to which real devices could approach indefinitely, can be conceived. The
postulation about the impossibility of realizing perpetual motion is equivalent to
stating the impossibility of obtaining a 100% efficiency engine even in the absence
of friction, and this fact changes the situation in a radical way:
It prescribes the impossibility that a particular class of processes can take place in
nature (either in the form of Clausius’s or in that of Kelvin’s statements for instance).
If this prohibition were confined to the particular field of the construction of thermal
machines, we could speak of an empirical law valid in a particular field of Physics
but it is immediately obvious that if a certain class of hypothetical processes is not
allowed then this could be the signature of the existence of a fundamental principle
that governs the feasibility of any process in nature.
Then, the compact expression impossibility of realizing perpetual motion well
represents this fundamental principle to which any (constructive) theory of Physics
must be submitted. For instance, it accounts for the fact that
(i) The macroscopic states of equilibrium we observe must resist unavoidable perturbations, i.e., they must possess a form of stability in order to be
observable by a macroscopic observer;
(ii) A macroscopic system responds in a predictable way to external perturbations;
(iii) Apparently independent phenomena are deeply connected;
(iv) The chemical reactions proceed in certain directions;
(v) Matter and/or energy must flow with certain modalities according to the configuration determined in that instant;
(vi) The properties of the electromagnetic radiation in equilibrium with matter do
not depend on the properties of matter anywhere in the universe and so on . . .
As we shall see in Chap. 17, the Principle concerning the impossibility of realizing
perpetual motion is so vastly and deeply founded in Physics that the need to explain
the Maxwell–Szilard paradox, which tries to falsify it, will push us to change our
view on the observer–observed relation. To make a conjecture become a Principle
of Physics, it is necessary that it be expressed in a complete way with a formal
mathematical language and this will be developed in this chapter. There are several
possible formulations but we will adopt the one given by E. A. Guggenheim in [3].
3.2 Natural and Unnatural Processes
We will call natural processes those processes, finite or infinitesimal, which can take
place in nature. Obviously, all the observed processes belong to this group. Similarly, we will call unnatural processes those processes, finite or infinitesimal, which
cannot take place in nature. The Second Principle of Thermodynamics establishes
the criterion, which allows us to fit any hypothetical process into one of these two
categories. In order to do this, we have to identify which is the property all-natural
3 The Second Principle of Thermodynamics
processes hold in common and, on the other side, the property all-unnatural processes
have in common. To be able to create the tool which allows us to decide whether an
imagined infinitesimal process should belong to one or the other of these two main
categories, it is necessary to conceive of, by abstraction, a third category of processes
called quasi-static processes.
3.3 Quasi-static Processes
Quasi-static processes are infinitesimal processes, imagined abstractly, that do not
fit either in the group of natural processes or in that of unnatural processes but, we
could say, they fit in the border between the two previously defined categories.
An example will help us to understand this categorization. Let us consider a
cylindrical receptacle placed vertically and containing water in a liquid state in the
bottom part and in the top part water vapor. The top part of the cylinder is closed by
a movable piston without friction. Furthermore, the water and the water vapor are at
the same uniform temperature.
By changing the load on the piston, we can change the pressure inside the receptacle as we please, within certain limits; we will indicate its value with p. Furthermore,
we will indicate the value of the vapor pressure of water at that temperature with ps .
Now, let us imagine the process which consists of transferring a small quantity of
matter from the liquid state to the vapor state.
If p < ps , the hypothesized process will be a natural process. This process is called
evaporation and could actually be observed. The opposite process, which consists
of a small quantity of water vapor turning into a liquid state, is called condensation
and cannot take place under the supposed conditions: it will be classified as an
unnatural process. Vice versa, if p > ps the formerly hypothesized process, that is
the evaporation, will be considered an unnatural process and, naturally, the opposite
process will be considered natural. If we get the condition p = ps the hypothesized
process can be considered, at its limit, a natural process, but its opposite (that is,
an infinitesimal condensation) can also be considered, at its limit, a natural process.
Therefore, the hypothesized process can be considered as belonging, as an extreme
situation, both to the natural and the unnatural processes.
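The classification in the cylinder example depends only on the comparison between the imposed pressure p and the vapor pressure ps. A minimal sketch of this decision rule (the function name and the numerical values are ours, for illustration only):

```python
# Sketch of the classification in the cylinder example; p is the imposed
# pressure and ps the vapor pressure at the (uniform) temperature.
def classify_evaporation(p, ps, tol=0.0):
    """Classify the hypothesized infinitesimal process 'liquid -> vapor'."""
    if p < ps - tol:
        return "natural"        # evaporation can actually be observed
    if p > ps + tol:
        return "unnatural"      # the opposite process (condensation) is natural
    return "quasi-static"       # p = ps: borderline in both directions

print(classify_evaporation(0.9, 1.0))  # natural
print(classify_evaporation(1.1, 1.0))  # unnatural
print(classify_evaporation(1.0, 1.0))  # quasi-static
```

Reversing the hypothesized process swaps "natural" and "unnatural", while the quasi-static case p = ps is unchanged, which is exactly the borderline property described above.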
The infinitesimal processes which have this property are called quasi-static processes. We can say that these ideal processes are both natural and unnatural at the
same time because also their opposite (that is, the same infinitesimal process with
a “change of sign”) is both an unnatural and a natural process. For this reason, we can
affirm that they are abstract processes which fit in the border between natural and
unnatural processes. This example can be generalized. Let us consider an isolated
system which is prepared, initially, in a certain configuration. Generally, this system
will not be in a stationary state and we will observe some modifications, that is,
some processes, until all the properties of the system will have reached values that
are constant in time. We will say that the system is in a state of equilibrium and that
the processes that we have observed have made the system move from the initial
nonequilibrium configuration toward the state of equilibrium compatible with the
constraints. In an isolated system, natural processes move a configuration toward
the equilibrium state. Conversely, once this state has been reached, a process which
moves the system away from the state of equilibrium is to be considered an unnatural process.
An infinitesimal process which moves from a state of equilibrium will be, by
definition, a quasi-static process.
3.4 Reversible Processes
Let us return to the example of the evaporation or condensation of a small mass we
considered in Sect. 3.3 to define a quasi-static process. We know that the process of
evaporation or of condensation requires that a certain amount of heat is provided
to or removed from the cylinder by the outside environment, for example through
the bottom of the receptacle. If the temperature outside the cylinder is equal to the
temperature inside, the necessary transfer of the small quantity of heat will be, in turn,
a process which occurs between two systems in mutual equilibrium and therefore
will be a quasi-static process. In this case the hypothesized process of evaporation
(or likewise of condensation) will be called a reversible process. In conclusion, a
quasi-static process is an infinitesimal process which occurs in a system in a state
of equilibrium. If the system is also in equilibrium with the outside world, then the
process will be called reversible.
3.5 Formulation of the Second Principle: Definition
of S and T
If we consider a change of state, we can observe that this can come about both because
of interactions with the external world (that is, interactions in which the external
world transfers some energy by doing work and/or exchanging heat) and because of
modifications that happen inside the system but that are not linked to interactions
with the outside. As an example of the latter consider an isolated system in which
some internal constraint is removed. The distinction between transformations due
to interaction with the outside and transformations due to internal processes is
extremely important for the formulation of the Second Principle which we chose to
write about here.2 The Second Principle also has to be formulated for closed systems
and infinitesimal transformations.
2 The necessity of distinguishing between these two contributions is due to the fact that the quantity we are going to define through the Second Principle is a non-conserved quantity. Conserved
quantities can vary only because of interactions with the external world.
For every physical system, a state variable called Entropy is defined and noted
with the symbol S, which has the following properties:
(1) S is a function of state.
(2) S is an extensive quantity.
(3) In an infinitesimal process, the variation of S is always given by the sum of two
contributions dS = d̂ e S + d̂ i S.
(4) d̂ e S measures the contribution to the variation of entropy, due to the interaction
of the system with the outside world. This term is given by
d̂ e S = d̂ Q / T ,
where d̂ Q is the infinitesimal amount of heat transferred from the outside world
to the system and T is a new, intensive, state variable which depends only on
the empirical temperature of the system and therefore can be considered a new
temperature scale. This last statement requires a more in-depth explanation,
which we will pick up on again later.
(5) d̂ i S represents the contribution to the variation of entropy due to processes
which occur inside the system. For this term, we have
d̂ i S > 0   for natural processes
d̂ i S < 0   for unnatural processes
d̂ i S = 0   for quasi-static and for reversible processes.
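The sign criterion can be read as an operational rule: compute the internal production d̂ᵢS = dS − d̂Q/T for a hypothesized infinitesimal step and inspect its sign. A sketch with hypothetical numbers of our choosing:

```python
# Sketch: split dS into the exchange term de_S = dQ/T and the internal
# production di_S = dS - dQ/T, then apply the sign criterion of the
# Second Principle. All numerical values are hypothetical.
def internal_entropy_production(dS, dQ, T):
    """di_S = dS - de_S with de_S = dQ / T."""
    return dS - dQ / T

def classify(di_S, tol=1e-12):
    if di_S > tol:
        return "natural"
    if di_S < -tol:
        return "unnatural"
    return "quasi-static or reversible"

# A system receives dQ = 1.0 J at T = 300 K while its entropy rises
# by dS = 0.005 J/K: di_S = 0.005 - 1/300 > 0, a natural process.
di = internal_entropy_production(0.005, 1.0, 300.0)
print(classify(di))  # natural
```

When dS exactly equals d̂Q/T the production term vanishes and the step is quasi-static (or reversible, if the exchange itself occurs between systems in mutual equilibrium).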
Let us examine each point in more detail.
3.5.1 State Functions
Being a state function means that S is defined only for equilibrium states and that its
variation in an infinitesimal process is expressed by an exact differential. Its variation
in a finite process, like for energy, depends only on the values of the state parameters
in the initial and final states A and B and not on the particular choice of the process
which took the system from A to B, but differently from energy, entropy is not a
conserved quantity. This means that it can also change because of internal processes
(i.e., without any causal connection with the external world). For example, if a system
is isolated and an internal boundary exists, for example, a wall dividing the system
into two volumes with different densities or temperatures, removing this internal
restriction brings about a repositioning of the system into a new state of equilibrium
but this happens without the outside world “realizing it”. As we will see in Chap. 7,
the same occurs if inside an isolated system a phase3 transition occurs. This will bring
about, in general, a change (in this case an increase) in the value of the entropy but
certainly not of the energy. Like in the case of Energy, only the variations of Entropy
in a process are defined, so a reference state A can be chosen and assigned an arbitrary
value of Entropy, after which the value of S for any other state of equilibrium can
be determined.
3.5.2 Extensive and Intensive Quantities
Often, the property of extensiveness of a physical quantity is referred to with the
term additiveness. The intention with this choice is to highlight the rule: “the value
of the whole is equal to the sum of the values of the parts.” It is necessary to specify
what is meant by “whole” and “parts”.
The definition is the following: let us have a system and let V be the volume
occupied by it. Let us consider an arbitrary partition of the volume into two volumes
V1 and V2 such that V = V1 + V2 . A property E is said to be extensive if its value
relative to the volume V is
E = E1 + E2 ,
where E1 and E2 are the values of the quantity E calculated on V1 and V2 , respectively,
for any subdivision of the volume V . As we will see better in Chap. 16 this means
that E can be written as
E(t) = ∫_V e(r, t) dV ,
where e will be called “density” of E at the point r = (x, y, z) and at the instant
t.4 The property called “being additive” should be understood as referring to the
additiveness of volumes. For example, the energy of a system, the entropy, the mass,
the electrical charge are extensive quantities.5
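The additivity rule above can be checked numerically; the following sketch uses a made-up one-dimensional "density" profile (all names and values illustrative) and verifies that the integral over the whole interval equals the sum over any partition:

```python
import numpy as np

# Hypothetical 1-D "density" profile e(x) on the interval [0, 1] (units arbitrary).
def e(x):
    return 2.0 + np.sin(3.0 * x)

def E(a, b, n=100_001):
    """Trapezoidal approximation of the integral of e over [a, b]."""
    x = np.linspace(a, b, n)
    y = e(x)
    return float(np.sum((y[:-1] + y[1:]) * np.diff(x) / 2.0))

# Extensiveness: E over the whole interval equals E1 + E2 for ANY partition.
whole = E(0.0, 1.0)
for split in (0.25, 0.5, 0.8):
    assert abs(whole - (E(0.0, split) + E(split, 1.0))) < 1e-6
```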
Intensive quantities are quantities defined at every point of the system, such as
pressure, density, temperature, velocity, etc. Normally, it is nonsense to talk about
the temperature of a system or the density of a system unless we know from the start
that the temperature or the density is uniform. The same is true for velocity. As we will
see further on, all the intensive quantities are always defined as partial derivatives of
an extensive quantity with respect to another extensive quantity. In general, given an
intensive quantity x, it will be

x(r, t) = ( ∂E1 / ∂E2 )(...) ,

where x(r, t) is a function of the spatial coordinate and time, E1 and E2 are extensive
quantities, while (...) means that the partial derivative is taken keeping constant
some extensive variables other than E2 .

3 By thermodynamic phase we mean a homogeneous system, i.e., with uniform density.
4 This definition ties the property of extensiveness to the metric being used in view of the generalization of the theory to a relativistic context.
5 Other authors, especially when writing on Engineering, use mass as a reference quantity and therefore, instead of the "density" of E, "specific" quantities are used, that is, the amount of E transported by the unit of mass of the material.

3 The Second Principle of Thermodynamics
3.5.3 Measuring S
As for energy, also for entropy only the variations in a transformation are defined;
therefore, entropy is determined only up to an arbitrary additive constant.
As far as measuring variations in entropy is concerned, we can observe that the term
d̂ i S is quantitatively defined only for quasi-static transformations (for which it will
be equal to zero). Therefore, if we refer to the definition, to measure a variation in
entropy we have to carry out a finite transformation made up of infinitesimal
quasi-static transformations, that is, a continuous succession of states of equilibrium:

S(B) − S(A) = ∫_A^B d̂ e S = ∫_A^B d̂ Q / T ,    (3.6)
where the integral is carried out along any quasi-static transformation. In this case,
and only in this case

d̂ i S = 0 ,
d̂ e S = d̂ Q / T = dS ,
so that d̂ e S becomes the exact differential dS. As we have seen, the term d̂ e S
determines quantitatively the value of the increase of entropy in the system caused by
an interaction with the outside world. This contribution is proportional to the quantity
of energy that the external world puts into the system by means of interactions which
are not controlled by the observer. An increase in entropy can, therefore, be associated
with the increase in our ignorance of the overall state of the system, that the process
has determined. Naturally, so that Eq. (3.6) can be considered, at least in principle,
an operative definition of entropy, it is necessary to have an instrument to measure
the temperature T independently. In any case, Eq. (3.6) constitutes an exemplary
“operative definition” from a conceptual point of view but quite dissatisfactory from
a truly operational side, i.e., experimental in the real sense of the word. We will see
in Chap. 5 how that problem will find an adequate solution.
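Eq. (3.6) can be exercised on a simple model; the sketch below assumes, purely for illustration, a body with constant heat capacity C_V heated quasi-statically at constant volume (so that d̂Q = C_V dT), and compares the numerical integral of d̂Q/T with the closed form C_V ln(T_B/T_A):

```python
import numpy as np

# Assumed model: constant heat capacity C_V, quasi-static heating at constant
# volume, so dQ = C_V dT and S(B) - S(A) = integral of C_V dT / T.
C_V = 5.0                 # J/K, illustrative value
T_A, T_B = 300.0, 450.0   # K

T = np.linspace(T_A, T_B, 200_001)
y = C_V / T                                   # integrand dQ/T along the path
dS_numeric = float(np.sum((y[:-1] + y[1:]) * np.diff(T) / 2.0))

dS_closed = C_V * np.log(T_B / T_A)           # closed form C_V ln(T_B / T_A)
assert abs(dS_numeric - dS_closed) < 1e-6
```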
3.5.4 The Absolute, or Thermodynamic, Temperature T
It is necessary, however, to highlight that the Second Principle postulates the existence
of two new state parameters, S and T . The coefficient of proportionality between
the change in entropy and the amount of injected heat is given by the reciprocal of
the value of a new physical quantity, T , whose denomination will be fully explained
in Sect. 3.6. Each is complementary to the other, in the sense that we cannot define
one without also defining the other, but the development of the theory must ensure
that at least one can be measured independently. Furthermore, while S is, like energy,
volume, mass, etc., an extensive state parameter and therefore representative of the
state of the system, T is an intensive parameter. Its reciprocal 1/T provides information on how rapidly the entropy of the system changes according to the energy
transferred from the outside world at a constant volume.
The necessity remains, however, for an operative definition of T which is independent of the measure of S. In Sect. 3.8, Chaps. 6 and 12 we will see how this can
be resolved in different ways, and, in particular, we will see that the ratio T2 /T1 is
determined in an absolute way, leaving us, therefore, with one degree of freedom
only: we can choose one reference state and assign to this an arbitrary value of T ,
after which the scale will be completely defined.
3.6 Discontinuous Systems Approximation
We have seen that energy U and entropy S are defined for states of equilibrium like,
on the other hand, all the state parameters. It might seem, therefore, that thermodynamics is unable to deal with systems not in equilibrium and, as a consequence,
with natural processes. The following example, though extremely simple, shows the
way often used to overcome this difficulty. The other possible way is passing to
continuous systems, but this requires new formal instruments which will be developed
in Chap. 16.
Let us consider two isolated systems, schematically represented in Fig. 3.1, that
we will identify using Suffixes I and II, each one in a state of internal equilibrium.
The state variables that interest us, here, will have the values U I , U II , S I , S II , T I , T II .
Now we will allow them to come into contact, with a small part of their surfaces, for
example, linking them using a metal wire which allows for a small exchange of heat
among them. We will suppose that this slight interaction will not significantly alter
the state of equilibrium in which they find themselves in that particular moment. This
means, in other words, that we are able to estimate the error we make by maintaining
this supposition. Let us consider an infinitesimal process in which system I receives,
from the external world, an infinitesimal quantity of heat which we will indicate with
d̂ I Q. Likewise for system II.
Let us now apply all the propositions defining Entropy. For System I, we will have
dS I = d̂ e S I + d̂ i S I = d̂ I Q / T I ,
Fig. 3.1 Two homogeneous systems (i.e., phases) each in a state of internal equilibrium. The thin
connection between the two represents a weak interaction between them. It is weak enough to permit
the maintenance of internal equilibria. The regions in each system, in which the state variables vary
significantly are small enough to be disregarded. For simplicity, let’s suppose that the overall system
is isolated
with d̂ i S I = 0 because system I is in a state of equilibrium and therefore the infinitesimal transformation is quasi-static.6
Likewise, we can write for system II:
dS II = d̂ e S II + d̂ i S II = d̂ II Q / T II ,
with d̂ i S II = 0. Now let us consider the composite system formed by I and II
together. This system will have an entropy S = S I + S II due to extensiveness and as
a consequence for the entropy change of the composite system we will have
dS = dS I + dS II = d̂ I Q / T I + d̂ II Q / T II .
Furthermore, since the global system is an isolated one (each of the two can only
interact with the other and not with a third system), we will have d̂ Q = d̂ I Q + d̂ II Q =
0 from which it will be d̂ I Q = −d̂ II Q. Then
dS = d̂ II Q ( 1 / T II − 1 / T I ) .
Also, for system I + II the relation
dS = d̂ e S + d̂ i S
will, obviously, be true. This system is isolated and therefore
d̂ e S = 0 .
6 For the positioning of indexes "I" and "II", see Appendix A.5.4.
so that

dS = d̂ i S = d̂ II Q ( 1 / T II − 1 / T I ) .    (3.14)
From proposition 5 in Sect. 3.5, we have d̂ i S > 0 if the imagined process is a
natural one, while if the result is d̂ i S < 0 the imagined process will be unnatural.
In the first case (natural process), we will have the following two possibilities:
either d̂ II Q > 0 and T I > T II ,
or d̂ II Q < 0 and T II > T I .
In both cases, we can see that heat flows from the body with higher T to the body
with lower T .
If T I = T II instead, the imagined process will have as a consequence d̂ i S = 0.
This means, by definition, that the overall state of the system is a state of internal
equilibrium against heat transfer. So, in the same way as the empirical temperature
θ , T certifies the state of mutual equilibrium between two systems and, for this
reason, it may be called “temperature” but, while this completes the meaning of the
empirical temperatures, the absolute scale has the predictive function assigned to it
by the Second Principle.
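The sign analysis above can be checked mechanically; a minimal sketch (temperatures and heat quantities invented for the example):

```python
# Entropy change when system II receives the heat dQ_II from system I:
#   dS = dQ_II * (1/T_II - 1/T_I)
def dS(dQ_II, T_I, T_II):
    return dQ_II * (1.0 / T_II - 1.0 / T_I)

# dS > 0 (natural process) exactly when heat flows from the hotter body:
assert dS(+1.0, T_I=400.0, T_II=300.0) > 0   # I hotter, II absorbs heat
assert dS(-1.0, T_I=300.0, T_II=400.0) > 0   # II hotter, II gives off heat
assert dS(+1.0, T_I=300.0, T_II=400.0) < 0   # cold -> hot would be unnatural
assert dS(+1.0, T_I=350.0, T_II=350.0) == 0  # equal T: mutual equilibrium
```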
3.6.1 Resume
The points which deserve to be highlighted are the following:
(1) We have neglected the properties of the border zone, which allows the interaction between the two systems. For example, we could reasonably assume that
the temperature changes continuously through that zone (along the wire in our
example) but if its mass is much smaller than the masses of the two systems, we
will be able to estimate the error that we commit in assuming a discontinuous
variation of the intensive parameters.
(2) We could, then, calculate for the overall system the contribution d̂ i S (the term
whose value is not quantitatively determined by the Second Principle) by turning
into internal those contributions which, for the individual systems, were external.
(3) The Second Principle states that if two systems I and II are in mutual equilibrium
with respect to the transfer of a small quantity of heat, T I = T II (and vice versa)
must be true. For this reason, we are allowed to name as “temperature” the
new intensive quantity T which is introduced together with S by the Second
Principle. Remember that the concept of empirical temperature preexisted the
Second Principle and was created just to describe a relation of mutual thermal
equilibrium. The new scale of temperatures T covers the same function but tells
much more.
(4) From Eq. (3.14) and by virtue of the Second Principle, we see that the new
quantity T prescribes the direction of the process. To be more precise, it is
just 1/T that has this general predictive power. No empirical temperature can
cover this role. Henceforth we shall call T simply “temperature” or “absolute
temperature”. When we need to refer to an empirical scale of temperatures we
will make explicit mention of it.
(5) This example is paradigmatic: we had to call on all the propositions
that define the Second Principle. The idea of considering a nonequilibrium system
as a composite system, consisting of subsystems each in internal equilibrium
and weakly interacting, will allow us to obtain a large number of results. The
alternative to this is a theory in the continua (see Chap. 16). For this reason, we call
this first modeling the "approximation of discontinuous systems".
3.7 On the Predictive Power of Thermodynamics
The First Principle sets a relationship of balance, which must be respected, and which
therefore contributes to the predictive ability of Thermodynamics in the same way
that conservation of the momentum or of the angular momentum contribute to the
predictive power of Mechanics.
In the case of Thermodynamics, the evolution criterion is completely contained
in the Second Principle and, following the formulation given to it in this text, is
explicitly expressed by proposition (5) where it is stated that d̂ i S > 0 for all the
natural processes and d̂ i S < 0 for all the unnatural processes. Further, the separation
into processes internal to the system and processes due to interaction with the outside
is very functional towards the study of irreversible processes.
Within the approximation of discontinuous systems, it is essential to proceed in
two stages: first, the quantitative expressions of the variation in entropy are written
for each of the different interacting systems. In this way, the amounts of extensive
quantities exchanged in the interactions can be accounted for as d̂ e S terms.
Afterward, the variation in entropy for the overall system is written, making use
of the property of extensiveness of the entropy; the terms that, for each part,
described interactions with the outside now become internal processes, to which we
will apply proposition (5), which contains the evolution criterion.
There are, basically, two types of predictions: on the direction of the processes and
on the strength of their mutual interference. They are completely general predictions
meaning that they are not linked to any type of modeling.
An isolated system can be formed by simple parts, each one in a state of internal
equilibrium but not necessarily in a state of equilibrium among themselves. In this
case we will call our system a “composite system.” So that the composite system
can be formed by simple parts in internal equilibrium but not in equilibrium among
themselves, it is necessary for there to be an adequate number of internal constraints
between the various simple parts that our system is the union of (for example, these
could be fixed or mobile, adiabatic or diathermic walls, that might separate areas that
are not homogeneous among themselves or have different compositions, etc.). If any
of these internal restrictions are removed then processes that the restriction did not
allow before, become possible and will be active until a new state of equilibrium is
reached.
The predictive ability of thermodynamics lies in the ability to predict what new
state of equilibrium will be reached following the removal of an internal constraint
in an isolated system. Obviously, these will be natural processes so the value of the
total entropy of the system in the new state of equilibrium will have to be higher
than the value of the entropy at the beginning of the process. Let us now try to turn
the problem upside down: let us suppose that we have an isolated system that has
reached a state of internal equilibrium. If, for some reason, we imagine that a different
composite configuration is created (that we could always imagine creating by adding
some kind of internal constraint, for example, by creating a structure made out of
small cells), we will obtain a configuration in which the total entropy (which for the
property of extensiveness is the sum of the values of the entropy in each small cell)
will be lower than when we started. In this case, the Second Principle states that the
transition from the state of equilibrium to a composite state of equilibrium would be
an unnatural process.
This is the meaning of the current affirmation according to which “entropy has
a maximum”. Naturally, this is a relative maximum. This provides a fundamental
criterion of stability of the states of equilibrium: in an isolated system, the stable
or metastable states of equilibrium occur at the configurations corresponding to a
(relative) maximum entropy.
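The "entropy has a maximum" statement can be illustrated with a toy model; the sketch below assumes (an assumption for illustration, not the book's model) that each of n equal cells carries entropy c·ln(u) as a concave function of its energy u, and checks that random composite configurations of the same total energy never beat the uniform one:

```python
import numpy as np

# Toy model (assumed for illustration): n equal cells, each with entropy
# s(u) = c*ln(u), a concave function of the cell energy u; total energy fixed.
rng = np.random.default_rng(0)
c, n, U_total = 1.0, 8, 80.0

def S_total(u):
    return float(np.sum(c * np.log(u)))      # extensiveness: sum over cells

uniform = np.full(n, U_total / n)            # the equilibrium configuration
S_max = S_total(uniform)

# Any composite (non-uniform) split of the same total energy has lower entropy.
for _ in range(100):
    u = rng.dirichlet(np.ones(n)) * U_total  # random positive split of U_total
    assert S_total(u) <= S_max + 1e-9
```

The concavity of each cell's entropy is what makes the uniform configuration the maximum, mirroring the stability criterion stated in the text.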
This reasoning can be generalized to systems, which are not isolated but subject to
a number of external constraints which is equal to the number of macroscopic degrees
of freedom. For example, in the environment in which we live, it is very common to
impose on the system restrictions on the temperature (constant temperature transformations), on the external pressure (isobaric transformations) or on the total volume
(isochoric transformations) etc. In these cases, the stable or metastable states of equilibrium correspond to points of minimum of other thermodynamic potentials such
as Free Energy, Enthalpy, or the Gibbs potential (naturally, which of these is chosen
depends on the nature of the external restrictions). Therefore, in a certain sense, we
can affirm that the Second Principle justifies the existence of the macroscopic states
that we observe.
One last comment, of a very general nature, comes from studying the relationships
relative to proposition (5) in Sect. 3.5. The only case in which the theory allows us
to create precise quantitative predictions is in the case of quasi-static, or reversible,
transformations. Therefore, in all of the transformations that actually take place, that
is, in the natural ones, the theory only predicts inequalities. The expected
value for a physical quantity in a natural process will have to be of the type: it cannot
be higher than, or it cannot be lower than. This could be considered a limitation such
as to make thermodynamics less “useful” compared to other theories, but a careful
examination of the question will turn the situation on its head.
Is it possible to get specific quantitative predictions from thermodynamics? It certainly is possible, but this would imply either treating the processes as quasi-static
or “modeling” in some way the system. The latter, for example, could involve a
particular treatment of the physical properties of the boundaries, a particular modelization of the flow of the state parameters through the boundaries (for example, in
a discontinuous approximation, or assuming that the flows occur, in the transition
regions, according to profiles which “make sense”), the assumption of certain state
equations, and so on.
This is exactly what we do when we operate in the fields of mechanics or electromagnetism, or in any other theoretical context which enables us to make
specific predictions. Even from a statistical mechanics point of view, we can make
certain predictions but, in this case, we are dealing with a discipline which is intrinsically a modelization. As one moves away from the peculiarities of a model and one
starts investigating the dynamics of the processes in a general sense, the possibility to
make specific predictions diminishes and one arrives, in the most general situation,
at inequalities to which all of the modelizations will have to submit themselves.
3.8 Efficiency of Thermal Engines
Let’s now turn our attention, in an overall view, to the problem of the efficiency of
a thermal engine that is an engine that uses the energy drawn in the form of heat
transfer from external sources and produces mechanical work. The latter can be used
immediately or can be accumulated in a “work reservoir”, for example, by lifting a
weight or putting a flywheel in rotation. With a very rough expression, we will say
that this machine has turned a certain amount of heat into work. Sometimes, improperly,
we talk about a transformation of heat into work. If we refer to Eq. (2.20), for a
transformation from the initial state A to the final state B we have

ΔU A,B = W A,B + Q A,B .

Then, we may speak safely of transformation of heat into work only when ΔU A,B = 0,
because only in these cases do we have the quantitatively definite relation Q A,B = −W A,B .
In general, the two situations in which the required condition occurs, are either in
a cyclic transformation or in a nonequilibrium steady state. In this section, we will
deal only with thermal engines, that is, tools that perform transformations that can
be repeated indefinitely and are, therefore, cyclic, while the latter case
will be treated in Sect. 15.2. Obviously, we are interested only in those cyclical
transformations for which
Wtot, cyc < 0 ,
i.e., in engines which supply the external world with positive work which can be used
or stored somewhere. Suppose we have n bodies, that we call thermostats or heat
sources, of very large heat capacities and relaxation times short compared to the
timescales of the processes and let Ti with i = 1...n be their temperatures. Let us
accept the constraint that our thermal engine performs cyclic transformations in
which it exchanges heat only with these thermostats (all of them or only some of
them) so that the cycle will be composed by isotherms with the sources at Ti and
adiabatic transformations. In some of the isotherms, the engine will receive positive
quantities of heat from the sources and in the remaining ones negative quantities of
heat. Let us enumerate the former with index i and the latter with index j. We define
the efficiency of the engine as the quantity

η = − W tot, cyc / Σ_i Q_i ,    (3.19)
where −Wtot, cyc is the total amount of work done by the engine in one cycle (that is
the algebraic sum of all the amounts of work done in every single transformation)
and the sum of the heat quantities Q i is extended only to the transformations of type
i. As we numbered with the index j those transformations in which the machine
absorbs a negative quantity of heat from the respective sources, let us denote, for
convenience, by Q j the absolute value of these quantities of heat; then we will
write −W tot, cyc = Σ_i Q_i − Σ_j Q_j and the expression for the efficiency will be

η = 1 − Σ_j Q_j / Σ_i Q_i .
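A numeric illustration of this definition of efficiency (heat quantities invented for the example):

```python
# Efficiency of a cyclic engine exchanging heat with several sources:
#   eta = 1 - sum(Q_j) / sum(Q_i)
# Q_i: heats absorbed (positive); Q_j: absolute values of heats given off.
def efficiency(Q_in, Q_out):
    return 1.0 - sum(Q_out) / sum(Q_in)

# Illustrative numbers: 100 J absorbed over two isotherms, 60 J rejected over
# two others, hence 40 J of work per cycle and eta = 0.4.
eta = efficiency(Q_in=[70.0, 30.0], Q_out=[45.0, 15.0])
assert abs(eta - 0.4) < 1e-12
```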
We want to find the cyclic transformation that achieves the maximum efficiency
consistent with the assigned constraint. We begin with one
cycle which produces positive work and let us see how it can be modified in order to
increase the efficiency. Consider the composite system formed by the n sources and
by the engine. By hypothesis, this constitutes a thermally isolated system. Consider a
transformation in which the engine has completed a cycle and calculate the variation
in total entropy of the composite system. We will have

ΔS comp = ΔS eng + ΔS sour .

In our case ΔS eng = 0 (cyclic transformation) and

ΔS sour = − Σ_i Q_i / T_i + Σ_j Q_j / T_j .
As the composite system is an adiabatic system (no part of it exchanges heat with the
outside world) the transformation is a transformation due solely to internal processes
so, for the cyclic transformation, we can write
ΔS comp = − Σ_i Q_i / T_i + Σ_j Q_j / T_j ≥ 0 ,    (3.23)
where we shall have the sign of inequality for all engines running natural
transformations and the sign of equality if the cyclic transformation is a completely
reversible transformation. We modify the initial cycle and make all the transformations
quasi-static (running them very slowly), taking care to modify them so that the heat
exchanges take place at the same temperature. Of course, by modifying the
transformations in this way, the amplitude of them (both the adiabatic and the isothermal
ones) will change, and the amounts of heat exchanged will change. So, Eq. (3.23)
will become

Σ_j Q_j / T_j = Σ_i Q_i / T_i .    (3.24)
The changes in the heat quantities will be such that either some Q_j will be lower
or some Q_i will be higher, or both. In all these cases, the efficiency Eq. (3.19) will
increase.
The first result is that whatever the cycle is, it will be convenient to perform it
reversibly. The efficiency can be improved further if we modify the adiabats so as
to replace the various transformations of type j with a single one that uses only
the source with the lowest temperature among those denoted by T_j. Let us denote
by T_j,min its temperature. In Eq. (3.24) the summation on the left will be replaced by
a single term we indicate by Q_out / T_j,min, and we write

Q_out / T_j,min = Σ_i Q_i / T_i ,

that is,

Q_out = T_j,min Σ_j Q_j / T_j = Σ_j ( T_j,min / T_j ) Q_j .
Since for all j, we shall have

T_j,min / T_j < 1 ,

except the case in which the ratio of the temperatures is 1, we shall have

Q_out < Σ_j Q_j .
In the expression of the efficiency, the summation Σ_j Q_j will be replaced by the
single term Q_out and, hence, the efficiency will increase.
Similarly we can use, among the sources of type i, only the one that has the highest
temperature T_i,max . Then, we will have

Q_in / T_i,max = Σ_i Q_i / T_i ,  that is,  Q_in = Σ_i ( T_i,max / T_i ) Q_i ,

and we shall have

Q_in > Σ_i Q_i .

After inserting this result in Eq. (3.19), we see that the efficiency will rise again:

η = 1 − Q_out / Q_in = 1 − T_j,min / T_i,max .
The last possible step will be to choose as T j,min and as Ti,max , respectively, the lowest
and the highest temperatures among the available ones and the maximum efficiency
will be:

η = 1 − T_min / T_max .    (3.33)
As we see the maximum efficiency will be achieved by using only the extreme
temperatures and will be given by Eq. (3.33). It must also be emphasized that in this
demonstration there is no trace of the peculiar nature of the material used for running
the engine. This reversible cycle is, therefore, composed of two isotherms and two
adiabats and is called Carnot cycle operating between the two given temperatures.
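The conclusion that only the extreme temperatures matter can be illustrated numerically; in the sketch below the source temperatures are invented, and every admissible pair is compared against the extreme pair:

```python
def eta_carnot(T_hot, T_cold):
    """Carnot efficiency between two temperatures (absolute scale)."""
    return 1.0 - T_cold / T_hot

# Invented source temperatures: no admissible pair beats the extreme pair,
# mirroring the replacement argument in the text.
T_sources = [300.0, 350.0, 500.0, 600.0]
best = eta_carnot(max(T_sources), min(T_sources))
for T_h in T_sources:
    for T_c in T_sources:
        if T_h > T_c:
            assert eta_carnot(T_h, T_c) <= best + 1e-12
```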
3.9 Carnot Cycles
Let’s consider a thermal engine operating a Carnot cycles between T1 and T2 with
T1 > T2 , as shown in Fig. 3.2. Let AB be the reversible isothermal expansion at T1 ,
BC the reversible adiabatic expansion from state B at T1 to state C at the intercept
with the isotherm at T2 , the remaining state D is consequently defined. The four
transformations are completely defined if we specify two of the four states A, B, C
and D.
From the point of view of the performance of the engine, having fixed the two
temperatures, it is enough to choose either the amount of heat Q 1 absorbed by the
engine from the source at T1 or the amount of heat Q 2 given off at T2 , or the amount
of work W produced in one cycle. In general, we may write
Fig. 3.2 A reversible Carnot cycle is represented. Solid lines show that every intermediate state is
an equilibrium state and then the four transformations are quasi-static. Transformations AB and CD
represent an isothermal expansion and compression, respectively, only if the coefficient of thermal
expansion of the substance employed by the engine is positive. For this latter comment see Eq. (5.14)
and Problems 5.2 and 5.3
W = Q1 − Q2 ,
ΔS univ = − Q 1 / T 1 + Q 2 / T 2 .
If the cycle is reversible (Carnot cycle) we shall have ΔS univ = 0 and

η = 1 − Q 2 / Q 1 = 1 − T 2 / T 1 .
If the cycle is, for any reason, not completely reversible we shall have

ΔS univ > 0 ,  that is,  Q 2 / T 2 > Q 1 / T 1 .
Since the work performed per cycle is W = Q 1 − Q 2 , we have

W < Q 1 ( 1 − T 2 / T 1 ) ,  that is,  W / Q 1 = η < 1 − T 2 / T 1 .
As we can see, given the two temperatures, the relevant quantities are Q 1 and Q 2
or one of the two amounts of heat and the amount of work W produced, while in
the Carnot case only one is independent owing to entropy conservation. Both for
reversible and irreversible cycles, we may look at the engine just as a device which
absorbs some entropy from source T1 and transfers some other amount of entropy to
source T2 regardless of the particular nature of the fluid used to operate them.
In a Carnot cycle, these two amounts of entropy are just equal while if some
processes in the engine are not reversible, the engine transfers to the colder source
more entropy than the one taken from the hotter one. This can be described by saying
that the engine must expel toward the colder source the entropy taken from T1, plus
the entropy that it produces in each cycle owing to the irreversibility of some of its
processes.
Let us call Sprod the amount of entropy produced by the irreversibilities in the
engine, per cycle. Since the entropy variation of the engine per cycle has to be zero,
we have to impose that, in every cycle, the entropy injected into the engine by source
T1 plus the entropy generated by the engine itself must be expelled to the outside
(i.e., the source at T2 ). Let’s write the expressions of the entropy input per cycle
S in = Q 1 / T 1 and of the entropy output S out = Q 2 / T 2 , and require

S out = S in + S prod ,

which leads to

Q 2 / T 2 = Q 1 / T 1 + S prod .    (3.43)
To some extent, this relation anticipates the key point in the theory of irreversible
processes which will be based on the quantitative study of the entropy produced by
irreversible processes. In the reversible case Sprod = 0 and the previous formulas are
recovered. In the irreversible case, the details of the particular engine are irrelevant
and what matters is just the entropy produced per cycle. Since the latter must be
positive (this is equivalent to the condition d̂ i S > 0) we see from Eq. (3.43) that the
efficiency has to decrease in natural engines. For instance, if we keep Q 1 fixed the
quantity of heat given off to the low-temperature source will be given by

Q 2 = Q 1 ( T 2 / T 1 ) + T 2 S prod .
Since the product Q 1 ( T2 /T1 ) is the amount of heat that would be released to the
low-temperature source by a reversible engine, we see that Q 2 increases by the
amount T2 S prod . It follows that the amount of work produced by the irreversible
engine W irr decreases:
W irr = W rev − T2 S prod .
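The bookkeeping with S_prod can be put into a few lines of code; the sketch below (all numbers illustrative) keeps Q_1 fixed and checks the relations for Q_2 and W_irr stated above:

```python
# Per-cycle bookkeeping with entropy production S_prod (illustrative numbers):
#   Q2 = Q1*(T2/T1) + T2*S_prod   and   W = Q1 - Q2.
def cycle(Q1, T1, T2, S_prod=0.0):
    Q2 = Q1 * (T2 / T1) + T2 * S_prod   # heat given off at T2
    W = Q1 - Q2                         # work produced per cycle
    return Q2, W

Q1, T1, T2 = 1000.0, 500.0, 300.0
Q2_rev, W_rev = cycle(Q1, T1, T2)               # reversible: S_prod = 0
Q2_irr, W_irr = cycle(Q1, T1, T2, S_prod=0.5)   # S_prod in J/K, made up

assert abs(W_rev - Q1 * (1.0 - T2 / T1)) < 1e-9   # Carnot work
assert abs(W_irr - (W_rev - T2 * 0.5)) < 1e-9     # W_irr = W_rev - T2*S_prod
assert Q2_irr > Q2_rev                            # more heat rejected at T2
```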
Similarly, if we keep Q 2 (i.e., the quantity of heat given away to the refrigerator at
T2 ) fixed, the quantity of heat extracted per cycle from the hot source must decrease
to the value Q 1 given by
Q 1 = Q 2 ( T 1 / T 2 ) − T 1 S prod ,
and the quantity of work per cycle decreases. In all cases, the efficiency η decreases.
For instance, if we keep Q 1 fixed the efficiency will be

η = 1 − T 2 / T 1 − T 2 S prod / Q 1 .
Notice that while for Carnot (reversible) cycles the efficiency depends on the two temperatures only, in the case of irreversible (say natural) cycles the efficiency depends,
in general, also on the extent of the isotherms.
3.10 On the Determination of the New Scale of Temperature T
The measurement of the efficiency of a Carnot cycle provides a possible measurement
of the ratio between the two temperatures in the absolute scale. As we have seen
in Sect. 3.8, this ratio is independent of the nature of the material used to operate
the machine as long as it is operated reversibly and, therefore, it is determined in an
absolute way. We can say that the efficiency of a Carnot engine is a universal function
of the working temperatures provided that these are measured in the particular scale
defined by the Second Principle. There are other ways to define experimentally this
scale of temperature and we will consider, as examples, the case of the black-body
radiation and the measurement of the virial coefficients of a gas. The latter is a
practical way that can be realized in the laboratory with great accuracy. As we will see
in both cases, it is the ratio T1 /T2 that is determined, in nature, in an absolute manner
and may be measured experimentally. For the complete definition of the scale, we
have the freedom to choose, arbitrarily, the scale value at one point and then the
absolute scale is completely defined.
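As a sketch of how a single reference value completes the scale: assuming, hypothetically, that we can measure the efficiency of a reversible engine rejecting heat at a chosen reference temperature, the unknown temperature follows from η = 1 − T_ref/T. The reference value 273.16 below is the conventional choice; the "measured" efficiency is made up:

```python
# The Second Principle fixes only temperature RATIOS; one reference value
# completes the scale.  Hypothetical procedure: a reversible engine rejects
# heat at a chosen reference temperature and its efficiency is "measured".
T_REF = 273.16                    # conventional reference (triple point of water)

def T_from_efficiency(eta, T_ref=T_REF):
    """Hot temperature from eta = 1 - T_ref/T  =>  T = T_ref / (1 - eta)."""
    return T_ref / (1.0 - eta)

eta_measured = 0.25               # invented "measurement"
T_hot = T_from_efficiency(eta_measured)
assert abs(T_hot - T_REF / 0.75) < 1e-9
```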
3.11 The Carnot Engine and Endoreversible Engines
Consider the case in which we have only two heat sources at temperatures T1 and T2
with T1 > T2 . The thermal engine with the maximum efficiency will operate with two
isotherms at the given temperatures and with two adiabats, all of them being reversible,
and its efficiency is

η = 1 − T 2 / T 1 .
To ensure that the transformations in the engine are close to the corresponding
reversible transformations, we will have to “put to zero” the friction in the mechanical parts, realize expansions and compressions in a timescale very long compared to
the relaxation time of the fluid and make sure that the heat exchanges always occur
between bodies at the same temperature (or very near to it). This implies that the
time taken to complete one cycle will become very long. In the limit of a reversible
cycle, the time taken will be infinite and therefore the power of the machine will be
zero. The issue concerning the maximum efficiency, raised in Sect. 3.9, has now to
be changed: we want to design an engine not with the highest efficiency but which
has the maximum power for the same boundary conditions. This problem has been
treated by H. Callen in [4] and we report briefly his results. For a more extensive
treatment, see also [5]. The boundary condition will be the same as in the Carnot
engine case that is the use of the same two thermostats, but it is easy to convince
ourselves that the main problem has to do with the isothermal transformations, that
is, with those interactions in which the engine exchanges heat with the two sources
at T1 and T2 .
Indeed, while the relaxation times of the fluid can be made relatively very short7
and the mechanical parts can be suitably lubricated, the characteristic times for heat
exchanges face severe limitations. We shall assume that the mechanical interactions
are reversible and that the changes of state of the fluid can be treated as a succession
of internal equilibrium states, while the irreversibility of the cycle can be confined to
the processes of heat exchange. More precisely, we assume that the heat exchanges
with two given sources at T1 and T2 are realized when the engine is at the temperatures
Tw and Tc , respectively, such that
T1 > Tw > Tc > T2 .    (3.49)
Here Tw stands for “warm temperature” and is chosen to be lower than T1 in order to
speed up the transfer of Q 1 from the source to the engine and similarly for Tc which
stands for “cold temperature”.
If we want to accelerate the processes of heat exchange, we must lower Tw and
increase Tc as much as possible but, in doing so, the efficiency will tend rapidly to
zero. Let us look for the best compromise between the rapidity and the efficiency of
operation. Let κw and κc be the thermal conductances of the devices exchanging heat
with T1 and T2 , respectively:

[κ] = [conductivity] × [area] / [thickness] .
Then, the time intervals tw and tc , spent by the engine to exchange the required
amounts of heat with the given sources at T1 and T2 , are given, respectively, by
7 The relaxation time for expansion or compression is of the order of the size of the container divided
by the speed of sound; designing appropriate forms of the cylinders this time can be further reduced.
3 The Second Principle of Thermodynamics
tw = Q1 / [κw (T1 − Tw)] ,    tc = Q2 / [κc (Tc − T2)] .
If we neglect the time intervals relative to the adiabats, the time required per cycle
will be
t = tw + tc = Q1 / [κw (T1 − Tw)] + Q2 / [κc (Tc − T2)] .
The entropy variations ΔSw and ΔSc of the engine in the two isotherms at Tw and
Tc are, respectively:

ΔSw = Q1/Tw ,    ΔSc = −Q2/Tc ,

and since ΔScyc = 0 and ΔSad = 0 we get

Q2 = Q1 Tc/Tw .
The time interval per cycle can be written as follows:

t = Q1 [ 1/(κw (T1 − Tw)) + (Tc/Tw)/(κc (Tc − T2)) ] .    (3.57)
In fact, the engine performs a Carnot cycle (reversible) between the temperatures Tw
and Tc and will produce, in each cycle, the amount of work:

W = Q1 − Q2 = Q1 (1 − Tc/Tw) .    (3.58)

Then, after substitution of Q1 from Eq. (3.58) in Eq. (3.57), we can write the following
expression for the time t spent in each cycle and for the power P of the engine:

t = [W/(Tw − Tc)] [ Tw/(κw (T1 − Tw)) + Tc/(κc (Tc − T2)) ] ,    (3.59)

P = W/t = [ Tw/(κw (T1 − Tw)(Tw − Tc)) + Tc/(κc (Tc − T2)(Tw − Tc)) ]^(−1) .    (3.60)
Now the problem is to choose the temperatures Tw and Tc so as to maximize the
power of the engine. H. Callen finds [4]

Tw = c √T1 ,    Tc = c √T2 ,    c = (√(κw T1) + √(κc T2)) / (√κw + √κc) .    (3.61)

Then, the maximum power will be

Pmax = κw κc [ (√T1 − √T2) / (√κw + √κc) ]² .

The efficiency of this endoreversible engine, maximized in power, will be

η = 1 − Tc/Tw = 1 − √(T2/T1) .    (3.64)
• The Carnot engine has maximum efficiency but zero power. The most critical
points in the operation of engines are in heat transfer along the isotherms.
• Endoreversible engines perform transformations which may be considered internally quasi-static. Irreversibility is confined to the heat exchanges with the sources
at T1 and T2 . In order to accelerate the heat exchanges, the isotherms are performed
when the engine is at Tw and Tc according to Eq. (3.49).
• The temperatures which maximize the power of the engine are given by Eq. (3.61).
• The time per cycle is given by Eq. (3.59). It decreases if the conductances increase;
hence it is convenient to use fluids with large thermal conductivity and large
surfaces for heat exchange.
• The (maximum) power is given by Eq. (3.60). It increases with increasing thermal
conductances.
• The efficiency is given by Eq. (3.64). It is always less than the efficiency of a Carnot
engine working between the same sources, but it is remarkable that the efficiency
is independent of the thermal conductances of the heat exchangers.
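These results can be checked numerically. The sketch below is an added illustration, not part of the text: the reservoir temperatures and the conductances are arbitrary test values. It scans the power P(Tw, Tc) of Eq. (3.60) on a grid and compares the maximum with the closed-form optimum of Eq. (3.61) and with the efficiency 1 − √(T2/T1) of Eq. (3.64).

```python
import math

def power(Tw, Tc, T1, T2, kw, kc):
    # Power of the endoreversible engine, Eq. (3.60):
    # P = (Tw - Tc) / [ Tw/(kw (T1 - Tw)) + Tc/(kc (Tc - T2)) ]
    return (Tw - Tc) / (Tw / (kw * (T1 - Tw)) + Tc / (kc * (Tc - T2)))

T1, T2 = 480.0, 300.0   # reservoir temperatures (K), illustrative
kw, kc = 120.0, 80.0    # thermal conductances (W/K), illustrative

# Optimal intermediate temperatures, Eq. (3.61)
c = (math.sqrt(kw * T1) + math.sqrt(kc * T2)) / (math.sqrt(kw) + math.sqrt(kc))
Tw_opt, Tc_opt = c * math.sqrt(T1), c * math.sqrt(T2)

# Predicted maximum power and efficiency at maximum power
P_pred = kw * kc * ((math.sqrt(T1) - math.sqrt(T2)) / (math.sqrt(kw) + math.sqrt(kc))) ** 2
eta_pred = 1.0 - math.sqrt(T2 / T1)   # Eq. (3.64)

# Brute-force check: scan a grid of (Tw, Tc) with T1 > Tw > Tc > T2
best = max(
    (power(Tw, Tc, T1, T2, kw, kc), Tw, Tc)
    for Tw in [T2 + i * (T1 - T2) / 400 for i in range(1, 400)]
    for Tc in [T2 + j * (T1 - T2) / 400 for j in range(1, 400)]
    if Tc < Tw
)

print(f"grid maximum P = {best[0]:.2f} W at Tw = {best[1]:.1f} K, Tc = {best[2]:.1f} K")
print(f"formula: Pmax  = {P_pred:.2f} W at Tw = {Tw_opt:.1f} K, Tc = {Tc_opt:.1f} K")
print(f"efficiency at max power: {1 - Tc_opt / Tw_opt:.4f} vs 1 - sqrt(T2/T1) = {eta_pred:.4f}")
```

The grid maximum agrees with the closed-form Pmax, and the corresponding efficiency is independent of κw and κc, as stated above.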
3.12 Coefficient of Performance (COP)
In Sect. 3.8, we considered engines designed to produce some positive work and for
this purpose they have to absorb heat from the hot source and deliver a part of it to
the cold one. In this section, we want to consider the reversed situation in which the
engine is operated in order to absorb some heat from a cold source and transfer it,
together with some amount of mechanical work, to a hot source. Let Tc and Th be the
temperatures of the cold and of the hot source, respectively, (Th > Tc ). Moreover,
we denote with Q c and Q h the absolute values of the quantities of heat exchanged
with the cold and with the hot source, respectively, and with W the amount of work
supplied to the engine in one cycle. From the First Principle, we have
Qh = Qc + W ,

and if the processes are all reversible, Eq. (3.36) between Qc and Qh holds:

Qc/Tc = Qh/Th .

The Coefficient Of Performance, shortly denoted by COP, is defined as the ratio of
the effect we want to produce to the amount of work required.
3.12.1 Refrigerator
The engine working in the above manner is called refrigerator when the purpose
is to take away heat from the cold source in order to maintain it at a given temperature
Tc . It is the case of a freezer, where the cold source is the cell whose temperature,
owing to non-perfect insulation, tends to rise, and the hot source is,
for instance, the kitchen. The refrigerator can also be used to keep a room
cool and deliver heat outside (the hot source, in summer). In these cases, the effect we
want to obtain is the removal from the cold source of as much Qc as we can per joule
of work employed. The COP is defined by

COPrefr = Qc / W .

In the case of an ideal refrigerator we can use Eq. (3.36) and express the COP just in
terms of the two temperatures:

COPrefr,id = Qc / (Qh − Qc) = Tc / (Th − Tc) .
3.12.2 Heat Pump
The same engine can be used in order to take heat from the cold source, for instance,
the exterior of a house in wintertime, and deliver it, together with the work supplied
to the engine, to the “hot source” which is the interior of the house. In this case, the
effect we aim at is to obtain as much Qh as we can for a given W , so that the COP of
a heat pump is defined by

COPhp = Qh / W .

For the ideal heat pump, we have

COPhp,id = Qh / (Qh − Qc) = Th / (Th − Tc) .
In both cases, for real refrigerators or heat pumps, the COP is less than the COP of
the corresponding ideal engine. To show this, it is sufficient to look at the Second
Principle. The composite system formed by the two sources together with the engine
is a thermally isolated system (the source of work is considered reversible) and then
in every cycle the total entropy must increase. The entropy variation of the engine
is zero and for the two heat sources we have:

ΔS = Qh/Th − Qc/Tc > 0 .    (3.71)
If we make use of Eqs. (3.69) and (3.70) in Eq. (3.71), it is straightforward to prove
that for both the refrigerators and the heat pumps, the COP relative to real engines
is smaller than the corresponding ideal one.
Example 3.1 The use of heat pumps is generally encouraged by governments as part
of an energy-saving policy. Suppose that a house is subject to heat dispersion to
the outside due to imperfect thermal insulation. If we want to keep it at a constant
temperature, we have to introduce, within each hour, an amount of energy equal to
the quantity that is dissipated outside in the same time interval.
If we use an electric heater it is clear that for each joule introduced in the house
we will have to consume Weh = 1 J of electricity. Let’s see what can be the energy
consumption of a heat pump for each joule introduced. If the room is kept at a
temperature of 22 ◦C and if the external environment is at a temperature of 6 ◦C,
in the case of an ideal heat pump the energy consumption Whp,id , for each joule
delivered, will be

Whp,id = (Th − Tc)/Th × 1 J = (16/295.15) J ≈ 0.054 J .

For commercial heat pumps, the COP ranges between 3 and 5. If we assume the
value COP = 4, for every joule consumed the heat pump delivers 4 J into the house.
We may consider the electric heater as a heat pump with COP = 1 or, in other
words, a heat pump working with a hypothetical environment at T = 0 K.
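The arithmetic of this example can be reproduced in a few lines (an illustrative sketch; the temperatures are those of the example, converted to kelvin):

```python
# Numerical restatement of Example 3.1 (illustrative).
Th = 295.15   # interior at 22 °C, in kelvin
Tc = 279.15   # exterior at 6 °C, in kelvin

cop_hp_ideal = Th / (Th - Tc)      # COP_hp,id = Th / (Th - Tc)
W_per_joule = 1.0 / cop_hp_ideal   # work consumed per joule delivered to the house

print(f"ideal COP = {cop_hp_ideal:.2f}")                 # about 18.4
print(f"work per joule delivered = {W_per_joule:.3f} J")  # about 0.054 J

# A commercial unit with COP = 4 delivers 4 J per joule of work,
# while an electric heater (COP = 1) delivers only 1 J.
cop_real = 4.0
print(f"COP = {cop_real:.0f} pump: {cop_real:.0f} J delivered per joule; electric heater: 1 J")
```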
3.13 Availability and Maximum Work
We discuss, in the light of the First and Second Principles of thermodynamics, the
following problem. Let us consider a system S , which is closed and undergoes a
transformation from the initial A to the final state B. We impose the constraint that
in this transformation the system exchanges heat only with one external body B and
does work on a mechanical system M . The body B is characterized by a relaxation
time very short with respect to the interaction timescale, so the changes of state
to which it is subjected can be considered quasi-static. Similarly, the mechanical
system will be virtually without friction in all its moving parts, so that each process
may be considered reversible. This change of state can be realized through
many different operations, and we want to calculate the maximum amount of
work obtainable in this change of state of the system, with the constraints that we
have specified.
Maximum Work and Available Energy
Given the initial and final states A and B, the energy and entropy changes have the
same values for the different possible processes; let ΔU and ΔS be their values,
respectively. Let us begin by choosing one process as starting point and denote by
W the amount of work done by the system S on M and by Q the quantity of heat
supplied by the system S to the body B (d̂W and d̂Q for an infinitesimal process,
respectively). From the First Principle, we get

−ΔU = W + Q .
Since ΔU is fixed, if we want to obtain the maximum value for W we have to look
for the process with the minimum value for Q. Assume, in the first instance, that
the body B has a very large heat capacity, that is, it behaves as a thermostat, and let
T0 be its temperature. Since S and B do not exchange heat with other systems, the
composite system S + B is a thermally isolated system. Then the change in entropy
of the composite system will be

ΔS tot = ΔS + ΔSB ,

which becomes

ΔS tot = ΔS + Q/T0 ≥ 0 .    (3.75)
From (3.75)

Q ≥ −T0 ΔS ,

so that the minimum value for Q will be Q = −T0 ΔS and this occurs when the
interaction between S and B is reversible. In this case, the amount of work
done by S on the mechanical system M will be

Wmax = −ΔU + T0 ΔS .    (3.77)
Therefore, all transformations from the state A to the state B carried out reversibly and
with the above constraint will provide the same amount of work, given by Eq. (3.77),
and this is the maximum value. Note that in this case also the system S , in its
interaction with B, will have the same temperature T0 .
Under the above hypothesis, Eq. (3.77) can be written in the following form:

Wmax = −ΔU + T0 ΔS = − (UB − UA) + T0 (SB − SA)
     = − (UB − T0 SB) + (UA − T0 SA)
     = − ΩB + ΩA
     = − ΔΩ ,

where we set

Ω = U − T0 S .
In this form, the maximum work appears as (minus) the change of a property (state
function) of the system S , defined relative to an environment at a given temperature;
Ω is called the available energy.
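As a consistency check (an added illustration, not an example from the text): for n moles of ideal gas expanding at the environment temperature T0, ΔU = 0 and ΔS = nR ln(VB/VA), so Eq. (3.77) reduces to the familiar reversible isothermal work nRT0 ln(VB/VA).

```python
import math

R = 8.314  # gas constant, J/(mol K)

def w_max(dU, dS, T0):
    # Maximum work, Eq. (3.77): Wmax = -dU + T0*dS = -dOmega
    return -dU + T0 * dS

# Ideal gas, n mol, isothermal at T0: U depends only on T, so dU = 0,
# and dS = n R ln(VB/VA).  Numbers are illustrative.
n, T0, VA, VB = 1.0, 300.0, 1.0e-3, 2.0e-3
dS = n * R * math.log(VB / VA)

Wmax = w_max(0.0, dS, T0)
W_rev_isothermal = n * R * T0 * math.log(VB / VA)   # textbook reversible work

print(f"Wmax = {Wmax:.1f} J, reversible isothermal work = {W_rev_isothermal:.1f} J")
```

Both expressions coincide, as they must: the reversible path extracts the full available energy.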
Available Work
In many applications, environments also have the characteristic property of exerting a
constant pressure p0 on S . In these cases, the system S will do work on M but
also against the pressure of the environment. The latter contribution is equal to

Wenv = p0 (VB − VA) ,

and then the maximum amount of work that will be done on M will be

W′max = Wmax − p0 (VB − VA) = −ΔΩ − p0 (VB − VA)
      = −Δ(U − T0 S + p0 V )
      = −ΔΛ ,

where we posed

Λ = U + p0 V − T0 S .

This is a property (state function) of the system S but relative to an environment
characterized by the values p0 and T0 of pressure and temperature; it is called the
available work of the system in the given environment.
As a corollary to what we discussed above, we can now answer the following question: given a system S in a certain initial state A, inside an environment characterized by p0 and T0 , what is the maximum amount of work that we can get from S ,
net of the work done on the environment? The starting point is to recognize that
if S is in complete equilibrium with the environment we can no longer get any work;
this state of the system S is called the dead state. Then if we denote by U0 , V0 , S0 , p0 ,
respectively, the values of energy, volume, entropy, and pressure of the system in the
dead state, we shall have

W′max = − (U0 − U + p0 V0 − p0 V − T0 S0 + T0 S) = Λ − Λ0 ,

where Λ = U + p0 V − T0 S . Let us put

Φ = Λ − Λ0 ,

Φ being called the availability; it depends on the initial state of the system and on
the choice of the environment in which it is immersed.
3.1 A refrigeration unit keeps a freezer cell at temperature T1 = 250 K and transfers
heat to the environment at temperature T2 = 300 K. The compressor is situated outside the cell and produces useful work at the rate of 90% of the energy absorbed from
electricity, while the remaining 10% is dissipated by heat transfer to the environment.
The freezer cell is separated from the environment by imperfect walls, which allow
the quantity Q1 = 4.2 × 10⁷ J of heat to enter the cell every hour. The coefficient of
performance of the unit is COP = 3. Find
(a) The power of the compressor;
(b) The total quantity of heat transferred to the environment per hour;
(c) The entropy variation, per hour, of the cell and of the environment;
(d) The power of the compressor if the refrigerator worked as a reversible engine.
3.2 A thermal engine operates having heat interactions with two thermostats at temperatures T1 = 470 K and T2 = 300 K. In each cycle, the increase of the entropy of
the Universe is ΔSuniv = 0.14 cal K−1 and the engine transfers to the lower temperature source Q2 = 200 cal. Determine the quantity of work produced by the engine
per cycle and its efficiency.
3.3 A thermal engine operates Carnot cycles between T1 and T2 with T1 > T2 .
Let AB be the reversible isothermal expansion at T1 , BC the reversible adiabatic
expansion from state B at T1 to state C at the intercept with the isotherm at T2 .
The engine absorbs Q 1 = 400 kJ per cycle from the source at high temperature and
releases Q 2 = 300 kJ to the one at low temperature.
(a) Calculate the efficiency ηrev .
At a certain moment during the adiabatic expansion BC, the engine exhibits a malfunction which causes this transformation to be irreversible, and the consequence is
that the isotherm at T2 is intercepted at another point, say C′. The rest of the cycle proceeds reversibly as before. Show that the efficiency must be lower. In particular, suppose that,
in order to reach state C from state C′, the engine gives off to the source an additional
quantity of heat QC′C = 40 kJ.
(b) Calculate the efficiency of the engine in these new conditions.
3.4 An engine operates exchanging heat with three thermostats at temperatures T1 ,
T2 and T3 with T1 > T2 > T3 . The engine releases heat only to the heat source at T3
and absorbs heat from the other two.
(a) Find the maximum work produced as a function of the amounts of heat withdrawn
from T1 and T2 .
(b) Show that the maximum efficiency depends only on the ratio Q 1 /Q 2 of the
quantities of heat withdrawn from the two sources at T1 and T2 .
3.5 An engine operates exchanging heat with three thermostats at temperatures T1 ,
T2 and T3 with T1 > T2 > T3 . The engine releases heat only to the heat source at T2
and absorbs heat from the other two. Let’s denote with Q 1 , Q 2 and Q 3 the absolute
values of the heat quantities exchanged, per cycle, with the three sources respectively.
Determine the maximum value of the work done by the engine, and, in the case of
a reversible engine, for which value of the ratio ξ = Q 3 /Q 1 the amount of work
produced is zero.
3.6 An engine operates exchanging heat with three thermostats at temperatures T1 ,
T2 and T3 with T1 > T2 > T3 . The engine releases heat only to the heat source at T1
and absorbs heat from the other two. Let’s denote with Q 1 , Q 2 and Q 3 the absolute
values of the heat quantities exchanged, per cycle, with the three sources respectively.
(a) Prove that the work done by the engine is always negative.
(b) The engine is then used as a heat pump. Determine its COP.
3.7 Two bodies with thermal capacities C1 and C2 , supposed to be constant in the
temperature intervals we are considering, have initial temperatures T1 and T2 respectively (T1 > T2 ). Determine the maximum amount of work that can be produced
by an engine working in a “cyclic mode”. By “cyclic mode” we mean that at the
beginning and at the end of the operations the engine is in the same state.
3.8 A reversible engine operates between two thermostats at temperatures T1 =
500 K and T2 with T2 < T1 . In every cycle, the engine absorbs 10³ kJ from T1 and
produces 600 kJ in mechanical work.
(a) Determine the temperature T2 of the second thermostat and the amount of heat
given to it per cycle Q 2 .
(b) Suppose, that due to aging the engine works with an efficiency equal to 0.75
times its maximum value but still we want to extract from the hot source at T1
the same amount of heat and to produce the same amount of work per cycle. At
what temperature should the engine transfer the amount of heat Q 2 ?
3.9 A reversible engine operates between a source at T1 = 500 K and a body with
constant heat capacity C = 10⁴ J K−1 at the initial temperature Ti . Overall, the engine
absorbs Q1 = 10³ kJ from the source at T1 and produces W = 600 kJ as mechanical
work after having completed an integer number of cycles. Determine the amount of
heat Q 2 released to the body and its initial temperature Ti .
3.10 Find the maximum amount of work that can be extracted from one ton of
water at a temperature of 80 ◦ C in an environment at 20 ◦ C. Neglect changes in water
volume and in the specific heat of water.
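A tentative numerical sketch of this problem, using the result Wmax = −ΔU + T0 ΔS of Sect. 3.13 with ΔU = C(T0 − Ti) and ΔS = C ln(T0/Ti); the specific heat value c = 4186 J kg⁻¹ K⁻¹ is an assumption, taken constant as the text allows:

```python
import math

# Tentative sketch of Problem 3.10 (assumed c = 4186 J/(kg K), constant).
m = 1000.0    # one ton of water, kg
c = 4186.0    # specific heat of water, J/(kg K) -- assumed value
Ti = 353.15   # 80 °C in kelvin
T0 = 293.15   # 20 °C environment in kelvin

C = m * c
dU = C * (T0 - Ti)          # energy change in cooling from Ti to T0 (negative)
dS = C * math.log(T0 / Ti)  # entropy change (negative)

Wmax = -dU + T0 * dS        # maximum work, Eq. (3.77)
print(f"heat released = {-dU/1e6:.1f} MJ, Wmax = {Wmax/1e6:.1f} MJ")
```

With these assumed constants the result comes out near 2.3 × 10⁷ J, roughly a tenth of the ~2.5 × 10⁸ J of heat released, as the Second Principle demands.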
3.11 An endoreversible engine operates between two heat reservoirs at T1 = 480 K
and T2 = 300 K in the maximum power modality. The entropy of the Universe
increases in proportion to the work produced. Determine the entropy increase per
joule produced.
3.12 Let’s have two identical vessels, each one containing 10 L of water at θ = 20 ◦C.
A refrigerator operates between the two vessels, cooling one of the two and heating the
other. The vessels have a negligible heat capacity. The machine is kept operating for
10 minutes and we see that the temperature of the cold body is θc = 8 ◦C. Determine
the power of the machine.
3.13 A reversible engine operates using three heat reservoirs at temperatures T1 =
1000 K, T2 = 500 K and T3 = 200 K. In every cycle, the engine absorbs Q 1 = 1000 J
from reservoir at T1 and produces on the environment a quantity of work W = 600 J.
Find the quantities of heat absorbed by the engine from the other two reservoirs.
3.14 A system undergoes an isothermal, quasi-static transformation from the state
A to the final state B, at the temperature θ = 150 ◦ C. In this transformation the work
done on the system amounts to WI = 212 J. Then the system is brought back from
state B to state A by means of an adiabatic transformation in which the system gives
back a quantity of work |WII | = 100 J. Determine
(a) The variation of entropy SB − SA in the first transformation;
(b) Is the second process quasi-static?
3.15 A reversible engine operates between a thermostat at temperature T1 = 600 K
and a vessel containing ice and water in mutual equilibrium at the pressure of one
atmosphere. The initial masses of water and of ice are, respectively, m w = 210 g and
m ice = 30 g. The heat of fusion of ice is 334 J g−1 . After some time, we stop the
engine and we see that in the vessel we have water at the temperature T2 = 293.16 K.
Determine the amount of work delivered by the engine (neglect water evaporation).
Chapter 4
The Fundamental Relation and the
Thermodynamic Potentials
Abstract For each thermodynamic system, it is possible to determine a relation
between energy, entropy, and the work parameters (the volume only in the case of
simple systems) that is called the Fundamental Relation of the system. We start first
with closed systems with no chemical reactions for which the Equilibrium State
Postulate determines, in general, the number of degrees of freedom and then it is
generalized to open systems with variable chemical composition. The Fundamental
Relation describes the set of all stable and metastable equilibrium states that the
system can reach and the geometrical properties of the surface described by it determines the conditions of stability of equilibrium states. It can be represented in various
forms according to the external constraints. The representations commonly used, in
addition to that of Energy and of Entropy, are the Free Energy, the Enthalpy, and
the Gibbs Potential and the general properties of isothermal, isobaric and isochoric
transformations are discussed. The definition of the chemical potential is given and
its physical meaning as the thermodynamic potential responsible of phase equilibria
is shown.
Keywords State postulate · Fundamental relation of thermodynamics · Open
systems · Free energy · Enthalpy · Gibbs potential · Stability of equilibrium
states · Isothermal compressibility
4.1 Introduction
In the previous chapter, we defined the first three Principles of Thermodynamics and,
in particular, we defined two fundamental quantities such as energy U and entropy
S. Their fundamental property is that they are both state functions (the former is a
conserved state function while the latter is a non conserved one) and we have seen, in
some examples, the relevance of this. In order to construct a solid formal framework,
it is necessary, at this point, to go deeper into the concepts of “equilibrium state”
and of “degrees of freedom” for a macroscopic system. Until now we could manage
with a primitive definition of an equilibrium state as the stationary macroscopic configuration which is spontaneously reached in an isolated system. Every macroscopic
© Springer Nature Switzerland AG 2019
A. Saggion et al., Thermodynamics, UNITEXT for Physics,
quantity that the observer determines experimentally is defined for equilibrium states
and then the set of all macroscopic properties was assumed as the formal definition
of the state of a macroscopic configuration. We deal with a set of numbers, some
referring to extensive quantities and some to intensive quantities as, for example,
volume, temperature, mass, pressure, energy, entropy, coefficient of viscosity, electrical conductivity, magnetic permeability, chemical composition, and many others.
In general, the state is composed of several tens of numbers depending on the system
we are considering, but we know, partly from experimental observations and partly
owing to the theoretical context we are using, that many state parameters are linked
among themselves by quantitative relations. It follows that a much lower number of
parameters is sufficient to define the state, since the other quantities are deducible by
means of well-known relations. The reduction of the number of parameters necessary
to identify the state could appear, at first glance, merely dictated by reasons of
practical convenience but, on the contrary, it is a matter of fundamental interest. It is
necessary to establish a criterion for determining:
1. The number of degrees of freedom of a system, that is, how many state
parameters are necessary and sufficient to uniquely determine the thermodynamic
state.
2. What is one possible choice of independent parameters. Once we have found one
possible choice, this will not be unique, but the identification of at least one of
them is necessary and the mutual independence among these state parameters
must be guaranteed. Obviously, it will not be enough that we do not know of the
existence of mutual dependencies.
All this must be possible regardless of the fact that our knowledge of the different
properties of the system and of the relationships between them is complete or not.
This result is guaranteed by the so-called Equilibrium State Postulate or, more briefly,
State Postulate [6]. We start limiting ourselves to consider closed systems without
chemical reactions. The general formulation will be given in Sect. 4.3.1.
4.2 The Equilibrium State Postulate for Closed Systems
with No Chemical Reactions
We consider an equilibrium state A and an infinitesimal transformation. From the
First Principle, we have
dU = d̂W + d̂ Q .
It is necessary to carry out some considerations about the term d̂W which represents
the amount of work that we do on the system (that is the term that we control). The
definition comes from Mechanics where we have seen that, for a small displacement
of the point where a force F is applied, the amount of work done is defined by
d̂ W = F · ds .
From this, it is immediate to define the amount of work done in a small deformation of
an extended body and, with subsequent generalizations, we write the expression for
compressible fluids. In the latter case, under the conditions required by hydrostatics
and fluid dynamics, we can write the expression for the work done in the form
d̂W = − p dV ,
where p is the value of the pressure of the fluid and V its volume. For the moment, as
we consider a fluid in a state of equilibrium, the value of the pressure of the fluid will
coincide with the value of the equivalent pressure1 exerted from the outside world.
Let us consider other contexts in which we can determine the amount of work we do
on the system for small changes of state.
We have previously seen that the amount of work done in order to vary the intensity
of the electric field within the plates of a capacitor is given by
d̂W = ψ dq ,
where ψ is the potential difference between the plates and q is the free charge
deposited on the plates.
If we consider a surface layer (the topic will be discussed in a dedicated Chap. 9),
in order to vary, by an infinitesimal amount, the area of the surface we will have to
do the work
d̂W = σ dA ,

where σ is the surface tension and A the area of the surface. In all the examples that
we can cite, a common feature emerges: the amount of work we have to do, by
means of different modes of action, in order to produce a small change of state in the
system is always given by the product of the small change of an extensive quantity
times the value of an associated intensive quantity. In general, the amount of work
we (external world) have to do on the system will be written in the form
d̂W = Σ_{i=1}^{k} xi dEi ,
where Ei is the ith extensive quantity on which the observer operates creating interactions and xi is the associated intensive quantity. Indeed, as we shall demonstrate
in Chap. 14 where irreversible processes will be extensively treated, it is the difference of these intensive quantities between systems not in mutual equilibrium, that
will play the role of generalized forces. In this context, the extensive quantities Ei
will be called “quasi-static work parameters” and they represent the k independent
1 With the term “equivalent pressure”, we mean to cover those situations in which, in addition to the
external pressure in the strict sense (for example, exerted by the atmosphere on a movable piston)
other forces applied to the mobile piston act such as a weight determined on a horizontal piston of
the given area.
modes the observer adopts to interact with the system in a controlled manner and
produce changes on it. As we know there are other ways of interaction, not controlled
by the observer. All of them are grouped in one general term designated by d̂ Q and
then for any small change in a closed system we have
dU = d̂Q + Σ_{i=1}^{k} xi dEi .
We know that, for quasi-static transformations, the d̂ Q term can be expressed as the
product of an intensive quantity times the infinitesimal variation of an appropriate
extensive quantity. In this case, we write
d̂ Q = T dS .
We can conclude that for quasi-static transformations or for small transformations
starting from a state of equilibrium, we can write
dU = T dS + Σ_{i=1}^{k} xi dEi    (4.9)

or the equivalent form

dS = (1/T) dU − (1/T) Σ_{i=1}^{k} xi dEi .    (4.10)
These arguments suggest the following statement.
Statement of the Equilibrium State Postulate
Observable equilibrium states are determined by all interactions of the system with
the outside world. If the interactions with the outside world take place by means of k
independent work parameters Ei with i = 1, 2..., k, the equilibrium states (stable or
metastable) will be uniquely determined by the k quasi-static work parameters and
the energy U . As a consequence, the number of degrees of freedom is (k + 1).2
4.2.1 Simple Systems
We call simple system a system in which the interaction with the external world
occurs only via one single work parameter.
2 For a wider discussion on this topic, see [6].
The most frequent case for us is the case of a compressible fluid in which the only
form of work can be written as d̂W = − p dV . In this case, Eq. (4.9) becomes

dU = T dS − p dV .    (4.11)

Equivalently, we can express the small entropy change that follows a small change
of volume and energy, in a quasi-static process:

dS = (1/T) dU + (p/T) dV .    (4.12)
This shows that a simple system is provided with two degrees of freedom; in other
words, given the values of the volume and of the energy, the entropy and all the other
properties are uniquely determined.
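To make this concrete, here is a small numerical illustration (an added example, not the book's) based on the fundamental relation of blackbody radiation, U(S, V) = (3/4)^(4/3) b^(−1/3) S^(4/3) V^(−1/3), with b the radiation constant. From dU = T dS − p dV, numerical partial derivatives of U(S, V) yield T and p, and every other property follows, e.g. the radiation pressure p = U/(3V).

```python
# Illustrative check with the photon-gas fundamental relation (not from the book).
b = 7.5657e-16  # radiation constant, J m^-3 K^-4

def U(S, V):
    # Fundamental relation of blackbody radiation:
    # U = (3/4)^(4/3) * b^(-1/3) * S^(4/3) * V^(-1/3)
    return (3.0 / 4.0) ** (4.0 / 3.0) * b ** (-1.0 / 3.0) * S ** (4.0 / 3.0) * V ** (-1.0 / 3.0)

def partial(f, x, y, h, which):
    # Central-difference partial derivative of f(x, y) in its first (0) or second (1) argument
    if which == 0:
        return (f(x + h, y) - f(x - h, y)) / (2 * h)
    return (f(x, y + h) - f(x, y - h)) / (2 * h)

S, V = 0.01, 1.0  # an arbitrary equilibrium state (SI units)

T = partial(U, S, V, 1e-8, 0)    # T = (dU/dS)_V
p = -partial(U, S, V, 1e-6, 1)   # p = -(dU/dV)_S
u = U(S, V)

print(f"T = {T:.1f} K,  p = {p:.4e} Pa")
print(f"p V = {p * V:.4e},  U/3 = {u / 3:.4e}")              # radiation pressure p = U/(3V)
print(f"T S - p V = {T * S - p * V:.4e},  U = {u:.4e}")
```

Since the photon gas has no independent particle-number variable, it also satisfies the integral form U = TS − pV obtained below from Euler's theorem, which the last line verifies numerically.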
4.3 The Fundamental Relation
Relations given by Eqs. (4.9) and (4.10) are expressed in terms of small variations of
extensive quantities. Now, we transform this relation among small quantities into a
differential equation. Formally, this is equivalent to saying that for every macroscopic
system there is a function, called the Fundamental Relation for that system and that
we will write in the form
U = U (S, Ei )    (4.13)

or equivalently

S = S(U, Ei ) ,    (4.14)

whose differential forms are, respectively:

dU = T dS + Σ_{i=1}^{k} xi dEi    (4.15)

and

dS = (1/T) dU − (1/T) Σ_{i=1}^{k} xi dEi .    (4.16)
Thanks to the State Postulate we know that the system has (k + 1) degrees of freedom
and that one possible choice of independent parameters is provided by the k work
parameters Ei plus the energy U . If we write the Fundamental Equation3 in the form
Eqs. (4.13) and (4.15), we will say that we are in the energy representation, whereas
if we write it in the form Eqs. (4.14) and (4.16) we will say that we have adopted the
3 This denomination is due to Gibbs and, as we shall see, it is absolutely appropriate.
entropy representation. All information on the properties of the system under our
observation is contained, as we shall see, in its Fundamental Relation.
The State Postulate and its formalizations Eq. (4.15) or Eq. (4.16) lead us to state
the fundamental thermodynamic point of view:
• The state of the system is determined by a set of extensive quantities.
• The number of necessary and sufficient extensive quantities (i.e. degrees of freedom of the system) is set by the Equilibrium State Postulate and is equal to the
number k of independent work parameters, plus one. The mutual independence
of the work parameters is due to the attitude of the observer; in other words, it is
established by the choice, made by the experimenter, of how to operate.
• The outside world induces changes in the state of the system by transferring to it
extensive quantities. This is a general criterion for characterizing the interactions.
• These interactions can take place in “controlled” modes, that is, within known
theoretical contexts. These modes are realized operating with (i.e. transferring
from the outside to the system) the extensive parameters Ei .
• These interactions also occur through modes different from those characterized
in the previous point and which are not controlled by the observer. They are all
included in the transfer of entropy S. Now we understand that this is consistent with
our definition of the transferred amount of heat as the mode of interaction that
includes all those that are not covered by the predefined theoretical contexts.
• In the light of what we have seen in this subsection, we see that it would be more
correct to speak of entropy transfer instead of heat transfer because entropy is
possessed by the systems and can, therefore, be exchanged (in contrast to heat). In
spite of this, there remains in the language of physics the heat transfer expression
born with calorimetry at a time when the terms “heat” and “energy” indicated
substantially different quantities. This expression is still so deeply rooted both in
the natural and in the specific (scientific) languages, that we will continue to use
it solely for reasons of functionality.
Among the extensive quantities that systems exchange during interactions (i.e. Ei
and S), some are conserved quantities and some are not.
In addition, the requirement that the state parameters be extensive quantities
implies that the Fundamental Relation is a homogeneous function of degree one:
U(λS, λEi) = λ U(S, Ei) ,   (4.17)
where λ is an arbitrary, real positive number. By using Euler’s theorem (see
Appendix A.3) for homogeneous functions of first degree, it is possible to rewrite
the Fundamental Relation in the integral form

U(S, Ei) = (∂U/∂S)_{Ei} S + Σ_{i=1}^{k} (∂U/∂Ei)_{S, E_{j≠i}} Ei .   (4.18)
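Both the homogeneity property and Euler’s integral form lend themselves to a quick numerical check. The sketch below uses a hypothetical first-degree homogeneous function U = S²/V (a toy model, chosen only for its homogeneity, not a physical equation of state) and central finite differences:

```python
# Numerical check of first-degree homogeneity and of Euler's integral form
# on a toy Fundamental Relation U(S, V) = S**2 / V (hypothetical model,
# chosen only because U(l*S, l*V) = l * U(S, V)).

def U(S, V):
    return S**2 / V

def dUdS(S, V, h=1e-6):          # central difference; plays the role of T
    return (U(S + h, V) - U(S - h, V)) / (2 * h)

def dUdV(S, V, h=1e-6):          # central difference; plays the role of -p
    return (U(S, V + h) - U(S, V - h)) / (2 * h)

S, V = 2.0, 3.0

# homogeneity: U(l*S, l*V) = l * U(S, V)
assert abs(U(5 * S, 5 * V) - 5 * U(S, V)) < 1e-9

# Euler's integral form: U = (dU/dS) * S + (dU/dV) * V
assert abs(dUdS(S, V) * S + dUdV(S, V) * V - U(S, V)) < 1e-6
```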
By considering Eqs. (4.13) and (4.11), then Eq. (4.18) becomes
4.3 The Fundamental Relation
U(S, V) = T S − pV ,   (4.19)

where

T = (∂U/∂S)_V ,   (4.20)
p = −(∂U/∂V)_S .   (4.21)
The relation given in Eq. (4.20) represents the definition of temperature, which
has been extensively detailed from Sect. 3.5 to Sect. 3.10. Conversely, Eq. (4.21)
needs a proper comment. The term p was introduced in Mechanics as the
expression of “force per unit area”. In Eq. (4.21), p stands for a partial derivative
but will still be called “pressure”. As we did for temperature, it will be necessary
to justify this denomination, and this will be done in Sect. 4.3.3. These are the most
general definitions of the variables known under the names temperature and pressure, respectively, and of course their definition will be the same also in the more
general case in which the work parameters are in any number, provided they are kept mutually independent.
4.3.1 The General Case for Open Systems with Variable Composition: The Chemical Potential
The principles of Thermodynamics have been formulated for closed systems in the
absence of chemical reactions. We recall that the definition of amount of heat would
be meaningless for an open system, but within this limitation the experimental observations have provided abundant direct indications that led the observer
to the formulation discussed up to this moment. In the case of open systems, or otherwise of variable composition, the observer will make generalizations,
and only afterwards will empirical observations show whether there are contradictions.
Let us suppose that there are m components and that k of them, with k ≤ m, are
independent. For example, if there are no chemical reactions then k = m, while
if there are p chemical reactions then k = m − p. In general, the choice of the
k components to be taken as independent is not unique. In order to
describe the quantity of matter for each component we could use the mass m_γ or the
mole number n_γ. In this treatise we will use the latter.⁴
From now on, for simplicity of writing, we shall limit ourselves to the case in which
there is only one work parameter and that this is the volume V . The Fundamental
Relation for each system will now be written, in its most general form in the energy
representation, as
⁴ In many texts designed for applications in engineering, however, it is useful to take the mass as a
state variable.
U = U(S, V, n₁, n₂, …, n_k)   (4.22)

and the Fundamental Equation will be written in the form

dU = T dS − p dV + Σ_{γ=1}^{k} μ_γ dn_γ ,   (4.23)
with

T = (∂U/∂S)_{V, n_γ} ,   (4.24)
p = −(∂U/∂V)_{S, n_γ} ,   (4.25)

as in Eqs. (4.20) and (4.21) for closed systems without chemical reactions. The new
term that is now being introduced is
μ_γ = (∂U/∂n_γ)_{S, V, n_{γ′≠γ}} ,   (4.26)
and it is called the chemical potential of component γ. This is an intensive state
function and, as we shall see in Chaps. 7 and 14, it plays a very important role in regulating
both phase transitions and chemical reactions.
It must be pointed out that Eq. (4.26) is a definition of the chemical potential which is
unquestionable from the formal point of view but poor from the physical
(i.e. operational) point of view. Indeed, if we had to measure such a thermodynamic
potential following its definition, we should measure the ratio δU/δn_γ keeping
constant the volume and the abundance of the other components, and this is, in
principle, easy; but we should do the same also for the entropy, which is an extensive
quantity like n_γ, and this is, in principle, quite complicated.
We will see that it will be more appropriate to adopt different definitions for μ_γ that
do not suffer from this difficulty, by referring to thermodynamic potentials different
from the energy (the Free Energy or the Gibbs potential). In the more general case we will
have, for the Fundamental Relation in the energy representation, the expression
U = U (S, Ei , n γ ) where Ei are the work parameters as previously defined. Some
authors prefer to use the same notation for the work parameters Ei , and the mole
numbers n_γ (in any case they are all extensive quantities). In that case the terms of the
type μ_γ dn_γ are called “chemical work”, to make the naming uniform with the terms of the
type x_i dE_i. In this way the chemical potentials acquire the function of “generalized
forces” (as the xi ) and this is correct as we will see in the study of phase equilibria
and chemical reactions.
4.3.2 Other Thermodynamic Potentials
The thermodynamic point of view outlined in this section, according to which
the interaction of the observer with the systems (and thus also of the systems with
each other) can be viewed as transfers of extensive quantities, is conceptually correct and elegant, and provides the fundamental point of view for the study of irreversible
processes. It will be precisely the variations of extensive quantities that will be interpreted
as generalized flows, that will give the name to the individual processes, and among
which we will study the modes of mutual interference. However, in the study of
the thermodynamics of equilibrium states it is appropriate to define new extensive
quantities such that they express explicitly the constraints imposed by the observer in
the laboratory. For instance, we can easily control the temperature or the pressure
of a system,⁵ but we will have the need for new representations of the Fundamental Equation, which explicitly take into account the constraints on the temperature
and/or pressure.
One possibility could be to express the Fundamental Relation, both in the energy
and in the entropy representations, as a function of temperature and pressure but,
in order to do that, we would have to express, for instance, the volume as a function of
temperature and pressure, i.e. we would have to introduce the equation of state. In other
words, we would have to introduce information on the particular system we are dealing with
and so, while Eq. (4.22) is a relation of absolute generality, its expression as a function
of a different choice of independent variables would be of only particular validity.
From this comes the opportunity to introduce new thermodynamic potentials
which have the same power as the Fundamental Relation but which are expressed
in a natural way as functions of the state variables T and p. Potentials that serve
the purpose are
F = U − T S ,   (4.27)
H = U + pV ,   (4.28)
G = U + pV − T S ,   (4.29)
where F is the Free Energy or Helmholtz Potential, H is the Enthalpy or Heat Function,
and G is the Gibbs Potential or Free Enthalpy. Their utility is clear if we write their
changes in an infinitesimal process:
dF = −S dT − p dV + Σ_{γ=1}^{k} μ_γ dn_γ ,   (4.30)

dH = T dS + V dp + Σ_{γ=1}^{k} μ_γ dn_γ ,   (4.31)
⁵ And to do this the observer should ensure an efficient flow of extensive quantities such as energy
or volume.
dG = −S dT + V dp + Σ_{γ=1}^{k} μ_γ dn_γ .   (4.32)
From their differential expressions we see that each of the three potentials is associated in a natural way with a suitable choice of the state variables, precisely:

F = F(T, V, n_γ) ,   (4.33)
H = H(S, p, n_γ) ,   (4.34)
G = G(T, p, n_γ) .   (4.35)
These associations, which we called “natural”, mean that the differential Eqs. (4.30)–
(4.32) are of general validity exactly as Eq. (4.22) and, consequently, Eqs. (4.33)–
(4.35) also have the same rank as the Fundamental Relation. They are the Fundamental
Relation of the system under observation, but in different representations. From the
previous relations we can write different, but equivalent, definitions of the chemical potential:
μ_γ = (∂F/∂n_γ)_{T, V, n_{γ′≠γ}} = (∂H/∂n_γ)_{S, p, n_{γ′≠γ}} = (∂G/∂n_γ)_{T, p, n_{γ′≠γ}} .   (4.36)
As we see, the definitions of the chemical potential in the Free Energy and in the
Gibbs potential representations do not suffer from the “defect” (a purely operational
one) of the original definition in the Energy representation. We
will see how the examination of the main properties of these new thermodynamic
potentials can permanently settle this aspect and, above all, allow an understanding
of the physical meaning of the chemical potential.
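The mechanics of the Legendre transforms above can be illustrated on the same kind of toy model used earlier. In the sketch below, starting from the hypothetical relation U(S, V) = S²/V (for which T = 2S/V and p = S²/V²), inverting S = TV/2 gives the Free Energy F(T, V) = −T²V/4, and the identities (∂F/∂T)_V = −S and (∂F/∂V)_T = −p implied by Eq. (4.30) are verified numerically:

```python
# Legendre transform on the hypothetical toy model U(S, V) = S**2 / V:
# there T = 2*S/V and p = S**2/V**2, so S = T*V/2 and the Free Energy
# F = U - T*S works out to F(T, V) = -T**2 * V / 4.  The differential
# dF = -S dT - p dV is then checked by central finite differences.

def F(T, V):
    return -T**2 * V / 4

def ddT(f, T, V, h=1e-6):
    return (f(T + h, V) - f(T - h, V)) / (2 * h)

def ddV(f, T, V, h=1e-6):
    return (f(T, V + h) - f(T, V - h)) / (2 * h)

T, V = 1.5, 3.0
S = T * V / 2            # entropy of the toy model at this state
p = T**2 / 4             # pressure of the toy model at this state

assert abs(ddT(F, T, V) + S) < 1e-6    # (dF/dT)_V = -S
assert abs(ddV(F, T, V) + p) < 1e-6    # (dF/dV)_T = -p
```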
4.3.3 The Free Energy and Isothermal Processes in Closed Systems

From the Second Principle
d̂ Q = T dS − T d̂ i S
and for an isothermal transformation:
d̂ Q = d(T S) − T d̂ i S .
Therefore, in the case of isothermal transformations the Second Principle establishes
that, for all natural isothermal transformations:
d̂ Q < d(T S) .
We go back to the First Principle for closed systems and without chemical reactions
and for an infinitesimal transformation:
dU = d̂ W + d̂ Q
we get, for natural isothermal transformations:
dU < d̂W + d(T S)
and then
dF < d̂W (natural isotherms) ,   (4.44)
dF > d̂W (unnatural isotherms) ,   (4.45)
dF = d̂W (quasi-static isotherms) .   (4.46)
The previous relations mean that if we operate on a system to make it perform an
isothermal transformation, the amount of work that we have to spend is equal to the
change in free energy for quasi-static transformations, or larger. The free energy
change is, therefore, the minimum value of the amount of work required. If d̂W = 0,
as may happen for a system maintained at constant volume, Eqs. (4.44)–(4.46) become
dF < 0 (natural processes) ,   (4.47)
dF > 0 (unnatural processes) ,   (4.48)
dF = 0 (quasi-static processes) .   (4.49)
We can say that Eqs. (4.44)–(4.46) represent the Second Principle limited to isothermal transformations.
Why Can p Be Called Pressure?
Consider a cylinder subdivided into two parts by a piston which can move without
friction. This cylinder is held at a constant temperature T and at a constant (total)
volume V . The piston divides the system into two compartments inside each of which
is contained a homogeneous fluid (phase). The quantities for each compartment will
be indicated with symbols α and β. We shall have: V α + V β = V and T α = T β = T .
We consider an infinitesimal process which consists in a displacement of the piston
4 The Fundamental Relation and the Thermodynamic Potentials
by an infinitesimal amount. For each compartment, we will have
dF^α = −p^α dV^α ,
dF^β = −p^β dV^β .
Since the volume of the whole cylinder has a constant value, it will be dV α = −dV β ,
and we have
dF = dF α + dF β = (− p α + p β ) dV α .
The hypothesized process will be a quasi-static process if dF = 0 and this means
that the overall system (α + β) will be in equilibrium with respect to that process.
This happens if
p^α = p^β .   (4.53)
The hypothesized process will be a natural one if

(−p^α + p^β) dV^α < 0   (4.54)

and this leads to the two possible conditions:

either dV^α > 0 and p^α > p^β ,   (4.55)
or dV^α < 0 and p^α < p^β .   (4.56)
The condition given by Eq. (4.53) is called hydrostatic equilibrium. The two cases
described by Eqs. (4.55) and (4.56) are summed up in the rule: the compartment at
higher pressure increases in volume. This seems obvious if we limit ourselves to the
mechanical definition of pressure, and this proves that the intensive quantity defined
in Eq. (4.25) deserves the name “pressure”.
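The relaxation toward hydrostatic equilibrium can also be sketched numerically. The fragment below (all numbers hypothetical, with ideal-gas pressures and an arbitrary relaxation rate) lets the piston move in the direction dictated by the pressure difference until p^α = p^β:

```python
# Relaxation of a frictionless piston between two ideal-gas compartments
# kept at constant temperature and constant total volume.  The gas model,
# mole numbers and relaxation rate are all hypothetical, for illustration.
R, T = 8.314, 300.0
n_a, n_b = 1.0, 2.0          # moles of gas in compartments alpha and beta
V_tot = 1.0                  # total volume, held fixed
V_a = 0.5                    # initial volume of compartment alpha

def pressures(V_a):
    p_a = n_a * R * T / V_a
    p_b = n_b * R * T / (V_tot - V_a)
    return p_a, p_b

for _ in range(1000):
    p_a, p_b = pressures(V_a)
    V_a += 1e-5 * (p_a - p_b)    # natural process: dV_a has the sign of p_a - p_b

p_a, p_b = pressures(V_a)
assert abs(p_a - p_b) < 1e-6                        # hydrostatic equilibrium
assert abs(V_a - V_tot * n_a / (n_a + n_b)) < 1e-6  # volumes split as n_a : n_b
```

The compartment at higher pressure grows at every step, exactly as the rule above states, until the pressure difference vanishes.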
Physical Meaning of the Chemical Potential
Consider two phases (homogeneous systems) denoted by the symbols α and β. The
two phases are kept at the same temperature T^α = T^β = constant, but they are not
necessarily at the same pressure, as would be the case, for example, if they were separated
by a membrane which allows the flow of some components while maintaining a
pressure difference. In addition we suppose that chemical reactions are absent and
that the two phases are in a state of internal equilibrium. They are not necessarily in
mutual equilibrium, and they can exchange matter between them and only
between them (the overall system is a closed system). Imagine that an infinitesimal
amount of matter migrates from phase α to phase β. We want to know under what
conditions this process is allowed (natural process) or forbidden (unnatural process).
Since this is an isothermal process we write the Free Energy change for the different
systems in play. For systems α and β we have respectively:
dF^α = −p^α dV^α + Σ_{γ=1}^{k} μ^α_γ dn^α_γ ,   (4.57)

dF^β = −p^β dV^β + Σ_{γ=1}^{k} μ^β_γ dn^β_γ .   (4.58)
This writing is justified because both systems undergo a quasi-static transformation,
as they are in a state of internal equilibrium. Now, consider the system consisting of
the union of the two systems α and β, and denote by F its Free Energy. We shall have

F = F^α + F^β ,   (4.59)

dF = −p^α dV^α + Σ_{γ=1}^{k} μ^α_γ dn^α_γ − p^β dV^β + Σ_{γ=1}^{k} μ^β_γ dn^β_γ .   (4.60)
For the overall system the total work done on it from the outside will be the sum of
the amount of work performed on the α system plus the amount of work done on the
β system. Furthermore, since the overall system is closed, we will have

n^α_γ + n^β_γ = const ,   (4.61)
dn^α_γ + dn^β_γ = 0 .   (4.62)
Then Eq. (4.60) becomes
dF = d̂W + Σ_{γ=1}^{k} (μ^α_γ − μ^β_γ) dn^α_γ .   (4.63)
If the two systems are in equilibrium with respect to the hypothetical transfer of
matter, it must be
dF = d̂W   (4.64)
and then
Σ_{γ=1}^{k} (μ^α_γ − μ^β_γ) dn^α_γ = 0 .   (4.65)
Since the sum is extended only to a set of independent components, and as the
equilibrium condition must hold for any virtual process, then for every γ it must be
μ^α_γ = μ^β_γ .   (4.66)
If the transformation is assumed to be a natural process
dF < d̂W ,   (4.67)

that is

Σ_{γ=1}^{k} (μ^α_γ − μ^β_γ) dn^α_γ < 0 ,   (4.68)

and this for every γ:

(μ^α_γ − μ^β_γ) dn^α_γ < 0 .   (4.69)
The expression given by Eq. (4.66) defines the physical meaning of the chemical
potential μ_γ of a specific component: it expresses the equilibrium condition with
respect to the transfer of matter between the two systems. As we will see in greater
detail below, the chemical potential rules all those processes that involve changes
in composition, for example chemical reactions. The chemical potential is
an intensive quantity, is defined point by point, and therefore depends on intensive
quantities. In general it will be written in the form
μ_γ = μ_γ(T, p, C₁, C₂, …, C_k) ,   (4.70)

where C_i = n_i/n (i = 1, 2, …, k) are the molar concentrations, n_i and n being the
number of moles of the ith component and the total number of moles in
the mixture, respectively. We point out that in general the chemical potential of a
specific component depends also on the presence of the other components. From Eq.
(4.69) we see that if for component γ the condition of equilibrium is not satisfied,
then we may have one of the two possibilities:
either μ^α_γ > μ^β_γ and dn^α_γ < 0 ,   (4.71)
or μ^α_γ < μ^β_γ and dn^α_γ > 0 .   (4.72)
In any case, the rule is that component γ migrates from the phase with the higher chemical potential towards the one with the lower chemical potential. We see now that the
chemical potential gradient acts, in some way, to “force” the process of mass transfer.
Something very similar happens, as we shall see in greater detail below, for chemical
reactions, and this justifies the choice of the name.
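The migration rule can be illustrated with a small relaxation sketch. The chemical potential model μ = RT ln(n/V) below is the ideal-solution form with the reference term set to zero, and the linear flow law (flow proportional to the chemical-potential difference) is an arbitrary illustrative choice:

```python
import math

# Matter flows from the phase with the higher chemical potential to the one
# with the lower chemical potential.  Illustrative ideal-solution model,
# mu = R*T*ln(n/V) (reference term set to zero); the flow law and all
# numbers are hypothetical.
R, T = 8.314, 300.0
V_a, V_b = 1.0, 2.0          # fixed volumes of phases alpha and beta
n_a, n_b = 0.9, 0.1          # initial mole numbers; the total is conserved

def mu(n, V):
    return R * T * math.log(n / V)

for _ in range(5000):
    dn = -1e-6 * (mu(n_a, V_a) - mu(n_b, V_b))   # flow opposes the mu difference
    n_a += dn
    n_b -= dn

# equilibrium: equal chemical potentials, i.e. equal concentrations n/V
assert abs(mu(n_a, V_a) - mu(n_b, V_b)) < 1e-6
assert abs(n_a / V_a - n_b / V_b) < 1e-9
```

At every step the mole number of the high-μ phase decreases, exactly as Eqs. (4.71) and (4.72) prescribe, until the two chemical potentials coincide.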
4.3.4 The Enthalpy and Isobaric Processes
The situation in which we observe transformations at constant pressure is frequent
in nature and easy to realize in the laboratory. The Fundamental Relation in the
Enthalpy representation for closed systems, with constant composition and for isobaric transformations reads
dH = T dS ,   (4.73)
so if we transfer a small amount of heat to a system that is initially in a state of
equilibrium, and we do this at a constant pressure, we will have:
d̂Q = dH .   (4.74)
This means that the amount of heat supplied to the system gives the measure of the
enthalpy change. For example, suppose we supply heat by means of an electrical resistance:
we can measure the current intensity I and the potential difference ψ across the
resistance. The amount of heat given off by the resistance in a time interval dt is easily
obtained by multiplying these three quantities, so we can write

ψ I dt = dH ,   (4.75)

and we see, then, that the measurement of the enthalpy change at constant pressure may
be relatively simple in some cases; this justifies the designation (little used
now) of heat function. If we have a closed system at constant composition, in a state
of equilibrium and consider an infinitesimal (quasi-static) transformation at constant
pressure, we observe, in correspondence to the supply of a small quantity of heat
δQ, a temperature variation which we will denote by δT. From Eq. (4.74) we have

δQ/δT = C_p = (∂H/∂T)_p .   (4.76)

The quantity C_p is called the heat capacity at constant pressure. It will be called the specific
heat at constant pressure or the molar heat at constant pressure if we refer to the unit
mass or to one mole, respectively.
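A worked example of the electrical-heating measurement just described (all numerical data are hypothetical):

```python
# Enthalpy change from electrical heating at constant pressure, using
# the relation psi * I * dt = dH above.  All numerical data hypothetical.
psi = 12.0     # potential difference across the resistance, in volts
I = 0.5        # current intensity, in amperes
t = 120.0      # duration of the heating, in seconds

dH = psi * I * t       # enthalpy change = heat supplied at constant p, joules
dT = 3.6               # measured temperature rise, in kelvin (hypothetical)
C_p = dH / dT          # heat capacity at constant pressure, in J/K

assert abs(dH - 720.0) < 1e-9     # 12 * 0.5 * 120
assert abs(C_p - 200.0) < 1e-9    # 720 / 3.6
```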
4.3.5 The Gibbs Potential and Isothermal and Isobaric Processes
Let us go back to the Fundamental Relation in the energy representation U =
U(S, V, n_γ). By Euler’s theorem on homogeneous functions of first degree (see
Appendix A.3), we can write

U = T S − pV + Σ_{γ=1}^{k} μ_γ n_γ ,   (4.77)

and therefore

G = U + pV − T S = Σ_{γ=1}^{k} μ_γ n_γ .   (4.78)
Recalling Eq. (4.32), for an infinitesimal transformation at constant p and T we may write

dG = Σ_{γ=1}^{k} μ_γ dn_γ .   (4.79)
While the chemical potentials are properties of the individual components, the Gibbs
potential is a property of the overall system, and hence we cannot speak of the chemical potential of the overall system. The measurement of G is reduced to the
measurement of the energy U and of the entropy S and, in principle, this is assured
by the two principles that define these quantities. For a chemically pure phase, i.e.
one composed of a single component, Eq. (4.78) simply becomes
G = μ n ,   (4.80)
G_m = μ ,   (4.81)
that is, the chemical potential coincides with the value of the Gibbs potential per
mole. It is useful, for further applications, to differentiate Eq. (4.81)
dμ = −S_m dT + V_m dp .   (4.82)
From (4.82) we obtain two important relations which give us a deeper insight into
the structure of the chemical potential for chemically pure substances or for ideal mixtures:

(∂μ/∂T)_p = −S_m ,   (4.83)
(∂μ/∂p)_T = V_m .   (4.84)
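One can check the relation (∂μ/∂p)_T = V_m numerically on the standard ideal-gas chemical potential μ(T, p) = μ₀(T) + RT ln(p/p₀); μ₀ is set to zero below since it drops out of the pressure derivative (the numerical values are illustrative):

```python
import math

# Check of (dmu/dp)_T = V_m on the ideal-gas chemical potential
# mu(T, p) = mu0(T) + R*T*ln(p/p0); mu0 omitted, since it does not
# depend on p.  All numerical values are illustrative.
R = 8.314
p0 = 1.0e5                     # reference pressure, Pa

def mu(T, p):
    return R * T * math.log(p / p0)

T, p = 300.0, 2.0e5
h = 1.0                        # finite-difference step in pressure, Pa

dmu_dp = (mu(T, p + h) - mu(T, p - h)) / (2 * h)
V_m = R * T / p                # ideal-gas molar volume, m^3/mol

assert abs(dmu_dp - V_m) / V_m < 1e-6
```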
The measurement of the Gibbs potential does not allow us to go back to the
values of the individual chemical potentials in a mixed system; however, the phase
equilibrium condition (4.66) gives us an indication of how to proceed, as shown e.g.
in Fig. 4.1.
We put the composed system in contact, through a membrane permeable only
to component γ , with a small cylinder. We know that, at equilibrium, the chemical
potential of the component γ in the composed system will be equal to that of the
substance contained in the small cylinder. The latter is a chemically pure phase and,
in principle, we could measure
G_m = U_m + pV_m − T S_m .   (4.85)
In practice we need only to measure the pressure of equilibrium inside the small
cylinder (the temperature will be the same).
Fig. 4.1 A way to measure the chemical potential of one component of a mixture, when we know
the Gibbs potential of that component as a pure substance. The mixture is kept at a fixed pressure
and temperature in cylinder A. The small cylinder B communicates with A through a membrane
permeable only to the component we are interested in, and a small sample of it is extracted. When
equilibrium with the mixture is established, the chemical potential in the two cylinders will be the
same. In B we have a pure substance, and by measuring the pressure of the sample in B we can find
the value of G_m for the pure substance and hence the chemical potential of the component in the
mixture
4.3.6 The Stability Problem in a Thermodynamical System
In the study of Mechanics we have already encountered the problem of the stability of
equilibrium states. In that context, after defining what is meant by an equilibrium state
through the equations of motion, we distinguish between stable, metastable and
unstable equilibrium situations. For the stable and metastable states, the requirement
is that there exists a neighborhood of the equilibrium configuration, of non-zero
radius, such that for any perturbation of the configuration that is maintained within
this neighborhood, the system returns towards the equilibrium configuration. Conversely, the state of equilibrium will be unstable if there is at least one type of
disturbance, of arbitrarily small amplitude, which causes the system to abandon the
equilibrium configuration.
In the thermodynamical context, the problem can be put in similar terms, from a
formal point of view, but the outcome of this analysis will be a little different.
In Mechanics we consider an equilibrium configuration and we apply the principle
of virtual work in order to test the stability condition. Similarly, in the thermodynamical theory we consider a hypothetical equilibrium state and we require that every
“small” transformation moving away from it should be an unnatural process.
The relevance of carrying out an exhaustive treatment of the stability conditions
of thermodynamical states of equilibrium is twofold. On one side, we know that some
equilibrium states can be foreseen by equations of state obtained from models;
it is then important to discriminate between stable, and therefore observable, states and
unstable, non-observable states. As an example consider the van der Waals equation
of state, which will be treated in Chap. 8.
On the other side, the stability conditions dictate the geometrical structure of the Fundamental Relation, and this allows predictions of absolute generality about the physical
properties of the system.
In this subsection, we shall be concerned with closed systems, in the absence of
chemical reactions and, for simplicity of writing, let’s consider phases (homogeneous
systems) with only one work parameter V .
Consider the system in an equilibrium state A. As we know from the State Postulate, the system has two degrees of freedom, and any equilibrium state then needs
two constraints to be determined. It is natural to consider as external constraints the
ones we normally encounter in our observations or in laboratory experiments, that
is, constraints on volume, pressure, temperature and on the adiabaticity of the walls.
Once two of these constraints are fixed, a corresponding thermodynamic configuration,
called state A and candidate to be a stable state, is determined by the set of values of the
extensive quantities and by their distribution point by point.
Suppose that this configuration suffers a “small” modification, respectful of the externally imposed constraints, due to some spontaneous fluctuation or to some external
disturbance. In all generality we can describe the onset of the perturbed configuration by assuming that it is determined by a small flux of some extensive quantity
between different regions within the system; let us call this new configuration,
i.e. the perturbed state, state A′.
If the transition from the perturbed state A′ to state A is a natural one, internal
processes will set in and bring the perturbed state back to the original one. If this
happens for every perturbed state within a neighborhood of state A, the latter will be
a stable or metastable equilibrium state.
Equivalently, we can say that the stability conditions require that the transition
from state A to the perturbed state A′ is an unnatural process for every state A′
within a non-null neighborhood of state A.
The classification of a virtual process as a natural or unnatural process
is always determined by the Second Principle and, ultimately, we can say in short by
the rule “d̂_i S > 0”; but, as we shall see, the formal expression of this rule differs
depending on the different constraints that we will consider.⁶
Of course, depending on the nature of the external constraints, it will be appropriate
to express the Fundamental Relation in the representation most suitable to
the constraints.
We shall proceed in this way: first we select a pair of external constraints and we
will find which pair of state variables must be kept constant in order to give a formal
expression to the constraints. Let us describe a variety of perturbed states A′ around
state A by means of a parameter ξ, which we will call the “internal degree of freedom”,
and define ξ so that the equilibrium state A corresponds to the value ξₑ = 0. With this
formalism the Fundamental Relation in the entropy representation may be written

S = S(U, V, ξ) .   (4.86)
⁶ The rule of the “entropy increase” is an expression often used in a very rough manner. It can
be considered correct strictly for isolated systems. In general, for systems in interaction with the
external world this rule takes different expressions, as we remarked in Sect. 4.3.2, all resulting from
the requirement d̂_i S > 0, which is the only fundamental condition.
When the Fundamental Relation is written in other representations, the perturbed
states will be described by a continuous internal degree of freedom ξ in a similar way.
4.3.7 Adiabatic Systems
We examine two possibilities for the second constraint: at constant volume or at
constant pressure.
Adiabatic Systems with Constant Volume
The mathematical expression of these two constraints is
dU = 0 ,
dV = 0 .
The requirement that all states in a neighborhood of the state A be not reachable
by a natural process is that for all of them the contribution due to internal processes
satisfies d̂_i S < 0 and then, as d̂_e S = 0, it will be dS < 0. State A must be a state of relative
maximum entropy:

S = S_max .
Adiabatic Systems at Constant Pressure
Let’s find the mathematical expression for these two constraints. We may write the
infinitesimal work as
d̂W = − pdV = −d( pV )
and then
d̂ Q = dU − d̂W = dU + d( pV ) .
Therefore, the formal expression will be
dH = 0 ,
dp = 0,
and, as in the previous case, the stability will be ensured if the state A will be a point
of relative maximum for the entropy:
S = S max .
4.3.8 Systems at Constant Temperature
As we have demonstrated, Eqs. (4.44)–(4.46) are equivalent to the Second Principle
restricted to the case of isothermal transformations. From these, we will derive the
stability conditions after we have also specified the second constraint.
Systems at Constant Temperature and Volume
Given the constraints dT = 0 and dV = 0, for any infinitesimal transformation it
will be d̂W = 0. Therefore, the requirement that all small transformations from state
A to the perturbed state A′ be unnatural transformations will be ensured by
the condition (see Eqs. (4.47)–(4.49))

δF > 0 (unnatural transformations) .
Alternatively, the condition of stability requires that the state A be a point of relative
minimum for the free energy:

dT = 0, dV = 0, F = F_min : stable equilibrium at constant T, V .
Systems at Constant Temperature and Pressure
If the external pressure is maintained at the constant value p, the infinitesimal amount
of work done by the external world on the system will be

d̂W = −p dV = −d(pV) .
In this case Eqs. (4.44)–(4.46) will be written as

dF < −d(pV) ,
dF > −d(pV) ,
dF = −d(pV) ,

or, alternatively:

d(F + pV) < 0 (natural transformations) ,
d(F + pV) > 0 (unnatural transformations) ,
d(F + pV) = 0 (quasi-static transformations) .
Then, the requirement that all infinitesimal transformations that move from the state
of equilibrium A be unnatural implies that it must always be dG > 0. The expression
of the condition of stability, in this case, is

dT = 0, dp = 0, G = G_min : stable equilibrium at constant T, p .
Therefore, G must exhibit a relative minimum at the stable equilibrium state.
4.3.9 Systems at Constant Entropy
Previously we saw that the stability condition for adiabatic systems is ensured by
points of relative maximum for the entropy. In this subsection we want to complete the
scheme of possible choices of external constraints by imposing the condition
that the system is constrained to have constant entropy. From the experimental point of
view this is certainly an abstract issue because, while we
can easily imagine building tools to regulate the volume, pressure, temperature or
energy, we do not have instruments that can be considered good entropy regulators.
In spite of this, it is certainly interesting to complete the scheme of possible pairs of
parameters to keep constant (among those we chose at the beginning of this
section as the most significant). In fact, these two possible new choices will convert
a point of maximum entropy into a point of minimum energy or enthalpy.
To obtain this result we consider Eq. (4.86) and we apply the identity Eq. (A.8);
keeping the volume constant, we have

(∂S/∂U)_{V,ξ} (∂U/∂ξ)_{V,S} (∂ξ/∂S)_{V,U} = −1 ,

which, by considering Eq. (4.24), may be written as

(∂U/∂ξ)_{V,S} = −T (∂S/∂ξ)_{V,U} .

This relation shows that at the points where the member on the right, (∂S/∂ξ)_{V,U},
vanishes, the member on the left, (∂U/∂ξ)_{V,S}, is also null, and that the two derivatives
always have opposite signs. Recalling Sect. 4.3.7, this implies that, if the entropy
is maximum at the point ξ = 0 at constant U, then at the same point, at constant S, the energy is minimum:
dS = 0, dV = 0, U = U_min : stable equilibrium at constant S, V ;
that is, if one fixes the volume and the entropy, then the stability is ensured by a
point of minimum energy. In this way we recover the criterion most familiar
from Mechanics. To complete the picture, we fix the values of entropy and pressure. The Fundamental Relation written in the natural variables entropy and pressure
is the Enthalpy, and the stable state will be characterized by a minimum of this
thermodynamic potential:
dS = 0, dp = 0, H = H_min : stable equilibrium at constant S, p .
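The conversion of an entropy maximum into an energy minimum rests on the cyclic identity between the three partial derivatives; it can be checked numerically on a hypothetical entropy surface S(U, ξ) = ln U − ξ² (a toy model, with V fixed and kept implicit):

```python
import math

# Numerical check of the cyclic identity
#   (dS/dU)_xi * (dU/dxi)_S * (dxi/dS)_U = -1
# on a hypothetical entropy surface S(U, xi) = ln(U) - xi**2 (V fixed).

def S_of(U, xi):
    return math.log(U) - xi**2

def U_of(S, xi):                  # invert S(U, xi) at fixed xi
    return math.exp(S + xi**2)

def xi_of(S, U):                  # invert S(U, xi) at fixed U (xi > 0 branch)
    return math.sqrt(math.log(U) - S)

U0, xi0 = 2.0, 0.7                # a state away from the extremum xi = 0
S0 = S_of(U0, xi0)
h = 1e-6

dS_dU = (S_of(U0 + h, xi0) - S_of(U0 - h, xi0)) / (2 * h)
dU_dxi = (U_of(S0, xi0 + h) - U_of(S0, xi0 - h)) / (2 * h)
dxi_dS = (xi_of(S0 + h, U0) - xi_of(S0 - h, U0)) / (2 * h)

assert abs(dS_dU * dU_dxi * dxi_dS + 1.0) < 1e-6
```

On this surface S is maximum at ξ = 0 at fixed U, while U = exp(S + ξ²) is minimum at ξ = 0 at fixed S, in agreement with the argument above.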
All of these relations show the geometric properties of the Fundamental Relation. These
properties involve the existence of symmetries, which may be investigated experimentally.
Let us consider some important consequences deriving from the stability requirements of equilibrium states.
4.3.10 The Isothermal Compressibility
Consider a phase maintained at constant temperature and total volume V. Its Fundamental
Relation will be properly written in the Free Energy representation

F = F(T, V) .

We consider this system as composed of two parts I and II with equal volumes
V^I = V^II = V/2. The Free Energy of each part will be

F^I(T, V/2) = F^II(T, V/2) = (1/2) F(T, V) .
Now imagine that an internal disturbance occurs in such a way that the volume of
one of the two halves, say part I, increases by a small amount δV at
the expense of the volume of the second part. As the total volume is constrained to a
constant value, the volume of the second part will decrease by the same amount; we
could say that a small amount of volume is transferred from the second half to the
first. The Free Energies of the two parts will vary accordingly:
F^I(T, V/2 + δV) = F^I(T, V/2) + (∂F^I/∂V)_T δV + (1/2)(∂²F^I/∂V²)_T (δV)² + ⋯ ,

F^II(T, V/2 − δV) = F^II(T, V/2) − (∂F^II/∂V)_T δV + (1/2)(∂²F^II/∂V²)_T (δV)² + ⋯ .

In the perturbed state the total Free Energy will have the value

F′ = F^I(T, V/2 + δV) + F^II(T, V/2 − δV) .
If the initial state V^I = V^II = V/2 was a state of stable or metastable equilibrium, the Free Energy of the perturbed state will be increased:

$$F\!\left(T, \frac{V}{2} + \delta V\right) + F\!\left(T, \frac{V}{2} - \delta V\right) - F(T, V) > 0,$$

that is,

$$\left(\frac{\partial^2 F}{\partial V^2}\right)_T > 0,$$

and, recalling (4.21), we will have

$$\left(\frac{\partial p}{\partial V}\right)_T < 0. \qquad (4.116)$$

We define the Coefficient of Isothermal Compressibility χT as the quantity

$$\chi_T = -\frac{1}{V}\left(\frac{\partial V}{\partial p}\right)_T.$$
This coefficient measures the relative change in volume caused by a unit pressure change when the change is made at constant temperature.
By virtue of inequality Eq. (4.116), which as we have seen is absolutely general,
the coefficient of isothermal compressibility is always positive for all states of stable
or metastable equilibrium. If a theoretical model gives negative values for χT for
some equilibrium states those states cannot be observable (see the case of the van
der Waals model in Chap. 8). We define χ S Coefficient of adiabatic compressibility
1 ∂V
χS = −
∂p S
As can be seen, this coefficient measures the relative volume change per unit pressure change when the compression is performed adiabatically. As in the case of the coefficient of isothermal compressibility, the inequality

$$\left(\frac{\partial p}{\partial V}\right)_S < 0$$

shows that the coefficient χS will always be a positive number for all stable or
metastable equilibrium states.
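The requirement χT > 0 can be probed numerically. The sketch below (ours, not from the text) computes χT by a centered finite difference for a van der Waals fluid — the model invoked above in connection with Chap. 8 — using illustrative constants; below the critical temperature a range of volumes indeed gives χT < 0, and those states are unobservable:

```python
# Numerical probe of the stability condition chi_T > 0.  For a van der Waals
# fluid below its critical temperature there is a volume range where
# (dp/dV)_T > 0, i.e. chi_T < 0: those equilibrium states are unobservable.
# Constants a, b are illustrative (roughly CO2-like); n = 1 mol.
R = 8.314          # J mol^-1 K^-1
a, b, n = 0.36, 4.3e-5, 1.0

def p_vdw(V, T):
    """van der Waals pressure for n moles in volume V (m^3) at temperature T (K)."""
    return n * R * T / (V - n * b) - a * n**2 / V**2

def chi_T(V, T, dV=1e-9):
    """Isothermal compressibility from a centered finite difference of p(V)."""
    dp_dV = (p_vdw(V + dV, T) - p_vdw(V - dV, T)) / (2 * dV)
    return -1.0 / (V * dp_dV)

Tc = 8 * a / (27 * R * b)   # critical temperature of this fluid, ~298 K here
# Above Tc the isotherms are monotonic: chi_T > 0 at every sampled volume.
assert all(chi_T(V, 1.2 * Tc) > 0 for V in [1e-4, 2e-4, 5e-4, 1e-3])
# Below Tc a range of volumes has chi_T < 0: unstable, hence unobservable.
assert any(chi_T(V, 0.85 * Tc) < 0 for V in [1e-4, 1.5e-4, 2e-4, 3e-4])
```

For an ideal gas the same finite difference would give χT = 1/p at every state, which is always positive.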
4.3.11 The Dependence of Entropy on Temperature
Another important consequence that derives from the application of the stability conditions of equilibrium states for closed phases with no chemical reactions, concerns
the way in which the entropy changes as the temperature varies. Technically the
operations to be performed are similar to those illustrated in the previous case but
here we will change the constraints imposed from outside. As in the previous case, we divide the system “halfway” with respect to some extensive quantity and we imagine a disturbance in which a small (infinitesimal) amount of that extensive quantity is transferred between the two parts.
In this case, we consider a phase with total entropy and volume fixed, at the
values S and V , in the equilibrium state A. We know that if the state A is a stable or
metastable state, the energy should have, locally, a minimum value. Imagine dividing the system into two parts of volume V/2. In the unperturbed state A, all the extensive quantities (energy and entropy) will take 1/2 of their values for the entire system. Now imagine a small transformation in which one part transfers to the other a small amount of entropy (for example, suppose we realize the transfer of a small amount of heat between the two parts).
The entropy of the first half will pass from the value S/2 to the value (S/2 + δS) while that of the second part will go to the value (S/2 − δS), thus keeping constant
the total entropy.
This virtual process will change the total energy value (we use the same notation
used in the previous case):
$$U^{\mathrm{I}}\!\left(\frac{V}{2}, \frac{S}{2} + \delta S\right) = U^{\mathrm{I}}\!\left(\frac{V}{2}, \frac{S}{2}\right) + \left(\frac{\partial U}{\partial S}\right)_V \delta S + \frac{1}{2}\left(\frac{\partial^2 U}{\partial S^2}\right)_V (\delta S)^2 + \cdots,$$

$$U^{\mathrm{II}}\!\left(\frac{V}{2}, \frac{S}{2} - \delta S\right) = U^{\mathrm{II}}\!\left(\frac{V}{2}, \frac{S}{2}\right) - \left(\frac{\partial U}{\partial S}\right)_V \delta S + \frac{1}{2}\left(\frac{\partial^2 U}{\partial S^2}\right)_V (\delta S)^2 + \cdots.$$
The sum of these two quantities will give us the new value of the total energy and this
will have to be higher than U (V, S) i.e. the one corresponding to the initial stable
equilibrium state. This leads to the condition

$$\left(\frac{\partial^2 U}{\partial S^2}\right)_V > 0.$$

Recalling Eq. (4.20), we get

$$\left(\frac{\partial T}{\partial S}\right)_V > 0.$$
equilibrium states. This condition is very general and follows from the stability properties of the
equilibrium states. The importance of this result is immediately visible because, as
we shall see later, this derivative is immediately connected to the heat capacity at
constant volume.
4.3.12 Other Consequences from the Stability Conditions
We can follow the same path taken in the two previous cases, by varying the type of constraints and the extensive quantity that is transferred between the two parts into which we imagine dividing the system. For example, if the constraint is on pressure, we may refer to the Fundamental Relation in the Enthalpy representation and imagine a small transfer of entropy between one half and the other. It is easy
to verify that the condition that in the initial state the Enthalpy has a minimum value
leads to the result
$$\left(\frac{\partial^2 H}{\partial S^2}\right)_p > 0,$$

which implies

$$\left(\frac{\partial T}{\partial S}\right)_p > 0.$$
Similarly to the previous case, this partial derivative is closely linked to the heat capacity at constant pressure C_p. This condition is not, however, independent of the preceding one because, as will be proved later, it will always be
C p > CV .
By analogy with the previous examples, it is easy to prove that

$$\left(\frac{\partial p}{\partial V}\right)_S < 0.$$
Chapter 5
Maxwell Relations
Abstract All information we have concerning a thermodynamical system, including
the equation of state, is contained in its Fundamental Relation. This Relation depicts,
in the space of thermodynamical configurations, a hypersurface which is formed by
all the equilibrium states of the system. The fact that in any transformation the initial
and the final points (at least) must belong to this surface poses several correlations
among the variations of the state parameters of the system. Therefore the existence
of the Fundamental Relation allows us to predict the response of the system to any
external perturbation and this can be achieved by writing the differentials of the
Fundamental Relation in its various representations. We call Maxwell’s Relations
the relations we obtain by applying the Schwarz theorem to the partial derivatives of
the Fundamental Relation. As an example, the general relation between the specific
heats at constant volume and pressure is obtained together with the general relations
between isothermal and adiabatic compressibilities. It can be shown that the Fundamental Relation is completely determined by the knowledge of three independent
parameters like, for instance, the specific heat at constant pressure, the coefficient
of isothermal compressibility and the coefficient of thermal expansion. Measuring
the latter three parameters the dependence of entropy on temperature, pressure and
volume is known, and the same applies to other representations of the Fundamental Relation.
Keywords Maxwell relations · Heat capacities at constant volume and constant
pressure · Coefficients of compressibility · Coefficient of thermal expansion ·
Entropy · Measuring entropy
5.1 Introduction
As we have seen in Chap. 4 every system has an associated Fundamental Relation
and this defines the k + 1 dimensional surface (k is the number of independent work
parameters) in the space of thermodynamic configurations, formed by all the states
in stable or metastable equilibrium. This implies that any thermodynamic process
may be represented by a connection between two points of this surface that represent
© Springer Nature Switzerland AG 2019
A. Saggion et al., Thermodynamics, UNITEXT for Physics,
the initial and final configurations (which are, necessarily, equilibrium states). This
connection will, in turn, be representable by a continuous solid line on this surface
if the transformation is quasi-static; otherwise it may be represented
by small disconnected continuous segments on this surface if there are only some
quasi-static parts, or by isolated points or, as frequently happens, only by the initial
and final points.
Whatever the situation is, the need for the points to belong to a given surface
determines quantitative relations among the changes of the system properties that
have general validity and are, in fact, a consequence of the existence of a determined
Fundamental Relation.
In order to explore, at least partially, such a large set of correlations, it is convenient to start by considering infinitesimal (quasi-static) transformations that move from equilibrium states. To do this we will write the differentials of the
Fundamental Relation in its most significant representations.
Since the differentials we are going to write are exact differentials, we will be
able to apply the Schwarz theorem on the equality of the cross partial derivatives.
Maxwell relations are the relations that we can get from the application of
Schwarz’s theorem. The number of relations that we can obtain in this way is very large and not all of them will be of immediate utility. We will examine just some of them.
5.2 Some Properties of Materials
In this section, we define some parameters of materials which are very important both
for immediate technological applications and, as we shall see, to achieve a complete
knowledge of the Fundamental Relation of each system.
Certainly, we can say that the Second Principle operationally defines the entropy
but it is equally true that the measurement of its variation through the integral

$$\Delta S = \int \frac{\hat{d}Q}{T}$$

along a quasi-static transformation, although acceptable in principle, is of very little practical use.
Conversely, it is relatively easy and certainly very useful to measure how much
the volume of a body expands or decreases when its temperature is increasing at
constant pressure, or to what extent its volume decreases when the pressure acting
on it increases. The same can be said for the heat capacities i.e. for the measure of
the temperature increase caused by the transfer of a certain amount of heat under
certain restrictions. We’ll see how the knowledge of these parameters (we could call
them “technological parameters”) provides complete information on the entropy of
each macroscopic system, and then on all the thermodynamic potentials; this is
equivalent to the knowledge of the Fundamental Relation of the system.
From the Fundamental Relation, we can get the so-called Equation of State, that
is the relation among pressure, volume, and temperature which holds at equilibrium
states and which can, in all generality, be written in the form

$$V = V(p, T) \qquad (5.2)$$

or, by making pressure explicit,

$$p = p(V, T). \qquad (5.3)$$

After differentiating Eq. (5.2), we have

$$dV = \left(\frac{\partial V}{\partial p}\right)_T dp + \left(\frac{\partial V}{\partial T}\right)_p dT.$$

We define the coefficient of isothermal compressibility χT by the quantity

$$\chi_T = -\frac{1}{V}\left(\frac{\partial V}{\partial p}\right)_T$$

and the coefficient of thermal expansion α by

$$\alpha = \frac{1}{V}\left(\frac{\partial V}{\partial T}\right)_p.$$

The differential expression of the equation of state Eq. (5.2) becomes

$$d(\ln V) = -\chi_T\,dp + \alpha\,dT,$$

and similarly for Eq. (5.3) we get

$$dp = -\frac{1}{\chi_T}\,d(\ln V) + \frac{\alpha}{\chi_T}\,dT.$$
The first example of a Maxwell Relation can be obtained by cross-differentiating Eq. (5.7):

$$\left(\frac{\partial \chi_T}{\partial T}\right)_p = -\left(\frac{\partial \alpha}{\partial p}\right)_T,$$

from which we see that the compressibility and the thermal expansion coefficients are not completely independent of each other.
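A quick numerical sanity check of this cross-derivative relation — our own sketch, not from the text — can be made with an explicit equation of state. For the linear form V = V0 − ap + bT (the same form that appears later in Problem 5.4), the definitions give χT = a/V and α = b/V, and both sides of the relation equal ab/V² in magnitude:

```python
# Check (d chi_T/dT)_p = -(d alpha/dp)_T for the linear equation of state
# V = V0 - a*p + b*T, for which chi_T = a/V and alpha = b/V.
# All numbers are illustrative, in SI units.
V0, a, b = 1.0e-4, 8.0e-16, 3.5e-9

def V(p, T):
    return V0 - a * p + b * T

def chi_T(p, T):
    return a / V(p, T)     # = -(1/V)(dV/dp)_T

def alpha(p, T):
    return b / V(p, T)     # = (1/V)(dV/dT)_p

p0, T0, hT, hp = 1.0e5, 300.0, 50.0, 1.0e7
dchiT_dT = (chi_T(p0, T0 + hT) - chi_T(p0, T0 - hT)) / (2 * hT)
dalpha_dp = (alpha(p0 + hp, T0) - alpha(p0 - hp, T0)) / (2 * hp)
# the two derivatives must be equal and opposite (both equal a*b/V^2 in size)
assert abs(dchiT_dT + dalpha_dp) < 1e-4 * abs(dalpha_dp)
```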
5.3 The Volume and Pressure Dependence of Entropy
When dealing with isothermal transformations it is convenient to express the Fundamental Relation in the representation of the Free Energy and in that of the Gibbs potential:

$$dF = -S\,dT - p\,dV, \qquad dG = -S\,dT + V\,dp,$$
and by using Schwarz’s theorem we get

$$\left(\frac{\partial S}{\partial V}\right)_T = \left(\frac{\partial p}{\partial T}\right)_V, \qquad \left(\frac{\partial S}{\partial p}\right)_T = -\left(\frac{\partial V}{\partial T}\right)_p.$$
From Eqs. (5.8) and (5.6), we obtain the general relations:

$$\left(\frac{\partial S}{\partial V}\right)_T = \frac{\alpha}{\chi_T}, \qquad \left(\frac{\partial S}{\partial p}\right)_T = -\alpha V.$$
If we select a suitable reference state at a given temperature and assign to that state an arbitrary value S0 of the entropy, then, knowing, even merely graphically, the values of α and of χT, we know the value of the entropy in any other state at the same
temperature. In order to obtain a complete knowledge of entropy we need to know
its dependence on temperature.
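As a concrete check of the two relations above — a sketch of ours with illustrative numbers — take an ideal gas, for which S(T, V) = n(c_v ln T + R ln V) up to a constant, α = 1/T and χT = 1/p:

```python
# Verify (dS/dV)_T = alpha/chi_T and (dS/dp)_T = -alpha*V on 1 mol of a
# monatomic ideal gas, using finite differences of S(T, V).
import math

R, n = 8.314, 1.0
cv = 1.5 * R

def S(T, V):
    """Ideal-gas entropy up to an additive constant."""
    return n * (cv * math.log(T) + R * math.log(V))

T0, V0 = 300.0, 0.025
p0 = n * R * T0 / V0
alpha, chi_T = 1.0 / T0, 1.0 / p0

h = V0 * 1e-6
dS_dV = (S(T0, V0 + h) - S(T0, V0 - h)) / (2 * h)
assert abs(dS_dV - alpha / chi_T) < 1e-6 * dS_dV     # (dS/dV)_T = p/T

k = p0 * 1e-6          # vary p along the isotherm: V = nRT/p
dS_dp = (S(T0, n * R * T0 / (p0 + k)) - S(T0, n * R * T0 / (p0 - k))) / (2 * k)
assert abs(dS_dp + alpha * V0) < 1e-6 * abs(dS_dp)   # (dS/dp)_T = -alpha*V
```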
5.4 The Heat Capacities and the Temperature Dependence
of Entropy
An elementary and intuitive definition of heat capacity of a body, or of specific heat
of a material, was given in thermology and was bound to a particular choice of a
scale of empirical temperature, usually the Celsius scale. It is defined as the ratio of the quantity of heat transferred to the body to the consequent temperature increase.
Most frequently this process is conceived either at constant volume or at constant pressure:

$$C_{V,p} = \left(\frac{\hat{d}Q}{\delta \vartheta}\right)_{V,p}.$$
This definition can be useful for some special technical applications but is totally inadequate from the thermodynamic point of view, where we are interested in reaching, through the measurement of these parameters, a complete knowledge of the Fundamental Relation; this result will be achieved once we have a comprehensive knowledge of entropy.
For this purpose, after Eqs. (5.14) and (5.15), which provide us with the knowledge of entropy at constant temperature, the heat capacity will give us the temperature dependence. We will consider, for the moment, only the two cases of processes at constant pressure and at constant volume.
5.4.1 The Heat Capacity at Constant Pressure
If a system is maintained at constant pressure, we know that the quantity of heat
supplied in an infinitesimal process produces a change of enthalpy given by the relation

$$\hat{d}Q = dH = T\,dS.$$
We define C_p, the heat capacity at constant pressure, by the relation:

$$C_p = \left(\frac{\partial H}{\partial T}\right)_p = T\left(\frac{\partial S}{\partial T}\right)_p.$$
From the definition (5.18), it is clear that the energy transferred in the form of heat will be distributed partly to the increase of the energy of the system and partly will be used to do work against the mechanism that ensures constant pressure. For example, as we shall see, the so-called latent heat in a phase transition (which occurs at constant pressure) is shared between these two items. If we refer to a unit mass system, C_p will be called specific heat at constant pressure; if we refer to one mole it will be called molar heat at constant pressure.
5.4.2 The Heat Capacity at Constant Volume
If the system is constrained to maintain a constant volume, the supply of a small
amount of heat will result in an equivalent increase of its energy. Then, we define as
heat capacity at constant volume the quantity:

$$C_V = \left(\frac{\partial U}{\partial T}\right)_V = T\left(\frac{\partial S}{\partial T}\right)_V.$$
As before C V will be called specific heat at constant volume or molar heat at constant
volume depending on whether it refers to the unit mass or to one mole, respectively.
Generally, the mechanical-statistical models provide predictions for the specific heat at constant volume because they allow one to calculate the total energy of the system. The comparison with experiment, needed to validate a model, is, however, quite complicated because it is difficult to vary the temperature of a body while maintaining its volume constant, whereas it is much easier to vary its temperature while maintaining the pressure constant. It is generally preferred to measure the specific heat at constant pressure and to recover from it the specific heat at constant volume. We therefore need a general, i.e., model-independent, relationship between the two.
5.4.3 The Relation Between C p and C V
We have to find the relationship between the derivative of the entropy with respect to
temperature made at constant pressure and that made at constant volume. We simply
apply Eq. (A.15) with w = S, x = T, y = V and z = p and we obtain

$$\left(\frac{\partial S}{\partial T}\right)_p = \left(\frac{\partial S}{\partial T}\right)_V + \left(\frac{\partial S}{\partial V}\right)_T \left(\frac{\partial V}{\partial T}\right)_p.$$

From Eqs. (5.14) and (5.6) we obtain

$$\left(\frac{\partial S}{\partial T}\right)_p = \left(\frac{\partial S}{\partial T}\right)_V + \frac{\alpha}{\chi_T}\,\alpha V,$$

and after multiplying by T we obtain the relation we were looking for:

$$C_p = C_V + \frac{\alpha^2 V T}{\chi_T}.$$
This relation is of absolute generality. Since we know that for states in stable or
metastable equilibrium χT > 0 we will always have
C p > CV .
As can be seen, the two specific heats, the coefficient of compressibility and the coefficient of thermal expansion are mutually bound by one general relation, and then only three of these may be chosen as independent. From the knowledge of
three of these, we have a complete knowledge of the dependence of the entropy
on the pressure, volume and temperature, and then we can say we have a complete
knowledge of the Fundamental Relation.
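For an ideal gas, α = 1/T and χT = 1/p, and the correction term reduces to nR; a short check (ours, with illustrative numbers):

```python
# For an ideal gas alpha = 1/T and chi_T = 1/p, so the general relation
# C_p = C_V + alpha**2 * V * T / chi_T reduces to C_p - C_V = p*V/T = n*R.
R, n = 8.314, 2.0          # illustrative: 2 mol
T, p = 350.0, 2.0e5
V = n * R * T / p          # ideal-gas equation of state
alpha, chi_T = 1.0 / T, 1.0 / p
assert abs(alpha**2 * V * T / chi_T - n * R) < 1e-9
```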
5.4.4 The Adiabatic Compressibility Coefficient
Another parameter, very important in applications, is the coefficient of adiabatic
compressibility χS. It measures the relative change in volume following a unit change of pressure when this is done adiabatically. Formally,

$$\chi_S = -\frac{1}{V}\left(\frac{\partial V}{\partial p}\right)_S.$$
It is rather easy to show that, in all generality, the coefficient of adiabatic compressibility is related in a simple way to the coefficient of isothermal compressibility and
the specific heats.
We just have to apply Eq. (A.8) once with the variables (p, V, T) and, again, with the variables (p, V, S):

$$\left(\frac{\partial V}{\partial p}\right)_T = -\left(\frac{\partial V}{\partial T}\right)_p \left(\frac{\partial T}{\partial p}\right)_V, \qquad \left(\frac{\partial V}{\partial p}\right)_S = -\left(\frac{\partial V}{\partial S}\right)_p \left(\frac{\partial S}{\partial p}\right)_V,$$

and hence

$$\frac{\left(\partial V / \partial p\right)_S}{\left(\partial V / \partial p\right)_T} = \frac{\left(\partial V / \partial S\right)_p \left(\partial S / \partial p\right)_V}{\left(\partial V / \partial T\right)_p \left(\partial T / \partial p\right)_V} = \frac{\left(\partial S / \partial T\right)_V}{\left(\partial S / \partial T\right)_p}.$$
If we recall Eqs. (5.18) and (5.19) we obtain

$$\chi_S = \frac{C_V}{C_p}\,\chi_T. \qquad (5.29)$$
As we see, the coefficient of adiabatic compressibility is always smaller than the one at constant temperature and is always positive for states of stable or metastable equilibrium.
5.4.5 The Equations of the Adiabatic Transformations
The result given in Eq. (5.29) is of the utmost generality. In order to find the equation of an adiabatic (quasi-static) transformation we have to integrate the following differential equation:

$$\frac{1}{V}\frac{dV}{dp} = -\frac{C_V}{C_p}\,\chi_T, \qquad (5.30)$$
where the symbol of total derivative denotes the derivative taken along the adiabatic
curve. This relation can be used to calculate any small changes in volume for small
variations of pressure if we know C V /C p and χT for a given value of the pressure and
temperature. In order to integrate Eq. (5.30) we need to know the dependence of C p ,
C_V and χT on p and on V. For a system in which χT ≈ p⁻¹ holds with good approximation (e.g. dilute gases above the critical temperature, Chap. 6), we may write Eq. (5.30) as
$$\frac{d(\ln V)}{d(\ln p)} = -\frac{C_V}{C_p},$$
and if, in the temperature and pressure intervals we are considering, we may treat
C p and C V as constants, we get
$$p V^{\gamma} = \text{constant},$$
where with the symbol γ we have denoted the ratio C p /C V .
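The integration can be mimicked numerically. The sketch below (ours; γ = 5/3, as for a monatomic dilute gas, and illustrative initial values) steps d(ln V)/d(ln p) = −C_V/C_p along a compression and compares the endpoint against the adiabat invariant:

```python
# Integrate d(ln V)/d(ln p) = -C_V/C_p = -1/gamma along a compression from
# p to 5p and check that p * V**gamma stays constant.
import math

gamma = 5.0 / 3.0
p, V = 1.0e5, 1.0e-3            # initial state, illustrative
const = p * V**gamma

n_steps = 2000
dlnp = math.log(5.0) / n_steps
lnp, lnV = math.log(p), math.log(V)
for _ in range(n_steps):
    lnV -= (1.0 / gamma) * dlnp  # slope is constant when C_p, C_V are
    lnp += dlnp
p_f, V_f = math.exp(lnp), math.exp(lnV)
assert abs(p_f * V_f**gamma - const) < 1e-6 * const
```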
5.5 Concluding Remarks and the Role of C p , α, χT
We have defined five parameters which describe the response of a system to the two most frequent stresses, i.e., heating and compression, under the different conditions which usually occur in the laboratory.
We saw that only three of these coefficients are independent of each other and the
general agreement is to choose as independent coefficients C p , α and χT .
Once these three parameters are tabulated, the Fundamental Relation, and then all thermodynamic potentials, are known and we can easily make predictions about the response of the system to any external disturbance.
Suppose, for example, that we are interested in knowing the increase of pressure in a system held at constant volume if the temperature is increased by dT (of course we are always talking about closed systems). Formally, we are looking for

$$dp = \left(\frac{\partial p}{\partial T}\right)_V dT.$$
We can answer the question because we have seen from Eq. (5.8) that

$$\left(\frac{\partial p}{\partial T}\right)_V = \frac{\alpha}{\chi_T},$$
but this gives us the opportunity for a further comment.
In any problem like this in which we want to predict the effect of a certain action
performed from outside, and in accordance with certain assigned constraints, the
solution requires the knowledge of the partial derivatives of one parameter with respect to another under the given constraints.
We can see that any partial derivative can be expressed in terms of the three
independent coefficients C p , α and χT .1 We will not develop a general technique
to bring any partial derivative to the above three but let it be sufficient, in order to
support the above statement, to have a look to the expression of all the thermodynamic
potentials as a function of these three factors.
First, as already seen in Eqs. (5.14), (5.15) and (5.18), or equivalently Eq. (5.19),
we can consider the entropy completely determined apart from an additive constant
(this will, in turn, be determined by the Third Law). As for the other potentials we
will just consider Energy and Free Energy; for the others, it will be a question of a
little gymnastics with the partial derivatives.
Separating the dependence on the pressure from that on the temperature and volume, for the Energy we may write

$$\left(\frac{\partial U}{\partial p}\right)_T = V\left(\chi_T\,p - \alpha T\right) \qquad (5.35)$$

$$dU = C_V\,dT + \left(\frac{\alpha}{\chi_T}\,T - p\right) dV,$$

from which we read the dependence of the Energy U on the volume at constant temperature:

$$\left(\frac{\partial U}{\partial V}\right)_T = \frac{\alpha}{\chi_T}\,T - p. \qquad (5.37)$$

For the Free Energy F we have

$$\left(\frac{\partial F}{\partial p}\right)_T = \chi_T\,p V, \qquad dF = -S\,dT - p\,dV.$$
Let’s see some examples where we may determine the response of a system under
certain external stresses.
¹ Also these three coefficients are, after all, three partial derivatives.
5.5.1 Isothermal Processes
A closed system goes through an isothermal transformation from the initial pressure
pi to the final pressure pf. Write the expressions for the amount of heat that it absorbs and the amount of work that must be done on the system in this process.
For an infinitesimal process in which the variation of the pressure is dp, the two quantities will be given by the following expressions:

$$\hat{d}Q = T\left(\frac{\partial S}{\partial p}\right)_T dp = -\alpha\,T V\,dp,$$

$$\hat{d}W = -p\,dV = -p\left(\frac{\partial V}{\partial p}\right)_T dp = p V \chi_T\,dp,$$

where in the first equation we made use of Eq. (5.15) (as we can see, by adding the two results we get, as it must, Eq. (5.35)). For the same quantities in a process in which the pressure change is not small enough, we have to write the integral expressions, provided we suppose the process to be quasi-static:
$$Q = -T \int_{p_i}^{p_f} \alpha V\,dp, \qquad W = \int_{p_i}^{p_f} p V \chi_T\,dp.$$
Obviously, in order to use these relations we need to know the dependence of α, χT and the volume on pressure at the given temperature. If we consider
the average values of the coefficient of thermal expansion and the coefficient of
isothermal compressibility in the range of pressure we are considering we may write
$$Q = -\alpha\,T \int_{p_i}^{p_f} V\,dp, \qquad W = \chi_T \int_{p_i}^{p_f} (pV)\,dp.$$
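A worked numerical estimate (ours) of these average-coefficient formulas, using the data of Problem 5.2 below — 100 g of water at 20 °C expanding isothermally from 40 atm to 1 atm — and treating V ≈ 10⁻⁴ m³ as constant inside the integrals, since water is nearly incompressible:

```python
# Estimate Q and W for the isothermal expansion of Problem 5.2:
# 100 g of water, 20 C, p_i = 40 atm -> p_f = 1 atm, with average values
# chi_T = 0.48e-4 atm^-1 and alpha = 0.2e-3 K^-1; V treated as constant.
atm = 101325.0                 # Pa
T = 293.15                     # K
V = 1.0e-4                     # m^3
alpha = 0.2e-3                 # K^-1
chi_T = 0.48e-4 / atm          # Pa^-1
p_i, p_f = 40 * atm, 1 * atm

Q = -alpha * T * V * (p_f - p_i)            # heat absorbed by the water
W = chi_T * V * (p_f**2 - p_i**2) / 2.0     # work done on the water
dU = Q + W                                  # energy variation
assert Q > 0 and W < 0      # on expansion the water absorbs heat, does work
assert abs(Q - 23.2) < 0.5 and abs(W + 0.39) < 0.05
```

The heat term (about 23 J) dominates the compression work (a few tenths of a joule), so the energy change is essentially the absorbed heat.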
5.5.2 Free Expansion
Consider a rigid container divided into two parts by a septum. One part, of
volume V , contains n moles of gas at temperature T while the other side is empty.
The septum is removed and the gas expands freely to occupy the entire volume
and reach a new state of equilibrium at a new temperature and pressure. Further,
we suppose that the process is fast enough to be considered adiabatic. This is what
generally is meant by free expansion and, for instance, we know that for an ideal
gas the temperature will remain constant and the pressure can be obtained from the
equation of state. It is interesting to ask what is the final temperature in the case of a
real gas.
Since, in general, the coefficients of thermal expansion, the molar heats and the
adiabatic compressibility depend on pressure and temperature, we cannot give the
answer for finite transformations, but if we consider an infinitesimal transformation in which the volume varies by a small amount dV, we can get some interesting indications.
For the assumed conditions (free and adiabatic expansion) energy will maintain
a constant value. The temperature variation can be written as

$$dT = \left(\frac{\partial T}{\partial V}\right)_U dV.$$
Making use of Eq. (A.8) we can express the partial derivative we are interested in, in the form:

$$\left(\frac{\partial T}{\partial V}\right)_U = -\frac{1}{C_V}\left(\frac{\partial U}{\partial V}\right)_T,$$
where C_V is the heat capacity at constant volume. From Eq. (5.37) we obtain

$$dT = \frac{1}{C_V}\left(p - \frac{\alpha}{\chi_T}\,T\right) dV.$$
This result is valid for small transformations, that is, transformations which can be treated as infinitesimal and quasi-static, i.e. starting from an equilibrium state, but it cannot be used for finite transformations when these may not be treated as quasi-static.
For a dilute gas above the critical temperature (i.e., in the ideal gas approximation) we may put

$$\alpha = \frac{1}{T}, \qquad \chi_T = \frac{1}{p},$$

and we find that the temperature remains constant.
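For a real gas the bracket does not vanish. The sketch below (ours, with illustrative constants) evaluates it for a van der Waals gas, where (∂U/∂V)_T = an²/V², so the gas cools slightly on free expansion:

```python
# Free-expansion cooling of a van der Waals gas: evaluate
# dT/dV = (p - alpha*T/chi_T)/C_V, using alpha*T/chi_T = T*(dp/dT)_V.
# Constants are illustrative; C_V is assumed ideal-like for 1 mol.
R = 8.314
a, b, n = 0.36, 4.3e-5, 1.0
C_V = 1.5 * n * R

def p_vdw(V, T):
    return n * R * T / (V - n * b) - a * n**2 / V**2

V, T = 1.0e-3, 300.0
dp_dT = (p_vdw(V, T + 1e-3) - p_vdw(V, T - 1e-3)) / 2e-3
dT_dV = (p_vdw(V, T) - T * dp_dT) / C_V    # = -(dU/dV)_T / C_V
assert dT_dV < 0                           # the gas cools on expansion
# van der Waals gives (dU/dV)_T = a*n^2/V^2, hence dT/dV = -a*n^2/(C_V*V^2)
assert abs(dT_dV - (-a * n**2 / (C_V * V**2))) < 1e-6 * abs(dT_dV)
```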
5.5.3 Pressure Drop in Free Expansion
A system is subject to a small free expansion and we measure the temperature variation dT. Let us find the pressure variation. Of course the answer can be given provided that we know the Fundamental Relation of the system, and this is equivalent to knowing
α, χT and C_p. Similarly to what was done in Sect. 5.5.2, we express the quantity we are looking for in the following way:

$$dp = \left(\frac{\partial p}{\partial T}\right)_U dT.$$
We have to express the partial derivative as a function of the “technical parameters”.
With reference to Eq. (A.8) we have

$$\left(\frac{\partial p}{\partial T}\right)_U = -\frac{\left(\partial U / \partial T\right)_p}{\left(\partial U / \partial p\right)_T}.$$
The denominator is given by Eq. (5.35) while, for the numerator, it is necessary to take a further step. By using the identity U = H − pV and calculating the derivative, we obtain

$$\left(\frac{\partial U}{\partial T}\right)_p = n C_p - p V \alpha.$$
Following the same procedure as in the previous examples we obtain
$$dp = -\frac{n C_p - p V \alpha}{V\left(\chi_T\,p - \alpha T\right)}\,dT.$$
5.5.4 Temperature–Pressure Variations in Adiabatic Processes
In a situation in a certain sense “complementary” to that treated in the previous cases,
we consider the temperature variation in an adiabatic process as a function of the
pressure variation. For infinitesimal processes, we may write
$$dT = \left(\frac{\partial T}{\partial p}\right)_S dp,$$
and, making use of Eqs. (A.8) and (5.15), which gives us the dependence of entropy on pressure at constant temperature, we get

$$dT = \frac{V \alpha T}{n C_p}\,dp.$$
5.5.5 Temperature–Volume Variations in Adiabatic Processes
A system is subject to a volume variation dV along an adiabatic transformation,
starting from a given equilibrium state. Determine the variation of the temperature:
$$dT = \left(\frac{\partial T}{\partial V}\right)_S dV,$$
and with reference to Eqs. (A.8) and (5.12) we have

$$\left(\frac{\partial T}{\partial V}\right)_S = -\frac{\left(\partial S / \partial V\right)_T}{\left(\partial S / \partial T\right)_V} = -\frac{\alpha T}{\chi_T C_V},$$
and finally we obtain
$$dT = -\frac{\alpha T}{\chi_T C_V}\,dV.$$
It is easy to verify that for an ideal gas the above Eq. (5.59), after integration, leads to

$$T\,V^{R/C_V} = \text{constant}.$$
Taking into account that, for dilute gases,

$$C_p - C_V = R,$$

and posing

$$\gamma = \frac{C_p}{C_V},$$

Eq. (5.60) is often written as

$$T\,V^{(\gamma - 1)} = \text{constant}.$$
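Since p = nRT/V, the two invariants pV^γ and TV^(γ−1) describe the same adiabat; a quick check (ours, illustrative numbers):

```python
# Two states on the same ideal-gas adiabat via T*V**(gamma-1) = const
# must also share p*V**gamma = const, with p = n*R*T/V.
R, n = 8.314, 1.0
gamma = 5.0 / 3.0
T1, V1 = 300.0, 1.0e-2
V2 = 2.0e-2
T2 = T1 * (V1 / V2) ** (gamma - 1.0)   # same adiabat, expanded volume
p1, p2 = n * R * T1 / V1, n * R * T2 / V2
assert abs(p1 * V1**gamma - p2 * V2**gamma) < 1e-9 * p1 * V1**gamma
assert T2 < T1                          # adiabatic expansion cools the gas
```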
5.1 The specific heat of Platinum at constant pressure may be expressed by the
empirical formula
$$C_p = a + b\,T + \frac{d}{T^2}$$

with a = 123 J kg⁻¹ K⁻¹, b = 28.7 × 10⁻³ J kg⁻¹ K⁻² and d = 2.15 J kg⁻¹ K. Determine the enthalpy and the entropy variations in an isobaric transformation from the initial temperature T₁ = 280 K to the final temperature T₂ = 1370 K. The mass is 0.1 kg.
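A numeric solution sketch (ours) for Problem 5.1. The truncated empirical formula is assumed to close as C_p = a + bT + d/T², the only simple term consistent with the stated units of d, so treat the results as estimates:

```python
# Numeric integration of dH = m*c_p*dT and dS = m*c_p*dT/T for Problem 5.1.
# The d/T**2 form of the last term of c_p(T) is an assumption inferred from
# the units of d; all results are therefore estimates.
a, b, d = 123.0, 28.7e-3, 2.15     # J/kg/K, J/kg/K^2, J*K/kg
m, T1, T2 = 0.1, 280.0, 1370.0

def cp(T):
    return a + b * T + d / T**2

N = 10000
dT = (T2 - T1) / N
dH = dS = 0.0
for i in range(N):                 # midpoint-rule integration
    T = T1 + (i + 0.5) * dT
    dH += m * cp(T) * dT           # isobaric: dH = m c_p dT
    dS += m * cp(T) / T * dT       # dS = m c_p dT / T
assert abs(dH - 1.60e4) < 2e2      # about 16 kJ
assert abs(dS - 22.7) < 0.3        # about 22.7 J/K
```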
5.2 Let us consider 100 g of water at a temperature of 20 °C, undergoing an isothermal transformation in which the pressure varies from an initial value pi to the final
value pf . At this temperature and in this range of pressures we have the following
average values: χT = 0.48 × 10−4 atm−1 and α = 0.2 × 10−3 K−1 . We are interested in calculating the amount of work done by the outside, the amount of heat
supplied to the water and the variation of energy of the water. Consider the special
case where the water expands passing from the pressure pi = 40 atm to the final
pressure pf = 1 atm.
5.3 In analogy with the case treated in the previous problem, 100 g of water are brought to the temperature ϑ = 0.2 °C and undergo an isothermal compression from the initial pressure 1 atm to the pressure 40 atm. In this condition the coefficient
of isothermal compressibility is, on average, χT = 0.46 × 10−4 atm−1 and the
average coefficient of thermal expansion α = −0.64 × 10−4 K−1 . Determine the
amount of heat absorbed by the water from the external world.
5.4 A homogeneous solid with mass m = 0.5 kg is compressed quasi-statically at constant temperature T0 = 100 K from the initial pressure p0 = 1 bar to the final pressure pf = 5 × 10³ bar. The equation of state can be approximated by V* = V0* − ap + bT, where V* is the specific volume, with a = 8 × 10⁻¹⁶ m⁴ kg⁻² s² and b = 3.5 × 10⁻⁹ m³ kg⁻¹ K⁻¹. Its specific energy is given by U* = U0* + cT − bpT.
Determine the amount of heat Q and the amount of work W transferred to the solid
by the external world in this transformation.
5.5 The same solid considered in the previous exercise is subject to a rapid compression over the same pressure interval. The transformation can be treated as an irreversible adiabatic compression, and at the end of the transformation the temperature of the solid is Tf = 300 K. The constant c in the equation for the specific energy is
c = 20 J Kg−1 K−1 . Determine: (a) the amount of work done on the solid; (b) the
specific heats of the solid at constant volume C V and at constant pressure C p and (c)
the entropy variation of the solid.
5.6 A cube made of Copper is compressed reversibly and isothermally from the
initial pressure p1 = 1 bar to the final pressure p2 = 10³ bar. The temperature is
T = 300 K. Determine: (i) the amount of work done on the solid; (ii) the amount
of heat given to the solid; (iii) the free energy and energy variations of the copper. The following data can be used: coefficient of thermal expansion α = 5.0 × 10⁻⁵ K⁻¹, coefficient of isothermal compressibility χT = 8.6 × 10⁻¹² Pa⁻¹, specific volume V* = 1.14 × 10⁻⁴ m³ kg⁻¹; the mass is m = 3 kg.
Part II
Chapter 6
General Properties of Gaseous Systems
Abstract This chapter deals with the general properties of gases starting from their
isothermal behavior and the definition of the virial coefficients. The Joule–Thomson
experiment (throttling experiment) is discussed together with the subsequent calorimetric measurements which prove the proportionality of the first virial coefficient to
the absolute temperature T . The latter fact allows us to have a further tool for defining
the absolute scale once an arbitrary value for T at one reference state is chosen. The
Joule–Thomson coefficient and the inversion curve are defined and the problem of
gas liquefaction is briefly outlined. The expression of the chemical potential for
dilute gases is also shown. The heat capacities of gases are widely discussed starting
from the theorem of energy equipartition.
Keywords Gases · Virial coefficients · Throttling · Temperature scale ·
Joule–Thomson coefficient · Inversion temperature · Liquefaction · Low
temperatures · Chemical potential of gases · Heat capacities · Energy equipartition
6.1 Isothermal Behavior of Gases
It is convenient to start from the Boyle–Mariotte law. We take a certain amount of some gas contained in a cylinder with a movable piston. We keep the temperature constant and measure the volume for different values of the pressure. Boyle and Mariotte verified that, for every gas maintained at a given temperature, the product (pV) tends to a constant value when the pressure becomes lower and lower. We express this law with the formula:

$$\lim_{p \to 0}\,(pV) = A(T).$$
The quantity A(T ) depends on the temperature but also depends on the type of
selected gas. We can also state that it will be proportional to the quantity of gas
because if we double the amount of gas at the same temperature and pressure, the
volume will double. Keeping the temperature at a fixed value, we can expand the product (pV) in a power series of the pressure p:
$$pV = A(T) + B(T)\,p + C(T)\,p^2 + \cdots.$$
The quantities A(T), B(T), C(T) are named the first, second and third virial coefficients, respectively. As we have already seen for the first coefficient A, they all depend on the temperature and on the type of gas; similarly, they are proportional to the amount of gas, which could be measured by weight or by number of moles, for instance. For the moment we are interested in highlighting their temperature dependence.
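The p → 0 limit defining A(T) can be illustrated numerically. In the sketch below (ours), the isotherm "data" are generated from a van der Waals gas with illustrative constants, and extrapolating pV linearly to p = 0 recovers A(T) = nRT:

```python
# Recover the first virial coefficient A(T) = lim_{p->0} (pV) from isotherm
# "data" generated with a van der Waals gas (illustrative constants).
R, n = 8.314, 1.0
a, b = 0.36, 4.3e-5
T = 350.0                      # above the critical temperature

def V_of_p(p):
    """Invert p(V) = nRT/(V - nb) - a n^2/V^2 for V by bisection."""
    lo, hi = n * b * 1.0001, 50.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if n * R * T / (mid - n * b) - a * n**2 / mid**2 > p:
            lo = mid           # pressure too high -> volume too small
        else:
            hi = mid
    return 0.5 * (lo + hi)

p1, p2 = 1.0e3, 2.0e3          # two low-pressure points on the isotherm
y1, y2 = p1 * V_of_p(p1), p2 * V_of_p(p2)
A = y1 - p1 * (y2 - y1) / (p2 - p1)   # extrapolate pV linearly to p = 0
assert abs(A - n * R * T) < 1e-3 * n * R * T
```

The linear extrapolation removes the B(T)p term; the residual error is of the order of the (tiny) C(T)p² contribution.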
6.2 The First Virial Coefficient for Gases
Experimentally, it can be shown that the first virial coefficient is proportional to the
(absolute) temperature T . This fact is of fundamental importance and therefore it
is necessary, in the following, to examine with some detail how it is possible, by
means of experimental observations, to establish this relation of proportionality. The
possibility of using gases in the so-called “perfect gas thermometers”, still used in
many treatises, is based on this result.
The experimental evidence derives from the combination of two experiments: the
first is the Joule–Thomson experiment, which is preparatory to a second, calorimetric, experiment.
6.2.1 The Joule–Thomson Experiment
The Joule–Thomson experiment, also called throttling experiment, consists of forcing
a certain amount of gas to pass, adiabatically, from an initial state p1, V1, θ1 to a final state p2, V2, θ2 by flowing through a capillary or a porous separator (throttle). The initial and final temperatures are denoted, in this particular case, by their values measured on some empirical scale θ, because the experimental observations we are discussing here are devoted to defining, operationally, the absolute scale.
Two cylinders communicate through a throttle as shown schematically in Fig. 6.1.
Initially the gas is in cylinder 1 and is prepared at an arbitrary pressure p1 and temperature θ1, while cylinder 2 is empty and the piston is regulated in such a way as to maintain a constant arbitrary pressure p2 (with p2 < p1) when the gas flows in. The
two cylinders are thermally insulated and the gas is forced to pass from cylinder 1
to cylinder 2 under the described conditions. In this experiment we have three free
parameters: p1 , θ1 , that is the initial state of the gas, and p2 . We measure the final
temperature θ2 . In this way, for every given initial state we can represent on a (θ, p)
plane the value of the final temperature θ2 as a function of the final pressure p2 .
The reason for adopting this device (throttle) is to ensure a pressure jump between
the gases in the two cylinders so that the intermediate states traversed by the gas during
the process can always be described as composed of two parts, each one in internal
equilibrium, with a discontinuous variation of pressure. In this way the amount of
work done on the gas in the whole process can be easily calculated.
Fig. 6.1 Two cylinders communicate by means of a throttle. a Initially, the gas is in container 1
and is pushed toward container 2 acting with a constant pressure p1. The gas is prepared at an
initial temperature θ1 and occupies the volume V1. b In container 2 the piston is moved ensuring a
constant pressure p2 and the gas is pushed to flow into container 2. c After the throttling, the gas is
all in container 2 where it occupies a volume V2 at temperature θ2. The two cylinders are thermally
insulated
If we denote by W1 and W2 the amounts of work performed on cylinder 1 and
cylinder 2 respectively, we can easily write W1 = p1 V1 and W2 = −p2 V2, where V1 and V2
are the volumes occupied by the gas when it is all in cylinder 1 (beginning) and in
cylinder 2 (end of the throttling). The total amount of work done on the gas in this
process is
W = W1 + W2 = p1 V1 − p2 V2 .
The transformation
(p1, V1, θ1) → (p2, V2, θ2)
is an adiabatic transformation and then, by the First Principle, we have ΔU = W.
This means that U(p2, θ2) − U(p1, θ1) = p1 V1 − p2 V2 and then:
H(p1, θ1) = H(p2, θ2) .   (6.5)
For every initial state ( p1 , θ1 ), we can draw the curve θ = θ ( p) by measuring the
final temperature θ2 for different values of the final pressure p2 . We obtain a curve
which is the locus of the states having the same enthalpy as the initial state ( p1 , θ1 ).
The construction of these isoenthalpic curves constitutes a preparatory stage to
the calorimetric measurements that will show the proportionality of the first virial
coefficient to the temperature.
6.2.2 Some Thermodynamic Potentials for Gases
From the equation of state Eq. (6.2), we can get some relevant information on the
thermodynamic potentials of a gas. We start from the general relations:
V = (∂G/∂p)_T ,   (6.6)
H = G + T S = G − T (∂G/∂T)_p .   (6.7)
If we make use of Eq. (6.2) written in the form V = (A/ p) + B + C p + . . . we
can write the variation of the Gibbs function between two states at the same temperature. By
integrating Eq. (6.6) at constant temperature we obtain
ΔG = G(p, θ) − G(p†, θ) =
= A(T) ln(p/p†) + B(T)(p − p†) + (C(T)/2)(p² − p†²) + ··· .   (6.8)
From Eq. (6.7), the enthalpy difference ΔH between two states at the same temperature can be written in the form
ΔH = H(p, θ) − H(p†, θ) = ΔG − T (∂ΔG/∂T)_p .   (6.9)
By substituting Eq. (6.8) in Eq. (6.9) and differentiating with respect to the temperature
we obtain
ΔH = H(p, θ) − H(p†, θ) = [A(T) − T dA(T)/dT] ln(p/p†) +
+ [B(T) − T dB(T)/dT] (p − p†) + (1/2) [C(T) − T dC(T)/dT] (p² − p†²) + ··· .   (6.10)
In the last expression the total derivatives of the virial coefficients appear because they
are functions of the temperature only.
6.2.3 Calorimetric Measurements for the Determination
of the First Virial Coefficient
In order to obtain more information about the virial coefficients and, in particular,
on their dependence on temperature, we will measure the enthalpy change Eq. (6.10)
through simple calorimetric measurements.
We know that the quantity of heat supplied to a system in a
transformation is equal to its enthalpy change only if the transformation is at constant
pressure while in Eq. (6.10) we have the expression of the enthalpy variation between
two states at the same temperature and at different pressures. It is at this point that
the experiment of Joule–Thomson comes to our aid.
Suppose that in the Joule–Thomson expansion the gas becomes colder i.e. θ2 < θ1 .
Let’s take the gas in cylinder 2 and raise its temperature to the initial temperature θ1
at the constant pressure p2 . If we denote with Q the amount of heat supplied to the
gas in this transformation, we will have
Q = H ( p2 , θ1 ) − H ( p2 , θ2 ) ,
and, by making use of the isoenthalpic curves Eq. (6.5) obtained by the Joule–
Thomson experiment, we will get
Q = H ( p2 , θ1 ) − H ( p1 , θ1 ) .
If, on the contrary, the gas in the Joule–Thomson expansion becomes hotter i.e.
θ2 > θ1 , then we may take back the gas to the initial state and then raise, at constant
pressure p1 , its temperature to the measured value θ2 . In such a case the amount of
heat delivered to the gas will be equal to the following enthalpy difference:
Q = H ( p1 , θ2 ) − H ( p1 , θ1 ) ,
and by using, once again, the isoenthalpic curves of Eq. (6.5) we get
Q = H ( p1 , θ2 ) − H ( p2 , θ2 ) .
In both cases, the measurement of the quantity of heat provides us with an experimental measurement of the expression given in Eq. (6.10).
The relevant fact is the presence of the logarithmic term in this expression and this
means that, for a given temperature, the amount of heat Q must exhibit a divergence
when the final pressure tends to zero. This divergence is not observed: the
experimental data settle quickly, for each temperature, to a constant value.
This fact has the consequence that the logarithmic term must be absent in the
power expansion and this implies that the coefficient of the logarithmic term must
be identically zero at all temperatures, i.e.,
A(T) − T dA(T)/dT = 0 ,
and this is equivalent to stating that the first virial coefficient must be proportional to
the absolute temperature:
A (T ) ∝ T .
6.3 Definition of the Temperature Scale by Means of Gases
To completely define the scale of absolute temperature, we proceed as follows: we select
a certain amount of a given gas, and we measure the volume at different pressures
keeping it at a constant temperature whose value is denoted by θ1 in some empirical
scale as, for instance, the Celsius scale. We calculate the product ( pV ) for different
pressures and we extrapolate the value of this product to pressures tending to zero.
This extrapolated value will give us a measurement of the first virial coefficient A1
of that gas at the empirical temperature θ1
A1 = lim_{p→0} (pV)_{θ1} .
We proceed in a similar way along another isotherm at the empirical temperature θ2 .
We shall obtain the new value for the first virial coefficient:
A2 = lim_{p→0} (pV)_{θ2} .
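The extrapolation of the product pV to p → 0 can be sketched numerically. The "data" below are synthetic points generated from an assumed linear virial form; the values of A_true and B_true are illustrative, not measurements from the text.

```python
# Sketch of the extrapolation A = lim_{p->0} (pV) along one isotherm.
# The "measurements" are synthetic, generated from the assumed form
# pV = A + B*p; A_true and B_true are illustrative numbers.
import numpy as np

A_true, B_true = 2271.1, -5.3e-5   # illustrative first/second coefficients
p = np.linspace(1e4, 1e5, 10)      # pressures along the isotherm (Pa)
pV = A_true + B_true * p           # synthetic (pV) data

# Linear fit of pV versus p; the intercept is the extrapolated value at p = 0.
slope, intercept = np.polyfit(p, pV, 1)
print(f"extrapolated first virial coefficient A = {intercept:.1f}")
```

With real isotherm data the same least-squares intercept would play the role of A1 or A2.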
Then the ratio of the two first virial coefficients will be
A1/A2 = T1/T2 .   (6.19)
Since this ratio does not depend either on the amount or on the kind of the selected
gas, we may state that it is determined in nature in an absolute way. We are left with
the freedom to set the value of the temperature in an arbitrary state after which the
scale will be completely determined. As we know, we agreed to assign the value
T = 273.15 K at the melting point of distilled water at the pressure p0 = 1 atm.
More rigorously, we choose the value:
T = 273.16 K
in correspondence to the triple point of water (see Sect. 7.3). The latter formulation
is much preferable because the triple point is a configuration with zero degrees of
freedom while the liquid–solid equilibrium temperature depends on the pressure.
Why Have We Adopted This Convention?
Before understanding the deeper meaning of the concept of absolute temperature,
there was a general consensus about the use of the empirical temperature defined by
the Celsius scale. Indeed much work on the measurement of specific heats, thermal
expansion coefficients and, in general, a large amount of calorimetric measurements
had been cataloged using that empirical temperature scale. It was therefore convenient
that the translation of the experimental data in the new temperature scale was as
simple as possible.
For this purpose, it was convenient to make sure that the unit change in temperature,
i.e., the degree, be the same in passing from one scale to the other. Since the unit
in the Celsius scale is defined as the hundredth part of the mercury volume change
between the boiling point and the melting point of water at 1 atmosphere, it is clear
that the best choice was to arrange that, even in the new scale, the difference of the
values of T between the same two reference states was equal to 100.
In order to obtain this result, we measure the first virial coefficient for a certain
amount of a certain gas at the temperature θ1 = 100 ◦C and at the temperature θ2 = 0 ◦C.
We will get two values that we denote, respectively, with A1 and A2 .
The ratio between the two experimentally measured values is
A1/A2 = 1.366 ,
and then from Eq. (6.19) we set
T1/T2 = 1.366 .   (6.22)
This number is absolute in the sense that it does not depend on any convention. Now
we impose that the difference between the two values is
T1 − T2 = 100 .   (6.23)
The solution of the system composed of the two Eqs. (6.22) and (6.23) gives the values
T1 = 373.15 K ,   T2 = 273.15 K .
We have imposed that the temperature difference between the boiling point of water
and the ice melting point at the pressure of 1 atmosphere, is equal to 100 both in
absolute scale and in the Celsius scale and that is all.
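The little system of Eqs. (6.22) and (6.23) can be solved in two lines. Note that with the rounded ratio 1.366 the result is ≈273.2 K; the accepted value 273.15 K corresponds to the slightly more precise ratio 373.15/273.15 ≈ 1.36610.

```python
# Solving T1/T2 = 1.366 (Eq. 6.22) and T1 - T2 = 100 (Eq. 6.23).
ratio = 1.366                 # rounded experimental ratio A1/A2
T2 = 100.0 / (ratio - 1.0)    # from T2*(ratio - 1) = 100
T1 = ratio * T2
print(f"T1 = {T1:.2f} K, T2 = {T2:.2f} K")   # ~373.22 K and ~273.22 K
```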
For instance, if we say (as we currently do) that the difference in absolute temperature between, for instance, θ1 = 53 ◦C and θ2 = 20 ◦C is ΔT = 33 K, this means
that we neglect the temperature dependence of the coefficient of thermal expansion
of mercury in the interval 0 ◦C ≤ θ ≤ 100 ◦C. Having this in mind we may write
T = 273.15 + θ ,   (6.25)
and this is the most convenient relationship between the Celsius scale and the absolute
scale but it is subject to the approximation mentioned before. It is clear that the further
we move away from the range in which the Celsius scale was defined, i.e. the range
0–100 ◦C, the less meaningful the relationship Eq. (6.25) becomes.
The current assertion that “absolute zero is −273.15 Celsius degrees” is a statement that makes sense only if we consider Eq. (6.25) as the relation that defines
the Celsius scale outside the interval in which it was initially defined; otherwise it
makes no sense.
6.3.1 Other Determinations of the Temperature Scale
There are different routes to determine the absolute temperature scale but the one
previously discussed is the closest to the so-called “perfect gas thermometer”.
An alternative route was already considered when we discussed the issue of the
maximum efficiency of a heat engine. On that occasion we have demonstrated, as a
consequence of the Second Principle, that the maximum efficiency of a heat engine
that operates between two thermostats is given by
ηmax = 1 − T2/T1 ,
where T1 and T2 are the values of the temperatures (respectively, of the “hot” source
and of the “cold” source) measured in the new absolute scale and hence, if we measure
(asymptotically) this maximum efficiency, we obtain a measure of the ratio (T2 /T1 ).
Another way would be to measure the energy density of the black-body radiation
or, equivalently, the radiation pressure at thermodynamic equilibrium. From thermodynamics, it results that these quantities are proportional to the
fourth power of the temperature measured in absolute scale, as it will be shown in
Sect. 12.4:
p(T1)/p(T2) = (T1/T2)⁴ .
Also, in this case, the measurement of the ratio between the pressures at two different
temperatures will provide an absolute measure of the ratio between the absolute
temperature values. It remains then to make use of the arbitrariness that is left to us,
in analogy with the procedure outlined in the case of the gas thermometer, in order
to determine the scale completely.
6.4 The Universal Constant of Gases
We have seen, with solid experimental evidence, that the first virial coefficient A
for gases is proportional to the absolute temperature.
Since the volume is an extensive quantity, the first virial coefficient must also be
proportional to the quantity of gas contained in the vessel. If we choose to measure
the amount of gas through the number of moles n we can write
A=nRT .
Practice shows that, with this choice, the proportionality coefficient R has the
same value for all gases. For this reason, it is called the universal gas constant. The
value of the universal gas constant is
R = 8.31 J mol−1 K−1 = 0.082 atm L mol−1 K−1 = 1.987 cal mol−1 K−1 . (6.29)
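As a quick consistency check, the three quoted values of R in Eq. (6.29) are the same constant expressed in different units; the conversion factors below (1 atm = 101325 Pa, 1 cal = 4.184 J) are standard and not taken from the text.

```python
# Checking that the three quoted values of R are one constant in
# different units.
R_SI = 8.314           # J mol^-1 K^-1
atm = 101325.0         # Pa per atmosphere
litre = 1.0e-3         # m^3 per litre
cal = 4.184            # J per (thermochemical) calorie

R_atm_L = R_SI / (atm * litre)   # atm L mol^-1 K^-1
R_cal = R_SI / cal               # cal mol^-1 K^-1
print(f"R = {R_atm_L:.4f} atm L/(mol K)")   # ~0.082
print(f"R = {R_cal:.3f} cal/(mol K)")       # ~1.987
```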
6.5 The Joule–Thomson Coefficient
The Joule–Thomson experiment reproduces, under controlled conditions, the adiabatic expansion of a gas against a constant pressure (as it could be, e.g., the expansion
of a gas contained in a cylinder or the expansion of steam at high pressure in a turbine,
against the atmospheric pressure). In general the temperature of the gas undergoing
such an adiabatic expansion will change. It may increase or decrease depending on
the initial state, as will be discussed in Sect. 6.6. For instance, given the initial temperature, if the density of the gas is high enough the temperature will increase under
expansion, while for relatively low initial pressure the temperature will decrease, and
this technique can be used to lower the gas temperature as described in Sect. 6.6.1.
We have seen that this process can be well approximated by an expansion at constant enthalpy; it is then appropriate to define the so-called Joule–Thomson coefficient
as the change in temperature per unit change of pressure in an isoenthalpic process.
Making use of Eq. (A.8) applied to the function H = H(p, T) we get
C_H ≡ (∂T/∂p)_H = − (∂H/∂p)_T / (∂H/∂T)_p .   (6.30)
Let us use Eq. (6.7) to calculate (∂H/∂p)_T in the second member of Eq. (6.30):
(∂H/∂p)_T = (∂G/∂p)_T − T ∂²G/∂p∂T = V − T (∂V/∂T)_p = V − T α V = V (1 − αT) ,   (6.31)
where α is the coefficient of thermal expansion. Recalling the definition of heat
capacity at constant pressure C_p = (∂H/∂T)_p we obtain the expression for the
Joule–Thomson coefficient:
C_H = (∂T/∂p)_H = (V/C_p)(αT − 1) .   (6.32)
As we can easily see, C_H is an intensive quantity, positive or negative according to the
sign of the quantity (α − 1/T ). In the former case the gas under expansion cools, in
the latter it becomes hotter.
In general, to be able to make predictions about the behavior of a gas in this kind
of expansion, it is necessary to assume a particular equation of state. For a gas which
obeys the equation of state pV = RT (ideal gas), one immediately obtains
α = (1/V)(∂V/∂T)_p = 1/T ,
and it follows that in this case the Joule–Thomson coefficient is identically zero. For
such a gas, the temperature will not change in the isoenthalpic expansion and this
implies that p1 V1 = p2 V2 . Since we must have H1 = H2 , it follows that
U1 = U2 .
This shows that, for a system with this equation of state, the energy depends only on
the temperature.
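The chain α = 1/T ⇒ C_H = 0 for an ideal gas can be checked numerically with a finite difference; the sketch below is an illustrative aside, not part of the text.

```python
# Numerical check: for one mole of ideal gas (pV = R*T) the expansion
# coefficient alpha = (1/V)(dV/dT)_p equals 1/T, so V*(alpha*T - 1) = 0
# and the Joule-Thomson coefficient vanishes.
R = 8.314       # J mol^-1 K^-1
p = 101325.0    # Pa, held fixed

def V(T):
    return R * T / p

def alpha(T, dT=1e-3):
    # centered finite difference for (1/V) dV/dT at constant p
    return (V(T + dT) - V(T - dT)) / (2.0 * dT) / V(T)

T = 300.0
print(abs(alpha(T) - 1.0 / T) < 1e-10)          # alpha = 1/T
print(abs(V(T) * (alpha(T) * T - 1.0)) < 1e-8)  # numerator of C_H vanishes
```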
We see then that the cooling or heating of a gas in response to an expansion
depends on whether
α ≷ 1/T .
If we adopt a phenomenological equation of state as shown in Eq. (6.43), it will be
the second virial coefficient B(T), or more precisely its temperature dependence,
that decides the matter (calling into play also the third virial coefficient, the algebra
is complicated a little bit, and the condition will also depend on the pressure). In this
approximation we see that we can write in all generality
α V = (∂V/∂T)_p = (nR/p) + n dB(T)/dT ,
where B = B(T) is the second virial coefficient per mole. Replacing the previous
expression in Eq. (6.32) we write
C_H C_p = α V T − V = −n B(T) + nT dB(T)/dT ,
obtaining at the end
C_H = (n/C_p) [ T dB(T)/dT − B(T) ] .
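Anticipating the phenomenological form B(T) = b − a/(RT) of Sect. 6.7, the sign of C_H can be explored numerically. The constants a and b below are illustrative, of the van der Waals order of magnitude for nitrogen; they are an assumption, not data from the text, and this simple model is known to overestimate the experimental inversion temperature.

```python
# Sign of C_H ~ (n/C_p) * (T*dB/dT - B) with B(T) = b - a/(R*T).
# a, b are illustrative van der Waals-like constants for nitrogen (assumed).
R = 8.314        # J mol^-1 K^-1
a = 0.137        # J m^3 mol^-2 (illustrative)
b = 3.87e-5      # m^3 mol^-1  (illustrative)

def B(T):
    return b - a / (R * T)

def dBdT(T):
    return a / (R * T * T)

def jt_numerator(T):
    """T*dB/dT - B(T) = 2a/(R*T) - b; positive means cooling on expansion."""
    return T * dBdT(T) - B(T)

T_inv = 2.0 * a / (R * b)   # low-pressure inversion temperature of this model
print(f"model inversion temperature: {T_inv:.0f} K")
print("cools at 300 K:", jt_numerator(300.0) > 0)   # below T_inv
print("cools at 900 K:", jt_numerator(900.0) > 0)   # above T_inv
```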
6.6 The Inversion Curve
From the general introductory discussion made in the previous section, we have seen
that for all gases, if we represent the thermodynamic states in a plane (p, T), the
equation
α(p, T) = 1/T   (6.39)
defines the locus of the points (thermodynamic states) in which the Joule–Thomson
coefficient is zero. This means that, if we prepare the gas in an initial state at a given
temperature, (α − 1/T ) and therefore C H change sign while the pressure varies. The
discussion may be somehow detailed after we have made a choice on the equation of
state to be adopted but in general we expect that, at a given temperature, C H will be
negative for the high pressures and positive for the lower pressures. Indeed at high
pressures, that is at relatively high densities, the repulsion between nuclei dominates
in the sense that the average potential energy between the molecules is positive as
it is made clear in Fig. 8.1 which shows the two essential features of intermolecular
interactions (at short and large distance) valid for every equation of state. We expect
that, under expansion, the average distance between molecules increases and then
the average potential energy decreases in favor of an increase of the average kinetic
energy.
Conversely, at low pressure, the intermolecular potential energy is dominated by
the attractive part of the interaction and the average potential energy will be negative.
Under expansion, the potential energy will increase at the expense of the average kinetic energy.
At ordinary temperatures and pressures for hydrogen and, more in general, for the
noble gases, the attractive interaction is very weak and the deviation from the ideal
gas equation of state is due mainly to the repulsive part.
The curve described by Eq. (6.39) is the locus of the points where the behavior of
the gas, subject to an isoenthalpic expansion (cooling or heating), changes and for
this reason it is called the inversion curve.
Fig. 6.2 a States at constant enthalpy downstream the throttling of the gas starting from the state
p1 , T1 before throttling. The maximum of the curve defines the inversion temperature Ti , and
pressure pi , at a fixed enthalpy H . b Dashed line is the inversion curve of a given gas obtained by
the collection of different inversion temperatures and pressures for different enthalpies (different
initial states). According to the definition of C H given in Eq. (6.32), positive (negative) derivative
(∂ T /∂ p) H accounts for gas cooling (heating)
In Fig. 6.2a the procedure to determine the inversion temperature is outlined. For
a given initial state (p1, T1) upstream of the throttling, we select different final pressures
p2, p2′, p2″, ··· < p1 downstream of the throttling, and we measure T2, T2′, T2″, ….
The collection of these final states determines a curve T = T(p) which is at constant enthalpy, showing a maximum value at (pi, Ti)_H. At this specific state both
[α(p, T) − 1/T] and C_H(p, T) change sign. By repeating the procedure for different initial states, we obtain a family of curves, each of them being at constant
H whose value is fixed by the choice of the initial state. Hence, we determine a
collection of different inversion points (pi, Ti) forming the inversion curve either in
the (T, p) or (p, T) plane (see Fig. 6.2b).
6.6.1 Liquefaction of Gases and the Attainability of Low Temperatures
For every gas, there is a temperature Tcr, known as the critical temperature, such that
the liquefaction of the gas is possible only below Tcr. This argument will be treated
extensively in Chap. 7. To liquify the gas it is necessary to bring it below the critical
temperature and then compress it isothermally. If we already possess bodies at a
sufficiently low temperature there is no problem at least in principle, but if the lowest
available temperature is above the critical temperature of the gas, the latter cannot
be liquefied by a simple isothermal compression. It is necessary to lower its temperature further, and this can be done with an isoenthalpic expansion according to the
procedure outlined in Sect. 6.5. In general, for obvious technical reasons, it will be
convenient to expand the gas against the atmospheric pressure p0; then our problem
can be set in this way: given the initial temperature T0, how shall we select the initial
6.6 The Inversion Curve
pressure p such that we obtain the maximum temperature drop in one isoenthalpic
expansion against the atmospheric pressure? If we denote by T the value of the
temperature of the gas after the expansion, we may write
T − T0 = ∫_p^{p0} (∂T/∂p)_H dp .   (6.40)
In the above Eq. (6.40), p0 and T0 are given and then we may consider the value of
the final temperature as a function only of the initial pressure p chosen for expansion,
formally T = T ( p). The derivative of this function with respect to p leads to
dT/dp = − (∂T/∂p)_H ,
and then the minimum value for T will be obtained if we choose the initial value of
the pressure p in such a way as to have dT/dp = 0, in other words in such a way that
(∂T/∂p)_H = 0 .
This means that the maximum effect will be obtained if we start from the inversion
curve at the given temperature T0 . Let us consider, as an example, the following
Example 6.1
Nitrogen. The critical temperature of Nitrogen is about 126.2 K so it was possible
to liquefy it by using thermostats at relatively high temperatures.
Hydrogen. The critical values for Hydrogen are pcr ≃ 13 atm and Tcr ≃ 33.18 K.
In Fig. 6.3, the inversion curve for Hydrogen is represented in a plane ( p, T ) and we
see that the maximum pressure drop is obtained at a temperature of about 64–65 K
which is almost twice its critical temperature. At this temperature, the pressure on
the inversion curve of Hydrogen is about 160 atm and this would be the situation
for maximum cooling effect in one single expansion. In practice, it is convenient to
work at T ≃ 72 K and p ≃ 140 atm. This temperature can be reached by evaporating
liquid nitrogen at low pressure.1
Helium. The gas with the lowest critical temperature is Helium (He) with Tcr ≃ 5.3 K and pcr ≃ 2.26 atm.
Summing up, the mechanism to obtain low temperatures is the following: we start
from easily accessible temperatures and we liquefy a gas having a relatively high
critical temperature. By evaporating the obtained liquid below the saturation pressure,
we can lower its temperature and then liquefy other gases having a lower critical
temperature. By iterating the process, we arrive at liquid Helium. With this technique,
1 Low pressure means, in this case, pressure below the saturation pressure at that temperature (see Sect. 7.2.1).
Fig. 6.3 Inversion curve for Hydrogen. Data are taken from Sommerfeld, Thermodynamics and Statistical Mechanics - Lectures on Theoretical Physics, [7]
we can reach temperatures of the order of a fraction of one degree. To proceed
further toward the attainment of lower temperatures it is necessary to resort to the
adiabatic demagnetization of some paramagnetic substances. The issue will be briefly
discussed in Sect. 11.7.
6.7 A Simple Approximation of the Isothermal Behavior
of Gases
Let us start from the virial expansion given by Eq. (6.2) and write an expansion
for the volume, in the form:
V = Ap −1 + B + C p + · · · .
For temperatures well below the critical temperature (down to roughly T ≃ 0.65 Tcr), the
contribution of the third virial coefficient is negligible up to pressures of more than
100 atmospheres and then, for our purposes, it will be sufficiently accurate to consider
the equation of state up to the second virial coefficient:
Vm = (RT/p) + B(T) .   (6.43)
As regards the temperature dependence of the second virial coefficient, experimental
observations agree well with a relationship of the type:
B(T) = b − a/(RT) ,
in which a and b are phenomenological constants depending on the particular gas.
This choice of the symbols a, b to indicate these two constants is not accidental but
finds its explanation when we discuss the properties of the van der Waals equation of
state in Chap. 8.
Meanwhile, we note that for every gas there is a temperature, called “Boyle temperature” TB , at which the second virial coefficient vanishes. If the pressure is not
too high, so that Eq. (6.43) holds, we obtain the following expression for the Boyle
temperature in terms of the two phenomenological constants a and b:
TB = a/(Rb) .
At this temperature the gas seems to be optimally described by the equation of ideal
gases, but we must be aware that many important properties of the gas depend also
on the derivative of the second virial coefficient with respect to temperature, and the
latter is by no means zero. One example is the Joule–Thomson coefficient.
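With the phenomenological form B(T) = b − a/(RT), the Boyle temperature follows in one line. The a, b values below are illustrative assumptions of the van der Waals order for nitrogen; the resulting TB should not be read as an experimental value.

```python
# Boyle temperature: B(T_B) = 0 for B(T) = b - a/(R*T) gives T_B = a/(R*b).
# a, b are illustrative constants (assumed), not data from the text.
R = 8.314      # J mol^-1 K^-1
a = 0.137      # J m^3 mol^-2 (illustrative)
b = 3.87e-5    # m^3 mol^-1  (illustrative)

def B(T):
    return b - a / (R * T)

T_B = a / (R * b)
print(f"T_B = {T_B:.0f} K")
print(f"B(T_B) = {B(T_B):.3e}")   # vanishes (up to rounding) at T_B
```

Note that dB/dT = a/(RT²) does not vanish at TB, which is why the Joule–Thomson coefficient stays finite there.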
6.8 The Chemical Potential in Diluted Gases
It is convenient to treat the case of a single component gas separately from the general
case of a gas mixture.
Case of One Component Gas
Recalling that the Gibbs function per mole of a chemically pure phase coincides
with the chemical potential of that component, we may integrate Eq. (6.6) with the
approximated equation of state discussed in Eq. (6.43). Since Eq. (6.6) is a partial
differential equation, it may be integrated by choosing first a reference pressure p †
and then integrating at constant temperature. We obtain
μ(p, T) = RT ln(p/p†) + B(T)(p − p†) + μ†(T) ,
where μ†(T), the chemical potential at the reference pressure p†, is a function of
temperature only, to be determined. If we take the reference pressure p† low enough,
the product B (T ) p † will be very small and can be neglected so that a very good
approximation for the chemical potential of a gas can be written as
μ(p, T) = RT ln(p/p†) + B(T) p + μ†(T) .
In this expression, the term B (T ) p describes fairly well the correction to the chemical potential of an ideal gas due to the second virial coefficient B (T ).
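To get a feeling for the size of the correction term B(T)p, the sketch below evaluates both contributions for one mole at room conditions. The value of B is an illustrative assumption (of the order of the second virial coefficient of nitrogen near room temperature), not a number from the text.

```python
# Size of the two pressure-dependent terms in
# mu(p,T) = R*T*ln(p/p_ref) + B(T)*p + mu_ref(T).
# B is an illustrative second virial coefficient per mole (assumed).
import math

R = 8.314          # J mol^-1 K^-1
T = 300.0          # K
B = -5.0e-6        # m^3 mol^-1 (illustrative)
p = 101325.0       # Pa (1 atm)
p_ref = 1000.0     # Pa, an arbitrary low reference pressure

ideal_part = R * T * math.log(p / p_ref)   # ideal-gas part of mu - mu_ref
correction = B * p                         # non-ideality correction
print(f"ideal-gas part:        {ideal_part:.0f} J/mol")
print(f"virial correction B*p: {correction:.2f} J/mol")
```

The correction is a fraction of a joule per mole against roughly ten kilojoules, which is why the ideal-gas form of μ works so well at moderate pressures.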
As far as the function μ† (T ) is concerned, if the reference pressure p † is low, it can
be modeled fairly well. It depends on the microscopic nature of the gas and it results
that different gases can be grouped into four categories: monoatomic molecules,
diatomic molecules, linear polyatomic and nonlinear polyatomic molecules (the same
grouping which is very appropriate in the study of the heat capacities of gases treated
in Sect. 6.9). The subject is dealt with in an exhaustive way in [3], to which the reader
is referred for a complete and in-depth discussion of the topic.
For an ideal gas, the chemical potential may be written as
μ(p, T) = μ†(T) + RT ln(p/p†) .
Case of a Multi Component Gas
Suppose that the gas is formed by a mixture of several components γ which are
present with a molar concentration Cγ, given by
Cγ = nγ / Σρ nρ ,
and let’s limit ourselves to the case of gases whose equation of state, when considered
individually, can be well described by the ideal gas equation. Moreover, if the different
gases are weakly interacting with each other, the Dalton law holds and the partial
pressure pγ of each component is
pγ = Cγ p ,   (6.50)
and for each component γ we can write the chemical potential in the form:
μγ(p, T) = μ†γ(T) + RT ln(pγ/p†) .
By making use of Eq. (6.50) we obtain
μγ(p, T) = μ†γ(T) + RT ln(p/p†) + RT ln Cγ .   (6.51)
The above Eq. (6.51) can also be written in the following form:
μγ(p, T) = ηγ(p, T) + RT ln Cγ ,
which will be used in the following chapters. This relation is important because it
highlights the dependence of the chemical potential of each component on its (molar)
concentration.
6.9 Molar Heat at Constant Volume for Dilute Gases
The study of the molar heats of dilute gases offers an important opportunity to verify
in what way, and to what depth, measurements made by the macroscopic observer
provide fundamental information on the structure of the microscopic world.
The bridge between macroscopic observations and the atomic-molecular theory
of matter is based on the statistical-mechanical model. In particular, it is rooted in
the theorem of equipartition of energy, which will be briefly discussed shortly.
First of all it is appropriate to start from the definition of the heat capacity given
in Eq. (5.19) and let us assume that the gas is sufficiently diluted that the mutual
interaction between different molecules can be disregarded. In this approximation,
the total energy of the system can be written as the sum of the energies of the
individual particles
E = Σ_{i=1}^{N} ε(i) ,
where ε(i) is the total energy of one of the N elementary constituents of the gas
(atoms or molecules). Within this approximation, in a small transformation at constant volume, the amount of energy injected into the system in order to produce an
increase of temperature, will be distributed among the elementary constituents. It is,
then, necessary to examine the structure of ε(i) from the point of view of its capability
of absorbing energy.
6.9.1 Microscopic Degrees of Freedom
Throughout this section, in order to describe the structure of ε(i), we will use the
Hamiltonian formalism because this is useful both for an introduction to the energy
equipartition theorem and for a discussion of the role of quantum mechanics. Let us
consider, first, the simplest case of monoatomic molecules for which the total energy
is entirely given by the kinetic energy of a point mass
ε(i) = εtra = (px² + py² + pz²)/(2m) ,   (6.54)
where m is the mass of the molecule and px , p y and pz are the three components of
the momentum.
As a next step toward the more general case, let’s consider a rigid polyatomic
molecule. By “rigid” we mean that the individual atoms form a rigid structure or, in
other words, we assume that oscillations of atoms near their equilibrium positions
are absent.
In this case, the total kinetic energy of the molecule can be split into the sum of
the translation term analogous to that in Eq. (6.54), where the velocity is the center of
mass (c.m.) velocity, plus the kinetic energy in the c.m. frame which can be written
εrot = L1²/(2I1) + L2²/(2I2) + L3²/(2I3) ,   (6.55)
where L1, L2, and L3 are the projections of the angular momentum along the three
principal axes of inertia and I1 , I2 , and I3 are the corresponding three principal
moments of inertia. Obviously, for a one-dimensional molecule, as, for instance,
a diatomic molecule, the moment of inertia along the molecular axis will be considered
null and then the energy stored in the rotational modes in Eq. (6.55) will consist of
only two terms.
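Counting the quadratic terms in Eqs. (6.54) and (6.55), the equipartition theorem (anticipated here, and discussed below) assigns ½R per mole to each of them. The helper below is a sketch of that bookkeeping, with vibrational contributions deliberately left out; it is an illustration, not part of the text.

```python
# Equipartition bookkeeping: each quadratic term in the molecular energy
# contributes (1/2)R to the molar heat at constant volume.  Vibrations are
# omitted here; see the discussion of the mode energies at the end of
# the section.
R = 8.314  # J mol^-1 K^-1

def molar_cv(translational=3, rotational=0):
    """Molar C_V from the number of quadratic kinetic-energy terms."""
    return 0.5 * (translational + rotational) * R

print(f"monoatomic gas:      C_V = {molar_cv():.2f} J/(mol K)")    # (3/2)R
print(f"rigid diatomic gas:  C_V = {molar_cv(rotational=2):.2f}")  # (5/2)R
print(f"rigid nonlinear gas: C_V = {molar_cv(rotational=3):.2f}")  # 3R
```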
More in general, if the rigidity assumption is released, we have to take into account
oscillatory motions of the atoms around their equilibrium positions. If the amplitude
of these oscillations is not too large, they can be described as harmonic oscillations.
In the simple case of a diatomic molecule we have only one oscillatory mode, along
the line connecting the two atoms, with a given fundamental pulsation ω0 . The energy
of the oscillatory motion, εvib, can be written as
εvib = (1/2)(pq² + ωq² q²) ,
where q is called normal coordinate of oscillation (in the simplest model of two
oscillating masses, as we are considering here as a starting point, it is the distance
between them), pq = q̇ is the corresponding generalized momentum and ωq is the
harmonic oscillation frequency relative to the q coordinate. In general, for polyatomic
molecules, the description of the oscillatory motions is a little bit more complicated
and deserves some careful discussion. For any structure subject to possible
oscillating motions, it is possible to identify a set of k fundamental modes, each having
an energy:
εα = (1/2)(pq,α² + ωq,α² qα²) ,
with the following properties:
(a) Every possible oscillatory motion can be expressed as a linear combination of
these fundamental modes;
(b) The fundamental modes are mutually independent.
The choice of these fundamental modes is not unique but their number is univocally determined: it depends on the number of atoms and defines the so-called number
of oscillatory degrees of freedom of the molecule. It can also depend on
the spatial form of the molecule: for example, for the same number of atoms, the
number of fundamental oscillatory modes for a linear molecule exceeds by one unit
the number of oscillatory degrees of freedom for a nonlinear molecule. The reason
for this is the following: the number of coordinates necessary and sufficient to determine the spatial configuration of a set of n points is 3n. Three of them are used for the
position of the center of mass and, for a nonlinear structure, three more are used for
the angular coordinates of the “quasi-rigid” configuration. The oscillatory degrees
of freedom left are then 3n − 6. If the molecule is linear only two angular coordinates are required and then 3n − 5 oscillatory modes are necessary for a complete
description. As we shall see in Sect. 6.9.3, this significantly changes the macroscopic
property we are discussing in this section.
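This counting rule is easy to encode; the following sketch (the function name and interface are ours, purely illustrative) reproduces the 3n − 5 / 3n − 6 bookkeeping:

```python
def vibrational_modes(n_atoms: int, linear: bool) -> int:
    """Number of fundamental oscillatory modes of a molecule:
    3n - 5 if the molecule is linear, 3n - 6 if it is nonlinear."""
    if n_atoms < 2:
        return 0  # a single atom has no vibrational modes
    return 3 * n_atoms - 5 if linear else 3 * n_atoms - 6

print(vibrational_modes(3, linear=False))  # H2O, nonlinear, n = 3 -> 3 modes
print(vibrational_modes(3, linear=True))   # CO2, linear, n = 3 -> 4 modes
print(vibrational_modes(2, linear=True))   # diatomic -> 1 stretching mode
```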
Let us consider the case of water, H2O, a polyatomic nonlinear molecule with n = 3. For this molecule the number of vibrational degrees of freedom is k = 3. Two main types of vibrational modes can be identified: stretching and bending vibrations. In the stretching modes it is the length of the bonds that changes. This can occur either by symmetric stretching (ss), where the hydrogen bonds change their length in phase (both bonds vibrate in and out together), or by asymmetric stretching (as), in which one bond shortens as the other elongates. Conversely, in the bending modes it is the angle between the hydrogen bonds that changes, as if they were the blades of a pair of scissors. For this reason the bending modes of the water molecule are commonly referred to as scissoring modes. By infrared spectroscopy it is possible to measure the
energy of these vibrational modes in terms of the modes’ wavenumber ν̃ = 1/λ,
λ being the wavelength of a given mode. Both stretching modes are found at ν̃ss,as ≈ 4000 cm−1, corresponding to ℏωss,as = hc/λss,as ≈ 0.5 eV. Conversely, the bending mode (scissoring) is found at ν̃ ≈ 2000 cm−1, corresponding to an energy ℏωbend ≈ 0.25 eV. Compared with the thermal energy kB T ≈ 1/40 eV at T = 300 K, the energies ℏωss,as and ℏωbend of all these vibrational modes are larger by at least one order of magnitude. Therefore, at room temperature the vibrational modes of the water molecule turn out to be essentially frozen. In general, the contribution of the oscillations to the total energy of the molecule will be written as

εvib = (1/2) Σα ( pq,α² + ωq,α² qα² ) ,   (6.57)
where the summation is performed over the selected fundamental oscillatory modes, designated by the index α according to the brief discussion above [8].
Summing up, in general, for polyatomic molecules and in the absence of external
fields, the total energy of the single elementary constituent may be written as
ε(i) = εtra + εrot + εvib ,   (6.58)
and if we make use of Eqs. (6.54), (6.55) plus the appropriate number of oscillatory
terms like those described in Eq. (6.56), we see that ε(i) is the sum of terms in which
the generalized coordinates and the conjugate momenta appear quadratically; for this reason these terms are also called harmonic.
The brief discussion carried out to this point aims at exploring all the possible
relevant contributions to the expression of the total energy of a molecule, coming
from the analysis of its atomic structure. Indeed, we know that also the single atom, in
turn, is a system endowed with an internal structure and this would give rise to further
terms in the expression of the total energy. These further internal degrees of freedom are linked to the existence of the electronic component, but their contribution to the variations of energy becomes significant only when the temperature, and therefore the amount of energy involved in the intermolecular collisions, is close to the dissociation energy of the molecules. We shall confine ourselves to much lower temperatures, and their contribution to the total energy will be considered as a constant. It follows that, for temperature variations in the interval we are interested in here, they will not contribute to energy variations.
6.9.2 Energy Equipartition
The theorem of energy equipartition constitutes a milestone in the mechanical-statistical theory of matter and allows us to establish a very important connection
between the molar heat capacities at constant volume and the number of microscopic
modes among which the energy of the molecule can be distributed. This important
result is valid provided that the following conditions are satisfied:
(1) The terms in Eq. (6.58) must be harmonic, meaning that they all depend quadratically on the Hamiltonian conjugate coordinates p, q. In Eq. (6.58), we have
written the energy as the sum of different contributions each depending either
on one coordinate or on one conjugated momentum. For instance, the term εtra
depends quadratically on the three momenta px , p y and pz and, similarly, εrot
depends quadratically on the components of the angular momentum. They do
not depend on the conjugated coordinates as would happen if the system were
immersed in an external field. In this case, the energy would depend on the position or the orientation of the molecule as we shall see in Sect. 10.5 for polarized
systems. Indeed, in the latter case the energy depends on the cosine of an angular
coordinate and not quadratically and this term is not harmonic according to our
definition. In the case of εvib, the contribution depends quadratically both on the coordinate qα and on the conjugate momentum, as shown in Eq. (6.57), and both terms are harmonic.
(2) The energy contributions are treated classically. This is the second fundamental
requirement. Consider, as an example, one term relative to the energy of rotation.
In the quantum description, the energy levels are distributed according to a
discrete spectrum whose fundamental energy level is of the order

E0rot ∼ ℏ²/2I ,   (6.59)

where ℏ is the reduced Planck constant and I the molecule's moment of inertia. Let us introduce a characteristic temperature Trot , defined by

kB Trot ∼ ℏ²/2I .   (6.60)
For T < Trot , the energy transferred in molecular collisions will not be sufficient
to excite higher rotational energy levels. In this case, the rotational degrees of
freedom are said to be frozen in the sense they are unable to absorb their share of
energy when the temperature is mildly increased. On the contrary, at temperature
T > Trot the rotational energy levels are populated. When high quantum numbers are reached, the spacing between adjacent energy levels is very small compared to kB T, the spectrum appears continuously distributed, and hence the rotations behave classically. With the same argument, it is possible to define another characteristic temperature Tvib , the temperature below which the vibrational modes are frozen:

kB Tvib ∼ ℏωq /2 .   (6.61)
The theorem of energy equipartition states that, if conditions (1) and (2) stated at
the beginning of Sect. 6.9.2 are fulfilled, then at thermodynamic equilibrium every
term in Eq. (6.58) contributes with (1/2) kB T to the average energy of the single
molecule, ⟨ε(i)⟩.
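The (1/2) kB T share of a single harmonic term can be checked with a classical Boltzmann average over one quadratic variable; this standard calculation (not carried out in the text) goes as follows, with x standing for any coordinate or momentum appearing quadratically in Eq. (6.58):

```latex
\left\langle \tfrac{1}{2}\,a\,x^{2} \right\rangle
= \frac{\displaystyle\int_{-\infty}^{+\infty} \tfrac{1}{2}\,a\,x^{2}\,
        e^{-\beta a x^{2}/2}\,\mathrm{d}x}
       {\displaystyle\int_{-\infty}^{+\infty} e^{-\beta a x^{2}/2}\,\mathrm{d}x}
= -\frac{\partial}{\partial\beta}
   \ln\!\int_{-\infty}^{+\infty} e^{-\beta a x^{2}/2}\,\mathrm{d}x
= -\frac{\partial}{\partial\beta}\,\ln\sqrt{\frac{2\pi}{\beta a}}
= \frac{1}{2\beta} = \frac{1}{2}\,k_{B}T ,
\qquad \beta \equiv \frac{1}{k_{B}T}.
```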
Since in the thermodynamical context the total macroscopic energy U is given by the sum of the average energies of the single molecules at equilibrium:

U = ⟨U⟩ = Σi ⟨ε(i)⟩ ,

the total molar energy will be given by

Um = NA (f/2) kB T = (f/2) R T ,   (6.62)
where f is the number of terms appearing in Eq. (6.58) and defines the number of
microscopic degrees of freedom of the gas. From Eq. (6.62) we obtain the molar heat
at constant volume:
CV = (f/2) R .   (6.63)

Equation (6.63) is described by saying that every microscopic degree of freedom contributes (1/2) R to the molar heat at constant volume of the gas.
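The counting rule of Eq. (6.63) in code (a sketch; f counts the quadratic terms: 3 translational, plus 2 or 3 rotational, plus 2 per classically active vibrational mode):

```python
R = 8.314  # molar gas constant, J mol^-1 K^-1

def molar_cv(f: int) -> float:
    """Molar heat at constant volume from equipartition, Eq. (6.63): C_V = (f/2) R."""
    return 0.5 * f * R

# monoatomic gas: f = 3 (translations only) -> C_V = (3/2) R
print(molar_cv(3))   # ~12.5 J mol^-1 K^-1
# diatomic gas with rotations active: f = 3 + 2 = 5 -> C_V = (5/2) R
print(molar_cv(5))   # ~20.8
# diatomic gas with the vibration also classical: f = 3 + 2 + 2 = 7
# (each vibrational mode contributes TWO quadratic terms, one in q and one in p_q)
print(molar_cv(7))   # ~29.1
```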
6.9.3 On the Temperature Dependence of Molar Heats
We shall discuss only the molar heat at constant volume which comes entirely from
the internal energy of the gas, the one at constant pressure being given by Eq. (5.22).
In Fig. 6.4, we schematically report the behavior of molar heat capacity at constant
volume as a function of temperature. For a monoatomic molecule, the energy is
completely described by the three translational terms if we disregard the electronic
contributions. The three microscopic degrees of freedom lead to

ε(i) = εtra ,  ⟨ε(i)⟩ = (3/2) kB T ,   (6.64)

Um = NA ⟨ε(i)⟩ = (3/2) R T ,   (6.65)

CV = (∂Um /∂T)V = (3/2) R ,   (6.66)

Eq. (6.64) being the average energy of a single molecule, Eq. (6.65) the energy of one mole, and Eq. (6.66) the molar heat capacity at constant volume.
For diatomic molecules, the number of degrees of freedom depends on temperature. At low temperatures, any gas behaves as if it were a monoatomic gas as already
discussed in Sect. 6.9.2. At increasing temperatures, the rotational terms start to
absorb energy and then the molar heat increases. When the rotational energy levels
behave classically, all the microscopic degrees of freedom are fully classical and the
energy equipartition holds. Then, the average energy of a molecule and the molar
heat at constant volume become
ε(i) = εtra + εrot ,  ⟨ε(i)⟩ = (5/2) kB T ,   (6.67)

CV = (∂Um /∂T)V = (5/2) R .   (6.68)
Fig. 6.4 Qualitative representation of the molar heat at constant volume CV as a function of temperature for a dilute gas

Table 6.1 Rotational and vibrational temperatures, Trot /K and Tvib /K, for some substances

By increasing the temperature, the molar heat of diatomic molecules remains constant at CV = (5/2) R until the energy levels of the oscillatory motion of the molecule begin to be excited, which happens at temperatures higher than the characteristic temperature Tvib defined in Eq. (6.61). Then the molar heat will increase again with temperature. When the vibrational mode is excited at high quantum numbers, it behaves classically and the energy equipartition applies again. In this range of temperatures, the average energy of the molecule and the molar heat are, respectively,
ε(i) = εtra + εrot + εvib ,  ⟨ε(i)⟩ = (7/2) kB T ,   (6.69)

CV = (∂Um /∂T)V = (7/2) R .   (6.70)
For polyatomic molecules, we have to distinguish between linear molecules and
nonlinear molecules. In the former case the situation is similar to that of the diatomic
molecule as far as the rotational degrees of freedom are concerned, but it differs in
the counting of vibrational degrees of freedom. In our simplified model, the molar
heat capacity at constant volume at increasing temperatures will assume the values
(3/2) R, (5/2) R plus the contributions of the vibrational modes whose number
depends on the number of atoms according to the discussion in Sect. 6.9.1.
For nonlinear molecules, the values of the molar heat capacity at constant volume, at increasing temperature, will be (3/2) R and 3R plus the contributions of the vibrational modes, whose number depends on the number of atoms according to the discussion in Sect. 6.9.1 as before. It is important to highlight the sharp difference, at intermediate temperatures, which depends only on the different spatial shape of the molecule.
It is interesting to consider the order of magnitude of the rotational and vibrational
characteristic temperatures defined for some substances.
As can be seen from Table 6.1, at ordinary temperatures we are typically in the interval Trot < T < Tvib for several substances. Therefore, the vibrational modes can be considered
frozen, while the rotational degrees of freedom can be treated as classical.
At temperatures T > Tvib the molar heat increases. However, it is hard to describe the vibrational degrees of freedom as fully classical, since dissociation or ionization processes begin to take place. The possible contribution of the electronic transitions to the molar heat requires even higher temperatures and is not considered here.
Chapter 7
Phase Transitions
Abstract In this chapter, we introduce the study of phase equilibria and phase
transitions. Phenomenological phase diagrams are shown and linked to the Clausius–Clapeyron equations of phase equilibrium, obtained from the application
of the general principles of thermodynamics. The latent heats and the conditions for
solid–liquid, solid–vapor, and liquid–vapor equilibrium are briefly discussed together
with the temperature dependence of latent heats. Finally, the continuity of gaseous
and liquid states is presented and continuous phase transitions are discussed on the
basis of the Ehrenfest relations to be applied to those phase changes in which there
is no evidence of latent heats.
Keywords Phase · Latent heats · Triple point · Critical point · Continuous phase
transitions · Phase diagrams · Melting point
7.1 Phase Equilibrium
All substances show the property of appearing in different states of aggregation
when they are in suitable combinations of pressure and temperature. According to
the discussion in Sect. 3.5.1, different states of aggregation, which at equilibrium are considered as homogeneous parts, are different phases of the system, and the transformation of matter from one state of aggregation to another is called a phase transition.
In Chap. 4, we have proved that Eq. (4.66) is the general condition which ensures the
equilibrium with respect to the transfer of matter between the two phases. In general,
the chemical potential of one component is a function of pressure, temperature, and
the concentrations {Ci }i=1,...,k of all the other independent components, as we have
seen in Eq. (4.70).
Let us consider, for simplicity, two chemically pure phases, i.e., two phases composed of only one component,¹ and let the two phases be in thermal and mechanical equilibrium (i.e., at the same pressure and temperature).
¹ Alternatively, we can consider phases consisting of ideal solutions, i.e., phases composed of a mixture of different noninteracting components. We can assume that the equation of state of each component does not depend on the presence of the other components. Moreover, as we have seen, with few formal changes we can consider phases at different pressures.
© Springer Nature Switzerland AG 2019
A. Saggion et al., Thermodynamics, UNITEXT for Physics,
For each component, we
write the equilibrium condition, with respect to the transfer of matter, in the following form:
μ1 ( p, T ) = μ2 ( p, T ) .   (7.1)
The equality set by Eq. (7.1) provides a relation between pressure and temperature
at phase equilibrium. Given one of the two intensive variables, the value of the other
variable for maintaining the phase equilibrium is determined by Eq. (7.1). For this
reason, a system of two phases in mutual equilibrium constitutes a system with only
one degree of freedom. Since we aim to obtain the dependence p = p(T ) for two
phases at equilibrium, let us differentiate both sides of Eq. (7.1):
(∂μ1 /∂p)T dp + (∂μ1 /∂T)p dT = (∂μ2 /∂p)T dp + (∂μ2 /∂T)p dT .   (7.2)
According to Chap. 5, we have

(∂μ/∂p)T = Vm ,   (7.3)

(∂μ/∂T)p = −Sm ,   (7.4)
Vm , Sm being the molar volume and molar entropy respectively. By using Eqs. (7.3)
and (7.4), the expression in Eq. (7.2) may be written in the familiar form:
Vm(1) dp − Sm(1) dT = Vm(2) dp − Sm(2) dT ,   (7.5)

that is

dp/dT = [ Sm(2)( p, T ) − Sm(1)( p, T ) ] / [ Vm(2)( p, T ) − Vm(1)( p, T ) ] = ΔS/ΔV ,   (7.6)
where Δ denotes the change of either V or S between the final state (2, p, T ) and the initial state (1, p, T ). These two states have the same pressure and the same temperature but differ only in the state of aggregation. In the last member of Eq. (7.6), the index that refers to the molar quantities has been dropped because both the volume and the entropy are proportional to the number of moles.
This equation is well known as Clapeyron's equation. It allows us to
calculate the change in the equilibrium pressure due to a small change in temperature.
7.2 Latent Heat
Since the transition (1, p, T ) → (2, p, T ) connects two states having the same temperature, in Eq. (7.6) for the entropy change we may write

ΔS = Q1→2 /T ,   (7.7)
where Q 1→2 is the amount of heat supplied to the overall system (1 + 2) in the phase
transition. Since the transformation occurs also at constant pressure we will have
Q1→2 = ΔH .   (7.8)
The quantity ΔH is called the latent heat of the transformation. The latent heat is given by the enthalpy variation between the two states and represents the quantity of heat that has to be supplied to the overall system when a certain quantity of matter undergoes a phase transition while remaining in phase equilibrium with the other phase. If the specific volumes of the two phases differ, some work must be done in the phase transition. Hence the latent heat differs from the energy difference ΔU by this amount.
Introducing the latent heats, Clapeyron's equation can be written in the common form:

dp/dT = ΔH / (T ΔV) .   (7.9)
7.2.1 Liquid–Vapor Equilibrium
In this case, we set phase 1 as the liquid phase and phase 2 as the vapor phase. For temperatures T ≲ 0.65 Tcr (see Sect. 8.3.2), the second molar virial coefficient of the vapor, as expressed by Eq. (6.2), is of the order of 10−3 − 10−4 relative to (RT /p), and then the vapor is well described by the equation of state of an ideal gas. In addition, with good approximation, we can neglect the volume of the liquid relative to that of the vapor:

Vmvap − Vmliq ≈ Vmvap ≈ RT /p ,   (7.10)

then Clapeyron's equation becomes

dp/dT = ΔHm p / (R T²) ,   (7.11)
that is

d ln p/dT = ΔHm / (R T²) .   (7.12)
In a temperature range in which the latent heat of evaporation may be considered
practically constant, Eq. (7.12) can be integrated and we obtain:
p = p† exp[ −(ΔHm /R) ( 1/T − 1/T† ) ] ,   (7.13)
where p † is the saturated vapor pressure at a reference temperature T † . The relation
given in Eq. (7.13) may be used to measure the molecular weight of a substance. We
measure the saturated vapor pressure at different temperatures and plot (ln p) versus (1/T ). The slope of the straight line will be equal (in absolute value) to ΔHm /R. Then, if we supply a quantity of heat equal to the latent heat thus obtained, the decrease in weight of the quantity of liquid, expressed in grams, will be equal to the molecular weight.
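The fitting procedure just described can be sketched numerically. The "measured" pressures below are synthetic, generated from Eq. (7.13) itself with an assumed ΔHm = 40.7 kJ/mol (hypothetical data, for illustration only); the least-squares slope of ln p versus 1/T then returns ΔHm in absolute value:

```python
import math

R = 8.314  # molar gas constant, J mol^-1 K^-1

# Synthetic "measurements" generated from Eq. (7.13) with an assumed latent heat.
dHm = 40.7e3                    # J/mol (assumed value, for illustration)
T_ref, p_ref = 373.0, 101.325   # hypothetical reference point (K, kPa)
temps = [343.0, 353.0, 363.0, 373.0]
press = [p_ref * math.exp(-dHm / R * (1.0 / T - 1.0 / T_ref)) for T in temps]

# Least-squares slope of ln p versus 1/T.
xs = [1.0 / T for T in temps]
ys = [math.log(p) for p in press]
n = len(xs)
xbar, ybar = sum(xs) / n, sum(ys) / n
slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
         / sum((x - xbar) ** 2 for x in xs))

dHm_recovered = abs(slope) * R  # |slope| = Delta H_m / R
print(dHm_recovered)            # ~40.7e3 J/mol, the value we put in
```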
7.2.2 Equilibrium Between Condensed Phases: Solid–Liquid
In this case Clapeyron's equation will be

dp/dT = ΔHmfus / [ T ( Vmliq − Vmsol ) ] .   (7.14)
With few exceptions, the molar volume of the liquid is larger than the molar volume
of the solid in mutual equilibrium and the latent heat of fusion is always positive
(indeed you have to provide heat to melt a solid). Then, in general, we have

dp/dT > 0 ,   (7.15)
and this means that if you increase the pressure on a solid the melting point, i.e. the
temperature at which the solid starts to melt, increases.
It is well known that water is an exceptional case. In fact, for water Vmliq < Vmsol, as evidenced by the fact that ice floats on water, and therefore (dp/dT ) < 0. This
means that if you increase the pressure on the ice the melting point decreases.
This accounts for the motion of glaciers: the bedrock which supports the masses of
ice is very uneven and so locally, on very small areas, you can develop extremely high
pressures; consequently the melting point can be considerably lower, the ice locally
melts and then refreezes as soon as the irregularity is overcome and the pressure
locally decreases.
Example 7.1 Remaining on the case of water, if we consider the solid–liquid transition at atmospheric pressure, the latent heat of fusion (at θ = 0 ◦C) is λfus ≈ 333.5 J/g and hence ΔHm ≈ 6003 J/mole. Concerning the variation of molar volume, we have Vmsol ≈ 19.6 cm³/mole and Vmliq ≈ 18.0 cm³/mole and then

dp/dT = ΔHm / (T ΔVm) = 6003 / [ 273.16 × (18.0 − 19.6) × 10−6 ] Pa/K ≈ −13.7 × 10⁶ Pa/K .
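The arithmetic of Example 7.1 can be checked in a few lines (numbers taken from the example itself; SI units J, m³, K give Pa/K):

```python
dHm = 6003.0                 # J/mol, latent heat of fusion of ice (Example 7.1)
T = 273.16                   # K
dVm = (18.0 - 19.6) * 1e-6   # m^3/mol: V_m(liq) - V_m(sol), negative for water

dp_dT = dHm / (T * dVm)      # Clapeyron's equation, dp/dT = dH_m / (T dV_m)
print(dp_dT)                 # about -1.37e7 Pa/K
```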
7.2.3 Solid–Vapor Equilibrium
The solid–vapor transition process is called sublimation and for many aspects can
be treated similarly to the liquid–vapor transition. In particular, let us notice that
in this case equilibrium occurs at lower temperatures with respect to liquid–vapor
equilibrium but also the sublimation pressure is lower than the saturation pressure.
The discussion in Sect. 7.3 shows that, in general, the temperature is lower by a
factor of order 2 while the sublimation pressure is lower by a factor of order 100
with respect to the liquid–vapor case. This means that the molar volume of the vapor
is much, much larger than that of the solid and that the approximations leading to
Eq. (7.12) are better fulfilled.
As an example, let us calculate the sublimation pressure p2 of water at θ ≈ −60 ◦C. We know that at θ ≈ −36 ◦C the sublimation pressure is p1 ≈ 0.02 kPa and the heat of sublimation is ΔH ≈ 2839.1 kJ kg−1. We see experimentally that the heat of sublimation is fairly constant in that temperature interval. First, we have to calculate the molar heat of sublimation. We get

ΔHm = 18 × 10−3 kg mol−1 × ΔH ≈ 51.1 kJ mol−1 .

Then, with reference to Eq. (7.13), in our case we have

p2 = p1 exp[ −(ΔHm /R) ( 1/213.16 − 1/237.16 ) ] = 0.02 exp[ −6.1 × 10³ × ( 1/213.16 − 1/237.16 ) ] kPa ,

leading to

p2 ≈ 1.06 × 10−3 kPa .
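The same computation in code (numbers from the example; small deviations from the quoted 1.06 × 10−3 kPa come only from rounding ΔHm/R):

```python
import math

R = 8.314              # J mol^-1 K^-1
dH = 2839.1e3          # J/kg, heat of sublimation of ice
dHm = 18e-3 * dH       # J/mol, ~51.1 kJ/mol
T1, p1 = 237.16, 0.02  # -36 C, sublimation pressure in kPa
T2 = 213.16            # -60 C, target temperature

p2 = p1 * math.exp(-dHm / R * (1.0 / T2 - 1.0 / T1))  # Eq. (7.13)
print(p2)  # ~1.1e-3 kPa
```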
7.3 Triple Point
Equation (7.1) defines a curve in the ( p, T ) plane where two phases of a substance,
labelled as 1 and 2, are in mutual equilibrium. If the same substance may appear in
another state of aggregation, say in the phase 3, the equation
μ1 ( p, T ) = μ3 ( p, T )   (7.21)
defines the curve in which phases 1 and 3 are in mutual equilibrium. If the curves
defined by Eqs. (7.1) and (7.21) intersect, in the point of intersection the three phases
are in mutual equilibrium.
Denoting by μ3 = μ3 ( p, T ) the chemical potential of the substance in the phase
3, the coordinates of the point of mutual equilibrium among the three phases are
found by solving the system:
μ1 ( p, T ) = μ2 ( p, T ) = μ3 ( p, T ) .
This is a system of two equations in the two unknowns p and T whose solution
gives one value for each variable that we denote with ptr and Ttr . At this pressure
and temperature, the three phases are in mutual equilibrium and this state is called
triple point.
Three phases in mutual equilibrium constitute a system with zero degrees of
freedom and this is the reason why, to fix the absolute scale, we choose the triple
point of water instead of other possible alternatives such as, for instance, the melting
point at a pressure of one atmosphere.
7.4 Phase Diagrams
Starting from the triple point (TP), we may integrate Eq. (7.6) for the solid–liquid,
liquid–vapor, solid–vapor equilibrium. We obtain three curves as shown in Fig. 7.1a,
b. As a result, the ( p, T ) plane is divided into three main regions2 within which
only one state of aggregation (phase) is present except for the points in the curves
where the substance can coexist in two different states of aggregation in mutual
equilibrium. These representations are known as phase diagrams. As anticipated in
Sect. 7.2.2, the difference between Fig. 7.1a, b concerns the slope of the solid–liquid equilibrium line. Most substances behave as shown in Fig. 7.1a. Conversely, Fig. 7.1b refers to the case of water and a few other substances,
like antimony and bismuth, as far as the slope of the melting line is concerned.
Example 7.2 Dry ice. Let us consider, for example, the case of carbon dioxide. The
triple point is at ptr = 5.1 atm and Ttr = 216.6 K and hence it is not possible to
have liquid CO2 at atmospheric pressure where only the solid and gaseous phases
are present. Following the sublimation line, we see that the solid–gas equilibrium at
atmospheric pressure occurs at the temperature T = 194.7 K. If we denote by θsub the sublimation temperature, in Celsius scale, at one atmosphere, we see that θsub ≈ −78.5 ◦C and then, at normal conditions, solid CO2 sublimates quite abundantly. At atmospheric pressure solid CO2 is well known as "dry ice" because it cannot exist in the liquid phase.
² For simplicity, we refer only to three phases, namely, solid, liquid, and vapor. In nature, things are a bit more complicated because there may be multiple solid phases depending on different crystalline forms of aggregation. In principle, the treatment does not conceptually change even if it is necessary to provide a more detailed structure.
Fig. 7.1 a Sketch of a typical phase diagram for a generic substance. TP and CP are the triple point and the critical point, respectively. The box around CP is replotted in Fig. 7.3 in terms of the ( p, V ) variables. b Semi-qualitative phase diagram for water. At variance with most substances, water exhibits a solid–liquid equilibrium line with negative slope
Fig. 7.2 In this phase diagram two transformations at constant pressure are described: one occurring between ptr and pcr , another below ptr
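The internal consistency of these numbers can be checked with Eq. (7.13): extrapolating the sublimation line from the 1-atm point with a literature value ΔHm ≈ 26.1 kJ/mol for the heat of sublimation of CO2 (an assumed value, not given in the text) recovers the triple-point pressure:

```python
import math

R = 8.314            # J mol^-1 K^-1
dHm_sub = 26.1e3     # J/mol, heat of sublimation of CO2 (assumed literature value)
T1, p1 = 194.7, 1.0  # sublimation at atmospheric pressure (K, atm)
T_tr = 216.6         # K, triple-point temperature of CO2

p_tr = p1 * math.exp(-dHm_sub / R * (1.0 / T_tr - 1.0 / T1))  # Eq. (7.13)
print(p_tr)  # ~5.1 atm, in agreement with the triple-point pressure quoted above
```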
Example 7.3 Isobaric transformation. It may be useful, for didactic reasons, to
describe the various steps in a process in which a pure substance undergoes a wide
temperature change at constant pressure and in which phase transitions are involved.
With reference to Fig. 7.2, let us start from the initial state A whose pressure and
temperature are denoted, respectively, by pA and TA . Let us choose the pressure in
the interval ptr < pA < pcr , ptr and pcr being respectively the triple point and the
critical point pressures (see Sect. 8.3.2) and the temperature TA low enough so that
A is well inside the solid-state region.
7 Phase Transitions
1. We start from state A (low temperature) and supply heat at constant p until we
reach the solid–liquid equilibrium curve at state B, along the transformation AB.
In this transformation, the amount of heat supplied is equal to the product of the
heat capacity of the solid times the increase in temperature (if the heat capacity
can be considered a constant, otherwise we have to integrate over temperature).
2. When we reach state B, phenomenologically we can see that, while we continue
to supply heat to the system, the temperature remains at the constant value TB of
state B. The solid–liquid change is taking place and this transformation occurs at
constant pressure and temperature. The supplied heat is equal to the latent heat
of fusion.
3. When the phase transition at B has been completed, the point will be in the region named "liquid" and further heating will cause the temperature to increase according to the heat capacity of the liquid, up to the temperature TC at state C. Similarly to what occurred at state B, at C the temperature does not increase despite the heat being supplied to the system: a second phase transition, from the liquid phase to the vapor phase, is taking place. Also this phase transition occurs at a constant temperature and the amount of heat supplied is equal to the latent heat of evaporation.
4. Beyond the state C, the temperature increase under heating is regulated by the C p
of the vapor.
Similarly, in Fig. 7.2, an isobaric transformation at a pressure lower than ptr is
represented. If we start from state D in the solid phase region, we shall cross only
one phase change in state E and the analysis of the transformations is analogous to
the melting and the boiling cases just discussed.
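The heat bookkeeping of steps 1–4 can be sketched for one gram of water heated at atmospheric pressure. The latent heats come from the worked examples of this chapter; the specific heats are rounded literature values, assumed roughly constant over each interval:

```python
# Heat 1 g of H2O from -20 C (solid) to 120 C (vapor) at constant p = 1 atm.
c_ice, c_water, c_steam = 2.1, 4.19, 2.0  # J g^-1 K^-1 (assumed literature values)
L_fus = 333.5                             # J/g (Example 7.1)
L_vap = 40896.0 / 18.0                    # J/g, ~2272 (from Example 7.4)

Q = (c_ice * 20.0       # step 1: warm the solid from -20 C to the melting point
     + L_fus            # step 2: melt at constant T (state B)
     + c_water * 100.0  # step 3: warm the liquid to the boiling point (state C)
     + L_vap            #         then vaporize at constant T
     + c_steam * 20.0)  # step 4: warm the vapor from 100 C to 120 C
print(Q)  # ~3.1e3 J; the two latent heats dominate the budget
```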
7.4.1 ( p, V ) Diagrams
Together with the phase diagrams in the plane ( p, T ) it is useful to consider similar
diagrams on the plane ( p, V ) where curves at constant temperature will be described.
Let us first consider a small region in the ( p, V ) plane, around the critical point, as
marked in Fig. 7.1a by the box around CP. It is useful to consider a certain amount of
substance confined in a cylinder with a movable piston and maintained at constant
temperature during the transformations. The pressure is measured and the volume
variable, in the ( p, V ) diagram, is the volume of the cylinder. Inside the cylinder, one or two phases in mutual equilibrium will be present according to the values of pressure, temperature and volume, as shown qualitatively in Fig. 7.4.
From Fig. 7.3, we see that, at relatively high temperatures, the shape of the curves
is very similar to the equilateral hyperbolas which describe, as is well known, the
ideal gas isotherms.
At decreasing temperatures, the “equilateral hyperbolas” deform showing an
increasing deviation from the behavior expected for an “ideal” gas. If we further
decrease the temperature we arrive at a well-determined temperature at which the
isotherm shows the presence of a horizontal inflection point. This is called the "critical isotherm"; the value of the temperature Tcr is called the critical temperature and the corresponding value of the pressure pcr is called the critical pressure.
Fig. 7.3 Qualitative isotherms of a given substance close to the critical point
Fig. 7.4 Cylinders at different volume at constant p, T for an expansion of the liquid in equilibrium with its vapor (or a compression of the vapor in equilibrium with the liquid) according to Fig. 7.3
For temperatures below the critical value in the cylinder, we may observe a phase
separation. If we go along the isotherm starting from large volumes we see, inside
the cylinder, only the vapor phase with the same properties of the gas observed at
“high temperatures” (that is well above the critical temperature).
When the volume reaches the point B whose volume is denoted by VB , as shown
in Fig. 7.3, a further decrease in the volume occurs at a constant pressure and we
are observing the formation of liquid (see Fig. 7.4). The fraction of liquid increases
and the portion of the gas phase decreases accordingly, until the isotherm reaches
the point A whose volume is VA , in which there is only liquid inside the cylinder.
From that point, a further decrease of volume requires a huge increase of pressure: the coefficient of isothermal compressibility defined in Eq. (5.5) is very small, as is the case for all liquids and, more generally, for the condensed phases.
Then, we see that the formation of liquid takes place only at temperatures below the
critical temperature. For this reason, the gaseous phase at these temperatures is called
vapor. Differently from what happens at temperatures above the critical temperature,
where the gaseous phase is called simply “gas”, here the vapor is potentially on the
way to becoming a liquid. The vapor at volume VB is called "saturated vapor". This designation indicates that the state has all the properties of a gas (at fairly low temperatures, the equation of state is very close to that of ideal gases), but if a small amount of liquid formed inside the cylinder, the two phases would be in mutual equilibrium. Symmetrically, in the zone of the liquid, the liquid in the state indicated by the value VA of the volume is in a state of equilibrium with its vapor; that is, if it were in the presence of a small amount of steam, the two phases would be in mutual equilibrium.
If the temperature decreases further, the volume of the saturated vapor, which we generically indicated with the symbol VB , increases, while the volume of the liquid in equilibrium with the vapor, VA , decreases.
The points in which there is only one phase, but in equilibrium with the other, trace a curve which has the shape of an asymmetric bell, called the "Andrews bell". Note that the experimental isotherms below the critical temperature, both at the points of type A and at those of type B, are continuous but with discontinuous derivatives.
7.4.2 Molar Heat at Equilibrium
Consider two phases in mutual equilibrium in the conditions examined so far and let
us vary, by an infinitesimal amount, the temperature of the two phases but operating
so as to maintain the phase equilibrium. Let us ask for the amount of heat to be supplied
to each phase in this process. If we refer to one mole (or to a unit mass), we are led
to define, for each phase, a new kind of molar (specific) heat.
This will be neither a transformation at constant volume nor one at constant pressure: the external pressure must be changed by the right amount so as to preserve the phase equilibrium. From Clapeyron's equation, we have immediately

dp = [ ΔH / (T ΔV) ] dT .
The amount of heat that must be supplied to one mole in one phase in order to increase
its temperature by one degree keeping it in the phase equilibrium with the other one,
is called molar heat at equilibrium. Obviously, the molar heat at equilibrium is a
property not only of the phase we are considering but also depends on which is the
other phase with which it is in phase equilibrium.
If we denote this molar heat, for each phase, with the symbol Ceq(1,2) , we may write

d̂Q = Ceq(1,2) dT ,
where the symbol of total derivative indicates the directional derivative along an
infinitesimal transformation that preserves the phase equilibrium. Formally, we have
∂ Sm(1,2)
− α (1,2) Vm(1,2)
= C (1,2)
∂ Sm(1,2)
In going from Eq. (7.26) to Eq. (7.27), we must recall the general expression of
the derivative of entropy with respect to pressure at constant temperature given in
Eq. (5.15).
An interesting case concerns the liquid–vapor equilibrium. In this case the molar
heats at equilibrium are called "molar heats of saturation" (both for the steam and the
liquid). We have already seen that for sufficiently low temperatures $T \lesssim 0.65\,T_{cr}$
the saturated vapor is well described by the equation of ideal gases and, also, that the
molar volume of the steam is much greater than that of the liquid. In these conditions
we can assume with good approximation, for the steam, $\alpha \simeq 1/T$ and then, for the
saturated steam we will have
$$C_{sat} \simeq C_p^{vap} - \frac{\Delta H_m}{T}\,. \qquad (7.28)$$
Example 7.4 In the case of water at one atmosphere, we know that the boiling
point is $T = 373$ K and that the latent heat amounts to $\Delta H_m \simeq 40896$ J mol$^{-1}$.
If, for the molar heat at constant pressure, we take the experimental value $C_p \simeq 34.9$ J mol$^{-1}$ K$^{-1}$ (note that this value is very close to the expected value for an
ideal gas with 6 degrees of freedom), we obtain
$$C_{sat} \simeq 34.9 - \frac{40896}{373} = 34.9 - 109.6 = -74.7 \text{ J mol}^{-1}\text{ K}^{-1}\,.$$
The negative value indicates that to increase the temperature of saturated steam
keeping it in saturation conditions, we need to remove heat. As regards the heat at
equilibrium of the liquid phase, Eq. (7.27) gives
$$C_{sat}^{liq} \simeq C_p^{liq} - \alpha^{liq}\,V_m^{liq}\,\frac{\Delta H_m}{V_m^{vap}}\,.$$
7 Phase Transitions
Since the coefficient of thermal expansion of the liquid is much lower than that of
the steam and the molar volume of the liquid is less than that of the steam by, at
least, a factor of $10^3$–$10^4$ (while $C_p$ is of the same order of magnitude), it is correct
to write
$$C_{sat}^{liq} \simeq C_p^{liq}\,, \qquad (7.31)$$
i.e., for the condensed phases it is correct to speak of one single molar heat (the same
also applies to the solid phase), provided we allow for a small error that we are able to
estimate with some precision.
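The arithmetic of Example 7.4 is easy to verify; the short sketch below simply re-evaluates the relation $C_{sat} \simeq C_p - \Delta H_m/T$ for the saturated vapor with the values quoted in the example.

```python
# Numerical check of Example 7.4: molar heat of saturated steam from
# C_sat ≈ C_p − ΔH_m/T (ideal-gas vapor with α ≈ 1/T).

T = 373.0        # boiling point of water at 1 atm, K
dHm = 40896.0    # molar latent heat of vaporization, J/mol
Cp_vap = 34.9    # molar heat of steam at constant pressure, J/(mol K)

C_sat = Cp_vap - dHm / T
print(f"C_sat = {C_sat:.1f} J/(mol K)")  # negative: heat must be removed
```

The negative result reproduces the value $-74.7$ J mol$^{-1}$ K$^{-1}$ quoted in the text.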
7.4.3 Temperature Dependence of the Latent Heats
In order to evaluate the temperature dependence of latent heats (they are frequently
indicated with the Greek letter $\lambda$), let us recall that, by definition, they are given by
$\lambda = \Delta H_m$ and that, since the phase transition also occurs at constant temperature,
we have $\Delta H_m = T\,\Delta S_m$. Then we may write
$$\frac{d}{dT}\left[S_m^{(2)}(p,T) - S_m^{(1)}(p,T)\right] = \frac{dS_m^{(2)}(p,T)}{dT} - \frac{dS_m^{(1)}(p,T)}{dT}\,,$$
where the total derivative notation is used, also in this case, because the derivatives have to be
calculated along the equilibrium curve between the two phases. With this condition,
we have
$$C_{eq}^{(1,2)} = T\,\frac{dS_m^{(1,2)}}{dT}$$
and then
$$\frac{d(\Delta S_m)}{dT} = \frac{1}{T}\left(C_{eq}^{(2)} - C_{eq}^{(1)}\right)\,,$$
from which we obtain
$$\frac{d\lambda}{dT} = \frac{\lambda}{T} + C_{eq}^{(2)} - C_{eq}^{(1)}\,.$$
In the case of the liquid–vapor equilibrium, if we remember Eqs. (7.28) and (7.31),
we have
$$\frac{d\,\Delta H_m}{dT} = C_p^{vap} - C_p^{liq}\,,$$
and this allows us to estimate the error committed when we use the integrated
Eq. (7.13) for describing the temperature dependence of the saturated steam.
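As a rough numerical illustration of the result $d\,\Delta H_m/dT \simeq C_p^{vap} - C_p^{liq}$, one can linearize $\Delta H_m(T)$ around the normal boiling point of water; the liquid molar heat of 75.3 J mol$^{-1}$ K$^{-1}$ is a standard tabulated value taken here as an assumption.

```python
# Linearized temperature dependence of the latent heat of water around
# T = 373 K, using dΔH_m/dT ≈ C_p^vap − C_p^liq derived above.

Cp_vap = 34.9      # J/(mol K), steam (value from Example 7.4)
Cp_liq = 75.3      # J/(mol K), liquid water (assumed tabulated value)
dHm_373 = 40896.0  # J/mol at T = 373 K

def latent_heat(T):
    """First-order estimate of ΔH_m(T) near 373 K."""
    return dHm_373 + (Cp_vap - Cp_liq) * (T - 373.0)

print(f"ΔH_m(363 K) ≈ {latent_heat(363.0):.0f} J/mol")
```

Since $C_p^{vap} - C_p^{liq} < 0$, the latent heat grows as the temperature decreases, in qualitative agreement with steam-table data.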
7.5 Continuity of States
We come back to Fig. 7.1 and we notice that the condensation line is the only line in
the phase diagram ending on a specific state, which we are able to measure, at least in
principle. In fact, while the condensation line "ends" on the critical point, the melting
line is not limited, unless we consider practical constraints concerning the effective
possibility of realizing states at high pressure. Conversely, when moving along the
sublimation line, the third principle of thermodynamics (see Chap. 13) prevents us
from reaching the state at T = 0 K, no matter how low T may be pushed. Therefore, we may
wonder what would happen in a transformation going around the critical
point. In fact, according to Fig. 7.1, the system is expected to pass from the vapor
(liquid) phase to the liquid (vapor) phase without crossing a phase equilibrium line,
such as the ones described by Clapeyron's equation, Eq. (7.6).
In practice, with reference to the isotherms of Fig. 7.3, this transformation can
be realized by taking the vapor phase in a given initial state and raising
the temperature of the vapor phase above the critical temperature, keeping the
volume always greater than the critical volume. Then, the vapor can be compressed
up to the liquid state below the critical volume, keeping the temperature above
the critical temperature; finally, the liquid can be cooled to its original temperature, into a
final state with the volume sufficiently below the critical volume. At the end of
this transformation we have brought the substance from the gaseous phase to the liquid
phase at the same temperature as the gas, by a continuous change throughout
which there is never more than one phase present. This apparently contrasts with what is
described by Clapeyron's equation. We will address this point in the following
Sect. 7.6. The possibility of passing from the vapor to the liquid state (and vice versa) without
observing the coexistence of two states of aggregation (phases) was first realized
by Thomson in 1871 and later discussed by van der Waals in 1873 in his Ph.D. thesis,
titled "On the Continuity of the Gaseous and Liquid States".
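The three-step path just described can be traced numerically. As an illustrative model (anticipating the van der Waals equation of Chap. 8) we use the reduced form $\tilde{p} = 8\tilde{t}/(3\tilde{v}-1) - 3/\tilde{v}^2$, with $\tilde{p} = p/p_{cr}$, $\tilde{v} = V/V_{cr}$, $\tilde{t} = T/T_{cr}$; the particular state points below are arbitrary choices for the sketch, not data from the text.

```python
# Vapor → liquid without crossing the coexistence line: heat above t̃ = 1
# at large volume, compress while t̃ > 1, then cool at small volume.
# Model: reduced van der Waals equation (illustrative only).

def p_reduced(t, v):
    return 8.0 * t / (3.0 * v - 1.0) - 3.0 / v**2

path = (
    [(0.9, 3.0), (1.2, 3.0)]                       # 1) heat at fixed large volume
    + [(1.2, v / 10.0) for v in range(30, 5, -1)]  # 2) compress at t̃ = 1.2
    + [(0.9, 0.6)]                                 # 3) cool at fixed small volume
)
for t, v in path:
    assert p_reduced(t, v) > 0.0   # every state is a well-defined single phase
print("path traversed; final state: liquid at t̃ = 0.9")
```

At no point along the path do two phases coexist, which is exactly the continuity of states described above.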
7.6 Continuous-Phase Transitions
Clapeyron's equation, provided by Eq. (7.9), transforms the equilibrium condition of Eq. (7.1) between chemical potentials into a relation between pressure and
temperature for two or more phases in equilibrium.
This relation can be visualized by equilibrium lines in (p, T) phase diagrams, as
shown in Fig. 7.1. Every time a system crosses one of the equilibrium curves, it passes
from one phase to another by experiencing jumps in the molar volume and molar
entropy, as given by Eqs. (7.3) and (7.4). However, the condensation
line ends at the critical point, so that, for a given substance, the passage of the vapor
(or liquid) to the supercritical state can no longer be described by using Eqs. (7.3)
and (7.4). This situation can also be understood in terms of the isotherms: indeed, the
jump of Eq. (7.3) for the molar volume, represented in the (p, V) plane by couples
of states at the same temperature and same pressure on the coexistence bell around the
critical point, vanishes as the temperature of the vapor (or liquid) phase is raised to
or above the critical temperature. In parallel, the same applies to the jumps of molar
entropy, causing the vanishing of the latent heat in the passage to the supercritical state.
As a consequence, around the critical point Clapeyron's equation does not
provide any information on pressure and temperature at the equilibrium between
two phases, since dp/dT reduces to an indeterminate form 0/0. To arrive at
a description of these phase changes, let us call $\Delta\mu$ the jump of the chemical potential
when one mole of a substance passes from phase 1 to phase 2 at given
pressure and temperature:
$$\Delta\mu(T, p) = \mu_2(T, p) - \mu_1(T, p)\,. \qquad (7.37)$$
Similarly to what we did in Sect. 7.1, let us denote with $d(\Delta\mu)$ the differential
of the chemical potential difference in Eq. (7.37):
$$d(\Delta\mu) = d(\mu_2 - \mu_1)\,. \qquad (7.38)$$
Then, we expand Eq. (7.38) in dp and dT up to second-order terms:
$$d(\Delta\mu) = \left(\frac{\partial\,\Delta\mu}{\partial p}\right)_T dp + \left(\frac{\partial\,\Delta\mu}{\partial T}\right)_p dT + \frac{1}{2}\left(\frac{\partial^2\Delta\mu}{\partial p^2}\right)_T (dp)^2 + \frac{\partial^2\Delta\mu}{\partial p\,\partial T}\, dp\, dT + \frac{1}{2}\left(\frac{\partial^2\Delta\mu}{\partial T^2}\right)_p (dT)^2\,. \qquad (7.39)$$
According to Eqs. (7.3) and (7.4), the expression Eq. (7.39) becomes
$$d(\Delta\mu) = \Delta V_m\, dp - \Delta S_m\, dT + \frac{1}{2}\left(\frac{\partial\,\Delta V_m}{\partial p}\right)_T (dp)^2 + \left(\frac{\partial\,\Delta V_m}{\partial T}\right)_p dp\, dT - \frac{1}{2}\,\frac{\Delta C_p}{T}\,(dT)^2\,, \qquad (7.40)$$
$\Delta C_p = C_p^{(2)} - C_p^{(1)}$ being the (finite) difference of the heat capacities, at constant
pressure, between phases 1 and 2.
In the continuous phase transitions the first two terms of the expansion Eq. (7.40)
are clearly null, since $V_m^{(1)} = V_m^{(2)}$ and $S_m^{(1)} = S_m^{(2)}$. Therefore
$$d(\Delta\mu) = \frac{1}{2}\left(\frac{\partial\,\Delta V_m}{\partial p}\right)_T (dp)^2 + \left(\frac{\partial\,\Delta V_m}{\partial T}\right)_p dp\, dT - \frac{1}{2}\,\frac{\Delta C_p}{T}\,(dT)^2\,. \qquad (7.41)$$
Along the phase equilibrium line $\Delta\mu$ is identically null, so that Eq. (7.41) becomes
$$\frac{1}{2}\left(\frac{\partial\,\Delta V_m}{\partial p}\right)_T (dp)^2 + \left(\frac{\partial\,\Delta V_m}{\partial T}\right)_p dp\, dT - \frac{1}{2}\,\frac{\Delta C_p}{T}\,(dT)^2 = 0\,. \qquad (7.42)$$
Dividing both members of Eq. (7.42) by $(dT)^2$ we get
$$\frac{1}{2}\left(\frac{\partial\,\Delta V_m}{\partial p}\right)_T \left(\frac{dp}{dT}\right)^2 + \left(\frac{\partial\,\Delta V_m}{\partial T}\right)_p \frac{dp}{dT} - \frac{1}{2}\,\frac{\Delta C_p}{T} = 0\,. \qquad (7.43)$$
The relation given by Eq. (7.43) provides the differential equation for the equilibrium
curve. By regarding Eq. (7.43) as a quadratic equation in dp/dT, for the value dp/dT to be
unique, the discriminant of Eq. (7.43) must be equal to zero:
$$T\left[\left(\frac{\partial\,\Delta V_m}{\partial T}\right)_p\right]^2 + \left(\frac{\partial\,\Delta V_m}{\partial p}\right)_T \Delta C_p = 0\,. \qquad (7.44)$$
The condition set by Eq. (7.44) can be used to provide two equivalent expressions for
the derivative dp/dT along the phase equilibrium curve. The solution of the second-degree
Eq. (7.43) leads to
$$\frac{dp}{dT} = -\,\frac{\left(\dfrac{\partial\,\Delta V_m}{\partial T}\right)_p}{\left(\dfrac{\partial\,\Delta V_m}{\partial p}\right)_T}\,. \qquad (7.45)$$
By using Eq. (7.44), the solution in Eq. (7.45) can be written in the equivalent form
$$\frac{dp}{dT} = \frac{\Delta C_p}{T\left(\dfrac{\partial\,\Delta V_m}{\partial T}\right)_p}\,. \qquad (7.46)$$
The set of Eqs. (7.45) and (7.46) is referred to as the Ehrenfest equations, generalizing
Clapeyron's equation provided by Eqs. (7.3) and (7.4) to the case of continuous
phase transitions.
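A quick numerical consistency check of Eqs. (7.43)–(7.46): pick arbitrary values for $(\partial\Delta V_m/\partial T)_p$ and $\Delta C_p$, fix $(\partial\Delta V_m/\partial p)_T$ from the zero-discriminant condition (7.44), and verify that the double root of the quadratic (7.43) equals both Ehrenfest expressions. All numbers below are arbitrary, chosen only for illustration.

```python
# Consistency check of the Ehrenfest equations (7.45) and (7.46).

T = 300.0          # K, arbitrary
dV_dT = 2.0e-7     # (∂ΔV_m/∂T)_p, arbitrary
dCp = 1.5          # ΔC_p, arbitrary
# zero-discriminant condition (7.44) fixes (∂ΔV_m/∂p)_T:
dV_dp = -T * dV_dT**2 / dCp

# double root x = dp/dT of (1/2)dV_dp x^2 + dV_dT x − dCp/(2T) = 0:
eq_745 = -dV_dT / dV_dp          # Eq. (7.45)
eq_746 = dCp / (T * dV_dT)       # Eq. (7.46)
assert abs(eq_745 - eq_746) < 1e-9 * abs(eq_746)

# and the root really solves the quadratic (7.43):
residual = 0.5 * dV_dp * eq_745**2 + dV_dT * eq_745 - dCp / (2.0 * T)
assert abs(residual) < 1e-12
print(f"dp/dT = {eq_745:.6g}")
```

The two forms agree exactly whenever the discriminant condition holds, which is the content of the Ehrenfest construction.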
In general, it is possible to describe continuous phase transitions in terms of a discontinuity in the heat capacity C, with continuity of H, S, and G, at a certain temperature, with neither latent heat nor a discontinuity in volume. The temperature at
which these transitions take place is usually known as the lambda point Tλ or critical
point, and this generalizes the concept of critical point introduced in Sect. 7.4.2 for
the gas–liquid transition. Strictly speaking, the expression "lambda point" was
first applied to the normal fluid–superfluid transition occurring in liquid 4He, because of the
lambda shape assumed by C in this transition. Conversely, a critical point Tc is used
not only for gases but also for the normal conductor–superconductor transition. In general, the discontinuity in C may have a finite or an infinite jump depending on the
specific lambda point. The first lambda point was discovered by Curie and therefore
it is called the Curie point: below it a ferromagnetic material (e.g., iron) has
permanent magnetization, and above it it has not, behaving as a paramagnetic material.
Many other lambda points are known to occur in crystals and are usually associated
with a sudden change in the extent to which the molecules in the crystal can rotate.
Phase transitions described by Eqs. (7.45), (7.46) are also known as transitions of
the second order. The reason for this denomination can be easily understood in the
classification provided by Ehrenfest. In detail, for an ordinary phase change, which
we may call a transition of the first order, we have
$$\Delta G = 0\,, \quad \Delta\!\left(\frac{\partial G}{\partial T}\right)_p \neq 0\,, \quad \Delta\!\left(\frac{\partial G}{\partial p}\right)_T \neq 0 \qquad \text{first-order phase transitions.}$$
In the transitions of the second order, which we have been discussing, we have
$$\Delta G = 0\,, \quad \Delta\!\left(\frac{\partial G}{\partial T}\right)_p = \Delta\!\left(\frac{\partial G}{\partial p}\right)_T = 0\,, \quad \Delta\!\left(\frac{\partial^2 G}{\partial T^2}\right)_p \neq 0 \qquad \text{second-order phase transitions,}$$
where the jump of the second derivative with respect to temperature corresponds to the jump of the heat capacity, since
$$C = -T\,\frac{\partial^2 G}{\partial T^2}\,.$$
We point out that the expression phase change (or phase transition) of the second
order has been the subject of a large debate and criticism in the past and has led to
considerable confusion. For this reason, it would be better to avoid this nomenclature.
To conclude, we just notice that it is possible to iterate the procedure outlined
above and introduce transitions of higher order by
$$\Delta G = \Delta\!\left(\frac{\partial G}{\partial T}\right)_p = \Delta\!\left(\frac{\partial^2 G}{\partial T^2}\right)_p = 0\,, \quad \Delta\!\left(\frac{\partial^3 G}{\partial T^3}\right)_p \neq 0 \qquad \text{third-order phase transitions,}$$
and so on.
7.6.1 Differences Between Continuous- and
Discontinuous-Phase Transitions
We now focus on the deep difference between continuous (or second-order)
and discontinuous (or first-order) phase transitions.
In the latter case, each phase is stable by itself on both sides of the transition
point. This follows from the fact that the chemical potential of "phase 1", μ1(p, T),
and the chemical potential of "phase 2", μ2(p, T), are defined on both sides of
the phase-transition point (be it a pressure or a temperature), where they mutually
intersect. However, while one of them corresponds to the absolute minimum (i.e., to
the absolute equilibrium state), the curve with the larger value of μ corresponds to
a metastable state of the substance. Accordingly, during discontinuous phase transitions superheating and supercooling phenomena are possible. Conversely, metastable
states (like superheating and supercooling) are impossible in continuous phase transitions.
This aspect can also be understood in a different way: within a given phase we
should consider the possibility of the appearance of a rather small quantity of a new
phase whose properties differ strongly from the properties of the old phase, or of the
appearance of a new phase in the entire volume of the substance, but with properties
differing but little from the properties of the old phase. The first case takes place during
discontinuous phase transitions, when the new phase originates in small nuclei and
has molar volume and molar entropy differing from the corresponding quantities of
the old phase. As it is shown in Sect. 9.6, the phase transition may be delayed due
to the existence of surface energy, and metastable states appear. The second case
takes place during continuous phase transitions, when a new phase, the phase with
symmetry differing from that of the initial phase, appears at once in the entire volume
and not in small nuclei. Therefore, no phase interface appears between the phases
and delay of phase transition is impossible. With continuous phase transitions the
dependence of the chemical potential on temperature and pressure is represented by
a single smooth curve, and not by the intersection of two curves, as is the case with
first-order phase transitions. It is clear, however, that some thermodynamic functions
exhibit a sort of singularity on the phase transition line, because the second derivatives
of the chemical potential undergo a discontinuous jump on that line. This is the case,
for example, of the specific heats.
7.7 Exercises
7.1 Treating the saturated vapor as an ideal gas, prove Eq. (7.28).
7.2 For Ammonia (NH3 ), experiments report the following expressions for the saturated vapor:
at solid–vapor equilibrium
ln ps = 19.49 −
at liquid–vapor equilibrium
ln ps = 23.03 −
where the saturation pressure ps is expressed in torr.
1. Determine the temperature of the triple point.
2. Determine the latent heats of vaporization and sublimation near the triple point.
3. Determine the latent heat of fusion near the triple point.
7.3 From the steam tables (see [4]) we see that the saturation pressures at θ1 =
100 ◦ C, θ2 = 90 ◦ C and θ3 = 80 ◦ C are, respectively, p1 = 1.01 × 105 Pa, p2 =
0.701 × 105 Pa and p3 = 0.474 × 105 Pa. Determine the latent heat of vaporization
in the temperature intervals (θ1, θ2), (θ2, θ3) and (θ1, θ3).
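A sketch of Exercise 7.3, assuming the saturated vapor behaves as an ideal gas and that $\Delta H_m$ is constant over each interval, so that the integrated Clausius–Clapeyron relation $\Delta H_m \simeq R \ln(p_a/p_b)/(1/T_b - 1/T_a)$ applies.

```python
# Latent heat of vaporization of water from pairs of saturation pressures
# (data of Exercise 7.3), via the integrated Clausius–Clapeyron relation.
from math import log

R = 8.314  # J/(mol K)
psat = {100: 1.01e5, 90: 0.701e5, 80: 0.474e5}  # θ (°C) → p_sat (Pa)

def latent_heat(theta_a, theta_b):
    Ta, Tb = theta_a + 273.15, theta_b + 273.15
    return R * log(psat[theta_a] / psat[theta_b]) / (1.0 / Tb - 1.0 / Ta)

for pair in [(100, 90), (90, 80), (100, 80)]:
    print(f"{pair}: ΔH_m ≈ {latent_heat(*pair) / 1000:.1f} kJ/mol")
```

All three intervals give values close to 41 kJ/mol, near the accepted latent heat of water in this temperature range, with the lower interval giving a slightly larger value, as expected from Sect. 7.4.3.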
Chapter 8
van der Waals Equation
Abstract The van der Waals equation of state is obtained as the first-order correction
to the ideal gas equation and some observational consequences are discussed: the
correlation of the critical parameters, the Joule–Thomson coefficient, the inversion
curve, and the determination of the vapor pressure. The Law of Corresponding States
is formulated and discussed in several aspects. The notion of generalized charts is
introduced with particular reference to the compressibility chart. The behavior in the
proximity of the critical point is briefly examined.
Keywords van der Waals equation of state · Critical point · Boyle temperature ·
Thermal expansion · Molar heats · Inversion curve · Vapor pressure ·
Corresponding states · Compressibility factor · Compressibility charts · Triple point
8.1 Introduction
Johannes Diderik van der Waals was born in Leiden on November 23, 1837 and died
in Amsterdam on March 8, 1923. He was awarded the Nobel Prize in 1910 “for his
work on the equation of state of gases and liquids”. Despite being a “dated” equation
of state, it is still very widely referred to both in research articles and in Physics
textbooks. In the face of an extreme simplicity, it provides a large amount of results,
some accurate enough others less accurate but, in all cases, the qualitative course of
experimental observations is well foreseen. Evidently, a state equation that depends
on only two parameters is already sufficient to highlight the essential features of the
behavior of gases including the phase transitions and the inversion curve.
8.2 A Simple Modification to the Equation of State
for Ideal Gases
For each temperature, at very low pressure, the equation of state Eq. (6.43), for one
mole, tends to the form
$$p V_m = RT\,, \qquad (8.1)$$
© Springer Nature Switzerland AG 2019
A. Saggion et al., Thermodynamics, UNITEXT for Physics,
8 van der Waals Equation
which is well known as the “ideal gas” equation of state. A simple exercise in classical
kinetic theory shows that this equation of state can be obtained by a model in which
we take:
1. The molecules have zero size (point-like molecules);
2. The molecules do not interact with each other;
3. The collisions of the molecules with the walls are perfectly elastic.
It is clear that such a model is inherently contradictory because if the molecules
do not interact with each other when they are relatively far apart, and if they do not collide
at small distances (as would a gas of hard spheres of nonzero size), they cannot
exchange energy among themselves. Moreover, if they cannot exchange energy, in
some way, with the walls (that is they cannot become “thermalized” by the walls as it
happens, for instance, with the radiation field), they can never reach a thermodynamic
equilibrium distribution: each molecule will retain its energy at a constant value.
The first step toward the construction of a real-gas model can be taken by abandoning, or modifying, some of the three assumptions that underlie the "ideal gas"
model. van der Waals maintained the hypothesis of elastic collisions with the
walls (otherwise, the gas properties would depend on the properties of the
walls) and amended the first two.
Regarding the first assumption (in item 1), van der Waals envisages that the
molecules are comparable to rigid spheres, impenetrable and with constant volume.
The justification for this hypothesis is based on the consideration that the strong electrostatic repulsive interaction on the part of the nuclei is completely shielded by the
electronic clouds at medium distances. It will suddenly become effective when the
nuclei are at a relatively short distance, i.e., below the size of the electronic clouds.
As regards to the second hypothesis (in item 2), he assumes that at large distances,
the molecules are subject (on average) to an attractive force as suggested by classical
electrodynamics. If we refer to the form of the interaction potential1 between two
molecules, these two assumptions are two constraints on the form of this potential
that can be displayed, qualitatively, in Fig. 8.1. The behavior at small and at large
distances is underlined. In the former case, the derivative of the potential with respect
to the mutual distance is negative, with an extraordinarily large absolute value. This
is typical of strong repulsive forces. At large distance, the derivative of the potential
is positive and decreases with increasing distance. This indicates attractive forces
with rapidly decreasing intensity, as it is the case of dipole–dipole interactions [9].
At a very simple level of approximation, it is not necessary to specify the shape
of this force, but we can limit ourselves to take into account average effects as we
shall see.
If we denote with b the total "proper" volume of the molecules (i.e., of the rigid
spheres), then the volume available to each molecule will be V_m − b, and the
1 It is important to remember that we are referring to interactions between pairs of molecules. This
assumption is justified when the density is low enough, which is equivalent to saying that the range of
the interactions is small enough that the probability of finding three or more molecules at a distance
below the range of the interaction potential U becomes negligible.
Fig. 8.1 Qualitative description of the interaction potential U between pairs of molecules i, j as
a function of the mutual distance r = ri j = |ri − r j |. Only the short-range interactions (nuclear,
short distance, electrostatic repulsion) and long-range interactions (attractive, dipole–dipole, van
der Waals interactions, proportional to r −6 ) are shown
kinetic model will lead to the equation of state
$$p_{id}\,(V_m - b) = RT\,. \qquad (8.2)$$
Here, the term marked with the symbol pid is the pressure calculated from the “ideal
gas” kinetic model in which no interaction has been introduced between molecules
at large distances. Concerning the latter part of the interaction, we will not go into
discussions on detailed models but certain general features can be introduced with
simple arguments. The interaction, whatever the particular form is, will be rapidly
decreasing with the relative distance between the molecules: in the case of electric
dipole forces, the intensity decreases with the sixth power of the distance, then we
can characterize its effect by means of a parameter, called “radius of action” r0 , which
defines the distance beyond which the mutual interaction can safely be neglected.
If a molecule is located in an “internal” part of the gas, that is, at a point which is
distant from the walls for more than r0 , this molecule will be surrounded symmetrically (on average) by the other molecules and the average force that will act on it
will be null.
If the molecule is located "near the wall", that is, at a distance less than the radius
of action, it will experience, on average, an attractive force toward the internal part
of the gas by the surrounding molecules. The effect of this force, whose average
intensity will increase as the molecule approaches the boundary, will produce a
reduction of the momentum of the molecule, which is about to be reflected by
the wall. This Bremsstrahlung effect will reduce the momentum transferred by the
molecule to the wall. Since the pressure measures the momentum transferred to the
wall per unit area and per unit time, we expect that the recorded pressure will be
less than the one calculated in the “ideal gas” model. It is reasonable to assume
that the average Bremsstrahlung effect on the single molecule will be proportional
to the molecular number density. Also, the number of impacts with the walls, per
second and per unit area is proportional to the molecular number density, then it is
reasonable to assume the deficit of pressure will be proportional to the molecular
number density squared.
The above argument can be put a little more precisely. Referring to the mechanical
interpretation of pressure, as developed in Appendix B.1, the value of the measured
pressure is given by the total momentum exchanged by the gas molecules normal
to the wall, per unit area, and per unit time. If we consider one single molecule, its
contribution to the measured pressure is given by 2Pz where Pz is the component
of its momentum perpendicular to the wall in the instant of the collision. After
integrating over all the directions of motions and over the velocity distribution of
the impinging molecules, let us define the average value of $P_z$ and denote it with the
symbol $\langle P_z \rangle$. For a given velocity distribution, the rate of collisions per unit area will
be proportional to the molecular number density and then, for the measured pressure
p, we may safely affirm that
$$p \propto \frac{1}{V_m}\,\langle P_z \rangle\,, \qquad (8.3)$$
where we made use of the fact that the molecular number density is proportional to
the inverse of the molar volume of the gas Vm .
If we neglect molecular interactions, the value of $\langle P_z \rangle$ will be denoted with $\langle P_z \rangle_0$
and will lead us to the ideal gas pressure according to the calculations performed
in Appendix B.1.
If we allow for an attractive interaction between molecules, the value of $\langle P_z \rangle$ for
each molecule impinging on the wall will be lower (Bremsstrahlung effect) than in the
ideal gas case. Let us write the relation
$$\langle P_z \rangle = \langle P_z \rangle_0 - \Delta\langle P_z \rangle\,, \qquad (8.4)$$
where $\Delta\langle P_z \rangle$ denotes the average value of the decrease of $\langle P_z \rangle$ due to the
Bremsstrahlung effect exerted on the impinging molecule by the internal ones.
The Bremsstrahlung effect, on the single molecule, can be safely assumed to be,
in its turn, proportional to the molecular number density, i.e., we may write
$$\Delta\langle P_z \rangle \propto \frac{1}{V_m}\,; \qquad (8.5)$$
then, putting together Eqs. (8.3)–(8.5) we have
$$p \propto \frac{1}{V_m}\left[\langle P_z \rangle_0 - \Delta\langle P_z \rangle\right]\,. \qquad (8.6)$$
From Eq. (8.6), we see that the Bremsstrahlung effect on the measured pressure is
well described by one term proportional to
$$\Delta p \propto \frac{1}{V_m^2}\,, \qquad (8.7)$$
which is proportional to the molecular number density squared, regardless of the details
of the attractive interactions between pairs of molecules. This explains, in part, the
success and the utility of the van der Waals equation in spite of the simplicity of the
arguments on which it is based.
In conclusion, the pressure p recorded on the wall will be linked to the pressure
provided by the kinetic model of the ideal gas (that we have denoted by $p_{id}$) by a
relation of the type
$$p = p_{id} - \frac{a}{V_m^2}\,, \qquad (8.8)$$
where a is a positive constant that summarizes, in the first approximation, the average
effect of Bremsstrahlung due to the attractive part of the intermolecular forces.
After these simple arguments, the equation of state of "the ideal gas", that comes
from a classical kinetic model ("absurd" according to our discussion in Sect. 8.2 but
functional), will be changed into a certainly more realistic one, becoming
$$\left(p + \frac{a}{V_m^2}\right)(V_m - b) = RT\,. \qquad (8.9)$$
The parameter b accounts for the “non-zero size of the molecules” in the most crude
way. It is the way to enter in the model the existence, as a first approximation, of a
very strong repulsive force between molecules at small distances.
The a parameter expresses, in the first approximation, the effect (on pressure) of
the attractive forces between molecules, at large distances.
In the case of n moles, it is straightforward to write $V_m = V/n$ and, after replacing in Eq. (8.9), we obtain
$$\left(p + \frac{a\,n^2}{V^2}\right)(V - nb) = nRT\,. \qquad (8.10)$$
For different values of the temperature, the equation of van der Waals provides a
family of isotherms whose shape is shown in Fig. 8.2.
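The qualitative behavior of these isotherms (Fig. 8.2) can be probed numerically. In the reduced variables of the figure the van der Waals equation takes the parameter-free form $\tilde{p} = 8\tilde{t}/(3\tilde{v}-1) - 3/\tilde{v}^2$: below $\tilde{t} = 1$ an isotherm is non-monotonic in $\tilde{v}$ (the unstable wiggle discussed in Sect. 8.3), while at and above $\tilde{t} = 1$ it decreases monotonically.

```python
# Monotonicity of reduced van der Waals isotherms, p̃(ṽ) at fixed t̃.

def p_reduced(t, v):
    return 8.0 * t / (3.0 * v - 1.0) - 3.0 / v**2

def monotonic_decreasing(t, vols):
    ps = [p_reduced(t, v) for v in vols]
    return all(p1 > p2 for p1, p2 in zip(ps, ps[1:]))

vols = [0.5 + 0.01 * k for k in range(251)]      # ṽ from 0.5 to 3.0
assert abs(p_reduced(1.0, 1.0) - 1.0) < 1e-12    # critical point: p̃ = ṽ = t̃ = 1
print("t̃ = 0.85 monotonic:", monotonic_decreasing(0.85, vols))  # non-monotonic
print("t̃ = 1.10 monotonic:", monotonic_decreasing(1.10, vols))  # monotonic
```

The sampled volumes are an arbitrary grid; any grid fine enough to resolve the wiggle gives the same qualitative answer.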
Mean Field Approximation
We conclude this section with an important comment: at low density, the molecular interactions can be well described by pair interactions and then only the two
extreme parts of the interaction potential, described in Fig. 8.1 are relevant. At higher
densities, this approximation may be improved by introducing the so-called mean
field approximation. Briefly, each molecule is treated individually, subjected to an
“effective force” which takes into account the average effect of the interactions with
all the other molecules. Within this approximation, each molecule can be regarded
as a probe of the overall field generated by all the others. Therefore, we can write
the total potential energy of the system as
Fig. 8.2 Isotherms for a van
der Waals gas in terms of
reduced pressure and
volume, p̃ = p/ pcr and
ṽ = V /Vcr , around the
critical isotherm. From
bottom to top, isotherms
correspond to t˜ = T /Tcr
= 0.85, 0.90, 1.00, 1.05,
1.10, respectively
$$U_{pot} = \frac{1}{2}\sum_{i=1}^{N} U_{eff}(\mathbf{r}_i)\,,$$
where $\mathbf{r}_i$ is the position of each "probe particle" and the factor 1/2 is used to avoid
double counting the interactions when using pair potentials. Also in a
mean-field perspective, density fluctuations of the gas are neglected when we take
the average density in place of the local density. However, it is well known that
the presence of attractive forces enhances the amplitude of the density fluctuations
and the mean-field approximation becomes a useful tool for dealing with the issue.
Any equation of state is conceived for describing states of equilibrium. As we will
see in Sect. 15.3.1, macroscopic states must be thought of as average states if we take
into account the existence of fluctuations and the quantitative treatment of the phenomenon in isolated systems, is limited to linear deviations (small fluctuations).
However, this approximation is expected to fail in the attempts to describe the phenomenology associated with the approach of the critical point, where fluctuations
are definitely not negligible.
8.3 Successes and Failures of the van der Waals Equation
The shape of the isotherms, which can be seen in Fig. 8.2, shows the following
relevant behaviors:
1. For high values of the temperature, the isotherm has a hyperbola-like shape and
this shows that the gas behavior is very close to that of the ideal gas. This agrees
with what is observed experimentally;
2. As temperature falls, the isotherms show a deformation, compared to the form of
“equilateral hyperbolas”, in the area of relatively high pressures;
3. At a temperature, whose value depends on the a and b constants that characterize
the particular gas, the isotherms show a horizontal inflection. The temperature at
which this occurs is called the “critical temperature” and is indicated by Tcr ;
For temperatures below Tcr , the behavior of the isotherm changes markedly:
1. For large volumes, i.e., for low values of the pressure, the isotherms maintain the
form of equilateral hyperbolas, in the ( p, V ) plane, clearly showing the shape
that is expected for ideal gases;
2. For small values of the volume, and therefore relatively high pressure, the van der
Waals’s isotherms deviate significantly from the hyperbola taking a much steeper
slope. The derivative (∂ p/∂ V )T assumes very large absolute values and this shows
that in that area the equation describes a fluid with a very small coefficient of
isothermal compressibility. This property is characteristic of liquids;
3. In the intermediate zone that is the one between the large volumes (hyperbola)
and small volumes (liquid), the behavior shows a region in which (∂ p/∂ V )T > 0.
If we refer to Eq. (4.116), this part of the isotherm is formed by unstable states of
equilibrium and cannot be considered as observable states. However, that part of
the isotherm will be used in Sect. 8.3.7 in order to determine the vapor pressure,
at that temperature, as provided by the van der Waals equation of state.
8.3.1 van der Waals Equation and the Boyle Temperature
The Boyle temperature of a gas, was defined in Sect. 6.7 as the temperature at which
the second virial coefficient $B(T)$ vanishes. If we recall the van der Waals equation
for one mole, Eq. (8.9), and retain only terms linear in the two small
constants a and b, we obtain
$$V_m = \frac{RT}{p + a/V_m^2} + b \simeq \frac{RT}{p}\left(1 - \frac{a}{p V_m^2}\right) + b \simeq \frac{RT}{p} - \frac{a}{RT} + b\,,$$
where we made use of the fact that $a/(p V_m^2) \ll 1$. Hence,
$$p V_m \simeq RT + \left(b - \frac{a}{RT}\right) p\,, \qquad (8.13)$$
where we used the approximation
$$\frac{RT}{p}\,\frac{a}{p V_m^2} \simeq \frac{a}{RT}\,.$$
If we remember the definition of the second virial coefficient given by Eq. (6.2), we
recover Eq. (6.44), in which the second virial coefficient was introduced on empirical
basis, while Eq. (8.13) is derived from an equation of state.
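From Eq. (8.13), the van der Waals prediction for the second virial coefficient is $B(T) = b - a/(RT)$, which vanishes at the Boyle temperature $T_B = a/(Rb)$. The sketch below uses tabulated van der Waals constants for nitrogen as illustrative (assumed) inputs.

```python
# Boyle temperature from the van der Waals second virial coefficient.

R = 8.314      # J/(mol K)
a = 0.137      # Pa m^6/mol^2, tabulated vdW constant for N2 (assumed)
b = 3.87e-5    # m^3/mol, tabulated vdW constant for N2 (assumed)

def B(T):
    """Second virial coefficient predicted by van der Waals: b − a/(RT)."""
    return b - a / (R * T)

T_boyle = a / (R * b)
assert abs(B(T_boyle)) < 1e-12   # B vanishes at the Boyle temperature
print(f"T_B = {T_boyle:.0f} K")
```

The prediction (about 426 K) overestimates the measured Boyle temperature of nitrogen (roughly 327 K), one of the quantitative shortcomings of the equation discussed in this section.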
8.3.2 The Critical Point
Empirical observations show that when we compress a gas at constant temperature,
we see that at relatively high temperatures, the fluid maintains the properties of a gas
at any pressure, while at relatively low temperatures the compression process leads
to the formation of the liquid phase.
The temperature below which compression leads to the formation of the liquid
phase and above which the fluid remains in the gaseous state is called critical temperature and is denoted with the symbol Tcr .
In Sect. 8.3, where we briefly discussed the behavior of the isotherms of van
der Waals in the ( p, V ) plane, we had already introduced the concept of critical
temperature as the temperature at which the function p = p (V ) exhibits a horizontal
inflection. Now, we have one opportunity of using the experimental measurements
of the critical parameters to determine the constants a and b which characterize the
specific gas. The inflection point identifies a specific value of the volume and of the
pressure that will be called critical volume Vcr , and critical pressure pcr . A horizontal
inflection point satisfies the following two conditions:
$$\left(\frac{\partial p}{\partial V}\right)_T = 0\,, \qquad \left(\frac{\partial^2 p}{\partial V^2}\right)_T = 0\,.$$
Let us write the equation of state in the form
$$p = \frac{RT}{V_m - b} - \frac{a}{V_m^2}\,,$$
and after some simple calculations we obtain
$$V_{cr} = 3b\,, \qquad p_{cr} = \frac{1}{27}\,\frac{a}{b^2}\,, \qquad RT_{cr} = \frac{8}{27}\,\frac{a}{b}\,.$$
It seems that if we measure accurately the critical parameters, we may obtain the
values of the two parameters a and b which define the van der Waals equation of
state of that particular gas.
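Inverting these relations gives \(b = R T_{cr}/(8 p_{cr})\) and \(a = 27 R^2 T_{cr}^2/(64 p_{cr})\). A minimal numerical sketch (CO2 critical data as quoted later in this chapter; the variable names are ours) also illustrates the overdetermination discussed next, since the predicted \(V_{cr} = 3b\) disagrees with the measured one:

```python
# Sketch: determine a and b from the measured critical pressure and temperature
# (CO2 values quoted in the text), then compare the predicted critical volume
# V_cr = 3b with the experimental one. Numbers are illustrative.
R = 8.314            # J mol^-1 K^-1
T_cr = 304.36        # K, CO2
p_cr = 73.7e5        # Pa, CO2

b = R * T_cr / (8.0 * p_cr)                  # from RT_cr = (8/27) a/b and p_cr = a/(27 b^2)
a = 27.0 * (R * T_cr) ** 2 / (64.0 * p_cr)

V_cr_pred = 3.0 * b                          # m^3 mol^-1
print(a, b, V_cr_pred * 1e6)                 # a ≈ 0.37, b ≈ 4.3e-5, V_cr ≈ 129 cm^3/mol
# The measured V_cr of CO2 is about 94 cm^3/mol: the pair (p_cr, T_cr) and any
# pair involving V_cr give inconsistent (a, b) - the overdetermination below.
```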
Here a problem arises. We measure three numbers, namely volume, pressure, and
temperature at the critical point and determine two parameters a and b. This case of
overdetermination can be resolved only if the three parameters that define the critical
point are not mutually independent.
If we make use of the van der Waals equation, we find that between the volume,
pressure and temperature at the critical point the following relation applies:
8.3 Successes and Failures of the van der Waals Equation
\[ p_{cr}\, V_{cr} = \frac{3}{8}\, R\, T_{cr} . \]
Table 8.1 Observed values of the adimensional ratio defined in Eq. (8.21) for some gases; the table reports (pcr Vcr /RTcr)exp. Measured values show that the Law of Corresponding States (see Sect. 8.4) is rather well verified.
In other words, the van der Waals equation predicts
\[ \frac{p_{cr}\, V_{cr}}{R\, T_{cr}} = \frac{3}{8} = 0.375 \]
for every gas. The idea that the ratio Eq. (8.20) has the same value for all substances
is undoubtedly of the utmost importance but this should not be attributed to one
of the successes of the van der Waals equation of state in particular but, rather,
to a consequence of the fact that the equation of state depends, mainly, on just
two macroscopic parameters or, in other words, two parameters are sufficient to
characterize, in the first instance, the diverse substances.
Phenomenological observations (some examples are given in Table 8.1) support
the prediction that actually the ratio ( pcr Vcr /R Tcr ) assumes a constant value for
all gases or, more precisely, one can identify groups of substances such that within
each group the ratio is constant with some accuracy, while going from one group to another, the common value changes slightly. The different groups are formed by
the molecules which have similarities from the microscopic (molecular) point of
view. For molecules with a simple structure, such as the noble gases and nonpolar diatomic molecules, the observed value is
\[ \left(\frac{p_{cr}\, V_{cr}}{R\, T_{cr}}\right)_{exp} \simeq 0.27 . \]
The error of the van der Waals equation is thus between 25 and 30%. However, it is remarkable that the value of the dimensionless ratio defined in Eq. (8.21) is the same for different substances within each group.
Since the value for the ratio (pcr Vcr /RTcr) provided by the van der Waals equation is pretty poor, if we try to determine a and b by measuring the critical parameters we will see that choosing different pairs of critical parameters (i.e., pcr, Vcr or pcr, Tcr or Vcr, Tcr) will result in slightly different values for (a, b).
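The critical data below are approximate literature values (assumed here for illustration, not taken from the text); they show the observed ratio sitting near 0.27–0.29, well below the van der Waals value 3/8:

```python
# Quick numerical check of the ratio p_cr V_cr / (R T_cr) for a few simple gases.
# Critical data are approximate literature values, assumed for illustration only.
R = 8.314
critical_data = {                    # (T_cr [K], p_cr [Pa], V_cr [m^3/mol])
    "Ar":  (150.7, 4.863e6, 74.6e-6),
    "N2":  (126.2, 3.390e6, 89.5e-6),
    "CO2": (304.1, 7.380e6, 94.0e-6),
}
ratios = {gas: p * v / (R * t) for gas, (t, p, v) in critical_data.items()}
print(ratios)                        # all near 0.27-0.29, well below 3/8 = 0.375
```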
8.3.3 The Dependence of the Energy of a van der Waals Gas
on Volume
It is reasonable to expect that the dependence of the energy of a van der Waals gas on volume is described by the parameter a, which takes into account the existence of attractive forces between molecules. Let us begin from the fundamental equation for a closed system written in the entropy representation
system written in the entropy representation
\[ dS = \frac{1}{T}\, dU + \frac{p}{T}\, dV , \]
and use Eq. (8.10) for the ratio \(p\, T^{-1}\). Let us express everything in the differentials of temperature and volume and obtain
\[ dS = \frac{1}{T}\left(\frac{\partial U}{\partial T}\right)_V dT + \left[\frac{1}{T}\left(\frac{\partial U}{\partial V}\right)_T + \frac{n R}{V - n b} - \frac{a\, n^2}{T\, V^2}\right] dV . \]
By applying the Schwarz identity, we cross differentiate and obtain
\[ \frac{1}{T}\,\frac{\partial^2 U}{\partial V\, \partial T} = \frac{1}{T}\,\frac{\partial^2 U}{\partial T\, \partial V} - \frac{1}{T^2}\left(\frac{\partial U}{\partial V}\right)_T + \frac{1}{T^2}\,\frac{a\, n^2}{V^2} , \]
and then
\[ \left(\frac{\partial U}{\partial V}\right)_T = \frac{a\, n^2}{V^2} , \]
and this term depends on the parameter a only, as was expected.
Starting from the result in Eq. (8.25), it is interesting to write the expression for
the energy of a van der Waals gas in the variables temperature and volume:
\[ dU = n\, C_V\, dT + \frac{a\, n^2}{V^2}\, dV . \]
With Eq. (8.32), we shall prove that C_V depends on temperature only (C_V = C_V(T)) and, moreover, if we assume C_V constant in some interval of temperature, we may integrate Eq. (8.26) and obtain the energy as a function of the state:
\[ U - U_0 = n\, C_V\, (T - T_0) - a\, n^2 \left(\frac{1}{V} - \frac{1}{V_0}\right) . \]
8.3.4 The Coefficient of Thermal Expansion for a van der
Waals Gas
Let us write the van der Waals equation of state in the form
\[ p = \frac{R T}{V_m - b} - \frac{a}{V_m^2} , \]
and, by differentiating and setting dp = 0, we can easily derive the coefficient of thermal expansion and the quantity (α − 1/T), which shows the difference from the ideal gas behavior. We obtain:
\[ \alpha = \frac{R\, V_m^2\, (V_m - b)}{R T V_m^3 - 2a\,(V_m - b)^2} , \qquad \alpha - \frac{1}{T} = \frac{2a\,(V_m - b)^2 - b\, R T\, V_m^2}{T\left[R T V_m^3 - 2a\,(V_m - b)^2\right]} . \]
A coarser expression can be written if we consider the terms containing the constants a or b small and we neglect the terms in which they appear either to the second power or as a product of the two:
\[ \alpha - \frac{1}{T} \simeq \frac{1}{V_m\, T}\left(\frac{2a}{R T} - b\right) . \]
These expressions will be very useful for qualitative arguments about the inversion
curve of the gas as provided by the equation of van der Waals. Quantitative forecasts
are not very good, but the trend of the dependence on pressure and temperature is
quite satisfactory.
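These expressions can be checked numerically. The sketch below (an assumed illustrative supercritical state, with the CO2 constants quoted in the exercises of this chapter) compares the exact α against a finite-difference derivative of V_m(T, p):

```python
# Sketch: check the exact van der Waals expression for alpha against a
# finite-difference derivative of V_m(T, p) obtained by bisection.
R = 8.314
a, b = 363.96e-3, 0.0427e-3           # CO2 constants quoted in the exercises (SI, per mole)

def p_vdw(Vm, T):
    return R * T / (Vm - b) - a / Vm ** 2

def Vm_of(T, p, lo=5e-5, hi=1.0):
    """Molar volume from the vdW equation by bisection (isotherm monotonic for T > Tcr)."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if p_vdw(mid, T) > p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

T, p = 350.0, 1.0e6                   # assumed illustrative state (above Tcr of CO2)
Vm = Vm_of(T, p)
alpha_exact = R * Vm ** 2 * (Vm - b) / (R * T * Vm ** 3 - 2 * a * (Vm - b) ** 2)
dT = 0.01
alpha_fd = (Vm_of(T + dT, p) - Vm_of(T - dT, p)) / (2 * dT * Vm)
print(alpha_exact, alpha_fd)          # the two agree; both exceed 1/T, as 2a/RT > b here
```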
8.3.5 The Molar Heats at Constant Volume and at Constant
Pressure in a van der Waals Gas
From the result Eq. (8.25), we can write the expression of the molar energy of a van
der Waals gas as a function of temperature and volume:
\[ dU_m = C_V\, dT + \frac{a}{V_m^2}\, dV_m , \]
and applying the Schwarz identity to Eq. (8.31):
\[ \left(\frac{\partial C_V}{\partial V_m}\right)_T = 0 , \]
and this shows that the molar heat at constant volume is a function of temperature
only. As for the molar heat at constant pressure, we may refer to Eq. (5.22):
\[ C_p = C_V + \frac{\alpha^2\, V_m\, T}{\chi_T} . \]
From Eq. (5.8) we get
\[ \frac{\alpha}{\chi_T} = \left(\frac{\partial p}{\partial T}\right)_V = \frac{R}{V_m - b} , \]
and then for a van der Waals gas we obtain
\[ C_p - C_V = \frac{R\, T\, V_m\, \alpha}{V_m - b} . \]
In order to complete the calculation of C_p we have to recover Eq. (8.29) for the expression of α. After some easy calculations we obtain
\[ C_p - C_V = \frac{R}{1 - \dfrac{2a\,(V_m - b)^2}{R T V_m^3}} . \]
The above relation can be approximated neglecting all the terms but those which are first order in a or in b:
\[ C_p - C_V \simeq R\left(1 + \frac{2a}{R T V_m}\right) . \qquad (8.37) \]
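A quick numerical comparison of the exact and first-order expressions (CO2-like constants and an assumed illustrative state) shows how close they are:

```python
# Sketch: compare the exact C_p - C_V for a van der Waals gas with the
# first-order approximation R(1 + 2a/(R T V_m)). Assumed illustrative state.
R = 8.314
a, b = 363.96e-3, 0.0427e-3          # CO2-like constants (SI, per mole)
T, Vm = 350.0, 3.0e-3                # K, m^3/mol

exact = R / (1.0 - 2.0 * a * (Vm - b) ** 2 / (R * T * Vm ** 3))
approx = R * (1.0 + 2.0 * a / (R * T * Vm))
print(exact, approx)                 # both slightly above R, agreeing to first order
```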
8.3.6 The Joule–Thomson Coefficient and the Inversion
Curve for a van der Waals Gas
Let us recall the general definition of the Joule–Thomson coefficient given in Eq. (6.32). Here, the Joule–Thomson coefficient is related to the coefficient of thermal expansion α given by Eq. (8.29), or by the simplified expression Eq. (8.30). Combining the two we obtain, respectively:
\[ \mu_{JT} = \frac{V_m\, T}{C_p}\left(\alpha - \frac{1}{T}\right) = \frac{V_m\left[2a\,(V_m - b)^2 - b\, R T\, V_m^2\right]}{C_p\left[R T V_m^3 - 2a\,(V_m - b)^2\right]} , \]
and, from the approximate form,
\[ \mu_{JT} \simeq \frac{1}{C_p}\left(\frac{2a}{R T} - b\right) . \]
The expression of the inversion curve is obtained setting (α − 1/T) = 0, and in our case we get
\[ 2a\,(V_m - b)^2 = b\, R\, T\, V_m^2 . \]
This expression can be written in the variables ( p, T ) with some calculations, reusing
the equation of state. We shall come back to this topic in connection with our discussion about the Law of Corresponding States.
If we are satisfied with the approximate expression Eq. (8.39), we find the inversion temperature in its simplest expression:
\[ T_i = \frac{2a}{R\, b} , \]
corresponding to
\[ T_i = 2\, T_B . \]
We see, then, that approximately the inversion temperature is double the Boyle temperature.
8.3.7 Determination of Vapor Pressure from the van der
Waals Equation
From the van der Waals equation of state we can determine the value of the vapor
pressure, at each temperature, with a graphic method.
Consider an isotherm at a temperature below the critical temperature and represent
graphically the curve obtained from the van der Waals equation of state at that temperature.
We want to superimpose on this representation the horizontal portion that
describes the phase separation whose position gives the value of the pressure of
the saturated vapor. Let us denote with A and B, respectively, the states at which this
Fig. 8.3 Maxwell construction for the determination of the vapor pressure using the van der Waals
equation of state. States A and B represent saturated liquid and vapor, respectively. The integral of
the molar volume, as a function of pressure and at constant temperature, is carried out from A to
B along the van der Waals curve, as described by Eqs. (8.44), (8.45). As a representative example,
in the figure we refer to the van der Waals isotherm at t˜ = T /Tcr = 0.9. As a result, the equal area
condition gives a vapor pressure p̃s = ps / pcr = 0.65
horizontal portion intersects the isotherm on the part of the liquid and on that of the
vapor (see Fig. 8.3).
Since the points A and B represent two states in mutual equilibrium, the chemical
potentials must be equal. We can formally calculate the value of the chemical potential
in B, μB, starting from its value in A, μA, by integrating along the van der Waals isotherm:
\[ \mu_B = \mu_A + \int_A^B \left(\frac{\partial \mu}{\partial p}\right)_T dp . \]
Since it must be μA = μB and pA = pB, we obtain
\[ \int_A^B \left(\frac{\partial \mu}{\partial p}\right)_T dp = 0 . \]
If we denote by C the point at which the isotherm intersects the horizontal portion,
we can decompose the integral Eq. (8.44) into the sum of the two integrals:
\[ \int_A^C \left(\frac{\partial \mu}{\partial p}\right)_T dp + \int_C^B \left(\frac{\partial \mu}{\partial p}\right)_T dp = 0 . \]
Remembering that (∂μ/∂p)_T = V_m, the previous relation can be written in a form that can be more easily interpreted:
\[ \int_A^C V_m\, dp + \int_C^B V_m\, dp = 0 . \]
This relation shows that the areas of the two surfaces marked in the figure must be equal. The procedure for deriving graphically the value of the vapor pressure from the van der Waals equation of state is the following: draw the isotherm corresponding to the chosen temperature and determine the position of the horizontal line in such a way that the two areas are equal. The number read on the ordinate provides the value of the vapor pressure we were looking for.
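This equal-area procedure lends itself to a simple numerical sketch in reduced variables, where the isotherm is p = 8t/(3v − 1) − 3/v², bisecting on the pressure until the two areas balance. The function names and the bracketing values (chosen for t̃ ≈ 0.9) are assumptions of this sketch, not part of the text:

```python
# Maxwell equal-area construction on the reduced van der Waals isotherm.
def p_vdw(v, t):
    """Reduced van der Waals isotherm: p = 8t/(3v - 1) - 3/v**2."""
    return 8.0 * t / (3.0 * v - 1.0) - 3.0 / v ** 2

def volume_roots(ps, t, vmin=0.4, vmax=6.0, n=4000):
    """Volumes where the isotherm crosses the pressure ps (grid scan + bisection)."""
    roots = []
    step = (vmax - vmin) / n
    for i in range(n):
        a, b = vmin + i * step, vmin + (i + 1) * step
        if (p_vdw(a, t) - ps) * (p_vdw(b, t) - ps) < 0.0:
            for _ in range(60):
                m = 0.5 * (a + b)
                if (p_vdw(a, t) - ps) * (p_vdw(m, t) - ps) <= 0.0:
                    b = m
                else:
                    a = m
            roots.append(0.5 * (a + b))
    return roots

def area_mismatch(ps, t, n=2000):
    """Trapezoidal integral of (p(v) - ps) dv between the outer crossings."""
    r = volume_roots(ps, t)
    va, vb = r[0], r[-1]
    h = (vb - va) / n
    s = 0.5 * ((p_vdw(va, t) - ps) + (p_vdw(vb, t) - ps))
    for i in range(1, n):
        s += p_vdw(va + i * h, t) - ps
    return s * h

def vapor_pressure(t, lo=0.45, hi=0.70):
    """Bisect on ps until the two areas balance (brackets chosen for t near 0.9)."""
    for _ in range(50):
        mid = 0.5 * (lo + hi)
        if area_mismatch(lo, t) * area_mismatch(mid, t) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

ps = vapor_pressure(0.9)
print(round(ps, 3))          # close to the value 0.65 quoted for the t = 0.9 isotherm
```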
8.3.8 Free Energy in a van der Waals Gas
According to item 3 of Sect. 8.3, the condition for stability implicates a positive
curvature for the free energy F as a function of the gas volume V :
\[ \left(\frac{\partial^2 F}{\partial V^2}\right)_T > 0 . \]
In parallel, the equilibrium condition in terms of F is usually stated as
\[ \left(\delta F\right)_{T,V} = 0 . \]
At a fixed temperature T < Tcr , let us consider a state A = ( pA , VA ) in the liquid
phase and a state B = ( pB , VB ) in the vapor phase. Let N = Nliq + Nvap be the total
number of molecules at any volume V between VA , VB , Nliq , Nvap being the number
of molecules in the liquid phase and the vapor phase, respectively. Clearly, as shown
in Fig. 7.4, the total volume V of the system, when VA < V < VB, is given by
\[ V = \frac{N_{liq}}{N}\, V_{liq} + \frac{N_{vap}}{N}\, V_{vap} . \]
In parallel, let us indicate with Fliq and Fvap the free energy of the liquid phase and the vapor phase, respectively. Moreover, let Flv be the free energy of the system
within VA < V < VB when both liquid and vapor phases are present. Recalling that
Fig. 8.4 Free energy of a
van der Waals gas below the
critical temperature. Line
connecting the states A, B is
the free energy
corresponding to two phases
according to Eq. (8.51).
A′, B′ are states on the
spinodal curve
F is an extensive quantity, we can write
\[ F_{lv} = \frac{N_{liq}}{N}\, F_{liq} + \frac{N_{vap}}{N}\, F_{vap} . \]
By substituting in Eq. (8.50) the expressions Nliq /N and Nvap /N obtained from Eq.
(8.49) we get
\[ F_{lv} = F_A + \frac{V - V_A}{V_B - V_A}\left(F_B - F_A\right) , \]
where FA = Fliq (A), FB = Fvap (B), VA = Vliq (A), and VB = Vvap (B). Therefore,
from Eq. (8.51), it results that the free energy of the system enclosing both liquid
and vapor phase is a straight line in the (F, V ) diagram, connecting the two states A
and B where the system is entirely in the liquid and gas state, respectively.
So, below Tcr, the van der Waals isotherms exhibit a region where the stability condition is violated and ∂²F/∂V² < 0. For any T < Tcr, this happens between the minimum and the maximum of the van der Waals isotherm, as shown in Fig. 8.4 where they are denoted by A′ and B′. In this region, the free energy for a biphasic system
connecting the states A and B is always below the corresponding free energy of a
single phase. Considering the analytical form of the van der Waals isotherms below the critical temperature, for any T < Tcr, it is possible to find a pair of states enclosing an instability region specific to each T. By iterating this construction for different T < Tcr, we identify a curve enclosing the instability states of the van der Waals isotherms, known as the spinodal curve, as shown in Fig. 8.5.
Therefore, the coexistence curve is always wider than the spinodal curve. These curves touch each other only at the critical point (pcr, Vcr, Tcr). It is important to remark that, at variance with the unstable states inside the spinodal curve, it is possible to observe a single phase just inside the coexistence curve, even if the free energy of these states is larger than the one corresponding to a mixture of two phases coexisting at thermodynamic equilibrium, with V and F given by Fig. 8.4. The states falling just inside the coexistence curve are therefore metastable and correspond
Fig. 8.5 Isotherms for a van
der Waals gas. For T < Tcr
isotherms are dashed within
the coexistence region
shown in Fig. 7.3 since
they are connecting either
metastable states as the
superheated liquid (a) and
the supersaturated vapor (c),
or unstable states (b) which
are bounded by the spinodal
curve. Lines at constant
pressure within the
coexistence region are the
equilibrium vapor pressure
according to the Maxwell
construction given by Eq.
(8.46) for a van der Waals gas
to superheated liquid and supersaturated vapor depending on whether the corresponding pressure is just below, or just above, ps, respectively. In Sect. 9.4 phase changes occurring in the metastability region will be described.
8.4 The Law of Corresponding States
In the above discussion on the van der Waals equation of state, we highlighted the fact
that an equation of state that depends only on two parameters, has the consequence
that the three critical parameters are not independent of each other and that the
dimensionless ratio ( pcr Vcr /R Tcr ) has the same value for different fluids. This fact
is well confirmed by empirical observations even if the accuracy of the numerical
result suggests the existence of different groups of substances.
Each equation of state for a particular fluid, which we denote with the index γ ,
can be formally written as
ϕγ ( p, V, T ) = 0
and from that we are able to calculate the critical parameters.
We define as reduced variables the adimensional state variables
\[ \tilde p = \frac{p}{p_{cr}} , \qquad \tilde v = \frac{V}{V_{cr}} , \qquad \tilde t = \frac{T}{T_{cr}} , \]
and let us express the equation of state, formally, as a function of these reduced variables:
\[ \tilde\varphi_\gamma\left(\tilde p, \tilde v, \tilde t\,\right) = 0 . \]
The equations of state so expressed, for each γ , must give the same result for the
critical point that is
ϕ̃γ (1, 1, 1) = 0 .
This argument concerning the critical point and, more important, several empirical
evidences suggest the following hypothesis: different fluids have the same equation
of state if it is expressed in reduced variables. In other words there exists one equation
of state in reduced variables, \(\tilde\varphi\left(\tilde p, \tilde v, \tilde t\,\right) = 0\), valid for every γ:
\[ \tilde\varphi_\gamma\left(\tilde p, \tilde v, \tilde t\,\right) = \tilde\varphi\left(\tilde p, \tilde v, \tilde t\,\right) = 0 . \]
The assumption given in Eq. (8.56) is known as the Law of Corresponding States
and has many observable consequences. Let us consider some of these consequences
in the following sections.
8.4.1 Corresponding States for the Second Virial Coefficient
The second virial coefficient B = B (T ) has the dimension of a volume. If, for each
gas, we measure B at various temperatures and we plot B/Vcr as a function of T /Tcr
we see that the experimental data, for different gases, overlap along the same curve
with great accuracy. In particular, the Boyle temperature for different gases has the same value if it is expressed in units of their critical temperature.
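For the van der Waals gas itself this is easy to verify: Eq. (8.13) gives B(T) = b − a/(RT), so the Boyle temperature is T_B = a/(Rb) and, by Eq. (8.19), T_B/T_cr = 27/8 for every gas. A minimal check with the CO2 constants quoted in the exercises of this chapter:

```python
# For a van der Waals gas, B(T) = b - a/(RT) vanishes at T_B = a/(R b),
# and T_B / T_cr = (a/Rb) / ((8/27) a/Rb) = 27/8, the same for every gas.
R = 8.314
a, b = 363.96e-3, 0.0427e-3          # CO2 constants quoted in the exercises

T_B = a / (R * b)
T_cr = 8.0 * a / (27.0 * R * b)
print(T_B, T_B / T_cr)               # the ratio is exactly 27/8 = 3.375
```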
8.4.2 The Compressibility Factor and the Generalized
Compressibility Chart
The discussion in the previous subsections, concerning the second virial coefficient
will be better understood if it is seen within a more general argument that we are
going to mention here.2
As we have seen, the virial coefficients B(T), C(T), ... describe how far the fluid deviates from the so-called "ideal gas", which is properly described by the first coefficient A(T) alone.
2 A detailed discussion is of major importance and for those interested in technical applications can
be found in [10].
If we keep the "ideal gas" as the reference situation, the "closeness" or "remoteness" of the equation of state of a real fluid is commonly described by a single comprehensive parameter called the compressibility factor.³
It is generally denoted with the symbol Z = Z(p, T) and is defined by the ratio
\[ Z = \frac{p\, V_m}{R\, T} . \]
Clearly, for an ideal gas the function Z(p, T) maintains the constant value Z = 1,
while, for real gases, its value depends on the state. The graphic representation of
this function is equivalent to the knowledge of the equation of state of that particular
substance. In technical applications, this parameter is represented in a plane (Z , p)
by a family of curves one for each temperature and the graphical representation of
these curves is named compressibility chart. If, at a given pressure and temperature,
the compressibility factor Z(p, T) remains close to 1 along a transformation, then the gas may be treated as an ideal gas; if, instead, Z differs substantially from unity, the equation of state departs significantly from the ideal one and the compressibility charts must be used. The compressibility factor is thus a good indicator of the proximity of the gas to ideal behavior.
Compressibility charts can be prepared for any pure substance of particular interest
or may be calculated from an equation of state like, for instance, the van der Waals
equation or some other more accurate ones. In both cases, having in mind to make
use of the Law of Corresponding States, the chart is represented in reduced variables
and in this case we draw, in a plane (Z , p̃), the family of curves bearing as an index
the reduced temperature t̃. In this case, we speak of a generalized compressibility chart.
In Fig. 8.6, the compressibility chart represented is the one obtained from the
Lee–Kesler equation of state [11]. This is a quite accurate equation of state based on
12 parameters and details can be found in [12].
Following exactly the same procedure, other generalized charts are constructed
as, for instance, the generalized chart for entropy and generalized chart for enthalpy
particularly important in engineering applications. These generalized charts allow one to quickly find the variation of entropy or enthalpy in certain changes of state for substances of which we know the critical point. Let us consider the following example.
Example 8.1 Let us consider a cylinder containing 5 kg of Carbon Dioxide (CO2, Molecular Weight M_CO2 = 44) at the temperature θ1 = 122.5 °C and at the pressure p1 = 221.4 bar. Determine the volume of the cylinder and compare the result with the value obtained treating the gas as an ideal gas. The gas is then heated, at constant pressure, up to the temperature θ2 = 183.4 °C. We aim to determine the amount of work done on the gas. The critical pressure and temperature of carbon dioxide are, respectively, pcr = 73.7 bar and Tcr = 304.36 K.
The number of moles is n = 5000/44 ≃ 113.6 and then we may calculate the initial volume from the ideal gas state equation:
³ Not to be confused with the coefficient of compressibility χ = −V⁻¹ (∂V/∂p).
Fig. 8.6 Generalized compressibility chart. Reproduced with permission from Van Wylen et al.,
Fundamentals of Classical Thermodynamics [12]
\[ V_1^{id} = \frac{n\, R\, T_1}{p_1} = \frac{113.6 \times 8.31 \times 395.6}{221.4 \times 10^5} \simeq 0.017\ \mathrm{m^3} = 17\ \mathrm{L} . \]
The value of the volume we obtain from the compressibility chart is
\[ V_1 = \frac{n\, Z\, R\, T_1}{p_1} = Z\, V_1^{id} . \]
The correct value for Z is obtained from the compressibility chart provided that we calculate first the reduced pressure and temperature of the initial state. We get, respectively,
\[ \tilde p_1 = \frac{221.4}{73.7} \simeq 3 , \qquad \tilde t_1 = \frac{395.6}{304.36} \simeq 1.3 . \]
From the compressibility chart shown in Fig. 8.6, we find that the compressibility factor in the initial state is
\[ Z_1 \simeq 0.635 , \]
so that
\[ V_1 = Z_1\, V_1^{id} \simeq 0.635 \times 17 \simeq 10.8\ \mathrm{L} . \]
The amount of work, W , done on the gas in the isobaric transformation to the final
state is
W = − p (V2 − V1 ) .
We need the final volume V2 . In order to determine it we have to refer, once again,
to the compressibility chart and then we need the reduced pressure and temperature
of the final state. The former takes the same constant value as in the initial state, \(\tilde p_2 = \tilde p_1 \simeq 3\), while the latter is \(\tilde t_2 = 456.5/304.36 \simeq 1.5\). From the compressibility chart the compressibility factor results
\[ Z_2 \simeq 0.79 . \]
The final volume is
\[ V_2 = n\, Z_2\, \frac{R\, T_2}{p_2} = 113.6 \times 0.79 \times \frac{8.31 \times 456.5}{221.4 \times 10^5} \simeq 0.01538\ \mathrm{m^3} = 15.38\ \mathrm{L} . \]
The amount of work is
\[ W \simeq -221.4 \times 10^5 \times (15.38 - 10.8) \times 10^{-3} \simeq -1.01 \times 10^5\ \mathrm{J} . \]
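The arithmetic of the example is easy to re-run (the Z values are those read off the chart):

```python
# Re-running the arithmetic of Example 8.1 with the chart-read Z values.
R = 8.314
n = 5000.0 / 44.0                     # moles of CO2
p = 221.4e5                           # Pa (constant pressure)
T1, T2 = 395.6, 456.5                 # K
Z1, Z2 = 0.635, 0.79                  # read from the generalized chart

V1 = n * Z1 * R * T1 / p              # m^3
V2 = n * Z2 * R * T2 / p
W = -p * (V2 - V1)                    # work done on the gas, J
print(V1 * 1e3, V2 * 1e3, W)          # ≈ 10.7-10.8 L, ≈ 15.4 L, ≈ -1.0e5 J
```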
8.4.3 Vapor Pressure and Latent Heat of Vaporization
At sufficiently low temperature, which in our case means T ≲ 0.65 Tcr, the vapor, though in the state of saturated vapor, is well described by the ideal gas equation of state⁴ and the vapor pressure, according to Eq. (7.13), is well described by
\[ \ln\frac{p}{p^\dagger} = -\frac{\Delta H_m}{R}\left(\frac{1}{T} - \frac{1}{T^\dagger}\right) , \]
where p† and T† refer to a reference state. From this relation, we see that the logarithm of p is a linear function of (1/T), i.e., it may be written in the form
\[ \ln\frac{p}{p^\dagger} = c_1 - \frac{c_2}{T} , \]
where c1 and c2 are two phenomenological constants. According to the Law of Corresponding States, let us write
\[ \ln \tilde p = \Gamma_1 - \frac{\Gamma_2}{\tilde t} , \]
with Γ1 and Γ2 being two adimensional phenomenological constants which are the
same for every gas. From the comparison between Eq. (8.59) and Eq. (8.61) we get
an important relation which provides us with the value of latent heat of vaporization:
\[ \Gamma_2 = \frac{\Delta H_m}{R\, T_{cr}} . \]
From this relation and from the experimental value for the constant Γ2 we get the
interesting relationship for the latent heat:
\[ \Delta H_m \simeq 5.21\; R\, T_{cr} . \]
Relations given by Eqs. (8.61) and (8.63) have a theoretical basis, as we have seen
in Eq. (8.59), which is based on the assumption that the saturated vapor is very close
to an ideal gas and that the latent heat of vaporization is practically constant (at least
in the temperature interval used to integrate Clapeyron's equation). At "high" temperatures, that is 0.65 Tcr ≲ T ≲ Tcr, the saturated vapor is not well represented
by the ideal gas law and the latent heat of vaporization decreases to become zero at
the critical temperature. For this reason it is preferable to characterize the gaseous
and liquid phases in mutual equilibrium by means of their densities and the latter
are measured in units of the respective critical density (reduced densities). These
⁴ This is a statement according to the Law of Corresponding States. It can be formulated by observing that, for t̃ ≲ 0.65, Z is sufficiently near to 1.
Fig. 8.7 Reduced density of the liquid and the gaseous phases in mutual equilibrium, as a function
of the reduced temperature for a group of substances. Data show that the Law of Corresponding
States holds with appreciable accuracy. Figure reproduced with permission from Guggenheim,
Thermodynamics: An Advanced Treatment for Chemists and Physicists [3]
reduced densities of the phases in mutual equilibrium are measured for a group
of substances and the results are shown in Fig. 8.7 as a function of the respective
reduced temperature. As reported by [3], the data are well described by the very simple relations:
\[ \frac{\rho_{liq} + \rho_{vap}}{2\,\rho_{cr}} = 1 + \frac{3}{4}\left(1 - \tilde t\,\right) , \qquad \frac{\rho_{liq} - \rho_{vap}}{\rho_{cr}} = \frac{7}{2}\left(1 - \tilde t\,\right)^{1/3} . \]
The reader is referred to the discussion contained in [3] for a more in-depth
examination of the problem.
Table 8.2 The ratios Ttr/Tcr and ptr/pcr are shown for some elements; the columns report (Ttr/Tcr)exp and (ptr/pcr) × 100.
8.4.4 Triple Point and the Law of Corresponding States
If the Law of Corresponding States, according to which the equation of state
expressed in reduced variables is the same for all substances (or, at least, for some
groups of substances), holds we expect that the phase diagram represented in reduced
variables is the same for various components. One particular consequence would be
that the ratio of the critical values to the homologous values at the triple point are
the same value for different substances.
From Table 8.2 it is found that for the various substances the ratio of the temperature at the triple point to the critical temperature stays close to the value
\[ \frac{T_{tr}}{T_{cr}} \simeq 0.555 . \]
Similarly, for the ratio of the triple point pressure to the critical point pressure:
\[ \frac{p_{tr}}{p_{cr}} \simeq 1.4 \times 10^{-2} . \]
8.4.5 The Inversion Curve and the Law of Corresponding States
In Sect. 6.6, we discussed briefly the definition of inversion curve as formed, in a
( p, T ) plane, by the points in which the Joule–Thomson coefficient is zero and,
in general, this is equivalent to Eq. (6.39). Every gas will be described by its own
inversion curve. However, the Law of Corresponding States suggests that Eq. (6.39),
written in terms of reduced variables, gives
\[ \tilde\alpha\left(\tilde p, \tilde t\,\right) = \frac{1}{\tilde t} . \]
Therefore, from Eq. (8.68), the specific inversion curve for each gas can be obtained.
The degree of accuracy of such a rule can be estimated by looking at Fig. 8.8. Here,
the experimental data for the inversion curves of different substances are shown
together with the extrapolated results obtained from [13]. We do not enter into the
problem of choosing the most adequate equation of state but we emphasize the fact
Fig. 8.8 Generalized inversion curves in reduced variables for different fluids. Reproduced with
permission from Hendricks et al., Joule–Thomson inversion curves and related coefficients for
several simple fluids [14]
that the experimental data overlap on a single sharply defined curve and this shows
to what extent the Law of Corresponding States is supported by experimental data.
Fig. 8.9 The inversion curve
in reduced variables,
obtained from Eq. (8.70) is
shown. For comparison, the
inversion curve of hydrogen
showed in Fig. 8.8 is also
reported in reduced variables
8.4.6 The Law of Corresponding States and the van der
Waals’s Equation
The van der Waals equation is an example of an equation of state which can, in a
natural way, be written in reduced variables. It is a useful example because, even
if the accuracy of the numerical results is sometimes a little poor, nevertheless the
qualitative trends are easily and correctly reproduced.
Going back to Eqs. (8.17)–(8.19), with some calculations we get
\[ \left(\tilde p + \frac{3}{\tilde v^2}\right)\left(3\,\tilde v - 1\right) = 8\,\tilde t . \]
This is the van der Waals equation written for all gases. If we refer to Sect. 8.3.6, we
can write the expression of the inversion curve in reduced variables having adopted
this equation of state.
In [7], the result obtained with some calculations is quoted:
\[ \tilde p = 24\sqrt{3\,\tilde t\,} - 12\,\tilde t - 27 . \]
The graphical representation of this curve, with the experimental results superimposed, is shown in Fig. 8.9. Both the experimental curve and the curve obtained from the van der Waals equation of state have a "bell-like" shape, and at the points under the bell the Joule–Thomson coefficient is positive. This means that if the initial state is in this region, the expansion will lead to a cooling of the gas. The maximum temperature drop is obtained with the maximum expansion pressure jump starting from the peak of the inversion curve.
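The reduced inversion curve p̃ = 24√(3t̃) − 12t̃ − 27 is easy to explore numerically: it vanishes at t̃ = 3/4 and t̃ = 27/4, and it peaks at t̃ = 3, p̃ = 9. A short check (the grid search is our own device):

```python
# Zeros and peak of the reduced van der Waals inversion curve
# p = 24*sqrt(3t) - 12t - 27 (Eq. (8.70)).
import math

def p_inv(t):
    return 24.0 * math.sqrt(3.0 * t) - 12.0 * t - 27.0

ts = [0.75 + i * (6.75 - 0.75) / 100000 for i in range(100001)]
t_peak = max(ts, key=p_inv)
print(p_inv(0.75), p_inv(6.75), t_peak, p_inv(t_peak))   # zeros at the ends, peak near (3, 9)
```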
8.5 Power Laws at the Critical Point in a van der Waals Gas
From the van der Waals Eq. (8.9) we see that the critical isotherm exhibits a horizontal inflection, where (∂p/∂V)_{Tcr} = 0. Therefore
\[ \chi_T \propto -\left(\frac{\partial V}{\partial p}\right)_T \to \infty \quad \text{for}\ T \to T_{cr} . \]
In this section, we aim at exploring how the compressibility diverges. More generally, we explore the behavior of the thermodynamic quantities relevant for the van der Waals gas in the vicinity of the critical point. To this aim let us introduce the variables t̂, v̂, p̂:
\[ \hat t \equiv \frac{T - T_{cr}}{T_{cr}} = \tilde t - 1 , \qquad \hat v \equiv \frac{V - V_{cr}}{V_{cr}} = \tilde v - 1 , \qquad \hat p \equiv \frac{p - p_{cr}}{p_{cr}} = \tilde p - 1 . \]
With this change of state variables, the van der Waals equation becomes:
\[ 1 + \hat p = \frac{4\left(1 + \hat t\,\right)}{1 + \frac{3}{2}\hat v} - \frac{3}{\left(1 + \hat v\right)^2} . \]
By expanding the denominators of Eq. (8.74) close to the critical point, so that v̂ ≪ 1, we get:
\[ 1 + \hat p = 4\left(1 + \hat t\,\right)\left[1 - \tfrac{3}{2}\hat v + \tfrac{9}{4}\hat v^2 - \tfrac{27}{8}\hat v^3 + \dots\right] - 3\left[1 - 2\hat v + 3\hat v^2 - 4\hat v^3 + \dots\right] . \]
After some simple algebra, Eq. (8.75) may be approximated as
\[ \hat p \simeq 4\,\hat t - 6\,\hat t\,\hat v - \tfrac{3}{2}\hat v^3 . \]
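Assuming the standard reduced expansion p̂ ≃ 4t̂ − 6t̂v̂ − (3/2)v̂³, a quick numerical comparison with the full reduced expression Eq. (8.74) for small t̂, v̂:

```python
# Check the cubic approximation against the full reduced vdW expression (8.74)
# for small deviations from the critical point.
def p_hat_exact(v, t):
    return 4.0 * (1.0 + t) / (1.0 + 1.5 * v) - 3.0 / (1.0 + v) ** 2 - 1.0

def p_hat_cubic(v, t):
    return 4.0 * t - 6.0 * t * v - 1.5 * v ** 3

v, t = 0.05, -0.01                              # assumed small deviations
print(p_hat_exact(v, t), p_hat_cubic(v, t))     # agree up to higher-order terms
```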
By taking the derivative of Eq. (8.76) with respect to v̂ we get
\[ \left(\frac{\partial \hat p}{\partial \hat v}\right)_{\hat t} = -6\,\hat t - \tfrac{9}{2}\hat v^2 . \]
Along the critical isochore V = Vcr (v̂ = 0), approaching Tcr from above:
\[ \chi_T \propto -\left(\frac{\partial \hat v}{\partial \hat p}\right)_{\hat t} = \frac{1}{6\,\hat t} , \]
so that
\[ \chi_T \sim \hat t^{\,-1} \sim \left(T - T_{cr}\right)^{-1} . \]
Density Mismatch Between the Vapor and Liquid Phase
The equilibrium condition μ1 = μ2 for the chemical potential of two coexisting phases labeled 1 and 2 implies that
\[ \int_1^2 V\, dp = 0 \;\Rightarrow\; \int_1^2 \hat v\, d\hat p = 0 . \]
Correspondingly, the differential of Eq. (8.76) at constant T results to be
\[ d\hat p = -\left(6\,\hat t + \tfrac{9}{2}\hat v^2\right) d\hat v . \]
By substituting Eq. (8.81) in Eq. (8.80) we get
\[ \int_1^2 \hat v\, d\hat p = -3\,\hat t\left(\hat v_2^2 - \hat v_1^2\right) - \tfrac{9}{8}\left(\hat v_2^4 - \hat v_1^4\right) = 0 . \]
Therefore, slightly below Tcr, or equivalently when t̂ → 0⁻,
\[ \hat v_2 = -\hat v_1 \;\Rightarrow\; |V_2 - V_{cr}| = |V_1 - V_{cr}| . \]
This implies that the "bell" enclosing the coexisting phases is symmetric with respect to the critical point (pcr, Vcr). Therefore, we can write Eq. (8.76) in both phases with
p̂1 = p̂2 and v̂1 = −v̂2:
\[ 4\,\hat t - 6\,\hat t\,\hat v_1 - \tfrac{3}{2}\hat v_1^3 = 4\,\hat t + 6\,\hat t\,\hat v_1 + \tfrac{3}{2}\hat v_1^3 , \]
which implies that
\[ \hat v_1 \propto \left(-\hat t\,\right)^{1/2} \;\Leftrightarrow\; \left(V_1 - V_{cr}\right) \propto \left(T_{cr} - T\right)^{1/2} . \]
For the density difference around the critical point we can easily derive that
\[ \rho_{liq} - \rho_{vap} \propto \left(T_{cr} - T\right)^{1/2} . \]
Critical Isotherm
For t̂ = 0 we observe that p̂(v̂) ≃ −(3/2) v̂³. Therefore,
\[ \left(V - V_{cr}\right) \propto \left(p - p_{cr}\right)^{1/3} , \qquad \left(\rho - \rho_{cr}\right) \propto \left(p - p_{cr}\right)^{1/3} . \]
Summarizing, close to the critical point, the compressibility and the density difference between the vapor and liquid phase can be expressed as power laws with respect to ΔT = |T − Tcr|. The corresponding exponents are known as the critical exponents γ and β, respectively, being γ = 1 and β = 1/2 in the van der Waals model.
8.6 Exercises
8.1 A cylinder with rigid and adiabatic walls is divided, by a septum, in two parts
A and B whose volume are, respectively, VA = 20 L and VB = 2 L. Part B is empty
while part A is filled by ten moles of carbon dioxide at the pressure p = 10 bar. The
gas is well described by the van der Waals equation where the two constants are a =
363.96 × 10−3 m6 Pa mole−2, b = 0.0427 × 10−3 m3 mole−1, and the molar heat at constant volume is C_V ≃ 28.85 J mole−1 K−1. The septum is removed and the gas undergoes a free expansion to the final volume. Estimate the temperature variation.
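One way to estimate the answer (a sketch assuming U is conserved in the free expansion together with the energy expression Eq. (8.27), so that ΔT = (a n/C_V)(1/V₂ − 1/V₁)):

```python
# Exercise 8.1 sketch: in a rigid adiabatic free expansion U is constant, and
# U = n C_V T - a n^2 / V gives dT = (a n / C_V) (1/V2 - 1/V1).
a = 363.96e-3        # m^6 Pa mole^-2
n = 10.0             # moles
C_V = 28.85          # J mole^-1 K^-1
V1, V2 = 20e-3, 22e-3    # m^3 (A alone, then A + B)

dT = (a * n / C_V) * (1.0 / V2 - 1.0 / V1)
print(dT)            # ≈ -0.57 K: the gas cools slightly
```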
8.2 Carbon dioxide is expanded against atmospheric pressure in a throttling experiment. The initial state is p0 = 20 bar and T0 = 270 K. CO2 can be treated using the van der Waals equation of state with parameters a = 363.96 × 10−3 m6 Pa mole−2 and b = 0.0427 × 10−3 m3 mole−1. The molar heat at constant pressure is C_p = 40.225 J mole−1 K−1. Estimate the final temperature.
8.3 The critical temperature and pressure for Nitrogen are, respectively, Tcr = 126.2 K and pcr = 33.95 bar. In the initial state we have 3 kg of Nitrogen at T = 164 K and at p1 ≃ 67.9 bar. The system undergoes an isothermal transformation in which the pressure is increased by a factor 2.12 to the final value p2 ≃ 143.9 bar. We observe that the volume has decreased by the same ratio. Does this mean that the gas in this region of operation behaves very nearly like an ideal gas? Determine the two volumes.
8.4 In an isothermal process of a gas from the initial pressure p1 to the final pressure
p2 , the compressibility factor changes from the initial value Z 1 to the final value Z 2 .
Prove that Z 2 < Z 1 ( p2 / p1 ).
8.5 Calculate the coefficient of thermal expansion of a real gas as a function of the
compressibility factor.
8.6 The vapor pressure of Neon (Ne) is well described by the relation:
log p = 8.75 − 0.0437 T −
in which p is the vapor pressure expressed in torr. The critical point of Ne is pcr(Ne) = 27 atm and Tcr(Ne) = 44 K. For Argon (Ar) the critical point is pcr(Ar) = 48 atm and Tcr(Ar) = 151 K. Find the Argon vapor pressure at T = 135 K.
8.7 The critical point for carbon dioxide is Tcr = 304.20 K, pcr = 7.39 MPa and Vcr = 94 × 10−6 m3 mole−1. Determine the volume occupied by 44 kg (10³ moles) of carbon dioxide at pressure p = 48 bar and temperature θ = 122.3 °C.
8.8 An adiabatic cylinder whose total volume is V0 = 1.2 m3 is divided into two parts
by a partition. One part, whose volume is V1 = 470 L is filled by 44 kg of carbon
dioxide at the temperature T0 = 600 K while the other part is empty. The partition is
removed and the gas will settle in a new state of equilibrium with final temperature
T . Supposing that in this temperature interval the molar heat at constant volume is
constant and is C_V ≃ 28.85 J mole−1 K−1, and that the gas is well described by the van der Waals equation with a = 363.96 × 10−3 m6 Pa mole−2, estimate the final temperature T.
Chapter 9
Surface Systems
Abstract In this chapter, the general principles of Thermodynamics are applied
to two-dimensional systems in which extensiveness is referred to areas of surfaces
instead of volumes. The boundary between a liquid and its vapor is considered and
is treated, as the first approximation, as a surface system possessing its own energy
and entropy which contribute, additively, to the overall energy and entropy of the
two phases. The surface tension is defined and the thermodynamical potentials of the
surface layer are obtained. Consequently, the general properties of thermodynamical
transformations and of specific heat of the surface system are briefly discussed. In
particular, the influence of the surface tension on the chemical potential and hence on
phase equilibrium is explored and this allows us to treat, with some approximation,
the problem of the stability of supersaturated vapor.
Keywords Surface systems · Surface tension · Interface systems · Stability of
equilibrium states · Curvature effects · Kelvin’s relation · Nucleation in
supersaturated vapor · Spinodal decomposition · Corresponding states
9.1 Introduction
In the approximation of discontinuous systems, the different phases are considered separated by ideal surfaces of zero thickness, such that the intensive properties (for example, the density and therefore the energy density, entropy, etc.) vary in a discontinuous manner on passing from one phase to the other. In addition,
even if we assign the transition region a very small thickness, the contribution of all
extensive quantities held by it, may often be considered irrelevant in determining
the overall state of the system. In some cases, however, this cannot be accepted, and
we have to increase the level of complexity of the description. The first step is to
consider the transition zone as a third system which possesses its own amount of
thermodynamic potentials, some of which cannot be neglected.
To have a more precise idea of how to treat this transition zone, we consider the
case of the surface tension which manifests itself in the zone of separation between
two phases. In particular we shall consider, in a simplified version, a liquid and its
vapor, in mutual equilibrium.
© Springer Nature Switzerland AG 2019
A. Saggion et al., Thermodynamics, UNITEXT for Physics,
Fig. 9.1 A liquid in equilibrium with its vapor at a given temperature T . The interactions among molecules are supposed to be, on average, attractive; r0 denotes the interaction range. The two molecules A and B are characterized by their distance from the phase separation surface
9.2 Surface Tension
Consider a vessel containing a liquid in equilibrium with its vapor all at uniform
temperature as outlined in Fig. 9.1. Then, consider a molecule A located in an inner
part of the liquid.1 Let’s denote by r0 the range of the intermolecular forces that will
be supposed to be attractive. A sphere with radius r0 centered on the molecule A
will be populated by the other molecules in a statistically homogeneous way, and
then the net forces they exert on the molecule will, on average, balance. If we now
consider a molecule B close enough to the surface of separation between the two
phases, the symmetrical situation no longer exists and, as the density of the vapor is
less than that of the liquid, the resultant force acting on it will be directed normally
to the surface of separation and directed toward the inside. This allows us to give a
first-approximation definition of the surface layer. The closer the molecule is to the surface layer, the more intense is the attractive force. The first consequence of this
qualitative argument, is that if we want to bring a molecule from the internal zone
(molecule A) to the surface layer, we must do some positive work and, therefore, if
we want to increase the area of the surface of separation between the two phases, we
must do positive work to bring an adequate number of molecules from the inside to
the surface zone.
This amount of work is proportional to the increase of the surface area and, in a
first approximation, depends only on temperature.2
If we denote with Σ the area of the separation surface and with d̂W the infinitesimal amount of work done to increase the area of the separation surface by the infinitesimal amount dΣ, we write:
1 The
internal zone is defined by the property that the distance of the molecule from the surface of
separation between the two phases is larger than the radius of action of the molecule.
2 The amount of work depends certainly on the number densities of the two phases and the latter,
at phase equilibrium, depend only on temperature.
d̂W = σ (T ) dΣ. (9.1)
The quantity σ (T ) is called surface tension or interfacial tension. Remembering
the meaning of the thermodynamic potentials we see that, if we vary the area of
the separation surface in an isothermal process, it is possible to define the surface
tension as the increase of free energy dF per unit area increase dΣ. Within the more
general context of open systems, by using the Fundamental Relation for the free
energy of Eq. (4.30) and the expression in Eq. (9.1), we can write
σ = (∂F/∂Σ)T,V,N , (9.2)
F, T, V, N being, for the two-phase system, the total free energy, the temperature, the
total volume, and the number of particles (molecules), respectively. A dimensional
analysis of σ leads to the following identity:
[σ ] = J m−2 = N m−1 .
In other words, for any line one can imagine drawing on the surface Σ, the surface
tension σ may also be interpreted as the force per unit length acting perpendicularly
to that line in order to keep the surface separating two phases in a stable equilibrium
state.3 For this reason, it is common to report measurements of the surface tension either
in J/m2 or N/m.
The surface tension is a property of the two phases in mutual equilibrium. It
depends on the difference in number density of molecules in the two phases which,
in turn, depends on the temperature. At “sufficiently low” temperatures (T ≲ 0.65 Tc ),
where the molecular density of the vapor is small compared to that of the liquid, the
surface tension depends very little on the presence of the vapor and in this case we
may speak simply of “surface tension of the liquid”. It depends only on the number
density of the molecules of the liquid, and, therefore, we expect that its temperature
dependence is connected in a simple way to the coefficient of thermal expansion.
More generally, the coefficient of surface tension can also be defined when the
liquid is in contact with other materials but in this case, it also depends on the specific
nature of these materials. We will come back to this aspect at the end of the chapter
in Sect. 9.6.4.
3 Indeed, in mechanics textbooks, it is common to introduce the surface tension by means of a force acting on a frame supporting liquid membranes (e.g., soap films).
9.3 Properties of Surface Layers
Consider a portion of the surface layer between two phases, of area Σ, at the temperature T and with a very small thickness, in order to consider the volume of the layer
negligible. For this system the fundamental equation, in differential form, will be
written as
dU = T dS + σ dΣ, (9.4)

and in finite form S = S (U, Σ) or U = U (S, Σ).
The extensive nature of energy and entropy will be expressed by the relationships:
UΣ = uΣ Σ ,
SΣ = sΣ Σ ,

where uΣ = uΣ (T ) and sΣ = sΣ (T ) are, respectively, the surface energy density and the surface entropy density and they depend only on temperature. Then

uΣ = (∂U/∂Σ)T ,    sΣ = (∂S/∂Σ)T .
These quantities depend only on the value of surface tension and can be determined
in the following way.
Regarding sΣ , let us refer to Eq. (9.4) and express the latter as a function of temperature and surface area. To do this, we express the differential of the entropy in the differentials of the area and of the temperature:

dS = sΣ dΣ + Σ (dsΣ /dT ) dT

and we obtain

dU = T Σ (dsΣ /dT ) dT + (σ + T sΣ ) dΣ
and then applying the Schwarz theorem we get

sΣ = − dσ/dT .
Similarly, we write Eq. (9.4) putting in evidence the differential of the entropy and express the differential of the energy in terms of surface and temperature variables:

dS = (1/T ) Σ (duΣ /dT ) dT + [(uΣ − σ )/T ] dΣ
As before, equating the cross derivatives, we get
uΣ = σ − T (dσ/dT ).
We can now write the expressions for the energy and entropy:

UΣ = (σ − T dσ/dT ) Σ ,    SΣ = − (dσ/dT ) Σ

(we recall that dσ/dT ≤ 0). Having written the expressions for the energy and the
entropy for a surface layer we can obtain other relevant properties. For instance, the
heat capacity at constant area, CΣ , and the specific (per unit area) heat, cΣ , will be

CΣ = T (∂SΣ /∂T )Σ = −T Σ d2 σ/dT 2 ,

cΣ = CΣ /Σ = −T d2 σ/dT 2 .
The free energy, which is defined as FΣ = UΣ − T SΣ , will be

FΣ = σ Σ
and this could be obtained directly from the definition of surface tension Eq. (9.1) and
from the assumption that it depends on temperature only. In the following example
we estimate the order of magnitude of the thermal effects occurring in coalescence.
Example 9.1 Consider, as an example, the energy balance in a process of coalescence
of a certain number of small drops into one larger drop. Suppose we have very small drops of water at the temperature θ = 25 ◦ C with radius r ≈ 1 µm and suppose that they merge to form one drop with radius R ≈ 1 mm. If we denote by N the number of small drops involved, from volume conservation (incompressible liquid) we have

N = (R/r )3 = 109 .
If we denote by Σ the surface area of the large drop and by Σ0 the total surface area of the N small drops, we have

Σ = 4 π N 2/3 r 2 = N −1/3 Σ0

and hence, the variation of the total surface area in the process is

ΔΣ = Σ − Σ0 = −Σ0 (1 − N −1/3 ).
Let us refer to the energy possessed by the surface layers in the two configurations.
If we denote by ΔUΣ the variation of this part of the total energy of the system in the coalescence process, we may write

ΔUΣ = σ ΔΣ = −σ Σ0 (1 − N −1/3 ) ≈ −σ Σ0 ,

where the approximate value is due to the very large value of N in this example. Taking for the surface tension of water at θ = 25 ◦ C the approximate value σ (θ = 25 ◦ C) ≈ 72 × 10−3 J m−2 , we have:

ΔUΣ ≈ −72 × 10−3 × 109 × 4 π 10−12 ≈ −9.05 × 10−4 J.
Due to energy conservation, the final drop will experience an increase of temperature ΔT given by

ΔT = |ΔUΣ | / Cdrop ,

where Cdrop is the heat capacity of the final drop. The mass of the drop is approximately m ≈ 4.18 × 10−6 kg and hence Cdrop ≈ 4.18 × 103 × 4.18 × 10−6 ≈ 1.75 × 10−2 J K−1 . The temperature variation is obtained from Eq. (9.23):

ΔT ≈ (9.05 × 10−4 J) / (1.75 × 10−2 J K−1 ) ≈ 5.2 × 10−2 K.
Similarly, let us consider Mercury. The surface tension is σ (θ ≈ 20 ◦ C) ≈ 0.559 J m−2 and then, for the energy variation in the process, we have

ΔUΣ ≈ −559 × 10−3 × 109 × 4 π 10−12 ≈ −7.03 × 10−3 J.

For the heat capacity we need the density and the specific heat of Mercury. We have, respectively, for the density 1.34 × 104 kg m−3 and for the specific heat 0.14 × 103 J kg−1 K−1 and hence for the heat capacity:

Cdrop ≈ 4.188 × 10−9 × 1.34 × 104 × 0.14 × 103 ≈ 0.785 × 10−2 J K−1 . (9.26)

Then the temperature variation results

ΔT ≈ (7.03 × 10−3 J) / (0.785 × 10−2 J K−1 ) ≈ 0.9 K.
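The arithmetic of Example 9.1 can be checked with a short script; the material constants below are the approximate values used in the example:

```python
from math import pi

def coalescence_dT(sigma, rho, c_spec, r=1e-6, R=1e-3):
    """Temperature rise when N droplets of radius r merge into one of radius R.
    sigma: surface tension (J/m^2), rho: density (kg/m^3),
    c_spec: specific heat (J kg^-1 K^-1)."""
    N = (R / r) ** 3                           # volume conservation
    Sigma0 = N * 4 * pi * r**2                 # total initial surface area
    dU = sigma * Sigma0 * (1 - N ** (-1 / 3))  # released surface energy
    C_drop = rho * (4 / 3) * pi * R**3 * c_spec  # heat capacity of final drop
    return dU / C_drop

# Water at 25 C and mercury at 20 C, with the values quoted in the example
dT_water = coalescence_dT(72e-3, 1.0e3, 4.18e3)
dT_hg = coalescence_dT(0.559, 1.34e4, 0.14e3)
print(f"water: {dT_water:.3f} K, mercury: {dT_hg:.2f} K")
```

The script reproduces the ≈ 0.05 K rise for water and ≈ 0.9 K for mercury found above.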
9.3.1 Stability of Equilibrium States
The stability criteria for equilibrium states derive directly from the fundamental
Principles of Thermodynamics as we have already widely discussed in Sect. 4.3.6.
Let’s take a surface layer with total area Σ and consider it as composed of two arbitrary subsystems 1 and 2 with areas Σ1 and Σ2 , respectively. The extensivity of extensive quantities implies

Σ = Σ1 + Σ2 ,
U = U1 + U2 ,
S = S1 + S2 .
Let the system be isolated, with constant area, and consider a virtual, infinitesimal
transformation in which the area of one subsystem changes by an infinitesimal amount
and let’s denote with T1 and T2 the temperatures of the two subsystems. For this
infinitesimal process we may write

dΣ = 0 ⇒ dΣ1 = −dΣ2 ,
dU = 0 ⇒ dU1 = −dU2 ,
dS = dS1 + dS2 .
The entropy variations of the two parts are

dS1 = (1/T1 ) dU1 − (σ1 /T1 ) dΣ1 ,
dS2 = (1/T2 ) dU2 − (σ2 /T2 ) dΣ2 ,

then for the entire system we write

dS = (1/T1 − 1/T2 ) dU1 − (σ1 /T1 − σ2 /T2 ) dΣ1 .
The general stability criteria discussed in Sect. 4.3.6 require that a stable (or
metastable) configuration, with the given constraints, corresponds to a point in which
entropy possesses a (relative) maximum. Therefore, for every infinitesimal, virtual transformation we require that dS = 0 and from this it follows that

(1/T1 − 1/T2 ) dU1 − (σ1 /T1 − σ2 /T2 ) dΣ1 = 0.
Since the variations dU1 and dΣ1 can be taken to be independent of each other we
derive the conditions:
T1 = T2 ,
σ1 = σ2 .
So we are led to the conclusion that in a surface layer in a stable equilibrium state,
temperature and surface tension must be uniform.
9.4 Interfaces at the Contact Between Two Phases
in Equilibrium
We consider now two phases in mutual equilibrium separated by a surface layer with area Σ. What we can do, now, is to build a first model of the surface layer of separation based on one single parameter, i.e., the surface tension, and, as a consequence, we are able to attribute to the surface layer well-defined expressions for its thermodynamic properties.
In this new context, therefore, we will consider the overall system as a system
composed of three subsystems: phase 1, phase 2, and the surface layer and their
properties will be denoted by the indices 1, 2 and Σ, respectively.
To the surface phase we will assign a null volume and, consequently, a null (negligible) number of moles. For the overall system, the extensive properties will be
written as follows:
V = V1 + V2 ,
n = n1 + n2,
U = U1 + U2 + UΣ ,
S = S1 + S2 + SΣ .
The external constraints are the following:
(1) Closed system: dn = 0;
(2) Constant volume: dV = 0;
(3) Isolated system: dU = 0, that is, dUΣ = − (dU1 + dU2 ).
The condition for stable or metastable equilibrium is given by the points of maximum
entropy, which implies that for each virtual infinitesimal process compatible with the
assigned constraints we must have
dS = 0.
For the two tridimensional phases we may write

dS1,2 = (1/T1,2 ) dU1,2 + ( p1,2 /T1,2 ) dV1,2 − (μ1,2 /T1,2 ) dn 1,2 ,
while for the surface phase:

dSΣ = (1/TΣ ) dUΣ − (σ/TΣ ) dΣ.
After adding and taking into account the constraints we get

(1/T1 − 1/TΣ ) dU1 + (1/T2 − 1/TΣ ) dU2 − (μ1 /T1 − μ2 /T2 ) dn 1 + ( p1 /T1 − p2 /T2 ) dV1 − (σ/TΣ ) dΣ = 0. (9.47)
It should be observed that the variation of the volume V1 (or, equivalently, of the volume V2 ) and the variation of the area of the separation surface, dΣ, are not mutually independent. From fundamental geometry we know that

dΣ = (1/r1 + 1/r2 ) dV1 ,
where r1 and r2 are the two principal radii of curvature of the surface considered, as
shown in Fig. 9.2. In the simple case of a spherical surface we have
dΣ = (2/r ) dV1 ,
where r is the radius of the sphere. With this clarification, Eq. (9.47) becomes
(1/T1 − 1/TΣ ) dU1 + (1/T2 − 1/TΣ ) dU2 − (μ1 /T1 − μ2 /T2 ) dn 1 + [ p1 /T1 − p2 /T2 − (σ/TΣ )(1/r1 + 1/r2 )] dV1 = 0. (9.50)
Now all the differentials are mutually independent so Eq. (9.50) implies
Fig. 9.2 Principal radii of
curvature of a surface
separating two subsystems
(e.g., two phases)
T1 = TΣ , (9.51)
T2 = TΣ , (9.52)
μ1 = μ2 , (9.53)
p1 = p2 + σ (1/r1 + 1/r2 ). (9.54)
Two things need to be highlighted. First, as seen from Eq. (9.53), the necessary
condition for the phase equilibrium is the equality of the chemical potentials but in
this case, it should be noted that the two phases will be at the same temperature but
at a different pressure. The second consideration is the role played by the curvature
of the surface of separation as seen from Eq. (9.54). For a flat surface, that is with
r1 , r2 → ∞, the pressures on the two phases are equal but in the case of a curved
surface, the difference between two pressures may be significant.
In the particular case of a spherical separation surface where r1 = r2 = r , we get

p1 = p2 + 2σ/r. (9.55)
The expression in Eq. (9.55) is known as Laplace equation and the pressure
of curvature is also known as Laplace pressure of a droplet or a bubble. We notice
that we obtained the Laplace equation by following only a thermodynamic approach.
It would be rather easy to recover the same result in a purely mechanical way by
treating the surface as an elastic film; nevertheless, from the thermodynamical point
of view, the two situations are a little different and the analogy between surface
tension phenomena and elasticity conceals some dangers.
In order to have the order of magnitude of the pressure difference consider, as an example, the case of a spherical drop of water at the temperature 20 ◦ C and immersed in a saturated atmosphere. At this temperature the surface tension is approximately σ ≈ 0.073 N m−1 while the pressure of saturated vapor is p (20 ◦ C) ≈ 17.51 mm Hg ≈ 0.023 atm. For a drop of radius equal to 10 µm the internal pressure is then approximately:

2σ/r ≈ 0.146 atm ,

p1 ≈ 0.023 + 0.146 = 0.169 atm.
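The numbers of this example follow directly from Eq. (9.55); note that with the exact conversion 1 atm = 101325 Pa the Laplace term comes out as ≈ 0.144 atm, consistent with the ≈ 0.146 atm quoted above (which corresponds to taking 1 atm ≈ 105 Pa):

```python
# Laplace pressure, Eq. (9.55), for the 10-micron water drop of the example.
sigma = 0.073      # N/m, surface tension of water at 20 C
r = 10e-6          # m, drop radius
atm = 101325.0     # Pa, one standard atmosphere

dp = 2.0 * sigma / r          # pressure of curvature, Pa
p_vap = 0.023                 # atm, saturated vapor pressure at 20 C
p_in = p_vap + dp / atm       # internal pressure of the drop, atm
print(f"2*sigma/r = {dp:.3g} Pa = {dp/atm:.3f} atm; p1 = {p_in:.3f} atm")
```

Shrinking r by a factor of ten raises the Laplace term by the same factor, which is why the effect becomes dominant for sub-micron droplets.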
9.5 Curvature Effect on Vapor Pressure: Kelvin’s Relation
According to Eq. (9.55), inside a drop of liquid the pressure is enhanced and, if the
curvature radius is sufficiently small, the pressure increase may be very high. It
follows that the chemical potential of the liquid may change significantly and, as a
consequence, the vapor pressure of the surrounding vapor increases accordingly.
Fig. 9.3 A and B represent, respectively, the states in which we have liquid and saturated vapor in
mutual equilibrium at the same temperature T . B1 represents the supersaturated vapor at pressure
pss and A1 the state of the liquid within the droplet at pressure pliq . They are both at the same
temperature T . The chemical potential in A and B has the same value
Let us have a pure vapor at temperature T , kept constant, and let ps be the saturation
vapor pressure at that temperature as provided by the Clapeyron’s equation (7.9).4
Suppose that, within the vapor, a spherical drop, of radius r , is formed and denote
with pliq the total pressure inside it. In order to reestablish the equilibrium condition
between vapor and liquid, the pressure of the vapor phase must be brought to a new
value such that the two chemical potentials are equal. Let us denote with pss this new
value. With reference to Fig. 9.3 let us denote with A1 the thermodynamical state of
the liquid and with B1 the state to which the vapor must be brought so that it is again
in equilibrium with the drop. It is easy to see that it must be a supersaturated vapor and
then its pressure will be denoted by pss ( pss > ps as will be proved below).
If a spherical drop of radius r is formed then, as we have seen, inside the drop the
pressure of the liquid is given by
pliq = pss + 2σ/r ,
pss being the pressure external to the drop.
The chemical potential in A1 can be obtained starting from its value in A and by
integration along the isotherm to the point A1 :
μliq (T, pliq ) = μliq (T, ps ) + ∫ps pliq (∂μ/∂ p)T d p.
4 In view of the specific topic discussed in this section, the vapor pressure given by Eq. (7.9) is often
quoted as p∞ that is for flat separation surface between the two phases.
In this case (∂μ/∂ p)T = Vmliq (T, p),5 where Vmliq (T, p) is the molar volume of the liquid at temperature T and at the
generic pressure p. If we neglect the change of volume of the liquid with pressure
then the integral becomes trivial and we get
μliq (T, pliq ) = μliq (T, ps ) + Vmliq (T ) ( pss − ps + 2σ/r ) .
In a similar way, we proceed to determine the chemical potential of supersaturated
vapor starting from its value in the state B and integrating along the isotherm to the
pressure pss (state B1 ):
μss (T, pss ) = μvap (T, ps ) + ∫ps pss (∂μ/∂ p)T d p.
In this case (∂μ/∂ p)T = Vmvap (T, p) and, describing the vapor with the equation of state of ideal gases, (∂μ/∂ p)T = RT / p, so that

μss (T, pss ) = μvap (T, ps ) + RT ln ( pss / ps ) .
Since at saturation μliq (T, ps ) = μvap (T, ps ), let us define

Δμ = μss (T, pss ) − μliq (T, pliq ) = RT ln ( pss / ps ) − Vmliq (T ) ( pss − ps + 2σ/r ) .
The logarithmic term, if the logarithm is of the order of unity, is of the order RT ∼ ps Vmvap , where Vmvap is the molar volume of saturated vapor at temperature T and pressure ps . As we supposed to be far enough from the critical temperature to be able to use the approximation of an ideal gas for the vapor, we will have Vmvap ≫ Vmliq , so that the term Vmliq ( pss − ps ) is negligible and we can well approximate:
5 See Eq. (4.84).
Δμ ≈ RT ln ( pss / ps ) − Vmliq (T ) (2σ/r ). (9.66)
In order to have equilibrium between the drop and the ambient vapor we require

Δμ = 0

and this allows us to determine the saturation pressure for a liquid–vapor system with a curved separation surface:

ln ( pss / ps ) = (2σ / r RT ) Vmliq (T ). (9.68)
Notice that r is the curvature radius of the liquid and it must be understood as
positive if the separating surface is convex, negative if it is concave. For instance if
we introduce a capillary in the liquid and the latter does not wet the capillary, then
the separation surface seen from the liquid is convex and then the saturation pressure
of the vapor in the capillary is higher: pss > ps .
If the liquid wets the capillary the surface seen from the liquid is concave, the
radius r is negative and the saturation pressure of the vapor is less than the saturation
pressure for a flat surface (i.e., than that given by Eq. (7.9)).
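As a quick illustration of Kelvin's relation, Eq. (9.68), the following sketch (using the water data of this section and a hypothetical 10 nm radius) shows the opposite effect of convex and concave surfaces on the saturation pressure:

```python
from math import exp

# Kelvin's relation, Eq. (9.68): p_ss/p_s = exp(2*sigma*V_liq/(r*R*T)).
# r > 0 for a convex separation surface (droplet), r < 0 for a concave one
# (liquid wetting a capillary). Water at 20 C; the 10 nm radius is assumed.
R_gas = 8.314        # J mol^-1 K^-1
T = 293.15           # K
sigma = 0.073        # N/m
V_liq = 1.8e-5       # m^3/mol, molar volume of liquid water

def kelvin_ratio(r):
    return exp(2 * sigma * V_liq / (r * R_gas * T))

print(f"convex, r = +10 nm: p_ss/p_s = {kelvin_ratio(1e-8):.3f}")
print(f"concave, r = -10 nm: p_ss/p_s = {kelvin_ratio(-1e-8):.3f}")
```

For a 10 nm droplet the saturation pressure is raised by roughly 11%, while over an equally curved concave meniscus it is lowered by about 10%, in line with the discussion of wetting and non-wetting capillaries above.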
Equation (9.66) which leads to Kelvin’s equation (9.68) can be examined from
another point of view and this introduces, as a first approximation, the problem of
the stability of metastable states. Let us keep the external pressure pss as fixed and
discuss Eq. (9.66) in terms of the size of different drops.
First, let us define a characteristic size, denoted by re , such that the drop is in
equilibrium with the ambient vapor. The equilibrium radius is given by
re = 2σ Vmliq / [RT ln ( pss / ps )]. (9.69)
Looking at Eq. (9.66) it is immediate to recognize that for drops with size smaller
than re the difference in the chemical potentials will be
Δμ < 0
and we expect the drop to evaporate. The equilibrium radius given in Eq. (9.69) seems
to constitute the criterion that characterizes the dimension of the perturbations that
are spontaneously reabsorbed in a supersaturated vapor. The issue will be treated with
some rigor in Sect. 9.6; nevertheless, it is useful to discuss some orders of magnitude.
To this aim, let us consider steam at the temperature of 20 ◦ C. We have seen
that at this temperature the surface tension of water amounts to σ 0.073 N m−1 .
As regards the molar volume of the liquid, we can approximately take the usual
value Vmliq ≈ 18 cm3 mole−1 = 1.8 × 10−5 m3 mole−1 . As for the logarithmic term, let us write the value pss of the pressure of the supersaturated steam in the form pss = ps + Δp, and consider Δp < ps . Then, we have

ln (1 + Δp/ ps ) ≈ Δp/ ps .

The expression for the equilibrium radius Eq. (9.69) can be written in the form

re ≈ (Vmliq / Vmvap ) (2σ/Δp).
At the temperature θ ≈ 20 ◦ C we have ps ≈ 17.51 mm Hg ≈ 2.3 × 103 Pa and thus the molar volume of the saturated steam will be

Vmvap = RT / ps ≈ (2.44 × 103 J) / (2.3 × 103 Pa) ≈ 1.06 m3 mole−1 .
Let’s take for Δp the value Δp ∼ 4 mm Hg ≈ 5.3 × 102 Pa (that is, approximately 23% of the saturated vapor pressure); for the equilibrium radius we obtain the value

re ≈ 4.67 × 10−9 m.
We can express this critical size of the fluctuation in terms of the number of
molecules involved in the density fluctuation.
The volume of the characteristic droplet is approximately Vdrop ≈ 4.26 × 10−25 m3 . Since the mass of a molecule of water (equal to about 18 times the mass of the proton) is roughly m H2 O ≈ 3 × 10−26 kg and we assume the approximate value 103 kg m−3 for the density of water, we get, for the mass of the droplet, the value m drop ≈ 4.26 × 10−22 kg. This corresponds to a drop formed by a number of molecules Nmolec of the order of

Nmolec ≈ 1.4 × 104 .
We note that the number of molecules within the critical radius is very sensitive to
the pressure difference as we find that
Nmolec ∝ (Δp)−3 .
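The estimate of re and of the number of molecules in the critical droplet can be reproduced as follows, using, as in the text, the linearized logarithm ln(1 + Δp/ps) ≈ Δp/ps:

```python
from math import pi

# Critical (equilibrium) radius, Eq. (9.69), for steam at 20 C.
R_gas = 8.314      # J mol^-1 K^-1
T = 293.15         # K
sigma = 0.073      # N/m
V_liq = 1.8e-5     # m^3/mol, molar volume of liquid water
p_s = 2.3e3        # Pa, saturation pressure at 20 C
dp = 5.3e2         # Pa, supersaturation p_ss - p_s

r_e = 2.0 * sigma * V_liq / (R_gas * T * dp / p_s)  # linearized Eq. (9.69)
V_drop = (4.0 / 3.0) * pi * r_e**3       # volume of the critical droplet
m_H2O = 3e-26                            # kg, approximate mass of one molecule
N_molec = 1.0e3 * V_drop / m_H2O         # water density ~ 1e3 kg/m^3
print(f"r_e = {r_e:.2e} m, N_molec = {N_molec:.2e}")
```

Since r_e ∝ 1/Δp and the number of molecules scales with r_e³, this sketch also makes the sensitivity Nmolec ∝ (Δp)⁻³ explicit: halving the supersaturation multiplies the critical cluster size by eight.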
With similar arguments, it is easy to show that Eq. (9.69) can be applied to the
formation of a vapor bubble when entering the coexistence region from the state A
of Fig. 9.3. In this case, the supersaturated pressure pss needs to be replaced by the
superheated pressure of the liquid, and the molar volume of the liquid droplet with
the one of the vapor bubble. The nucleation rate of droplets (or bubbles) formed at re
by density fluctuations rapidly increases as the system is progressively penetrating
inside the coexistence region. Indeed, before reaching the instability region, it is
possible to identify an intermediate, sharp region, known as the fuzzy nucleation line,
at which the nucleation and growth of the new phase are in practice instantaneous. As
a consequence, the lifetime of the metastable phases inside the coexistence region
rapidly decreases as the instability region is approached, and is connected to the
nucleation processes.
9.6 Nucleation Processes and Metastability in
Supersaturated Vapor
Some phase transitions belong to our normal experience. For example, we consider it obvious that water, cooled to zero degrees Celsius, solidifies becoming ice. Likewise, the condensation of water vapor into drops of liquid water when the temperature of the steam is lowered belongs to everyday experience. Nevertheless, we know that if
the water is completely pure and without mechanical disturbances, the cooling may
proceed beyond the phase transition points without phase separations (if we operate
in an appropriate way). The states we may observe operating in this way are called
supersaturated vapor or supercooled liquids and they are metastable states. Indeed
if they are perturbed with enough intensity they spontaneously undergo a phase
separation, which proves to be the more stable configuration. With this evidence in mind, we
aim to deepen the knowledge of the mechanism that is responsible for triggering the
phase transitions. The theory which explains this very important class of phenomena
is called Nucleation Theory of Phase Transitions and is based on the idea that in
metastable phases some very small portions of matter form aggregates of more stable
phase, called seeds or embryos, which, in their turn, are either reabsorbed or grow
and initiate the phase transition.
These embryos may be formed either by stochastic processes in a pure homogeneous phase, or by condensation caused by external agents like, for instance, ions
or impurities in the walls. In the former case, we speak of homogeneous nucleation, in the latter of heterogeneous nucleation.
Let us now discuss the equilibrium between a small liquid drop and its supersaturated vapor, when the latter is maintained at constant temperature T0 and pressure
p0 > ps , where ps is the vapor pressure for liquid–vapor equilibrium. With
reference to Fig. 9.3 the pressure p0 corresponds to pss . The pressure of the liquid
inside the drop of radius r is

p1 = p0 + 2 σ (T0 ) / r ,
on account of the surface tension σ = σ (T0 ) at the temperature T0 . Since the vapor
is supersaturated, the equilibrium configuration would be that of the liquid at the
saturated pressure in equilibrium with the vapor. However, it is well known that
such supersaturated vapor does not immediately change to liquid as soon as it is compressed beyond ps . Rather, if the pressure p0 is only slightly greater than ps , the vapor
can rest in a metastable equilibrium state for a long time. Ideally, in the absence
of any perturbation this metastable state can be kept indefinitely. Let us consider a
nucleus of a liquid drop formed due to a spontaneous fluctuation of the vapor density
(homogeneous nucleation).
We regard the metastable phase as an external medium at T0 and p0 containing
the nucleus, and we aim to calculate the work necessary to form the nucleus. The
volume V0 of the medium is so large compared to the volume V1 of the droplet that T0
and p0 can be reasonably considered constant during the nucleation process. Within
the constraint of a surrounding medium at constant p0 and T0 , we need to make use of the available work A introduced in Eq. (3.82) to calculate the work required to form a liquid drop with free energy per unit area σ (T0 ). By using Eq. (3.84) we can write

W = ΔA = Δ (U − T0 S + p0 V ) . (9.77)
Since, in this case, the process occurs at constant temperature equal to the temperature
T0 of the surrounding vapor, the work in Eq. (9.77) can be written in terms of the
free energy:
W = Δ (F + p0 V ) . (9.78)
To calculate the (finite) variation Δ (F + p0 V ) in Eq. (9.78), it is sufficient to
consider only the amount m vap of vapor which enters the liquid phase within the
drop. Indeed, the state of the remaining (M0 − m vap ) vapor in the metastable phase
remains unchanged:
W = [ F1 ( p1 ) + p0 V1 + σ Σ ] − [ F0 ( p0 ) + p0 V0 ] , (9.79)
where F1 and F0 are the liquid and the vapor free energy, respectively, the former taken
at the pressure p1 of the liquid phase inside the droplet and the latter at the supersaturated
pressure p0 . According to Eq. (9.2), for the liquid nucleus we should also consider
the free energy per unit area σ = σ (T0 ) associated with the droplet surface Σ at the
temperature T0 of the surrounding vapor.
We stress that the available work A = U − T0 S + p0 V of the whole system “liquid + vapor” differs from the Gibbs potential G(T, p) since T0 and p0 always refer to the temperature and the pressure of the vapor, remaining constant during any change ΔA. Therefore, in the change described by Eq. (9.79) the last term corresponds to the Gibbs potential of the vapor G 0 (T0 , p0 ). Since T = T0 for the vapor, we can write

F0 ( p0 ) + p0 V0 = G 0 ( p0 ) . (9.80)
Conversely, the first term can be rewritten in terms of the Gibbs potential of the liquid G 1 (T0 , p1 ) as
F1 ( p1 ) + p0 V1 = G 1 ( p1 ) − ( p1 − p0 ) V1 , (9.81)

given that

AΣ = (u Σ − T0 sΣ ) Σ = σ (T0 ) Σ (9.82)

is the available work contributed by the surface Σ with free energy per unit area equal to σ (T0 ) = u Σ − T0 sΣ , u Σ , sΣ being the energy and the entropy per unit area, respectively, at the temperature T0 . By combining Eqs. (9.82), (9.81), (9.80),
Eq. (9.78) can be written in terms of the Gibbs potential as

W = G 1 ( p1 ) − G 0 ( p0 ) − ( p1 − p0 ) V1 + σ Σ. (9.83)
It is useful to notice that, for a nucleus in equilibrium with the metastable phase, we should have G 1 ( p1 ) = G 0 ( p0 ), so that the work to form a nucleus in equilibrium with the vapor, as expressed by Eq. (9.83), reduces to the sum of −( p1 − p0 )V1 , which is known as useful work, and the energy σ Σ accounting for the formation of the surface Σ. Under the assumptions that (i) the formation of the liquid drop can be treated
as a perturbation with respect to the surrounding vapor, and (ii) the liquid can be
considered incompressible during a change at constant T = T0 from p0 to p1 , the
Gibbs potential within the drop can be approximated by
G 1 ( p1 ) ≈ G 1 ( p0 ) + ( p1 − p0 )V1 . (9.84)
By replacing Eq. (9.84) in Eq. (9.83) we get
W ≈ G 1 ( p0 ) − G 0 ( p0 ) + σ Σ = ΔG + σ Σ, (9.85)
where the change of the Gibbs potential ΔG = ΔG(T0 , p0 ) is to be considered
at the temperature and the pressure of the surrounding vapor. Since the vapor at
the supersaturated pressure p0 is found in a metastable state whose equilibrium
state would be the liquid phase, the change at constant p0 and T0 of the Gibbs
potential between the liquid and the vapor phase is always negative, according to the
equilibrium conditions (4.104)
ΔG = G 1 ( p0 ) − G 0 ( p0 ) < 0. (9.86)
Conversely, the energy σ Σ associated with the surface is always positive, being the work needed to create a surface with energy per unit area equal to σ . For a nucleus
of radius r , Eq. (9.85) becomes
W (r ) = − (4/3) πr 3 ρ1 [ G ∗0 ( p0 ) − G ∗1 ( p0 ) ] + 4πr 2 σ, (9.87)
ρ1 being the mass density of the liquid phase, G ∗0 , G ∗1 the specific Gibbs potentials of
the vapor and the liquid phase, respectively.

Fig. 9.4 Bulk contribution (dash-dotted line) and surface contribution (dashed line) to the availability W (r ) for a sphere of radius r according to Eq. (9.87). All curves are plotted as a function of the nucleus size r in units of the critical radius rc = 2σ/(ρ1 Δg ∗ ), with Δg ∗ = G ∗0 − G ∗1 , according to Eq. (9.88)

According to Eq. (9.86) the difference (G ∗0 − G ∗1 ) > 0 at constant p0 , and therefore W (r ) as a function of the
nucleus size r is maximum for a “critical” nucleus size rc given by
rc = 2σ / [ ρ1 ( G ∗0 − G ∗1 ) ] , (9.88)
which can be easily obtained by differentiating Eq. (9.87) with respect to r and
looking for the roots of the derivative. For r < rc a decrease of r is a natural (spontaneous) process and the nucleus is absorbed. Conversely, for r > rc an increase of
r is natural and the nucleus grows. The condition r = rc is an equilibrium condition
which is unstable according to the stability of equilibrium states Eq. (4.104) applied
to Eq. (9.77):

Φ(r) = max   if r = rc .
Concluding, the expression given by Eq. (9.87) can be regarded as the potential
barrier which needs to be overcome for the formation of a stable nucleus, as shown
in Fig. 9.4.
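The orders of magnitude involved can be made concrete with a short numerical sketch. The values of σ, ρ₁ and of the specific Gibbs potential difference used below are illustrative (roughly water-like), not data from the text; the script only checks that the critical radius of Eq. (9.88) indeed maximizes the availability of Eq. (9.87), and that the barrier height agrees with the closed form (16π/3)σ³/[ρ₁(G*₀ − G*₁)]².

```python
import math

def availability(r, sigma, rho1, dg):
    """Eq. (9.87): bulk term -(4/3)*pi*r^3*rho1*dg plus surface term 4*pi*r^2*sigma."""
    return -(4.0 / 3.0) * math.pi * r**3 * rho1 * dg + 4.0 * math.pi * r**2 * sigma

# Illustrative, roughly water-like numbers (not data from the text):
sigma = 0.072   # surface tension, J/m^2
rho1 = 1000.0   # liquid mass density, kg/m^3
dg = 300.0      # specific Gibbs potential difference G*_0 - G*_1, J/kg

r_c = 2.0 * sigma / (rho1 * dg)                               # Eq. (9.88)
barrier = 16.0 * math.pi * sigma**3 / (3.0 * (rho1 * dg)**2)  # closed-form maximum

# r_c is a maximum: nearby radii give a lower availability
assert availability(r_c, sigma, rho1, dg) > availability(0.9 * r_c, sigma, rho1, dg)
assert availability(r_c, sigma, rho1, dg) > availability(1.1 * r_c, sigma, rho1, dg)
# and the barrier height matches the closed form
assert math.isclose(availability(r_c, sigma, rho1, dg), barrier, rel_tol=1e-9)
print(f"r_c = {r_c:.2e} m, barrier = {barrier:.2e} J")
```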
9.6.1 Spinodal Decomposition
In Sect. 9.6 we saw that in liquid–vapor transitions, the formation of the new phase
takes place through nucleation of droplets or bubbles depending on the direction
of the transformation considered (see Fig. 9.5a, c). The creation of a droplet (bubble) with surface Σ requires the energy Σσ, as introduced in Eq. (9.79). However,
from Eq. (9.93) it follows that this energy is extremely low in the vicinity of the critical
point. Therefore, it is natural to wonder what would happen if one enters the
coexistence region from the critical point. In this case, the system is “pushed” into a
9.6 Nucleation Processes and Metastability in Supersaturated Vapor
Fig. 9.5 Spinodal decomposition and nucleation of metastable phases in a two-phase system (specifically, liquid and vapor) for the transformations (a), (b), (c), respectively, shown in Fig. 8.5 inside the coexistence region
thermodynamically unstable state and the separation into two phases takes place immediately, in the absence of any nucleation step. However, since the interfacial tension
between the two phases is zero, there is no need to form droplets. As a consequence,
the structure of the interface between the two phases turns out to be particularly complex,
characterized by a fractal geometry in which the phases “surround” one
another (see Fig. 9.5b). It is relevant that, by analyzing the spinodal pattern in the wave-vector
domain, it is possible to identify a characteristic length growing in time
up to the complete phase separation, corresponding to the equilibrium condition. We
point out that the separation process is driven by gravity (a volume force) acting on
the density mismatch of the two phases, since the interfacial tension (a surface force)
between the phases is negligible, and this is the opposite of what happens for metastable phases.
9.6.2 Temperature Dependence
Since the existence of surface tension is justified, in the atomic-molecular model, by
the different number density of the substance in the two phases, we expect its
value to grow as this number density difference increases. If we remember the
typical shape of the “Andrews bell” for a substance below the critical temperature as
discussed in Chap. 7, we see that the surface tension must be a decreasing function
of temperature and tends to zero if we increase the temperature to the critical point
where the two phases are no longer distinguishable.
As the temperature decreases the density difference between the two phases
increases, and then σ will increase. For water, the dependence on T is shown
in Fig. 9.6. As we can see, the critical temperature is about 374 °C, which is the
point at which the surface tension vanishes.
In Table 9.1 the surface tension of some substances at θ ≈ 20 °C is reported. It is
interesting to look a little deeper into the temperature dependence of the
surface tension. Near the critical point, the experimental data are well described by
Fig. 9.6 Surface tension of water in equilibrium with saturated vapor below the critical point. At the critical temperature (θ = 374 °C), the surface tension vanishes. Data are taken from Sychev, Complex Thermodynamics Systems [15]
Table 9.1 Surface tension σ (J/m²) with respect to air at θ ≈ 20 °C for selected substances (entries include methyl alcohol and olive oil) [16]
the expression:

σ ∝ (Tcr − T)(ρliq − ρvap)^(2/3) ,    (9.90)
(see discussion in [3]) where ρliq and ρvap are, respectively, the mass density of the
liquid and that of the vapor in mutual equilibrium at temperature T. The dependence
of these densities on temperature is very well described by the empirical relations of
Eq. (8.65). For our purpose it is useful to point out that the dependence of the difference
between the liquid and gas densities, at equilibrium, on temperature is given by:

ρliq − ρvap ∝ (1 − T/Tcr)^(1/3) .    (9.91)
Putting together Eqs. (9.90) and (9.91) we obtain the following relation for the
temperature dependence of the surface tension:

σ = σ₀ (1 − T/Tcr)^(1+2/9) ,    (9.92)
9.6 Nucleation Processes and Metastability in Supersaturated Vapor
where σ₀ is a constant (obviously it cannot be interpreted as the surface tension at the
critical temperature, as Eq. (9.92) might erroneously suggest). The reason for writing
the exponent in the form (1 + 2/9) is that the generally accepted
formula is written in the form

σ = σ₀ (1 − T/Tcr)^(1+r) ,    (9.93)

and the different data sets and models differ in the value of the constant r.
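As a quick numerical illustration of the scaling law of Eq. (9.92): Tcr ≈ 647 K is the critical temperature of water, while σ₀ = 0.2358 J/m² is taken here as an illustrative fit constant (not a measured surface tension at any temperature); the sketch only checks the qualitative behavior, monotonic decrease and vanishing at the critical point.

```python
import math

# Near-critical scaling sigma = sigma0 * (1 - T/Tcr)**(1 + 2/9), as in Eq. (9.92).
# Tcr ~ 647 K (water); sigma0 = 0.2358 J/m^2 is an illustrative fit constant.
Tcr = 647.0
sigma0 = 0.2358

def sigma(T):
    """Surface tension (J/m^2) from the empirical near-critical scaling law."""
    return sigma0 * (1.0 - T / Tcr) ** (1.0 + 2.0 / 9.0)

# sigma decreases monotonically with T and vanishes at the critical point
assert sigma(300.0) > sigma(500.0) > sigma(640.0) > 0.0
assert math.isclose(sigma(Tcr), 0.0, abs_tol=1e-15)
```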
9.6.3 Surface Tension and the Law of Corresponding States
In Eq. (9.92) the constant σ₀ has to be determined empirically for every substance
and it has the dimensions of a surface tension, that is [energy]/[area]. It follows that
the quantity σ₀V^(2/3)/RT is dimensionless. According to the Law of Corresponding
States the quantity

σ₀ Vcr^(2/3) / (R Tcr)
is expected to have the same value for different substances. In [3] the experimental
data for Neon, Argon, Xenon, Nitrogen, Oxygen, and Methane are reported. The empirical
results are, respectively, 0.487 × 10⁻⁷, 0.517 × 10⁻⁷, 0.505 × 10⁻⁷, 0.541 × 10⁻⁷,
0.529 × 10⁻⁷ and 0.528 × 10⁻⁷. It is remarkable that among Argon, Xenon, Nitrogen,
Oxygen and Methane the values agree within a few percent, while Neon is off by roughly
10%. This suggests, once again, that the Law of Corresponding States may be better
verified within subgroups of substances with similar microscopic properties, but we
shall not pursue this argument further.
9.6.4 Interfaces at Contact Between Three Phases in Mutual Equilibrium
We saw in Sect. 9.4 that interfaces separating two phases in mutual equilibrium can
be planar or spherical, according to Eq. (9.54) with r1 , r2 → ∞ and r1 = r2 = r ,
respectively. Within the presence of a third phase the equilibrium conditions need to
be reconsidered. This is the case of capillarity, when a liquid is found in simultaneous
contact with its vapor and, in addition, with a solid wall. Similarly, at equilibrium,
a liquid drop deposited on a solid substrate assumes, at fixed T, different shapes depending
on the specific substrate. Indeed, due to the presence of three phases, we need
to consider three surface tensions corresponding to the interfaces solid–liquid (SL),
liquid–gas (LG), and solid–gas (SG). We already noticed in Sect. 9.4 that the density
Fig. 9.7 Different wetting configurations: a drop partially wetting the solid surface and the balancing of the surface tensions defining the equilibrium contact angle; b liquid film in total wetting; c drop completely impregnating the micro-grooves (Wenzel state); d liquid suspended on the asperities (Cassie state, or lotus effect)
of a saturated vapor phase is much lower than that of the corresponding liquid.
This applies also to gases or mixtures of gases, regardless of the specific nature of the gas
considered. Therefore, for the sake of simplicity, let us assume that the “gas” may be
either a given substance in the gas phase, including a mixture of gases (e.g., air),
or the saturated vapor of the liquid phase.
Let us introduce the triple line or contact line as the line in simultaneous contact
with three phases, liquid (L), solid (S) and gas (G). The equilibrium condition at the
contact line was given by Young and Dupré by balancing the forces per unit length
acting on the contact line. In other words, considering Eq. (9.3) this force balance is
a relation for the surface tensions σSG , σSL , σLG :
σLG cos ϑeq = σSG − σSL ,
where ϑeq , known as the equilibrium contact angle, is the angle formed by the
liquid and the solid interface, while σLG cos ϑeq is the projection of σLG on the plane
containing σSL and σSG , i.e., the solid surface (see Fig. 9.7a).
It is clear from Eq. (9.95) that the contact angle depends on the equilibrium
conditions of the three coexisting phases. If the “gas” phase is just the saturated
vapor of the liquid phase, the contact angle is determined both by the properties
of the liquid and its saturated vapor and by the properties of the solid material. It
is therefore meaningless to speak of the contact angle of a given liquid alone; the
solid material with which the liquid is in contact should always be indicated. In that
case, at fixed T, the contact angle is thus a constant, depending on the interaction
between the liquid and the solid.
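A minimal numerical reading of the Young–Dupré balance of Eq. (9.95) follows; the three surface tensions below are purely illustrative numbers, not data for any specific solid–liquid pair.

```python
import math

# Equilibrium contact angle from the Young-Dupre balance, Eq. (9.95):
# cos(theta_eq) = (sigma_SG - sigma_SL) / sigma_LG.
# The three tensions are illustrative, not data for a specific pair.
sigma_SG = 0.060   # J/m^2, solid-gas
sigma_SL = 0.040   # J/m^2, solid-liquid
sigma_LG = 0.072   # J/m^2, liquid-gas

c = (sigma_SG - sigma_SL) / sigma_LG
theta_eq = math.degrees(math.acos(c))
print(f"theta_eq = {theta_eq:.1f} deg")

assert 0.0 < theta_eq < 90.0   # sigma_SG > sigma_SL: partially wetting, hydrophilic-like
assert c < 1.0                 # a physical (partial-wetting) solution exists
```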
For any value of ϑeq satisfying Eq. (9.95), the liquid is said to (partially) wet the
solid surface. However, if the difference σSG − σSL becomes large enough that

σSG − σSL ≥ σLG ,

it is more convenient for the system to form two distinct surfaces (LG and SL) than
to have only one SG surface (see Fig. 9.7b). In other words, it “costs” less energy to
completely cover the solid surface than to wet only a portion of it. The difference
Udry − Uwet = Seq Σ ,

between the energy Udry of the dry surface and the energy Uwet of the same
surface wet by the liquid phase is called the spreading parameter. Per unit surface:

Seq = σSG − (σSL + σLG) .

Seq > 0 :   complete wetting,
Seq < 0 :   partial wetting.
For Seq > 0, the liquid spreads over the solid surface as a thin film. As the
spreading proceeds the film becomes progressively thinner, in principle down to
molecular thickness. In practice, the spreading of the film progressively slows down,
being strongly affected by the presence of surface defects,
where it easily pins well before tapering to the molecular scale.
In the case of partial wetting (Seq < 0), for many solid surfaces ϑeq can be larger
than 90°, leading to drops shaped as spherical caps on the solid surface. When
the liquid is water, such surfaces are said to be hydrophobic. Conversely, if ϑeq < 90°
we speak of hydrophilic surfaces. It is interesting to notice that, even for the most
hydrophobic surfaces, the contact angle can hardly be larger than 120°. Indeed, the so-called
superhydrophobic surfaces, commonly defined by ϑeq ≥ 150°, are definitely
not smooth, as happens in nature for lotus leaves. In fact, lotus leaves display a
complex micro-texturing so that some air remains confined within these structures
and hence, beneath the liquid drop, we find a heterogeneous, multiphase system
(S+G) composed of air bubbles trapped inside the “micro stings”.
Strong water repellence occurs because the roughness imposed on smooth
hydrophobic surfaces further decreases the solid–liquid free energy with respect to the
gas–liquid free energy, so that the contact angle is increased. Without entering
a discussion that is largely outside our scope, we just mention that the equilibrium
contact angle defined by Eq. (9.95) can be defined only locally for rough surfaces.
Therefore, it is necessary to introduce a new quantity, the so-called apparent contact
angle ϑap, accounting for the effective wettability of textured surfaces. The apparent
contact angle is well described by Wenzel’s model (sketched in Fig. 9.7c) for
rough, yet chemically homogeneous, surfaces characterized by the ratio r of the real
surface area to the apparent surface area:

cos ϑap = r cos ϑeq .    (9.101)

Since r > 1, the hydrophobicity of smooth surfaces is enhanced by the presence of the roughness.
For hydrophobic rough surfaces (see Fig. 9.7d) preventing the liquid from filling all
the rough asperities, air (gas) pockets remain trapped beneath the liquid drop, whose
contact line is in simultaneous contact with the fraction fs of solid surface at ϑeq
and a fraction (1 − fs) of air (gas) at ϑeq = π. For these composite surfaces the
apparent contact angle is provided by the Cassie–Baxter model⁶:

cos ϑap = −1 + fs (cos ϑeq + 1) ,

which sets the non-wetting limit condition ϑap = 180°, obtained when fs vanishes.
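The two roughness models can be compared numerically. The smooth-surface angle ϑeq = 120°, roughness ratio r = 1.5 and solid fraction fs = 0.1 used below are illustrative values, not data from the text.

```python
import math

def wenzel(theta_eq_deg, r):
    """Wenzel model, Eq. (9.101): cos(th_ap) = r*cos(th_eq), r = real/apparent area >= 1."""
    c = max(-1.0, min(1.0, r * math.cos(math.radians(theta_eq_deg))))
    return math.degrees(math.acos(c))

def cassie(theta_eq_deg, fs):
    """Cassie-Baxter model: cos(th_ap) = -1 + fs*(cos(th_eq) + 1), fs = solid fraction."""
    c = -1.0 + fs * (math.cos(math.radians(theta_eq_deg)) + 1.0)
    return math.degrees(math.acos(c))

theta_eq = 120.0   # illustrative smooth-surface contact angle, deg

print(f"Wenzel (r=1.5):  {wenzel(theta_eq, 1.5):.1f} deg")
print(f"Cassie (fs=0.1): {cassie(theta_eq, 0.1):.1f} deg")

assert wenzel(theta_eq, 1.5) > theta_eq            # roughness amplifies hydrophobicity
assert cassie(theta_eq, 0.1) > wenzel(theta_eq, 1.5)
assert math.isclose(cassie(theta_eq, 0.0), 180.0)  # non-wetting limit as fs -> 0
```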
We conclude this section by mentioning that superhydrophobic surfaces have
attracted considerable interest for applications related to self-cleaning, non-sticking,
anti-icing, and low-friction surfaces. With regard to heat exchange, they have
shown excellent capabilities to increase the heat transfer in several engineering processes
involving vapor condensation, such as air conditioning, condensers, desalination,
and refrigeration. In fact, since the condensation heat transfer is in general
proportional to the total surface exposed to the vapor, at least for nonmetal vapors it
is enhanced by the dropwise condensation occurring on superhydrophobic surfaces,
rather than the filmwise condensation taking place on hydrophilic or moderately
hydrophobic ones.
⁶ We point out that the Cassie–Baxter model can be applied as well to planar, yet chemically heterogeneous, surfaces, like a sequence of regions 1, 2 occupying fractions f₁, f₂ of the total surface with contact angles ϑ₁, ϑ₂ respectively, so that cos ϑap = f₁ cos ϑ₁ + f₂ cos ϑ₂ with f₁ + f₂ = 1. In addition, the Wenzel model of Eq. (9.101) can also be used with hydrophilic surfaces having a roughness index r. In this case, the effect is that of reinforcing the hydrophilic property [50].
Chapter 10
Electrostatic Field
Abstract In this chapter, we discuss the modifications of the thermodynamical
potentials of a system when it is immersed in an electrostatic field. The interaction
between field and matter is described by the electric susceptibility of the material
which is called dielectric constant for linear materials. The correct expressions for
energy, free energy, chemical potential, and the other thermodynamic potentials due
to the interaction with electrostatic fields are obtained neglecting electrostriction. The
dielectric constant for dilute gases is obtained by statistical methods and, as an example, the increase of gas density following the charging of a condenser is calculated.
A brief overview of electrostriction as a second-order effect is given in the last section.
Keywords Electrostatic field · Electric susceptibility · Dielectric constant · Linear
dielectrics · Thermodynamic potentials · Electrostriction · Langevin’s function
10.1 Introduction
In this chapter, we want to discuss the modifications of the thermodynamic potentials
of a macroscopic system when an electrostatic field is created in the region occupied
by it. A new work parameter is introduced and the operations performed by the
experimenter must be analyzed.
Let us begin by considering a capacitor with plane–parallel plates and neglect edge
effects.1 Consider the capacitor completely immersed in a fluid (liquid or gas) and
indicate, respectively, with V the volume of the portion of the fluid in which the
capacitor is fully immersed and with p the value of the pressure acting on the part
of the fluid that is external to the condenser.2
¹ This means that, if we denote by h the distance between the plates and by Σ their area (we assume
that the two dimensions are comparable), it must be h ≪ √Σ; thus the portion of the capacitor, in
which we commit a significant error in treating the fields and the charge distribution as perfectly
homogeneous, will have a very small volume compared to the volume in which our description will
be accurate enough.
² Let us avoid referring to the “pressure” of the fluid inside the condenser. In this region the fluid is
not, in general, isotropic and the formalism to be used might be quite complicated.
© Springer Nature Switzerland AG 2019
A. Saggion et al., Thermodynamics, UNITEXT for Physics,
Further, denote with qfr the amount of electric charge (free charge) possessed by
the positive plate and with ψ the potential difference between the plates. In order to
change by an infinitesimal amount the charge of the capacitor, we shall spend the
infinitesimal amount of work:
d̂Wel = ψ dqfr .
Let Σ be the area of the plates and h their distance. We have ψ = h|E| and, from the
first Maxwell equation, qfr = Σ|D|, where E is the electric field inside the capacitor
and D the electric displacement. The infinitesimal amount of work done on the fluid
will be written as

d̂Wel = (hΣ) E · dD = Vcond E · dD ,    (10.2)
where Vcond is the volume of the capacitor.
This is a simple but general relation: it states that in order to modify, by an
infinitesimal amount, the electric field within a (small) volume Vcond the infinitesimal
amount of work given by Eq. (10.2), has to be spent.
In order to go further and give Eq. (10.2) any practical utility, the functional
relationship E = E(D) must be known. It will then be useful to recall some basic
relations from a treatise on electrostatics.
10.2 The Response of Matter
When the experimenter creates an electric field or modifies a pre-existing field in a
certain region of space, he has to appropriately move some electrical charges that
he controls (i.e., free charges). If some dielectric material is present in the region,
the distribution of the elementary charges of which it is made up changes, i.e., the
dielectric responds to the disturbance by changing its electrical configuration. We
say that the dielectric is polarized or, more generally, that it changes its polarization state.
The state of polarization of the dielectric is completely described by the polarization vector P = P(r). The vector P is defined by the relation:
dPdip = P(r) dV
dPdip being the (infinitesimal) dipole moment generated by the polarization charges
in the infinitesimal volume dV at the point of coordinates r; therefore, P(r) is the
density of dipole moment possessed by the material. If the field P(r) is known, the
distribution of the polarization charges is known.³ The sum of the distribution of the
polarization charges and of the charges controlled by the observer, the free charges,
³ The density and the surface density of the polarization charges are given, respectively, by −∇ · P and P · n̂.
will be known, and this will enable us to determine the electrostatic field in the entire region.
Since P is an intensive quantity its value will depend on intensive quantities.
In all generality we can assume that it depends on the (molar) density of the
material, the temperature and the electric field acting at the point r.
If we now assume that the dielectric is homogeneous and isotropic then the polarization is parallel to the electric field and then we can write:
P = ε₀ χ E .    (10.4)

This relation defines the quantity χ, named the electric susceptibility of the material. ε₀
is a constant whose value depends on the adopted unit system; it is called the permittivity
of free space or permittivity of vacuum and its value is

ε₀ = 8.854 × 10⁻¹² C² J⁻¹ m⁻¹ .
As we know from the study of electrostatics, the electric displacement D is defined
by the relation:
D = ε₀ E + P ,

and it is straightforward to prove that the sources of the electric displacement are the
free charges, that is, exactly the charges that the observer moves. If we use Eq. (10.4)
we obtain

D = ε₀ (1 + χ) E ,

ε = ε₀ (1 + χ) ,    (10.8)
where ε is named the electric permittivity of the material. From the definition Eq. (10.8),
we may relate the electric displacement to the electric field:

D = ε E ,

and this is the functional relationship between E and D that allows us to calculate the
thermodynamic potentials. As we can see, now the whole problem lies in knowing (or
rather in making a good model for) the dependence of the dielectric susceptibility
χ, and then of the dielectric permittivity ε, on the intensive state variables.
As noted earlier, the susceptibility χ will, in general, be a function of

χ = χ(ρ, T, E) ,

where ρ is the density of the material, T the temperature, and E the intensity of the
field at the point of interest (we consider isotropic materials).
10.3 The Dielectric Constant
The dependence of the electric susceptibility on the electric field is difficult to model.
For small values of the intensity of the applied electric field we can, with good approximation, assume that the response of the material is proportional to the perturbation;
that is, in our case, we will assume that the polarization vector is proportional to
the electric field. In this case, we speak of a linear medium, and this means that the
susceptibility does not depend on the intensity of the field; therefore we can write

χ = χ(ρ, T)    (10.11)

and for the electric permittivity:

ε = ε(ρ, T) .
In this case, the ratio ε/ε₀ of the electric permittivity to that of free space is called the
dielectric constant.⁴
10.4 Thermodynamic Potentials for Linear Dielectrics
The thermodynamic potentials for homogeneous, isotropic and linear materials
are obtained from the fundamental equation when the elementary amount of
work Eq. (10.2) is added:

dU = T dS − p dV + Vcond E · dD + Σγ μγ dnγ ,    (10.13)
dF = −S dT − p dV + Vcond E · dD + Σγ μγ dnγ ,    (10.14)
dG = −S dT + V dp + Vcond E · dD + Σγ μγ dnγ .    (10.15)
As for the dependence of the electric susceptibility Eq. (10.11) on the density, it
will be enough to consider that, with good approximation, it is proportional to
the molar density of the dielectric; in the first instance, the density will be treated
as dependent on the temperature and pressure of the fluid, as expected from its equation
of state in zero-field conditions.
In our case, in a second approximation, the introduction of the electric field will
contribute to a variation of the density and so we have to consider that even the
electric field will enter into the equation of state.
We call electrostriction the phenomenon that consists in the variation of the volume of the fluid associated with a change in the intensity of the electrostatic field at constant pressure and temperature. Quantitatively, this effect is measured by (∂V/∂E)T,p or, equivalently, by (∂V/∂D)T,p.
⁴ The denomination does not mean that the electric permittivity is a constant in the sense that it does
not depend on the state variables, but means that it does not depend on the electric field.
In order to obtain the integrated expressions for the thermodynamic potentials we
will, in the first instance, neglect electrostriction. Returning later to the more general
formulas Eqs. (10.13)–(10.15) we will be able to assess the extent of the phenomenon
that we have neglected.
10.4.1 Thermodynamic Potentials for Linear Dielectrics
Without Electrostriction
In many experimental situations and particularly for fluids whose compressibility
is small, electrostriction will be treated as a phenomenon of the second order. This
means that we will assume, in the first instance, that the density is constant and, with
a second iteration, we will evaluate the change in volume, and therefore in density,
due to the introduction of the field.
Under these conditions, we will assume, at first, ε = ε(T); then the amount
of electrostatic work needed to pass from zero field to a field of intensity E can be easily
obtained by integrating Eq. (10.2), as long as the charging process takes place along
an isothermal process. In fact, only under this condition can we write

d̂Wel^isot = ε Vcond E dE

with ε taken as a constant along the process. Then, the amount of work needed to create an electrostatic field of intensity E is given by

Wel^isot = (ε/2) Vcond E² .
Now we can write the new expression for the free energy of the system:

F = F₀ + (ε/2) Vcond E² ,    (10.17)

where F₀ is the free energy of the system at zero field. The term added to the zero-field
free energy in Eq. (10.17) can be denoted by

Fel = (ε/2) Vcond E² ,
which is defined as the “free energy of the electric field”. This designation is coarse,
because the corresponding term does not describe a property of the electrostatic field alone but of the interaction between the dielectric and the electrostatic
field. As the electrostatic field inside the condenser is considered homogeneous, we can properly imagine
that the free energy (and so will be for the energy and the other extensive thermodynamic
potentials) is uniformly distributed in the volume of the condenser (here is an argument in favor of the definition of extensive quantities based on the volume integral of
their density) and we can usefully define the free energy density due to the presence
of the field, fel, as

fel = (ε/2) E² .
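To fix the orders of magnitude of the free energy density fel = (ε/2)E², a minimal numerical sketch follows; the relative permittivity εr = 80 (roughly water at room temperature) and the field E = 10⁶ V/m are illustrative values, not data from the text.

```python
import math

# Free-energy density stored by the field in a linear dielectric: f_el = (eps/2) E^2,
# with eps = eps_r * eps0. eps_r = 80 and E = 1e6 V/m are illustrative values.
eps0 = 8.854e-12           # C^2 J^-1 m^-1
eps_r = 80.0
E = 1.0e6                  # V/m

f_el = 0.5 * eps_r * eps0 * E**2     # J/m^3
print(f"f_el = {f_el:.1f} J/m^3")

# Quadratic in the field: doubling E quadruples the stored density
assert math.isclose(0.5 * eps_r * eps0 * (2 * E)**2, 4.0 * f_el, rel_tol=1e-12)
# Order of magnitude: a few hundred J/m^3 for these values
assert 100.0 < f_el < 1000.0
```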
Still within the hypothesis of a linear dielectric and neglecting electrostriction, we
can derive an expression for the energy associated with the presence of the field. In
Eq. (10.13) we may write

E · dD = ε E · dE + (dε/dT) E² dT ,
and hence:
dS = (1/T) dU + (p/T) dV − (ε Vcond/2T) dE² − (Vcond/T)(dε/dT) E² dT − Σγ (μγ/T) dnγ .
Let us develop the differential of the energy as a function of the differentials of
V, E², T and nγ:

dS = (1/T)[(∂U/∂V) + p] dV + (1/T)[(∂U/∂E²) − (ε Vcond/2)] dE² + (1/T)[(∂U/∂T) − Vcond (dε/dT) E²] dT + (1/T) Σγ [(∂U/∂nγ) − μγ] dnγ .
Now, by applying the Schwarz identity (Appendix A.4) between the second and the
third term of the second member we get

∂/∂T {(1/T)[(∂U/∂E²) − (ε Vcond/2)]} = ∂/∂E² {(1/T)[(∂U/∂T) − Vcond (dε/dT) E²]} ,

and hence, since the mixed second derivatives ∂²U/∂T∂E² and ∂²U/∂E²∂T are equal and cancel:

∂U/∂E² = (Vcond/2)(ε + T dε/dT) .

The second member of this equation does not depend on the intensity of the field, so
we can easily integrate and get
U = U₀ + (Vcond/2)(ε + T dε/dT) E² ,
where U0 represents the energy of the system with zero field while the second term
is the additive correction due to the presence of the electric field. Similarly to what
was said before for the free energy, we can assume that the energy is distributed
uniformly throughout the volume of the dielectric and define the intensive quantity uel:

uel = (1/2)(ε + T dε/dT) E² ,

which is the correct expression (always within the limits of the adopted approximations) of the variation of the energy density due to the presence of the electrostatic field.
From the definitions of energy and free energy given in Eqs. (2.1) and (4.27), we can derive the contribution of the presence of the electrostatic field to the
entropy and the entropy density, starting from the identity S = (U − F)/T; we get:
S = (U₀ − F₀)/T + (Vcond/2T)(ε + T dε/dT) E² − (Vcond/2T) ε E² ,

which becomes

S = S₀ + Sel ,

where

S₀ = (U₀ − F₀)/T

is the entropy of the system at zero field and

Sel = (Vcond/2)(dε/dT) E²
is the contribution due to the electrostatic field. We can likewise write
the expression for the entropy density due to the electrostatic field:

sel = (1/2)(dε/dT) E² .
It is important to highlight that, within the theoretical context adopted
in this discussion, the entropy due to the interaction with the electric field depends
only on the derivative of the dielectric constant with respect to temperature.
As we will see when we treat the problem of the dielectric constant from
a statistical point of view, the dependence of ε on the temperature is related to
the polar conformation of the molecules, while the contribution to the polarization
due to the polarizability (that is, the term that takes into account the deformation of
the molecules) is described by a term independent of the temperature. For nonpolar
materials the entropy does not depend (in a first approximation) on the presence of the
electrostatic field. This is consistent with the statistical treatment of entropy: in fact,
only for polar molecules does the electrostatic field create a partial orientation of the
molecules and thus reduce the volume of the available phase space (it is also said
that the presence of the electric field creates some “order”). This leads us to predict
a decrease in entropy and indeed, as we shall see, in the statistical model we have
adopted we will find dε/dT < 0.
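The sign prediction can be illustrated numerically from sel = (1/2)(dε/dT)E². The two relative-permittivity values used below for water (about 80.1 at 20 °C and 78.35 at 25 °C) are illustrative tabulated numbers, and the field intensity is an arbitrary choice.

```python
# Sign of the field contribution to the entropy density: s_el = (1/2)(d eps/dT) E^2.
# For a polar liquid like water, eps decreases with temperature; the two eps_r
# values below are illustrative tabulated numbers (~80.1 at 20 C, ~78.35 at 25 C).
eps0 = 8.854e-12                          # C^2 J^-1 m^-1
deps_dT = (78.35 - 80.1) / 5.0 * eps0     # crude finite-difference estimate, per K
E = 1.0e6                                 # V/m, illustrative field

s_el = 0.5 * deps_dT * E**2               # J K^-1 m^-3
print(f"s_el = {s_el:.2e} J/(K m^3)")

# d eps/dT < 0 for a polar substance, hence the field lowers the entropy
assert deps_dT < 0.0
assert s_el < 0.0
```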
10.5 Dielectric Constant for Ideal Gases
In order to make the calculations particularly simple we consider a gas composed of
a single component. All the molecules are characterized by a polarizability α and by their
dipole moment, whose absolute value we denote by β.
In the presence of an electric field E a mole of gas will acquire a total dipole
moment equal to

Pmol = NA α E + NA ⟨β⟩ ,

where NA is Avogadro’s number and ⟨β⟩ is the mean value of the molecular
dipole moment in the field E.
To calculate the latter contribution we will assume that each molecule behaves
independently of the other molecules (low density) and that the
dielectric is in a state of thermodynamic equilibrium. Let us take as z-axis a direction
parallel to the electric field and denote by ϑ the angle between the direction of the
dipole of the molecule and the direction of the field.
The infinitesimal solid angle subtended by the directions which form with the
direction of the field (z-axis) an angle between ϑ and ϑ + dϑ is

dΩ = 2π sin ϑ dϑ .
The potential energy of an electric dipole in an electric field is

Upot = −β · E = −β E cos ϑ ,

therefore the probability that a molecule has an orientation which forms with the
electric field an angle between ϑ and ϑ + dϑ will be

P(ϑ) dϑ = [2π exp(−Upot/kBT) sin ϑ dϑ] / [∫₀^π 2π exp(−Upot/kBT) sin ϑ dϑ] .    (10.33)

For convenience we write Eq. (10.33) in the form

P(ϑ) dϑ = −[exp(−Upot/kBT) d(cos ϑ)] / [∫ exp(−Upot/kBT) d(cos ϑ)] .    (10.34)
We are interested in the mean values of the three components of the dipole moment of
the molecule. It is obvious that the mean values in the directions perpendicular to the
electric field (z-axis) are zero, so let us calculate ⟨βz⟩ only. Since βz = β cos ϑ
we have

⟨βz⟩ = β ⟨cos ϑ⟩ = β ∫ cos ϑ P(ϑ) dϑ .

If we use Eq. (10.34) and we set

x = β E / kB T ,    (10.36)

we obtain

⟨cos ϑ⟩ = [∫₋₁¹ cos ϑ exp(x cos ϑ) d(cos ϑ)] / [∫₋₁¹ exp(x cos ϑ) d(cos ϑ)] .    (10.37)
To calculate these integrals it is convenient to make the change of variable y = x cos ϑ
and to notice that the integrand in the numerator can be written as

cos ϑ exp(x cos ϑ) = (d/dx) exp(x cos ϑ) .

Then, for the numerator of Eq. (10.37) we will have

∫₋₁¹ cos ϑ exp(x cos ϑ) d(cos ϑ) = (d/dx) ∫₋₁¹ exp(x cos ϑ) d(cos ϑ) = (d/dx) [(1/x)(exp(x) − exp(−x))] ,

where the last equality follows from the substitution y = x cos ϑ, with y running from −x to x.
Similarly, for the denominator of Eq. (10.37):

∫₋₁¹ exp(x cos ϑ) d(cos ϑ) = (1/x)(exp(x) − exp(−x)) .

Finally, the desired result will be

⟨cos ϑ⟩ = (d/dx) ln [(1/x)(exp(x) − exp(−x))] .    (10.41)
The derivative in Eq. (10.41) is known as the “Langevin function” L(x):

L(x) = [exp(x) + exp(−x)] / [exp(x) − exp(−x)] − 1/x .
The behavior of L(x) is shown in Fig. 10.1. It is easy to verify that the Langevin
function has the following limiting behavior for very large and for very small
values of the variable x. For large values:

lim(x→∞) L(x) = 1 .

For small values it is convenient to expand L(x) in a power series of the variable x.
Then for x ≪ 1 we find

L(x) ≈ x/3 = β E / (3 kB T) .
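These limiting behaviors are easy to check numerically; the sketch below implements L(x) = coth x − 1/x (with a small-x series to avoid cancellation) and compares it against a direct quadrature of the ratio of integrals in Eq. (10.37).

```python
import math

def langevin(x):
    """L(x) = coth(x) - 1/x; series branch avoids cancellation for small x."""
    if abs(x) < 1e-4:
        return x / 3.0 - x**3 / 45.0
    return 1.0 / math.tanh(x) - 1.0 / x

def mean_cos_numeric(x, n=100_000):
    """Midpoint quadrature of Eq. (10.37) in the variable u = cos(theta)."""
    du = 2.0 / n
    num = den = 0.0
    for i in range(n):
        u = -1.0 + (i + 0.5) * du
        w = math.exp(x * u)
        num += u * w
        den += w
    return num / den

# The quadrature of (10.37) reproduces the Langevin function...
for x in (0.1, 1.0, 5.0):
    assert math.isclose(mean_cos_numeric(x), langevin(x), rel_tol=1e-6)
# ...with the limits of the text: L(x) -> 1 for large x, L(x) ~ x/3 for small x
assert math.isclose(langevin(500.0), 1.0, abs_tol=1e-2)
assert math.isclose(langevin(1e-3), 1e-3 / 3.0, rel_tol=1e-3)
```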
If we denote with (n/V ) the molar density of the gas, the intensity of the polarization
vector (dipole moment per unit volume) will be:
Fig. 10.1 The Langevin function L (continuous line). For large values of the argument x the function tends asymptotically to 1. For small values of x the Langevin function goes to zero with slope 1/3 (dashed line)
|P| = (n/V) [NA α E + NA β² E / (3 kB T)] .
As can be seen, the polarization is proportional to the intensity of the field, for
weak fields. As is evident from Eq. (10.36), weak fields means:

β E ≪ kB T ,

and in this approximation the polarization can be written as

P = (n/V)(a + b/T) E ,

with

a = NA α ,    b = NA β² / (3 kB) .

Finally the dielectric constant, for weak fields, will be expressed by the relation:

ε = ε₀ + (n/V)(a + b/T) .
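As a rough numerical sketch (note that in SI units the orientational susceptibility carries an extra factor 1/ε₀ with respect to the a, b combination written above): using the known dipole moment of the water molecule, β ≈ 6.2 × 10⁻³⁰ C·m (about 1.85 debye), and neglecting the deformation polarizability α, the weak-field susceptibility of dilute steam comes out small, as expected for a dilute gas. The thermodynamic conditions chosen are illustrative.

```python
# Weak-field orientational susceptibility of a dilute polar gas (the b/T term).
# In SI units chi_orient = (N/V) * beta^2 / (3 * eps0 * kB * T); the deformation
# polarizability (the a term) is neglected. Numbers are illustrative:
# water vapour near 1 atm and 373 K, beta ~ 1.85 debye = 6.2e-30 C m.
eps0 = 8.854e-12       # C^2 J^-1 m^-1
kB = 1.381e-23         # J/K
NA = 6.022e23          # molecules per mole
R = 8.314              # J mol^-1 K^-1

beta = 6.2e-30         # C m
T = 373.0              # K
N_over_V = (101325.0 / (R * T)) * NA   # molecules per m^3 (ideal gas at 1 atm)

chi = N_over_V * beta**2 / (3.0 * eps0 * kB * T)
print(f"chi = {chi:.2e}, eps_r = {1.0 + chi:.4f}")

# A small correction to eps_r, decreasing with T at fixed number density
chi_hot = N_over_V * beta**2 / (3.0 * eps0 * kB * 500.0)
assert 1e-3 < chi < 1e-2
assert chi_hot < chi
```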
In the following, the thermodynamic potentials modified by the presence of an electrostatic field will be applied in three cases: first, we calculate the increase of concentration of a gas within a charged condenser; second, we estimate the thermal effect
in charging or discharging a capacitor; and, finally, we calculate the electrostriction
as a second-order effect.
Example 10.1 Increase of gas concentration in charged condensers. Consider
a capacitor with plane and parallel plates, immersed in a gas in ideal conditions.
The volume of the gas is much larger than the volume of the capacitor. In the initial
configuration the condenser is uncharged and the gas is in a state of thermodynamical
equilibrium at uniform temperature and density. The condenser is charged and let E
be the intensity of the electric field (neglect edge effects) and a new equilibrium state
is reached. Consider the gas divided into two subsystems: one part is constituted by
the gas inside the condenser whose volume is V = Vcond maintained constant, and
the other is formed by the gas outside the condenser, also maintained at constant volume.
10 Electrostatic Field
We denote by the superscript “int” and “ext” the values of the thermodynamic
quantities of the gas in the internal and in the external zone, respectively. When the
new equilibrium state is reached we shall observe:
$$T^{\text{int}} = T^{\text{ext}} = T\,, \qquad \mu^{\text{int}} = \mu^{\text{ext}}\,.$$
Let us calculate the chemical potentials. For the gas in the outer zone (zero field
zone) we have:

$$\mu^{\text{ext}} = \left(\frac{\partial F^{\text{ext}}}{\partial n^{\text{ext}}}\right)_{T,V^{\text{ext}}} = \left(\frac{\partial F_0^{\text{ext}}}{\partial n^{\text{ext}}}\right)_{T,V^{\text{ext}}}.$$

Let us refer to Eq. (6.52), which was given for dilute gases and in which the dependence
on the molar concentration $C$ was put in evidence:

$$\mu^{\text{ext}} = \eta^\dagger + RT \ln C^{\text{ext}}\,.$$
As for the gas in the internal zone we refer to Eq. (10.14):

$$\mu^{\text{int}} = \left(\frac{\partial F^{\text{int}}}{\partial n^{\text{int}}}\right)_{T,V^{\text{int}},(E)}.$$

Making use of Eq. (10.17) for the free energy:

$$\mu^{\text{int}} = \left(\frac{\partial F_0^{\text{int}}}{\partial n^{\text{int}}}\right)_{T,V^{\text{int}}} - \frac{V_{\text{cond}}}{2}\, E^2 \left(\frac{\partial \varepsilon}{\partial n^{\text{int}}}\right)_{T,V^{\text{int}}} = \mu_0^{\text{int}} - \frac{V_{\text{cond}}}{2}\, E^2 \left(\frac{\partial \varepsilon}{\partial n^{\text{int}}}\right)_{T,V^{\text{int}}}. \qquad (10.56)$$

In the last passage of Eq. (10.56) we introduced

$$\mu_0^{\text{int}} = \left(\frac{\partial F_0^{\text{int}}}{\partial n^{\text{int}}}\right)_{T,V^{\text{int}}}$$

as the chemical potential of the gas inside the condenser with zero field, and we
performed the derivative of $(\varepsilon E^2)$ at constant $(E)$. By making use of Eq. (6.52) again,
Eq. (10.56) becomes:

$$\mu^{\text{int}} = \eta^\dagger + RT \ln C^{\text{int}} - \frac{V_{\text{cond}}}{2}\, E^2 \left(\frac{\partial \varepsilon}{\partial n^{\text{int}}}\right)_{T,V^{\text{int}}}.$$

For the gas inside the condenser we have $V^{\text{int}} = V_{\text{cond}}$ and
$$\varepsilon = \varepsilon_0 + \frac{n^{\text{int}}}{V^{\text{int}}}\left(a + \frac{b}{T}\right),$$

then the chemical potential of the gas inside the condenser can be written

$$\mu^{\text{int}} = \eta^\dagger + RT \ln C^{\text{int}} - \frac{E^2}{2}\left(a + \frac{b}{T}\right).$$
The condition for phase equilibrium is $\mu^{\text{int}} = \mu^{\text{ext}}$ and we get

$$RT \ln \frac{C^{\text{int}}}{C^{\text{ext}}} = \frac{E^2}{2}\left(a + \frac{b}{T}\right),$$

that is,

$$\frac{C^{\text{int}}}{C^{\text{ext}}} = \exp\left[\frac{E^2}{2RT}\left(a + \frac{b}{T}\right)\right].$$

Since the argument of the exponential function is positive ($a > 0$ and $b \geq 0$), we see
that the concentration inside the condenser is always greater than the concentration
in the external part. When we charge the condenser, the gas is always, partially, drawn into the condenser.
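As a rough numerical illustration, the ratio can be evaluated for assumed values of $a$ and $b$; the parameter values in the sketch below are purely illustrative placeholders, not data from the text:

```python
import math

R = 8.314  # gas constant, J / (mol K)

def concentration_ratio(E: float, T: float, a: float, b: float) -> float:
    """Equilibrium ratio C_int / C_ext = exp[(E^2 / (2 R T)) * (a + b/T)]."""
    return math.exp((E**2 / (2.0 * R * T)) * (a + b / T))

# Made-up molar polarizability parameters, chosen only to show the trend:
a, b = 1.0e-15, 1.0e-12
print(concentration_ratio(E=1.0e7, T=300.0, a=a, b=b))
```

Even for a strong field the enhancement is tiny, but it is always greater than one, in agreement with the sign argument above.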
Finally, let us conclude with a brief discussion of the thermal phenomena which take
place when charging or discharging a capacitor, and of electrostriction, which we
treat as a second-order effect.
Example 10.2 Thermal effects in dielectrics. Let us consider a plane condenser
with the same approximations adopted up to now, filled with dielectric material. The
condenser is charged and the electric field passes from zero to the final value E,
keeping the temperature constant. Let us calculate the amount of heat supplied from
the outside world to the condenser.
Consider an infinitesimal process. From Eq. (10.28) we have

$$\mathrm{d}\hat{Q} = T\, \mathrm{d}S = V_{\text{cond}}\, T\, \frac{\mathrm{d}\varepsilon}{\mathrm{d}T}\, E\, \mathrm{d}E\,.$$

Since $\varepsilon$ depends only on temperature and density, if we neglect electrostriction we
can integrate and obtain

$$Q_{0 \to E} = V_{\text{cond}}\, \frac{T}{2}\, \frac{\mathrm{d}\varepsilon}{\mathrm{d}T}\, E^2\,.$$

In general we have $\mathrm{d}\varepsilon/\mathrm{d}T < 0$ and hence $Q_{0 \to E} < 0$, which means that, when the
condenser is charged isothermally, it must expel heat to the external world. On the
contrary it must be "warmed up" when it is discharged.
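For the dilute polar gas discussed earlier, where $\varepsilon = \varepsilon_0 + (n/V)(a + b/T)$, the sign of $Q$ can be made explicit. This is a short consistency check added here, not a passage of the book:

```latex
\frac{\mathrm{d}\varepsilon}{\mathrm{d}T}
  = \frac{\mathrm{d}}{\mathrm{d}T}\left[\varepsilon_0
      + \frac{n}{V}\left(a + \frac{b}{T}\right)\right]
  = -\,\frac{n}{V}\,\frac{b}{T^{2}} \;<\; 0 ,
\qquad\text{hence}\qquad
Q_{0\to E} \;=\; \frac{V_{\mathrm{cond}}\,T}{2}\,
   \frac{\mathrm{d}\varepsilon}{\mathrm{d}T}\,E^{2}
 \;=\; -\,\frac{n}{V}\,\frac{b\,V_{\mathrm{cond}}}{2\,T}\,E^{2} \;<\; 0 .
```

The heat released is governed entirely by the orientational (temperature-dependent) part of the polarizability.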
Example 10.3 Electrostriction in dielectrics.
As regards the electrostriction, in order to highlight variations in volume as an
effect of the intensity variation of the electric field, we will consider processes at
constant temperature and pressure. Hence we shall refer to Eq. (10.15) but, in order to
work with the intensity of the electric field as the independent state variable, it will
be convenient to subtract from the differential of the Gibbs potential the differential of
the state function $V_{\text{cond}}\, \varepsilon E^2$. We obtain the following exact differential:

$$\mathrm{d}\left(G - V_{\text{cond}}\, \varepsilon E^2\right) = -S\, \mathrm{d}T + V\, \mathrm{d}p - V_{\text{cond}}\, (\varepsilon E)\, \mathrm{d}E + \sum_\gamma \mu_\gamma\, \mathrm{d}n_\gamma\,.$$
We apply the Schwarz identity between the second and the third term of the second
member:

$$\left(\frac{\partial V}{\partial E}\right)_{T,p,n_\gamma} = -V_{\text{cond}} \left(\frac{\partial (\varepsilon E)}{\partial p}\right)_{T,E,n_\gamma} = -V_{\text{cond}}\, E \left(\frac{\partial \varepsilon}{\partial p}\right)_{T,n_\gamma}.$$

If we consider only the dielectric inside the condenser, i.e., we put $V = V_{\text{cond}}$, we obtain

$$\frac{\mathrm{d}V}{V} = -\left(\frac{\partial \varepsilon}{\partial p}\right)_{T,n_\gamma} E\, \mathrm{d}E\,,$$

and, integrating,

$$\left.\frac{\Delta V}{V_0}\right|_{T,p} = -\frac{1}{2} \left(\frac{\partial \varepsilon}{\partial p}\right)_{T,n_\gamma} E^2\,.$$
Chapter 11
Magnetic Field
Abstract Starting from the macroscopic Maxwell equations, the expressions for the
electric work, magnetic work, and radiation of energy are derived. This is, in turn,
the starting point for deriving the expressions of thermodynamic potentials. A brief
outlook to the constitutive relations for diamagnetic, paramagnetic, and ferromagnetic materials is given and hence the expressions for the thermodynamical potentials
in the presence of magnetostatic fields are given. A short reference to the adiabatic
demagnetization closes this chapter.
Keywords Magnetic fields · Diamagnetism · Paramagnetism · Thermodynamic
potentials · Linear media · Adiabatic demagnetization
11.1 Introduction
There are many similarities with the case of the electrostatic field, but some conceptual
differences should also be highlighted.
The first analogy is obvious because it is of general methodological nature: as we
did in the case of the electrostatic field, we have to start considering the operations
that the observer must take to create or to vary the magnetic field in the region
occupied by the macroscopic system under observation and, in particular, we need
to calculate the amount of work done.
As in the case of the electrostatic field, the operations that the observer performs
in a controlled way always consist in the movement of free charges. These operations
have two possible outcomes: the construction of different static configurations (for
example, the charge of a capacitor) or the generation and the variation of free currents.
To begin with the study of thermodynamics in a more general context, it is necessary
to recall the macroscopic Maxwell equations [9].
The sources of the fields are the electric charges and the electric currents. For
the charges, we have to distinguish between free charges and polarization charges:
the former are the charges controlled by the observer, while the latter constitute the
response of the medium to the presence of the free charges. Similarly, we have to
© Springer Nature Switzerland AG 2019
A. Saggion et al., Thermodynamics, UNITEXT for Physics,
distinguish between free currents, operated by the observer, and the magnetization
currents as the response of the medium.
Free charges qfr , whose density is denoted by ρfr , are the sources of the electric
displacement D while the free currents, denoted by the current density j, are the
sources of the vector H that we call magnetizing field.1 The roles of the free charges
and of the free currents are well visible, respectively, in the first and in the fourth of
the four macroscopic Maxwell equations that we report here:
$$\nabla \cdot \mathbf{D} = \rho_{\mathrm{fr}}\,, \qquad (11.1)$$

$$\nabla \cdot \mathbf{B} = 0\,, \qquad (11.2)$$

$$\nabla \times \mathbf{E} = -\,\frac{\partial \mathbf{B}}{\partial t}\,, \qquad (11.3)$$

$$\nabla \times \mathbf{H} = \mathbf{j} + \frac{\partial \mathbf{D}}{\partial t}\,. \qquad (11.4)$$
As the sources of the electric displacement D and of the magnetizing field H are the
free sources, these two fields may be considered analogous. Similarly, we consider
the fields E and B analogous to each other because each one is, in its domain, defined
by the force that is exerted, respectively, on a test charge at rest and on an infinitesimal
segment of wire, very thin, through which an electric (free) current flows.
Thinking in terms of the microscopic view, the two fields are defined by the
so-called Lorentz force which is the force exerted on a point charge q:
F = q (E + v × B) .
11.2 Electric Work, Magnetic Work, and Radiation
In order to construct a given magnetic configuration in space, the observer has to
move electric charges and launch electric currents in suitable circuits and in a suitable
way. To do this, he must operate on electric charges, and this implies doing a certain
amount of work. The latter can be subdivided, in a sensible way, into a part that is
expended to modify the electric field and another part that we associate with the change
of the magnetic field. The latter contribution is called “magnetic work,” and the
former “electric work.”
To give a rigorous quantitative evaluation we must, as always, proceed through
infinitesimal changes. We consider the distribution of the electric and magnetic fields
at a given time t in the whole space, and then we operate to produce a small change in
the fields and calculate the amount of work done in an infinitesimal time interval dt.
Let E, B, and j be, respectively, the electric field, the magnetic field, and the free
current density at time t in a given infinitesimal volume dV .
1 There is no universal agreement on the denomination of the vector field H. Some authors use
the denomination magnetic field, while the vector B is called magnetic induction.
Consider the Maxwell equations and multiply Eq. (11.3) by H and Eq. (11.4) by
E. After subtracting side by side and recalling the identity
H · ∇ × E − E · ∇ × H = ∇ · (E × H)
we obtain
$$\mathbf{E} \cdot \mathbf{j} + \nabla \cdot (\mathbf{E} \times \mathbf{H}) + \mathbf{E} \cdot \frac{\partial \mathbf{D}}{\partial t} + \mathbf{H} \cdot \frac{\partial \mathbf{B}}{\partial t} = 0\,. \qquad (11.7)$$
These terms should be analyzed and interpreted one by one. To do this, we consider
a volume $V$ bounded by a surface $\Omega$ and, after multiplying the above equation by
$\mathrm{d}V\,\mathrm{d}t$, we integrate over the volume. The integral of the first term in Eq. (11.7)

$$\int_V (\mathbf{j} \cdot \mathbf{E})\, \mathrm{d}V\, \mathrm{d}t = \mathrm{d}\hat{W}$$

represents the total amount of work performed by the electromagnetic field in the
considered volume $V$ and in the infinitesimal time interval $\mathrm{d}t$. The integral of the
second term in Eq. (11.7)

$$\int_V \nabla \cdot (\mathbf{E} \times \mathbf{H})\, \mathrm{d}V\, \mathrm{d}t$$

may be written, using the Gauss theorem, in the form

$$\oint_\Omega (\mathbf{E} \times \mathbf{H}) \cdot \mathrm{d}\boldsymbol{\Omega}\, \mathrm{d}t$$

and it represents the amount of energy transferred in the time interval $\mathrm{d}t$ through the
surface $\Omega$ that delimits the volume under observation.
The two remaining terms in Eq. (11.7) represent the part of the work that is spent
to change the fields. Equation (11.7) can be rearranged putting in evidence the total
work done on the charges:

$$\mathrm{d}\hat{W} = -\int_V (\mathbf{E} \cdot \mathbf{j})\, \mathrm{d}V\, \mathrm{d}t = \oint_\Omega (\mathbf{E} \times \mathbf{H}) \cdot \mathrm{d}\boldsymbol{\Omega}\, \mathrm{d}t + \int_V (\mathbf{E} \cdot \mathrm{d}\mathbf{D})\, \mathrm{d}V + \int_V (\mathbf{H} \cdot \mathrm{d}\mathbf{B})\, \mathrm{d}V\,. \qquad (11.10)$$
The last two terms are, as a whole, the amount of work that is used to change the
values of the fields in the volume under consideration. We agree that the second term
on the right represents the part of the total work devoted to vary the electric field
in the considered time interval and denote this term with $\mathrm{d}\hat{W}_{\text{el}}$. This term is called
"electric work":

$$\mathrm{d}\hat{W}_{\text{el}} = \int_V (\mathbf{E} \cdot \mathrm{d}\mathbf{D})\, \mathrm{d}V \qquad (11.11)$$

and, similarly, we agree that the last term represents the amount of work devoted to
vary the magnetic field. We denote this contribution with the symbol $\mathrm{d}\hat{W}_{\text{mag}}$ and call this
term "magnetic work":

$$\mathrm{d}\hat{W}_{\text{mag}} = \int_V (\mathbf{H} \cdot \mathrm{d}\mathbf{B})\, \mathrm{d}V\,. \qquad (11.12)$$
11.3 Constitutive Relations
In order to write how the thermodynamic potentials change in connection with the
introduction of electromagnetic fields, it is necessary to establish the relationships
between the fields appearing in the four Maxwell’s equations and more precisely
between the magnetic field B and the magnetizing field H and between electric
displacement D and the electric field E. These two relations describe the response
of the materials to the electromagnetic actions performed by the observer.
The functional relation between electric field and displacement field was already
discussed in Chap. 10.
Having in mind to create a homogeneous magnetic field in a small volume, we
consider a rectilinear solenoid with cross section $\Sigma$ and length $l$, with $l \gg \sqrt{\Sigma}$. Let
$N$ be the total number of coils, uniformly distributed, and denote with $n = N/l$ the
number of turns per unit length.
Consider first the case in which the solenoid is in a vacuum. The magnetic field
and the magnetizing field in the inner parts of the solenoid are given by

$$B = \mu_0\, n\, I\,, \qquad (11.13)$$

where $\mu_0$ is a constant whose value depends on the units of measure and is called
magnetic permeability in vacuum, and

$$H = n\, I\,. \qquad (11.14)$$

Moving to a vectorial formalism, Eqs. (11.13) and (11.14) may be written as follows:

$$\frac{\mathbf{B}}{\mu_0} = \frac{N\,\mathbf{m}}{V_{\text{sol}}}\,, \qquad (11.15)$$

$$\mathbf{H} = \frac{N\,\mathbf{m}}{V_{\text{sol}}}\,, \qquad (11.16)$$

where $\mathbf{m} = I\,\boldsymbol{\Sigma}$ is the magnetic moment of one coil, $(N\mathbf{m})$ is the total magnetic
moment of the solenoid, $V_{\text{sol}} = \Sigma\, l$ its volume, and $\boldsymbol{\Sigma}$ is a vector with modulus $\Sigma$ (the area of the coil), normal to the coil, oriented according to the "right-hand rule" with respect to the direction of the current.
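A quick numerical check of these solenoid relations, with arbitrary illustrative dimensions and current (the function name and numbers are my own choices):

```python
import math

mu0 = 4.0e-7 * math.pi  # vacuum magnetic permeability, T m / A

def solenoid_fields(N: int, length: float, I: float):
    """B and H inside a long, empty solenoid, edge effects neglected."""
    n = N / length          # turns per unit length
    H = n * I               # Eq. (11.14): H = n I
    B = mu0 * n * I         # Eq. (11.13): B = mu0 n I
    return B, H

B, H = solenoid_fields(N=1000, length=0.5, I=2.0)
# In vacuum, B / mu0 coincides with H, the dipole moment per unit
# volume generated by the free currents:
print(B / mu0, H)
```

The identity `B / mu0 == H` is exactly the statement that, in vacuum, the whole magnetic moment density comes from the free currents.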
Neglecting edge effects, that is, considering B uniform in the whole volume occupied by the solenoid, we see that we can interpret the quantity (B/μ0 ) as the magnetic
moment per unit volume. The reason for giving such prominence to an apparently trivial
comment is the following: while in the case of the electrostatic field the source is the
electric charge, in the case of the magnetic field, the primary source is the magnetic
dipole; it follows that the field responsible for mechanical actions will be written,
in the general case, as a superposition of the fields created by (elementary) dipoles.
Hence, the convenience of describing the basic elementary situations in terms of
dipole moments. Furthermore, it is necessary to point out the true meaning of the
vector H: it represents the dipole moment per unit volume generated by the free
currents (see Eq. (11.16)).
11.3.1 Uniform Medium
Let us consider, now, the effect of the presence of matter. We know that matter, in
the presence of free currents, is magnetized and we describe this modification, in
analogy with the electrostatic case, defining, point by point, the density of induced
magnetic moment or magnetization vector which we denote by the symbol M.
Unlike the electrostatic case, in which the density of the induced electric dipole
moment can be well modeled by appropriate modifications of the displacement of
atomic or molecular charges, in our case the distribution of the magnetization M
cannot always be reduced to a suitable distribution and orientation of closed elementary
currents: in some cases it must be attributed to overall motions (Larmor
rotations) or to effects of electronic and/or nuclear spin, which have no
analogue in the electrostatic case. Therefore, the thermodynamic properties of a
medium in a magnetic field are a more complex matter compared with the electrostatic
case, and a brief recapitulation of what has been learned in the study
of electromagnetism is needed.
Suppose we fill the solenoid with a homogeneous and isotropic material. This
material will magnetize and, given the symmetry of the situation, it will do it uniformly. Therefore, the density of induced magnetic moment will be described by a
uniform vector M. If through the solenoid winding a current with intensity I (free
current) is flowing, the total magnetic moment per unit volume will be the sum of that
generated by the free current (see (11.15)) and that generated by the magnetization
of the material (magnetization currents). The magnetic moment density generated by the free current is represented by the
vector field H which, for this reason, is called magnetizing field and, in our simple
example, is given by

$$H = n\, I\,.$$

So we can write the relation of general validity:

$$\frac{\mathbf{B}}{\mu_0} = \mathbf{H} + \mathbf{M}\,. \qquad (11.19)$$
This relation expresses well the fact that the overall effect exerted on a test charge
in motion is given by the sum of the effect generated by the free currents, that is,
those directly controlled by the observer, to which is added the effect produced by
the currents of magnetization which describe the response of the medium to these free currents.
The expression of Eq. (11.19) is the analogue of that already seen in electrostatics:
$$\varepsilon_0 \mathbf{E} = \mathbf{D} - \mathbf{P}\,. \qquad (11.20)$$
In the latter case, with rare exceptions, if we know one of the three vectors in
Eq. (11.20), we can derive the other two provided that we know enough about the
properties of the material (substantially if we know the dielectric susceptibility which
will be a function of density, temperature, and, in general, also of the field).
In the case of the magnetostatic field, this does not always happen as in the case of
ferromagnetic materials and, more generally, in all cases in which the phenomenon
of hysteresis is present. In these cases, the relationship between the magnetic field
and the magnetizing field also depends on the procedures that have been followed
in the past to achieve a given configuration. For this reason, they do not allow a
simple thermodynamic treatment and are beyond the scope of this discussion. Then,
we write

$$\mathbf{B} = \mu\, \mathbf{H}\,.$$
This relation defines the quantity μ which is called magnetic permeability of the
material and, in general, depends on chemical composition, density, temperature, and
the intensity of the magnetizing field. The ratio μ/μ0 is called magnetic coefficient
and, in the case that it does not depend on the intensity of the magnetic field, will be
called magnetic constant.
From the point of view of their magnetic behavior, the materials can be grouped
into three categories: diamagnetic, paramagnetic, and ferromagnetic materials.
To characterize the behavior of a material in a magnetic field, it will be more
functional to refer to the behavior of the magnetic susceptibility.
So, the magnetic susceptibility tells us how the material is magnetized when
immersed in a given magnetizing field. It may depend on the composition, the density,
the temperature, and the intensity of the magnetizing field.
It is defined by the relation
M = χm H .
from which we have the relation
μ = μ0 (1 + χm ) .
In the cases where the susceptibility can be considered independent of the intensity
of the magnetizing field, the material will be said to be a linear material because,
in these cases, the magnetization is proportional to the intensity of the magnetizing
field (of course at constant values of the other state variables).
11.4 Diamagnetic Materials
The diamagnetic materials are characterized by a negative value of the susceptibility:
χm < 0 .
In this case, the magnetization has opposite direction to the magnetizing field, and
then the resulting magnetic field B has a value smaller than that in the case of
the vacuum in correspondence to the same intensity of the magnetizing field. Its
numerical value is always $\ll 1$, and it follows that B and H are always parallel.
Diamagnetic materials are glass, water, oil, carbon, mercury, gold and silver,
copper and zinc, nitrogen, chlorine, and many organic compounds.
Diamagnetism is explained by the Langevin theory [17]. The introduction of an
external magnetic field alters the motion of atomic or molecular electrons superimposing on their “normal motion” (i.e., in the absence of field), an overall precession
(Larmor precession), with an angular velocity proportional to the intensity of the
external magnetic field.
If the atoms or the molecules have zero magnetic dipole moment, this precession
motion induces on them a magnetic moment proportional to the external field (i.e.,
to the angular velocity of the precession motion) and such as to generate a magnetic
field with the opposite orientation (see Lenz law).
So we would expect that the magnetic susceptibility is independent of the intensity
of the external field (linear medium).
Further, we expect that the angular velocity of precession is independent of the
speed of translation or rotation of the molecules and therefore the susceptibility is
independent of temperature. Experience confirms very well the Langevin theory.
11.5 Paramagnetic Materials
In paramagnetic materials, the elementary constituents (atoms or molecules) possess
a non-zero magnetic moment and then behave as dipoles whose magnetic moment
can be reasonably kept constant in absolute value. The treatment is similar to the case
already seen for polar dielectrics: for weak external field, the dipoles are partially
oriented parallel to the field and the magnetization increases linearly with the field.
For intense fields, the orientation of the dipoles tends to saturation, and then the
magnetization no longer increases. In this case, χm ∝ B −1 . For weak fields, M ∝ B
and hence χm does not depend on the external field. In all cases, the magnetization
is parallel and concordant with the external field then we will have
χm > 0
and consequently the magnetic field intensity increases. An increase in temperature
causes a decrease in the degree of orientation of the dipoles and so decreases the susceptibility.
For weak fields, i.e., in cases of a magnetization proportional to the field, the
dependence of the magnetic susceptibility on temperature has been experimentally
found by P. Curie in 1906, for some substances, and is of the type χm = A/T . Of
course, the Larmor precession of the electronic motion, which explains well the
phenomenon of diamagnetism, is present also in this case, but the effect is, in absolute
value, much smaller than the effect due to partial orientation of the magnetic dipoles
and therefore results, in practice, negligible.
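The Curie behavior $\chi_m = A/T$ is simple to tabulate; here is a short sketch in which the Curie constant `A` is an arbitrary illustrative number, not a measured value:

```python
def chi_curie(A: float, T: float) -> float:
    """Weak-field paramagnetic susceptibility, Curie's law: chi_m = A / T."""
    return A / T

A = 1.5e-2  # illustrative Curie constant
# Doubling the temperature halves the susceptibility (and hence the
# degree of orientation of the dipoles at a given magnetizing field):
print(chi_curie(A, 300.0), chi_curie(A, 600.0))
```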
As we saw in the discussion of the Langevin’s theory for the case of electrostatic
fields, it is supposed that the orientation of each elementary dipole can be treated
“individually”, i.e., the different dipoles are not significantly related to each other.
The calculations already performed in the electrostatic case for polar molecules can
be repeated in the case of paramagnetic materials by substituting the electrostatic
field with the magnetic field and the elementary dipole moment of the molecule with
its elementary magnetic moment.
11.5.1 Long, Rectilinear, and Homogeneous Solenoid
Let us apply, now, the previous relations to the case of a rectilinear solenoid filled
with an isotropic, homogeneous, and non-ferromagnetic material. In this case, we
have $B = \mu\, n\, I$ and, making use of Eq. (11.12), the amount of work we have to do in order to produce
an infinitesimal change of the magnetic field intensity will be

$$\mathrm{d}\hat{W}_{\text{mag}} = V_{\text{sol}}\, n^2 I\, \mathrm{d}(\mu I)\,,$$
which may, also, be written as

$$\mathrm{d}\hat{W}_{\text{mag}} = N I\, \mathrm{d}(\mu\, n\, \Sigma\, I)\,.$$
As it can be seen, the amount of work we have to spend to create a given current
depends on the procedure used, and this is true not only because the magnetic permeability of the material may depend on all the state variables, but also because in
the expression Eq. (11.10) the contribution relative to the flux of the vector (E × H)
may be non-negligible. In our treatment, we shall neglect this term.
11.6 Thermodynamic Potentials in the Presence of
Magnetostatic Fields
Let us consider a material whose fundamental relation in the absence of magnetic
field is known, and let us denote all its thermodynamic potentials by the suffix 0, like,
for instance, $U_0$, $S_0$, $F_0$, $\mu_0$.
We now want to obtain their expression when the material is immersed in a
magnetostatic field B (function of the coordinates). To do this, we will have to build
a suitable distribution of free currents which will be increased from zero to their
suitable final values needed to obtain the desired magnetic field configuration. The
total amount of magnetic work done will be given by the integral given in Eq. (11.12).
It will be

$$W_{\text{mag}} = \int_V \mathrm{d}V \int_0^{B} \mathbf{H} \cdot \mathrm{d}\mathbf{B}\,.$$
As it can be seen, in the process that leads the current from the initial zero value
to the final value, this integral depends, in general, on the procedure followed in
the process because, in general, the magnetic permeability depends on all the state
variables and therefore cannot be considered as being constant during the onset of
the free currents.
We will then need to focus on what simple knowledge we have of the magnetic
properties of the material in order to choose the appropriate mode of operation.
11.6.1 Expression of the Thermodynamic Potentials
We will limit ourselves to processes in which the density is considered as a constant
(neglecting thermal dilatation and/or magnetostriction). The relation between H and B
will depend only on temperature. If we operate with isothermal processes, Wmag will
give the measure of the variation in the free energy:

$$F - F_0 = \int_V \mathrm{d}V \int_0^{B} \mathbf{H} \cdot \mathrm{d}\mathbf{B}\,,$$
and, in the case of a very long solenoid (in which we can neglect the edge effects and
consider the fields homogeneous inside and zero outside), we define the variation of
free energy density:

$$f = f_0 + \int_0^{B} \frac{1}{\mu}\, \mathbf{B} \cdot \mathrm{d}\mathbf{B}\,.$$

From the general relation $S = -(\partial F/\partial T)_V$, we immediately obtain the following
expressions for the entropy and energy densities, respectively:

$$s = s_0 - \int_0^{B} \frac{\partial (1/\mu)}{\partial T}\, \mathbf{B} \cdot \mathrm{d}\mathbf{B}\,,$$

$$u = u_0 + \int_0^{B} \left[\frac{1}{\mu} - T\, \frac{\partial (1/\mu)}{\partial T}\right] \mathbf{B} \cdot \mathrm{d}\mathbf{B}\,.$$
11.6.2 Linear Media
For materials with no hysteresis and for weak fields (as it is very often in lab activities),
we can consider μ independent of the field and therefore dependent, at most, only
on temperature.
In this case, the relations that provide the thermodynamic potentials are simplified to:

$$f = f_0 + \frac{B^2}{2\mu} = f_0 + \frac{1}{2}\, \mu H^2\,,$$

$$s = s_0 + \frac{1}{2}\, \frac{1}{\mu^2}\, \frac{\mathrm{d}\mu}{\mathrm{d}T}\, B^2 = s_0 + \frac{1}{2}\, \frac{\mathrm{d}\mu}{\mathrm{d}T}\, H^2\,,$$

$$u = u_0 + \frac{1}{2}\left(\frac{1}{\mu} + \frac{T}{\mu^2}\, \frac{\mathrm{d}\mu}{\mathrm{d}T}\right) B^2\,.$$
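The entropy expression for a linear medium can be checked against $s = -(\partial f/\partial T)_B$ by finite differences. The Curie-type permeability used below is an assumed toy model (the constant `C` is illustrative), not a material from the text:

```python
import math

mu0 = 4.0e-7 * math.pi
C = 1.0e-2  # toy Curie constant (illustrative)

def mu(T: float) -> float:
    """Toy linear-medium permeability mu(T) = mu0 (1 + C/T)."""
    return mu0 * (1.0 + C / T)

def f_field(T: float, B: float) -> float:
    """Field part of the free-energy density: B^2 / (2 mu)."""
    return B**2 / (2.0 * mu(T))

def s_field(T: float, B: float) -> float:
    """Field part of the entropy density: (1/2)(1/mu^2)(dmu/dT) B^2."""
    dmu_dT = -mu0 * C / T**2
    return 0.5 * dmu_dT * B**2 / mu(T)**2

# s = -(∂f/∂T)_B : compare the analytic formula with a central difference.
T, B, h = 4.0, 1.0, 1.0e-3
numeric = -(f_field(T + h, B) - f_field(T - h, B)) / (2.0 * h)
print(numeric, s_field(T, B))
```

Since $\mathrm{d}\mu/\mathrm{d}T < 0$ here, the field contribution to the entropy is negative: switching on the field at constant temperature lowers the entropy, which is the fact exploited by adiabatic demagnetization in the next section of the book.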
11.7 Adiabatic Demagnetization
It is of particular interest to study the behavior of some paramagnetic substances
in the conditions in which only the degrees of freedom related to paramagnetic
behavior are relevant. For instance, in the case of ferric alum NH₄Fe(SO₄)₂·12H₂O
at temperatures of about 2–3 K, the vibrational, rotational, and translational degrees of
freedom are substantially “frozen” in the sense that the molecules are all in their
respective fundamental states. The only contribution to the entropy of the material is
due to the paramagnetic response of the material to an external magnetic field Bext .
It is relatively simple to consider a spherical sample immersed in a uniform magnetic field and calculate the entropy of interaction between the sample and the external
field. It can be shown that, in these conditions and for weak fields ($N_A \beta B \ll RT$,
with $N_A$ Avogadro's number and $\beta$ Bohr's magneton), the entropy depends on the
total spin quantum number $s$ and on the ratio $(\beta B_{\text{ext}}/RT)^2$.
In an adiabatic transformation, that is, at constant entropy, the ratio $(B_{\text{ext}}/T)$ must
be kept constant:

$$\frac{B_{\text{ext}}}{T} = \text{constant}\,.$$
If we bring the material to a temperature sufficiently low that the above conditions
are verified and we do it in the presence of a magnetic field Bext , and thereafter
we diminish the intensity of the magnetic field while maintaining the system adiabatically isolated, the temperature will decrease further and the decrease will be
proportional to the decrease of the intensity of the magnetic field. The process is
named adiabatic demagnetization.
By means of this technique, we can easily go to temperatures of the order of
T ∼ 10−3 K.
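The scaling $B_{\text{ext}}/T = \text{const}$ gives a one-line cooling estimate; the starting values below are arbitrary illustrative numbers:

```python
def demagnetize(T_i: float, B_i: float, B_f: float) -> float:
    """Isentropic step with B_ext / T = const: T_f = T_i * (B_f / B_i)."""
    return T_i * (B_f / B_i)

# Start at 1 K in a 1 T field, relax the field to 10 mT:
print(demagnetize(T_i=1.0, B_i=1.0, B_f=0.01))  # temperature drops 100-fold
```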
Fortunately (or unfortunately depending on your point of view), we cannot bring
the system to zero temperature by letting the intensity of the magnetic field go to
zero because at temperatures of the order of 10−2 –10−3 K the material ceases to be
paramagnetic and becomes either diamagnetic or ferromagnetic.
11.8 Ferromagnetic Materials
Also in this case, the susceptibility is positive, and then the magnetization strengthens
the external field, but in this case its value is several orders of magnitude greater than
in the case of paramagnetism. This striking difference is explained by admitting a
strong correlation between neighboring dipoles within regions whose size depends
on the material and are determined by quantum phenomena.
These regions are called “domains” and constitute, in their turn, a sort of “big
elementary dipoles” with magnetic moment equal to the sum of the dipole moments
of all the elementary constituents contained in the same domain (which defines the
level of correlation).
At a very gross level, then, we could think of a treatment similar to that seen for
paramagnetic materials but in which the elementary constituent has a dipole moment
N times more intense. The number N corresponds to the order of magnitude of the
magnetic susceptibility in relation to the one we have in the case of paramagnetic
substances. However, the analogy cannot be sustained because, unlike the case of
paramagnetism, the dimensions of the Weiss domains also depend on the intensity
of the magnetizing field. The latter increases the level of correlation (and therefore
the size) of the domains oriented parallel to the field and reduces the dimension, and
then the magnetic moment, of those oriented in the opposite direction.
The phenomenon of the magnetic hysteresis poses severe obstacles to the definition of thermodynamic equilibrium states and, as a consequence, of the thermodynamic potentials. In addition, the dependence of the magnetization intensity on the
magnetizing field is very complex and will not be treated here.
Chapter 12
Thermodynamics of Radiation
Abstract The problem of the thermodynamical properties of radiation in equilibrium with matter is studied. After defining the emissivity and the absorptivity of
material surfaces, the Kirchhoff’s law in the extended form is discussed together
with some fundamental consequences. After deriving the relation between spectral
energy density and spectral emissivity in a cavity, the Wien’s law is derived from
relativistic invariance requirements. The Wien's displacement law and the Stefan–
Boltzmann law are obtained as consequences of the Wien's law. Some examples
concerning the solar constant and the temperature of the solar corona are given.
The thermodynamic potentials of radiation in thermal equilibrium with matter are
obtained, and some exercises are proposed. In the appendix to the chapter, the spectral energy density of black-body radiation is obtained, from experimental data and
through thermodynamics, according to the original line of thought by Planck in 1900.
Keywords Absorptivity · Emissivity · Emittance · Black body · Kirchhoff’s law ·
Radiation density · Wien’s law · Thermodynamic potentials · Rayleigh–Jeans ·
Ultraviolet catastrophe · M. Planck solution · Adiabatic expansion of radiation
12.1 Introduction
All materials interact, with different modalities, with electromagnetic radiation, and
this shows that they are all composed of different agglomerates of electric charges.
For a detailed description of this mutual interaction, it is necessary to refer to a specific
atomic–molecular model but much can be understood by the general principles of
physics no matter what the detailed microscopic nature of matter is.
Recall that a very important part of the study of the matter–radiation interaction, formerly known
as the "problem of the black body," has been a major moment of transition to the
so-called "modern physics"; but it was the deep knowledge of thermodynamics which
allowed Planck to find that lucky interpolation1 which led him to the solution of the problem.
1 This
was the expression used by M. Planck in his speech to the ceremony of the Nobel Prize in
1920. See Sect. 12.6.3.
Let us see, briefly, some fundamental experimental observations that will allow
us, on one side, to begin the study of these phenomena and, on the other, to understand why physicists at the end of the nineteenth century found themselves facing
contradictions that were to prove insurmountable.
12.2 Kirchhoff’s Law
Kirchhoff’s law establishes some universal features of the matter–radiation interaction but it is necessary to define, first, some fundamental quantities.
12.2.1 Absorptivity of Material Bodies
If the surface of a body, call it C, is hit by some electromagnetic radiation, the latter
is partly absorbed and partly reflected. The quantitative study of this phenomenon
shows that, in general, it depends on the nature of the body C, on its temperature,
on the wavelength of the incident radiation, on its state of polarization, and on the
geometry (angle of incidence, etc.).
We define the absorptivity (or absorptive power) A of a surface as the ratio of the
amount of energy absorbed to the amount of incident energy in the same time interval.
As this ratio certainly depends, in general, on the temperature of the body, on
the wavelength of the incident radiation, and also on the angle of incidence of the
radiation, we will speak of the absorptivity of a body at a certain temperature, relative
to radiation of a certain wavelength λ (more precisely in the interval (λ, λ + dλ)),
as its integral value when it is hit by isotropically distributed radiation (we always
refer, here, to isotropic bodies and in a state of thermodynamic equilibrium). The
absorptivity is a pure number in the interval between 0 and 1.
12.2.2 Emissivity of Material Bodies
Consider one surface element of a body C and let dσ be its area and n̂ the normal unit vector oriented outward. At a distance r from this surface element we place another body R, that we might consider a receiver, by means of which we want to analyze the radiation emitted by the elementary surface of area dσ of body C. Denote by dσ′ the area of a surface element of this detector body and by n̂′ its normal unit vector oriented outward. We denote, respectively, by ϑ and ϑ′ the angles that the unit vectors n̂ and n̂′ form with the line joining the two elementary surfaces (see Fig. 12.1).
Suppose, for the moment, that there are no other sources of radiation in the vicinity
so that we can assume that every surface element of the body C is not reflecting
Fig. 12.1 The geometrical quantities used throughout the chapter are defined: dσ and dσ′ are two surface elements of the two bodies C and R, respectively. The unit vectors n̂ and n̂′ are oriented outward from the two bodies, and the two angles ϑ and ϑ′ are formed by the two normals with the segment of length r connecting the two surface elements
radiation emitted by other bodies. For the moment, we assume also that R is not
emitting significantly.
The infinitesimal quantity of energy dε, emitted by dσ and impinging on the infinitesimal part dσ′ of the receiver, is written in this way:

dε = B (dσ cos ϑ dσ′ cos ϑ′ / r²) dt .    (12.1)
This relation defines the quantity B, which has the dimension of a power per unit
area, and is called the emissivity of the body C at the point considered.
As the emissivity has been defined by Eq. (12.1), in which the geometric factors have been put in evidence, it will happen in many cases (for example, for isotropic and homogeneous materials) that the coefficient B proves to be independent of the direction along which the surface dσ is observed. In other words, this means that if we place the receiver at a distance r, we receive a power which is proportional, according to the coefficient B, to the projection of the emitting surface dσ perpendicular to the line of sight.
Suppose that the amount of energy received per unit time by the surface element dσ′ of the detector R depends only on the projection of the surface dσ perpendicular to the line of sight. If we denote by

dΩ′ = dσ′ cos ϑ′ / r²    (12.2)

the elementary solid angle subtended by the receiver dσ′ when seen from dσ, Eq. (12.1) may also be written as

dε = B dσ cos ϑ dΩ′ dt .    (12.3)
The emissivity depends on the nature of the emitting material, on its temperature,
and on the observed wavelength. In the case mentioned above and throughout the
following discussion, we will consider B independent of the direction of emission
and the state of polarization.
Likewise, if the body R emits isotropically with emissivity B′, the energy radiated by the elementary surface dσ′ onto the elementary surface dσ per unit time can be written as

dε′ = B′ dσ′ cos ϑ′ dΩ dt ,    (12.4)

where

dΩ = dσ cos ϑ / r²

is the solid angle subtended by dσ when seen from dσ′. Like in the previous case, the effective surface is the projection of dσ′ orthogonal to the direction ϑ′. In conclusion, the definition Eq. (12.1) of the emissivity B allows us to consider only the projections of the emitter or receiver surfaces perpendicular to the mutual line of sight, and the solid angle within which each one sees the other.
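The reciprocity between Eqs. (12.1) and (12.3) is easy to check numerically. The sketch below uses arbitrary illustrative values for B, the patch areas, the angles, and the distance; none of these numbers come from the text.

```python
import math

def received_power(B, d_sigma, theta, d_sigma_p, theta_p, r):
    """Power transferred between two patches, Eq. (12.1):
    d(epsilon)/dt = B * d_sigma*cos(theta) * d_sigma_p*cos(theta_p) / r**2."""
    return B * d_sigma * math.cos(theta) * d_sigma_p * math.cos(theta_p) / r**2

def received_power_solid_angle(B, d_sigma, theta, d_omega_p):
    """Equivalent form, Eq. (12.3): projected emitter area times the solid
    angle d(Omega') = d_sigma' * cos(theta') / r**2 subtended by the receiver."""
    return B * d_sigma * math.cos(theta) * d_omega_p

# Illustrative numbers (not from the text):
B = 2.0e7                              # emissivity, W m^-2 sr^-1
dS, dSp = 1e-4, 1e-4                   # two 1 cm^2 patches
theta, theta_p = math.radians(30), math.radians(45)
r = 2.0                                # separation in metres

p1 = received_power(B, dS, theta, dSp, theta_p, r)
d_omega_p = dSp * math.cos(theta_p) / r**2
p2 = received_power_solid_angle(B, dS, theta, d_omega_p)
assert math.isclose(p1, p2)            # Eqs. (12.1) and (12.3) agree
```

Tilting either patch away from the line of sight (increasing ϑ or ϑ′) reduces the transferred power through the cosine factors, exactly as the text describes.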
12.2.3 Black Body
In the definition of emissivity, we have assumed that the power radiated by the surface dσ toward the detector was solely due to the electromagnetic processes occurring in the body C. Even when we considered the case in which the body R was also a normal emitter, we neglected the possibility of a mutual reflection between the two elementary surfaces.
The radiation collected by a detector in a given direction and coming from a
surface element of the body C will, in general, be given by the superposition of a
component radiated from the material which constitutes the body (and this contribution is described by the emissivity) and a component that derives from reflection by
the surface element (and according to specific laws) of the radiation emitted by all
the other surrounding bodies.
We will call black body a body that has absorptivity A0 = 1 at all wavelengths.
For a black body, therefore, the radiation emitted by one of its surface elements is
entirely due to the component radiated by the material.
The black body, therefore, represents an asymptotic situation but the reasoning on
this abstract system is of fundamental importance for understanding the behavior of
real bodies. As is well known, we can visualize a black body in the following way:
Consider a cavity made in the interior of a material body kept at a constant
temperature. This body has reached a state of thermodynamic equilibrium.
We make a small hole in the wall of the cavity.²

² The transverse dimension of the hole is very small compared to the transverse dimension of the cavity. Also, the thickness of the wall in which the hole is made must be small.
Fig. 12.2 Schematic representation of a black body with area dσ. The surface of the hole in the material cavity, observed from outside, has the property of absorbing completely all the radiation incident on it. The radiation emerging from inside is entirely due to the electromagnetic processes active inside the cavity. If the opening is small enough, the outgoing radiation observed from outside in any direction ϑ with the normal n̂ is a faithful sample of the radiation inside the cavity when it is closed
The external surface of this small hole has the following properties: (i) any ray
coming from the outside of the cavity and which impinges on this surface of the
hole enters the cavity, undergoes a number of interactions with the inner walls of
the cavity, and is completely absorbed. (ii) The small size of the hole does not
substantially disturb the distribution of the radiation that was determined within the
cavity when it was closed and all had stabilized in an equilibrium configuration
(see Fig. 12.2).
The radiation coming out of the small hole can thus be considered a fairly faithful
sample of the equilibrium radiation inside the cavity before the opening of the hole.
For this, it is also necessary that the thickness of the material in the vicinity of the
hole is small compared to the size of the hole itself.
The surface of this small hole, seen from the outside, will thus show an absorptivity
A0 = 1 for all wavelengths and is therefore a black body. The radiation that we
observe from the outside is entirely due to the outflow of a small part of the radiation
which is created inside the cavity by the radiative processes active within the material
which constitutes the cavity.
With different geometries of the hole, we can study the intensity of radiation
that propagates inside the cavity, in different directions and, with the use of filters,
at different frequencies (or wavelengths). In this way, the equilibrium of radiation
present in the cavity can be considered as the superposition of monochromatic rays
(in geometrical optics sense) which propagate in different directions. The analysis
shows that the flux density is isotropic whatever the shape of the cavity is.
12.2.4 Kirchhoff’s Law for the Emissivity of a Black Body
Let us investigate the spectral distribution of the radiation emitted by a black body.
If we denote by B0(ν, T) the spectral emissivity of a black body at frequency ν, or more exactly in the frequency interval (ν, ν + dν), the infinitesimal amount of energy received by the observer (the surface element of the receiver, as in Fig. 12.1) in this frequency range and per unit time can be written as

dεν = B0(ν, T) (dσ cos ϑ dσ′ cos ϑ′ / r²) dν dt .

If we compare this with Eq. (12.1), we see that the total emitted energy dε will be

dε = ∫₀^∞ dεν

and, if we denote by B0 the emissivity of a black body, we have

B0(T) = ∫₀^∞ B0(ν, T) dν .
Kirchhoff has shown that the spectral intensity of the radiation emitted by a black body depends only on the temperature T of the material and on the observed frequency ν but does not depend on the nature of the material used.
Guided by some experimental observations, Kirchhoff proved this proposition by referring to the second principle of thermodynamics.
We build two cavities with different materials and keep them at the same temperature T. On each of them, we make a small hole and make sure that the outgoing
radiation from each is oriented so as to enter into the other and, with the use of
suitable filters, we do this for each frequency.
Let us denote with B₀ᵃ(ν, T) and with B₀ᵇ(ν, T) the spectral emissivity of each
of the two black bodies, at the same frequency ν and temperature T . If one of them
was greater than the other, we would get a net transfer of energy between the two
bodies. This would mean that, without any external intervention, we could obtain a
temperature difference between two bodies initially at the same temperature.
One important consequence of this result is that it allows us to treat the radiation contained in a cavity at equilibrium as a thermodynamic system in its own right, that is, one to which the thermodynamic potentials can be attributed regardless of the nature of the material wall with which it interacts. Indeed, if we refer to Eq. (12.34), we see that, by Kirchhoff's law, the energy density of the equilibrium radiation that is formed in the cavity will also be independent of the material which has generated it, and is then a property of the radiation only.
12.2.5 One Fundamental Consequence of Kirchhoff’s Law
Kirchhoff’s law states, then, that the spectral emissivity of a black body B0 (ν, T )
is a universal function with exactly the same value of universality that we ascribe to
the second principle of thermodynamics.
Let us see how the expression of the black-body emissivity can be derived, apart from a multiplicative constant, from arguments based solely on dimensional analysis.
All laws of classical physics are built on five fundamental quantities: the three of mechanics (mass, length, and time), the electric charge q, and the temperature T.
From Kirchhoff’s Law3 we can assume that B0 (ν, T ) does not depend on the value
of the electric charge which couples radiation with matter but depends only on the
frequency ν, on the temperature of the cavity T and on the remaining fundamental
constants with which the laws of physics are given.
In a classical context, that is pre-quantum, these fundamental constants are c
(speed of light in vacuum) and k B (Boltzmann’s constant).
We build the product

B0(ν, T) ν^n c^p T^q k_B^r = Γ ,    (12.9)

with n, p, q, r constants to be determined on the sole condition that the expression Eq. (12.9) is dimensionless. This condition is also called the scale invariance condition, and we denote with Γ the value of the product Eq. (12.9).⁴ Imposing this condition is equivalent to Kirchhoff's law, i.e., to the universality of the law that regulates the emissivity of a black body at thermodynamic equilibrium.
For convenience we replace, among the fundamental quantities of mechanics, the
mass m with the unit of energy e. This means that with this new fundamental system
of units, mass would have the dimension:
[mass] = [energy] × [length]⁻² × [time]² .
The dimensions of the five quantities involved in the product Eq. (12.9) are

[B0(ν, T)] = [energy] × [length]⁻² ,
[ν] = [time]⁻¹ ,
[c] = [length] × [time]⁻¹ ,
[T] = [temperature] ,
[k_B] = [energy] × [temperature]⁻¹ .
³ As we shall prove in Sect. 12.6 of this chapter, in the treatment by M. Planck of the coupling between matter and radiation, the result will appear to be independent of the magnitude of the electric charge. This can be seen as a consequence of Kirchhoff's law.
⁴ The value of Γ cannot be determined by the scale invariance arguments.
Substituting in Eq. (12.9), we obtain

Γ = ([energy] × [length]⁻²) × [time]^(−n) × ([length] × [time]⁻¹)^p × [temperature]^q × ([energy] × [temperature]⁻¹)^r .

By performing algebraic simplifications, we obtain

Γ = [energy]^(1+r) × [length]^(−2+p) × [time]^(−n−p) × [temperature]^(q−r) .
The requirement of scale invariance demands that the four following relations be satisfied:

(1 + r) = 0 ,
(−2 + p) = 0 ,
(−n − p) = 0 ,
(q − r) = 0 .
These relationships lead to a unique solution:

B0(ν, T) = Γ (ν²/c²) k_B T

and, using Eq. (12.34), we get for the spectral energy density⁵:

u(ν, T) = Γ (4π ν²/c³) k_B T .    (12.23)
Apart from the numerical value of the coefficient Γ, which cannot be determined by Kirchhoff's law alone, this is the expression for the spectral energy density provided by classical (pre-quantum) physics [7].
This expression leads to an irreconcilable contradiction that is called the ultraviolet catastrophe. This name derives from the fact that the expression for the energy density

u(T) = ∫₀^∞ u(ν, T) dν

diverges for any value of the temperature. It should be noted that we are forced to this result because we have assumed a theoretical framework based on the two fundamental constants k_B and c only.
⁵ The expression of the spectral energy density, Eq. (12.23), was first calculated in 1900 by Lord Rayleigh with an argument based on classical statistics. For the fundamental product Γ, he found the result Γ = 2.
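Both steps of the argument above, the uniqueness of the exponents in Eq. (12.9) and the divergence of the classical energy density, can be checked mechanically. A minimal Python sketch; the dimension bookkeeping and the cutoff values are our own illustrative choices, not the book's:

```python
import math

# --- 1. The exponents of Eq. (12.9) ---
# Dimensions as exponent tuples (energy, length, time, temperature),
# in the text's fundamental system where energy replaces mass.
DIMS = {
    "B0": (1, -2, 0, 0),   # spectral emissivity: energy * length^-2
    "nu": (0, 0, -1, 0),
    "c":  (0, 1, -1, 0),
    "T":  (0, 0, 0, 1),
    "kB": (1, 0, 0, -1),
}

def total_dimension(exponents):
    """Dimension of the product B0^1 * nu^n * c^p * T^q * kB^r."""
    return tuple(sum(e * DIMS[name][i] for name, e in exponents.items())
                 for i in range(4))

# The unique solution n = -2, p = 2, q = -1, r = -1,
# i.e. B0(nu,T) = Gamma * (nu^2/c^2) * kB * T, is dimensionless:
solution = {"B0": 1, "nu": -2, "c": 2, "T": -1, "kB": -1}
assert total_dimension(solution) == (0, 0, 0, 0)

# --- 2. The ultraviolet catastrophe ---
K_B = 1.380649e-23   # J/K
C = 2.99792458e8     # m/s

def u_classical(nu, T, gamma=2.0):
    """Classical spectral energy density, Eq. (12.23), with Rayleigh's Gamma = 2."""
    return gamma * 4 * math.pi * nu**2 * K_B * T / C**3

def u_total(T, cutoff, steps=10_000):
    """Midpoint Riemann sum of u(nu,T) up to a frequency cutoff."""
    d = cutoff / steps
    return sum(u_classical((i + 0.5) * d, T) for i in range(steps)) * d

# Doubling the cutoff multiplies the truncated integral by 2^3 = 8:
# u(T) diverges as cutoff^3 at every temperature.
ratio = u_total(300.0, 2e14) / u_total(300.0, 1e14)
assert math.isclose(ratio, 8.0, rel_tol=1e-3)
```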
To overcome this difficulty, we must recognize that there must be at least another
fundamental constant of physics on which this phenomenon depends.
It is very interesting to note that if we apply the same procedure (i.e., “scale
invariance”) assuming the post-1900 theoretical framework which is based on the
three fundamental constants k B , c, and h, we are led to the Wien’s law and to the
correct expression for the spectral energy density (always apart from a multiplicative
constant). This result can be seen in [18].
12.2.6 Extended Form of the Kirchhoff’s Law
Let us now consider a cavity constructed of a material that, at the frequency which
we are concerned with, has a coefficient of absorption very close to unity. At that
frequency (and at equilibrium), the emissivity of the wall of the cavity will be that
of a black body B0 (ν, T ).
Let us introduce, inside the cavity, a small body made of any material (for simplicity, homogeneous and isotropic); the whole is in a state of thermodynamic equilibrium.
This implies that for every surface element of each body, the power emitted in any
direction is equal to the power absorbed and transported by the radiation impinging
on it in the same direction and this must happen for every frequency and polarization
state. If this detailed balance were not fulfilled, we could violate the second principle
with the same argument that led Kirchhoff to prove the universality of the black-body emissivity.
Let us denote with B′(ν, T) and A′(ν, T), respectively, the spectral emissivity and
spectral absorptivity of the small body introduced in the cavity, and let dσ and dσ′ be two surface elements belonging to the black cavity and to the small body, respectively. If we consider the elementary surface dσ′ and make use of Eqs. (12.1) and (12.4), at equilibrium we impose the balance between the absorbed power coming from dσ and the power emitted toward the same elementary surface of the black cavity, and we obtain

A′(ν, T) B0(ν, T) = B′(ν, T) .
Exchanging parts, the surface dσ entirely absorbs the power that comes from the small body within the solid angle subtended by dσ′; this power is the sum of the part emitted by the small body, proportional according to the geometric factors to its emissivity B′(ν, T), plus the power reflected by dσ′ onto dσ, dσ′ being illuminated by the whole cavity. It is easy to realize that dσ′ is seen by dσ as having a "virtual emissivity" B′ + (1 − A′) B0. The balance condition then reads

B0(ν, T) = B′(ν, T) + (1 − A′(ν, T)) B0(ν, T) .
In both cases, we have

B′(ν, T) / A′(ν, T) = B0(ν, T) .

This is the extended form of Kirchhoff's law.
It states that the ratio of the emissivity to the absorptivity of any surface depends only on frequency and temperature and is independent of the nature of the material; as a consequence, it is equal to the emissivity of a black body. This is equivalent to saying that the ratio B′/A′ is a universal function, as we have seen for the black body.
12.2.7 Emittance
Let us consider a material body and one point P of its surface. If B is the emissivity of
the surface at point P, we want to calculate the total, i.e., integrated over all directions,
energy emitted per second and per unit area by the body at point P. This quantity is
called emittance of the body at point P and will be denoted by W.
Let us consider a small area dσ around P and consider the energy emitted within a solid angle dΩ′ in the time interval dt, as written in Eq. (12.3). If the emissivity does not depend on the direction, let us integrate over all directions and, after dividing by dσ dt, we obtain the total power emitted per unit area by the body at point P. This quantity is related to the emissivity by

W = 2π B ∫₀^{π/2} cos ϑ sin ϑ dϑ = π B .    (12.28)
The same relation can be written for the spectral quantities. This relation is important because, for a body at thermodynamic equilibrium, the emissivity is the same at every point of its surface, and it is then relatively simpler to measure the total energy emitted per second by a surface and from this to calculate the emissivity. Remember that only for a black body are we sure that the power emitted toward the calorimeter is entirely due to its emissivity. If, as a first instance, we assume a star to be a spherically symmetric body with radius R_star whose surface radiates with emissivity B(T), the quantity of energy emitted per second from the unit area of the surface of the star is proportional to the emissivity according to Eq. (12.28). The total energy emitted by the star per second is called the luminosity of the star, denoted by L, and is given by

L = 4π R_star² π B(T) .    (12.29)
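A quick numerical check of the angular integral behind Eq. (12.28), together with the luminosity formula Eq. (12.29); the emissivity value is an arbitrary placeholder, while the radius is roughly solar:

```python
import math

def emittance(B, n=20_000):
    """Eq. (12.28): W = 2*pi*B * integral_0^{pi/2} cos(t) sin(t) dt = pi*B,
    evaluated here with a midpoint Riemann sum over the polar angle."""
    d = (math.pi / 2) / n
    s = sum(math.cos((i + 0.5) * d) * math.sin((i + 0.5) * d) for i in range(n))
    return 2 * math.pi * B * s * d

B = 2.0e7                     # W m^-2 sr^-1, an illustrative emissivity
W = emittance(B)
assert math.isclose(W, math.pi * B, rel_tol=1e-6)   # the integral equals 1/2

# Eq. (12.29): luminosity of a spherical emitter of radius R_star.
R_star = 7.0e8                # m, roughly the solar radius
L = 4 * math.pi * R_star**2 * W
```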
12.2.8 Radiation Energy Density and Emissivity
Let us consider a closed cavity made with a completely absorbing material (black
cavity) maintained at temperature T and suppose that a state of thermodynamical
equilibrium has been achieved. All the points of the surface of the cavity will be at the
same temperature and will have the same emissivity B0 . As the radiation propagates
at a finite speed, we can define the energy density, associated with radiation, at every
point internal to the cavity. More precisely, we will call spectral energy density
of radiation at the frequency ν in the point P(x, y, z) = r, the quantity u (ν, T, r)
such that u (ν, T, r) dν is the energy density in the infinitesimal frequency interval
(ν, ν + dν) at the point P of the cavity whose walls are at temperature T . We call
energy density of radiation at point P inside the cavity whose walls are at temperature
T , the quantity u (T, r) such that u (T, r) dV is the total energy contained in the
infinitesimal volume dV at point P. Clearly, we have
u(T, r) = ∫₀^∞ u(ν, T, r) dν .
Consider a closed cavity formed by a perfectly absorbing material. Every surface element dσ of the wall of the cavity will send, within an infinitesimal solid angle oriented toward the point P, a certain amount of energy per second, and then generate, in the vicinity of the point P, a certain energy density (both spectral and total).
The energy density and the spectral energy density at point P will be the sum
of all contributions for all the surface elements that form the wall of the cavity.
Consider, then, an elementary surface dσ of the wall and, correspondingly, consider an infinitesimal surface dσ′ around the point P, oriented normal to the direction of emission (see Fig. 12.3).
The amount of energy emitted by the elementary surface dσ in the time interval dt, in the frequency interval (ν, ν + dν), and directed toward the surface dσ′ will be given by the relation (see Eq. (12.6)):

dε = B0(ν, T) dσ cos ϑ dΩ′ dν dt .
This amount of energy will flow through dσ′ in the time interval dt and, therefore, will be distributed in the infinitesimal volume dV = dσ′ c dt. The contribution of this infinitesimal energy flow to the spectral energy density at point P will be

du(ν, T, P) dν = B0(ν, T) (dσ cos ϑ / (c r²)) dν ,

which can also be written in the following form:
Fig. 12.3 P is a point inside a cavity which is in a state of thermodynamical equilibrium and, hence, at uniform temperature T. The point P "sees" a generic surface element dσ of the surface of the cavity within the solid angle dΩ. The surface element dσ emits radiation, with some intensity, within the solid angle dΩ′ toward point P. The surface element dσ′ is taken perpendicular to the vector r from dσ to the point P, and the volume dV = dσ′ c dt is filled by the radiation emitted by dσ in the time interval dt
du(ν, T, P) = (B0(ν, T)/c) dΩ ,

where dΩ = dσ cos ϑ / r² is the infinitesimal solid angle subtended by dσ when it is observed from the point P.
From this relation, we can integrate over the entire (closed) surface of the cavity (the total solid angle being 4π) to obtain

u(ν, T) = (4π/c) B0(ν, T) .    (12.34)
This result shows that the value of the spectral energy density does not depend on
the position of the point P, in other words, that it is homogeneously distributed inside
the cavity. In addition, from the above relations, one can easily see that an observer introduced inside a "black" cavity (that is, with the small surface dσ′ considered as the sensitive surface of an instrument) would see in all directions the same flow of energy per unit frequency interval and per unit of solid angle; this means that this
observer could not distinguish objects (provided that they are at thermodynamical
equilibrium) in the cavity but would be immersed in a “fog” with uniform brightness
at each frequency.
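The geometric fact behind this homogeneity is that the wall of a closed cavity subtends a total solid angle of 4π from every interior point, wherever that point sits. A sketch, assuming a spherical cavity of unit radius and a midpoint quadrature grid of our own choosing:

```python
import math

def total_solid_angle(px, pz, n_theta=400, n_phi=400):
    """Sum dOmega = dsigma * cos(theta') / r^2 over the wall of a unit-radius
    spherical cavity, as seen from the interior point P = (px, 0, pz).
    For every interior point the sum is 4*pi, which is why
    u(nu,T) = (4*pi/c) B0(nu,T) holds independently of where P is."""
    total = 0.0
    dth = math.pi / n_theta
    dph = 2.0 * math.pi / n_phi
    for i in range(n_theta):
        th = (i + 0.5) * dth
        for j in range(n_phi):
            ph = (j + 0.5) * dph
            # wall element on the unit sphere; its inward normal is -w
            wx = math.sin(th) * math.cos(ph)
            wy = math.sin(th) * math.sin(ph)
            wz = math.cos(th)
            dsigma = math.sin(th) * dth * dph
            rx, ry, rz = px - wx, -wy, pz - wz      # vector wall -> P
            r2 = rx * rx + ry * ry + rz * rz
            cos_t = -(wx * rx + wy * ry + wz * rz) / math.sqrt(r2)
            total += dsigma * cos_t / r2
    return total

# An off-centre interior point still sees the full 4*pi:
omega = total_solid_angle(0.3, -0.2)
assert math.isclose(omega, 4 * math.pi, rel_tol=1e-2)
```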
By integrating over all frequencies, we obtain the energy density of the radiation:

u(T) = ∫₀^∞ u(ν, T) dν = (4π/c) B0(T) .
Also the (total) energy density is homogeneously distributed within the cavity, and its value will depend only on the emissivity of the walls, and then only on the temperature. It is immediate to prove that, in equilibrium conditions, among the energy density, the emissivity B0, and the emittance of the surface, the following relations hold:

B0 = (c/4π) u ,    (12.35)
W = (c/4) u .    (12.36)

These relations allow us to switch from the measurement of the emittance to the energy density. In this way, Stefan found the law of proportionality of the energy density to the fourth power of the temperature.
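Eqs. (12.35) and (12.36) make the conversion from a measured emittance to the energy density a one-liner. A sketch using the rounded value c ≈ 3 × 10⁸ m/s and the 460 W m⁻² black-body emittance quoted later in the chapter for T ≈ 300 K:

```python
import math

C = 3e8    # m/s, speed of light (rounded)

def density_from_emittance(W):
    """Eq. (12.36) inverted: u = 4 W / c."""
    return 4 * W / C

def emissivity_from_emittance(W):
    """Eq. (12.28) inverted: B0 = W / pi."""
    return W / math.pi

W = 460.0   # W/m^2, black-body emittance at ~300 K (quoted later in the chapter)
u = density_from_emittance(W)
B0 = emissivity_from_emittance(W)

assert math.isclose(u, 6.13e-6, rel_tol=1e-2)    # J/m^3
assert math.isclose(B0, C * u / (4 * math.pi))   # consistent with Eq. (12.35)
```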
12.3 Wien’s Law
Sometimes, the denomination "Wien's law" is used to indicate the law more correctly named "Wien's displacement law," which is summarized by the relation

λmax T = constant = 2.90 × 10⁻³ m K .    (12.37)
This relation provides the value of the wavelength λmax at which the spectral energy
density, and then the spectral emissivity, of a black body, is maximum at a given
temperature. We can say, with a short but efficacious expression used by astronomers,
that it establishes the relationship between the color of a body in thermodynamic
equilibrium and its temperature.
At ambient temperatures (say, roughly in the interval 290–300 K), the wavelength of the emission peak is near 10 µm, that is, in the far infrared; to obtain the emission peak at visible wavelengths, a body must be brought to a temperature of at least 3000 K.
At this temperature, the maximum of emission is at the beginning of the visible range. The photosphere of the Sun (i.e., the external part that we see "normally" with the eye) has a temperature of about 5500 K, and then the emission peak is at λmax ≈ 0.53 µm.
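The three numbers just quoted follow directly from Eq. (12.37):

```python
# Wien's displacement law, Eq. (12.37): lambda_max * T = 2.90e-3 m K.
WIEN_B = 2.90e-3    # m K

def lambda_max(T):
    """Wavelength of the black-body emission peak at temperature T."""
    return WIEN_B / T

assert abs(lambda_max(295) - 9.8e-6) < 0.5e-6     # ambient: peak near 10 um
assert abs(lambda_max(3000) - 0.97e-6) < 0.05e-6  # threshold of the visible
assert abs(lambda_max(5500) - 0.53e-6) < 0.01e-6  # solar photosphere
```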
Equation (12.37) is one of the main pillars on which the possibility of applying
the theory of Stellar Evolution to astronomical observations rests and is, therefore,
one of the fundamental blocks with which modern cosmology is built.
However, this relation, so important, represents only “one” functional aspect of a
more fundamental law, correctly named Wien’s law, whose complete formal expression is given at the end of Sect. 12.3.2 in Eq. (12.47).
The relevance of this finding is that it shows that the universal function u(ν, T), which appears to be a function of the two variables ν and T, is, in principle, a universal function of the single variable ν/T. In particular, it follows that the frequency at which the spectral emissivity is maximum is proportional to the temperature of the black body, which is the content of Wien's displacement law.
12.3.1 Wien’s Law According to Wien
The result, known as Wien’s law, was obtained by W. Wien studying the change in
the radiation frequency that would occur in a cavity with perfectly reflecting walls,
in which the volume is made to vary for example with a movable piston.
During the expansion, the frequency of this radiation changes in the reflection
process against the moving piston (it is a simple exercise on the Doppler effect of
electromagnetic waves reflected from a moving mirror). Moreover, if we operate in
order to ensure also the thermodynamic equilibrium (and this requires some attention), we achieve the desired result.
Other authors [19] have faced, with some variations, the same problem but the
fundamental idea remains that of Wien.
Many authors, who are mainly interested in the displacement law, obtain Eq. (12.37) "a posteriori," simply by calculating the point at which the spectral energy density, as provided by quantum physics, is maximum. It is immediate to verify that this happens for a well-defined value of the ratio ν/T, but the reader should be aware that Wien's displacement law is not a consequence of quantum mechanics.
1943. In this article, von Laue highlights the fundamental fact that Wien’s law is a
consequence of the theory of relativity and does not need ad hoc assumptions.
12.3.2 Wien’s Law and Relativity
Max von Laue [20] considers a surface element dσ of a black body and the pencil of rays emitted at an angle ϑ with respect to the normal to the surface and within a solid angle dΩ, as shown in Fig. 12.4.
The energy, per unit frequency, contained in this pencil of rays in the time interval Δt can be written recalling Eq. (12.3). If, as usual, we denote with B0(ν, T) the spectral emissivity of the surface element and take into account that the time interval is related to the length l of the pencil by l = c Δt, we get
Fig. 12.4 The surface element dσ of a black body is considered to be an emitter both of energy and of entropy. We consider the pencil emitted in a direction ϑ with respect to the normal to the surface, in a time interval Δt
dεν = (l/c) B0(ν, T) dν dσ cos ϑ dΩ .
In a similar way, we consider the surface element as an entropy emitter. This point
will be made clearer in Sect. 12.4 where entropy, together with the other relevant
thermodynamic potentials for the radiation field will be defined.
In analogy with what we have done for the definition of the energy density u(T) and of the spectral energy density u(ν, T), starting from the emissivity and the spectral emissivity of the surface elements, we shall consider the entropy density s(T) and the spectral entropy density s(ν, T) as radiated by the surface elements of the black body, by developing the analogous formalism. Then, for the amount of entropy carried by the pencil of radiation considered above, in the same frequency interval, we may write the following expression:

dSν = (l/c) S0(ν, T) dν dσ cos ϑ dΩ ,
where S0(ν, T) is the spectral entropy emissivity of the surface element dσ. Obviously, B0(ν, T) and S0(ν, T) are mutually dependent, and von Laue shows what kind of constraint the theory of relativity imposes on this mutual dependence.
This result can be achieved as follows. First, we observe that the two quantities dεν/ν and dSν are relativistic invariants (see [20]). Besides this, also the quantity

dZ = (ν²/c²) (l/c) dν dσ cos ϑ dΩ ,
is relativistically invariant. Since the ratio of two invariants is an invariant, it follows that both

dεν / (ν dZ) = (c²/ν³) B0(ν, T)

and

dSν / dZ = (c²/ν²) S0(ν, T)
are two relativistic invariants. As a consequence, we may write

S0(ν, T)/ν² = f(B0(ν, T)/ν³) ,    (12.43)

where f(q) is a universal function to be determined. From the definition of temperature, we have

∂S0/∂B0 = 1/T
and then, from Eq. (12.43), we obtain

f′(B0(ν, T)/ν³) = ν/T ,
which is equivalent to writing

B0(ν, T) = ν³ F(ν/T) ,    (12.46)

where F is a universal function to be determined. From Eq. (12.34), we obtain the spectral energy density:

u(ν, T) = (4π/c) ν³ F(ν/T) .    (12.47)
12.3.3 Some Consequences of the Wien’s Law
From Eq. (12.47), some important consequences can be derived.
The first one is the famous "Wien's displacement law," Eq. (12.37). Indeed, if we ask, at a given temperature T, at which frequency the spectral emissivity, and hence the spectral energy density, is maximum, we proceed as follows.
Let us write Eq. (12.47) in the form

u(ν, T) = (4π/c) T³ x³ F(x) ,    (12.48)

having posed x = ν/T. For a fixed temperature, the required frequency νmax of maximum emission can be obtained from the value xmax which maximizes the function x³ F(x). From this, we get

νmax = xmax T .
The proportionality of the frequency of maximum emissivity to temperature is independent of the shape of F (x), while the numerical value of xmax depends on it.
From the Wien’s Law in the form Eq. (12.47), we can obtain two other results of
fundamental importance.
One is the dependence of the maximum emissivity on temperature. From Eq. (12.48), we see that the ratio of the spectral energy densities at xmax obeys the relation

u(νmax,2, T2) / u(νmax,1, T1) = (T2/T1)³ ,

which means that the maximum emissivity increases with the cube of the temperature.
Another fundamental result of general validity that we can obtain from Wien's law concerns the temperature dependence of the energy density of black-body radiation. From Eq. (12.48), we may write

u(T) = ∫₀^∞ u(ν, T) dν = (4π/c) T⁴ ∫₀^∞ x³ F(x) dx .
If we denote with a the quantity

a = (4π/c) ∫₀^∞ x³ F(x) dx ,

the energy density will be given by

u(T) = a T⁴ ,    (12.54)

where a is the fundamental constant known as the radiation constant, whose value is a = 7.56 × 10⁻¹⁶ J m⁻³ K⁻⁴.
If we recall Eq. (12.36), we may write the expression for the emittance of a black body at temperature T:

W0 = (a c/4) T⁴ = σ T⁴ ,

with σ = 5.67 × 10⁻⁸ J s⁻¹ m⁻² K⁻⁴ called the Stefan–Boltzmann constant. For instance, we may see that for a black body at T ≈ 300 K the emittance is W0(300) ≈ 460 W m⁻², and this is the upper limit to the emittance of a real body at this temperature.
At this point, the only thing we can tell about the universal function F(x) is the value of the integral

∫₀^∞ x³ F(x) dx = (c/4π) a ≈ 1.8 × 10⁻⁸ J s⁻¹ m⁻² K⁻⁴ ,

but nothing more concerning the form of that function. The complete expression of the spectral energy density u(ν, T) of the radiation in thermodynamic equilibrium with the black body will be obtained by M. Planck in his masterful paper of late 1900.
His result is

u(ν, T) = (8π h/c³) ν³ / [exp(hν/k_B T) − 1] .
The procedure adopted by Planck in order to obtain this result is briefly summarized
in Sect. 12.6.
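Planck's expression can be checked against the two constants quoted above. The sketch below integrates u(ν, T) numerically (the grid choices are ours) and recovers both a ≈ 7.56 × 10⁻¹⁶ J m⁻³ K⁻⁴ and the fixed value of x = hν/k_B T at the peak that underlies the displacement law:

```python
import math

H = 6.62607015e-34    # J s, Planck's constant
K_B = 1.380649e-23    # J/K, Boltzmann's constant
C = 2.99792458e8      # m/s, speed of light

def u_planck(nu, T):
    """Planck's spectral energy density: (8*pi*h*nu^3/c^3)/(exp(h nu/kB T)-1)."""
    x = H * nu / (K_B * T)
    return 8 * math.pi * H * nu**3 / C**3 / math.expm1(x)

# Integrate over frequency at T = 300 K: the result must equal a*T^4
# with a = 7.56e-16 J m^-3 K^-4, the constant of Eq. (12.54).
T = 300.0
nu_hi = 30 * K_B * T / H          # the integrand is negligible beyond x ~ 30
n = 200_000
d = nu_hi / n
u_tot = sum(u_planck((i + 0.5) * d, T) for i in range(n)) * d
a = u_tot / T**4
assert math.isclose(a, 7.56e-16, rel_tol=2e-3)

# Peak of u(nu,T): the maximum of x^3/(e^x - 1) sits at x_max ~ 2.821,
# a fixed pure number, so nu_max = x_max*(kB/h)*T grows linearly with T:
# Wien's displacement law.
x_max = max((i * 1e-4 for i in range(1, 100_000)),
            key=lambda x: x**3 / math.expm1(x))
assert abs(x_max - 2.821) < 0.01
```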
Example 12.1 The temperature of the Sun from the solar constant. Suppose that the Sun emits radiation as a spherical isotropic body having radius R ≈ 7 × 10⁸ m at a distance R_ES ≈ 15 × 10¹⁰ m from the Earth. We measure, on the Earth, the total power collected by a 1 m² plane detector perpendicular to the direction toward the center of the Sun and we find S ≈ 1366 W/m². This is called the "solar constant."
If we assume that the “surface” of the Sun, the photosphere, emits as a black body,
we may determine the value of its temperature T .
Our detector collects the total flux coming from the different portions of the Sun’s
surface. Let us assume, as a first approximation, that all these portions of the Sun’s
surface are at the same distance R_ES from the detector, and that they all "see" the detector under the same solid angle δΩ = 1/R²_ES. After integrating over the whole surface of the Sun, we can state that the detector is traversed by the total energy emitted by the Sun in the solid angle δΩ by its projected surface, of area π R².
Let us denote with B_0(T) the emissivity of the surface of the photosphere. The
energy collected per second and per unit area by our detector will be what we call
the "solar constant":

S = (π R^2 / R_ES^2) B_0(T) ,

so that

B_0(T) = (R_ES^2 / π R^2) S = [2.25 × 10^22 / (3.14 × 49 × 10^16)] × 1366 ≈ 2 × 10^7 .
The dependence of the emissivity on temperature can be obtained from the Stefan–
Boltzmann law Eq. (12.54) and from the relation between emissivity and energy
density in a black cavity in Eq. (12.35):

B_0(T) = (c a / 4π) T^4 .

By substituting Eq. (12.60) in Eq. (12.61), we obtain

T = [4π B_0 / (c a)]^{1/4} = [4 × 3.14 × 2 × 10^7 / (3 × 10^8 × 7.56 × 10^-16)]^{1/4} ≈ (1.1 × 10^15)^{1/4} ≈ 5760 K .
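The arithmetic of Example 12.1 can be reproduced directly; a short Python sketch using the rounded values of the text (black-body photosphere assumed):

```python
import math

# Reproduces Example 12.1 with the rounded values used in the text
# (assumption: the photosphere radiates as a black body).
S = 1366.0        # solar constant, W m^-2
R = 7.0e8         # solar radius, m
R_ES = 1.5e11     # Earth-Sun distance, m
a = 7.56e-16      # radiation energy-density constant, J m^-3 K^-4
c = 3.0e8         # speed of light, m/s

B0 = S * R_ES**2 / (math.pi * R**2)      # emissivity of the photosphere
T = (4 * math.pi * B0 / (c * a))**0.25   # from B0 = (c a / 4 pi) T^4

print(f"B0 = {B0:.2e}")          # ~2e7
print(f"T  = {T:.0f} K")         # ~5760
```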
Example 12.2 The temperature of the Sun from spectroscopy. It is interesting to
compare the result obtained in Eq. (12.62) with that obtained by using the Wien’s
displacement law.
According to this law, in a black body at equilibrium temperature T , the energy
spectral density exhibits its maximum at the wavelength given by Eq. (12.37) and
hence the same holds for the spectral emissivity. From spectroscopic measurements,
the peak of emission from the solar photosphere is at a wavelength very near to (a little
longer than)

λ_max ≈ 5 × 10^-7 m .

If, again, we assume that the emitting body is a black body, we get the following
value for the temperature of the emitting surface:

T ≈ (2.9 × 10^-3 m K) / (5 × 10^-7 m) ≈ 5800 K ,
which is very close to the value previously found and based on a total energy argument.
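The Wien estimate is a one-line computation; a minimal Python check with the constants used above:

```python
# Wien's displacement law estimate, with b = 2.9e-3 m K as in Eq. (12.37).
b = 2.9e-3          # Wien's displacement constant, m K
lam_max = 5.0e-7    # observed peak wavelength of the solar spectrum, m

T = b / lam_max
print(f"T = {T:.0f} K")   # 5800
```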
Example 12.3 The luminosity of the Sun. Having derived the value of the emissivity,
we may apply Eq. (12.29) and obtain

L ≈ 4 × 3.14 × 49 × 10^16 × 3.14 × 2 × 10^7 ≈ 3.86 × 10^26 J s^-1 .

Notice that the emitted energy will be uniformly distributed on the surface of a sphere
of radius R_ES ≈ 1.5 × 10^11 m and then the power collected by 1 m^2 of this sphere
will be

S ≈ 3.86 × 10^26 / (4 × 3.14 × 2.25 × 10^22) W m^-2 ≈ 1.366 × 10^3 W m^-2 .
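The consistency of Example 12.3 with the measured solar constant can be checked numerically; a small Python sketch with the rounded values of the text:

```python
import math

# Checks Example 12.3: total luminosity and the flux recovered at the Earth's orbit.
R = 7.0e8        # solar radius, m
R_ES = 1.5e11    # Earth-Sun distance, m
B0 = 2.0e7       # photospheric emissivity from Example 12.1

L = 4 * math.pi * R**2 * math.pi * B0   # luminosity, W  (emittance W0 = pi * B0)
S = L / (4 * math.pi * R_ES**2)         # flux at the Earth-Sun distance, W m^-2

print(f"L = {L:.2e} W")       # ~3.9e26
print(f"S = {S:.0f} W m^-2")  # ~1366
```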
We have derived an approximate value for the emissivity of the Sun's photosphere
in two independent ways. One is based on Wien's displacement law and can be
applied to any star without further information on its radius and distance.
The other one (see Eq. (12.59)) requires the knowledge of the solid angle subtended by the star and of the energy density flux at the observation point.
12.4 Thermodynamic Potentials for Radiation
Let us consider a cavity within a material body and let us write the expressions of the
thermodynamical potentials for the radiation contained in the cavity provided that
the whole system is in a state of thermodynamical equilibrium.
First, it is necessary to establish the relation between the radiation energy density
and the pressure exerted by radiation on the walls of the cavity. We first refer to
Eq. (B.18), obtained in Appendix B, which shows the general result in the case of
perfectly reflecting walls.
The same relation holds for “black walls,” i.e., for walls with absorptivity A =
1, provided that the walls are in thermodynamic equilibrium with the radiation.
Indeed, the wall absorbs completely the incident radiation but emits exactly the
same power, in the reverse direction, with the same angular and energy distribution.
The momentum transferred per unit time will be twice that calculated in Eq. (B.10)
for totally absorbing walls.
Similarly, for a real surface, the result will be the same provided that it is in
thermodynamical equilibrium with radiation. Indeed, we can subdivide the incident
radiation, for every frequency and direction, into two components: one absorbed
(according to the value of the absorptivity A ) and the other reflected. For the former,
the condition of thermodynamic equilibrium requires that at the same frequency
and in the reverse direction, the wall emits the same energy, and hence the reverse
amount of momentum, per unit time. As regards the latter, Eq. (B.11) holds by
definition. In conclusion, the two components realize the condition described in
Eq. (B.18).
Notice that this is valid for every frequency interval: if we denote with p_ν dν
the contribution to the total pressure given by the radiation with frequency in the
interval (ν, ν + dν), we have

p_ν dν = (1/3) u_ν dν ,

where u_ν is the spectral energy density.
Consider the radiation contained in a cavity in thermodynamic equilibrium. Owing
to homogeneity, as we proved in Sect. 12.2.8 the energy U and the entropy S must
be written in the following form:
U (V, T ) = u(T ) V ,
S(V, T ) = s(T ) V ,
where u(T ) and s(T ) are, respectively, the energy and the entropy densities. For a
generic quasi-static transformation, we have

dS = (1/T) dU + (p/T) dV ,

and differentiating Eq. (12.68), we obtain

dU = (du/dT) V dT + u dV .

Finally, substituting Eq. (12.71) in Eq. (12.70) and remembering Eq. (B.18), we have

dS = (1/T)(du/dT) V dT + (4/3)(u/T) dV .
Since dS is an exact differential, the cross-derivatives must be equal and this leads to

du/dT = 4 (u/T) .
If we integrate the above equation, we obtain the Stefan–Boltzmann law

u = a T^4 ,

where a = 7.56 × 10^-16 J m^-3 K^-4. For the energy, we have⁶

U(V, T) = a T^4 V .
From Eq. (12.72), we can write the expressions for the entropy and the entropy density:

dS = 4a T^2 V dT + (4/3) a T^3 dV = (4/3) a d(V T^3) ,

which leads to

S(V, T) = (4/3) a T^3 V ,    s(T) = (4/3) a T^3 ,

for the thermal dependence of the entropy and the entropy density, respectively. In
parallel, for the Gibbs potential of the radiation, we have G = U + pV − T S = 0.
From the particle point of view, this implies that the chemical potential of the photon
gas is zero.
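The vanishing of the Gibbs potential follows directly from the expressions above and can be checked numerically; a minimal Python sketch (T and V are chosen arbitrarily):

```python
# Numerical check that G = U + pV - TS vanishes for equilibrium radiation,
# using U = a T^4 V, p = a T^4 / 3, S = (4/3) a T^3 V from the text.
a = 7.56e-16   # J m^-3 K^-4
T, V = 1000.0, 2.0   # arbitrary temperature (K) and volume (m^3)

U = a * T**4 * V
p = a * T**4 / 3.0
S = (4.0 / 3.0) * a * T**3 * V
G = U + p * V - T * S

print(f"G = {G:.3e} J")   # zero, up to floating-point rounding
```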
12.5 Thermodynamical Processes for Radiation
In this section, we briefly consider the basic thermodynamical transformations for
radiation in a state of thermodynamical equilibrium. Radiation possesses its own
thermodynamical potentials as any thermodynamical system but we must keep in
mind that, in general, radiation must be kept in constant interaction with matter
if we want to achieve equilibrium states (remember that in the classical context
electromagnetic waves do not interact with each other).
⁶ The dependence of the emittance of a black body (i.e., the emittance of the surface of the small
hole made in a material cavity in thermodynamic equilibrium at temperature T) on the fourth
power of the temperature had been found experimentally by Stefan. The derivation of the expression
for the radiation energy density Eq. (12.54) of a black body was obtained from the principles of
thermodynamics by L. Boltzmann.
12.5.1 Isothermal Processes
The isothermal transformations of a radiation gas are also isobaric transformations.
The volume of the cavity is varied while keeping the walls at a constant temperature.
The work that we do and the amount of heat that we must provide to the walls correspond
to the work and the amount of heat supplied to the black-body radiation which is
generated inside, in order to preserve the condition of thermodynamic equilibrium.
It is supposed that the properties of the matter that constitutes the walls remain
constant as, for example, in cases in which the change of volume is realized by
moving a (frictionless) piston. If the volume varies from an initial value V1 to the
final value V2, the variation of energy and the amount of work done by the radiation
are, respectively,

ΔU = u(T) [V2 − V1] ,
W_r = (1/3) u(T) [V2 − V1] .

To do this, we must transfer to the walls the amount of heat

Q = T ΔS = (4/3) a T^4 [V2 − V1] .
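The balance Q = ΔU + W_r is easy to verify; a short Python sketch (T, V1, V2 arbitrary):

```python
# Checks the isothermal heat balance Q = dU + W for radiation.
a = 7.56e-16          # J m^-3 K^-4
T, V1, V2 = 1000.0, 1.0, 3.0   # arbitrary state values

u = a * T**4                       # energy density
dU = u * (V2 - V1)                 # energy variation
W = (u / 3.0) * (V2 - V1)          # work at constant pressure p = u/3
Q = (4.0 / 3.0) * a * T**4 * (V2 - V1)   # heat Q = T dS

print(Q, dU + W)   # the two values coincide
```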
This quantity of heat corresponds to the variation of energy of the radiation plus the
work done by the latter in the expansion Eq. (12.80).
12.5.2 Adiabatic Processes
In this type of transformations, the cavity walls are thermally insulated. In this case,
however, the thermal capacity of the walls plays a predominant role in the energy
balance of the overall system. If we are interested only in the variations of the
properties of the radiation, we have to consider the cavity with perfectly reflecting
walls. This, however, poses a problem: if the radiation is in an initial state of thermodynamic equilibrium, at the temperature T, within a cavity with perfectly reflecting
walls and we do vary the volume in any way, we will get a new situation in which
the radiation can no longer be considered in a state of thermodynamic equilibrium.
The reason is evident in a classical context (pre-quantum) if one keeps in mind that
electromagnetic waves do not interact with each other, and then the system cannot
thermalize as would happen for example with a real gas.7
⁷ In theory, this would be possible in a quantum-relativistic context but with enormous relaxation
times at temperatures below 10^9–10^10 K, the range where the production of charged pairs becomes significant.
To realize a quasi-static adiabatic process, it is therefore necessary to ensure that the
radiation can thermalize quickly. This can be obtained by introducing into the cavity
with reflecting walls a small sample of any material (preferably a conductor) such
as a small piece of copper. Of course, this little bit of material will participate in the
overall balance equations, but if its mass is small enough, the perturbation it introduces
in the equations that govern the energy balance of the radiation can be accounted
for and possibly neglected.⁸
After these preliminary considerations, we can write the equations for the adiabatic
transformation in the following form:

T^3 V = constant .

Referring to Eq. (B.18) and to the Stefan–Boltzmann law Eq. (12.54), we have

p V^{4/3} = constant ,

and then the amount of work done by the radiation in an adiabatic quasi-static process
from an initial state with initial volume V1 and pressure p1, to the final volume V2,
will be

W_{1,2} = p1 V1^{4/3} ∫_{V1}^{V2} V^{-4/3} dV = 3 p1 V1 [1 − (V1/V2)^{1/3}] .
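The closed-form work can be checked against a direct numerical integration of p dV along the adiabat; a Python sketch with arbitrary initial data:

```python
# Checks the adiabatic work formula against a direct numerical integration
# of p dV with p V^{4/3} = const (p1, V1, V2 chosen arbitrarily).
p1, V1, V2 = 100.0, 1.0, 8.0

W_formula = 3 * p1 * V1 * (1 - (V1 / V2)**(1.0 / 3.0))

# midpoint-rule integration of p(V) = p1 (V1/V)^{4/3}
n = 100000
dV = (V2 - V1) / n
W_numeric = sum(p1 * (V1 / (V1 + (i + 0.5) * dV))**(4.0 / 3.0) * dV
                for i in range(n))

print(W_formula, W_numeric)   # the two values agree closely
```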
12.5.3 Isochoric Transformations (Constant Volume)
For constant volume transformations T dS = dU, and then the amount of heat supplied
to the system in the transition from temperature T1 to temperature T2 will be

Q_{1,2} = a V (T2^4 − T1^4) = 3 V (p2 − p1) .

Following the general definition, the heat capacity at constant volume for the radiation
will be given by

C_V = T (∂S/∂T)_V = 4 a T^3 V .
⁸ We may say that the little piece of copper has the function of a catalyst. It makes a process possible
but does not enter in the balance equations.
12.5.4 Free Expansion
Suppose we have a black-body radiation field at temperature T contained in a cavity
with perfectly reflecting walls and volume V . This cavity can be placed in communication with another cavity with perfectly reflecting walls and having the same
volume simply by removing a partition. In this second cavity, the existing radiation
is at very low temperature so that its energy content can be neglected.
The dividing wall is removed. What can we say about the final state of the radiation
contained in the total volume 2V?
We denote by the index 1 all the values of the quantities in the initial state in
which the radiation occupies the volume V and by the index 2 the values when the
radiation occupies the volume 2V . Let us examine two situations.
Absence of Matter in the Cavity
The energy of the radiation will be constant and therefore the energy density will be
halved: u2 = (1/2) u1. With reference to the mechanical interpretation of pressure treated
in Appendix B.1, where we proved that the relation between pressure and energy density
does not depend on the establishment of thermodynamical equilibrium, the pressure
exerted on the reflecting walls will be (we assume that the radiation is kept in a
condition of isotropy)

p2 = (1/2) p1 .

As regards the temperature, this is not definable because the radiation is not in a
state of thermodynamic equilibrium.
Presence of a Small Quantity of Matter in the Cavity
In this case, the radiation will settle in a new state of thermodynamic equilibrium.
The total energy will always be constant and, for this reason (see Eq. (12.75)), the
final temperature will be

T2 = (1/2)^{1/4} T1 .

As in the previous situation of the absence of matter, the new pressure will be p2 =
(1/2) p1. The entropy variation in this process will be

ΔS = 2V (4/3) a T2^3 − V (4/3) a T1^3 = (4/3) a V T1^3 (2^{1/4} − 1) ,

ΔS ≈ 0.19 S1 .
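The 19% figure follows from the quarter-power relations above; a minimal Python check:

```python
# Checks the entropy increase in the free expansion (volume doubled,
# small piece of matter present so equilibrium is re-established).
T2_over_T1 = 0.5**0.25              # T2 = (1/2)^{1/4} T1 from energy conservation
dS_over_S1 = 2 * T2_over_T1**3 - 1  # dS/S1 = 2^{1/4} - 1

print(f"T2/T1 = {T2_over_T1:.3f}")   # ~0.841
print(f"dS/S1 = {dS_over_S1:.3f}")   # ~0.189
```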
The entropy increase is 19% of the initial value. As regards the final pressure, the
reader should remember that, in the case of absence of matter in the cavity, the term
"pressure" has been improperly used just to mean that the mechanical effect on the
walls has a halved value. Only in the second case (thermodynamical equilibrium)
can the term pressure be properly used.
12.6 Planck and the Problem of Black-Body Radiation
We have seen how relativity leads us to Wien’s law Eq. (12.47) and to Stefan–
Boltzmann law Eq. (12.54) but, of course, does not allow us to determine the form of
the function F(ν/T ) and, as a consequence, the form of the spectral emissivity and
of the spectral energy density of black-body radiation. We say “of course” because
such a determination must be the product of the general theoretical context and,
indeed, we have seen that, in the classical (i.e., pre-quantum) context, the request of
universality contained in Kirchhoff's law led to the "catastrophe" expressed by Eq.
(12.23) and then, for the function F in Eq. (12.47), to the form

F(ν/T) = (8π k_B / c^3) (T/ν) .
This is the crisis situation that physicists had to face at the end of the nineteenth
century. It was a profound, conceptual crisis, because the onset of the energy divergence
had conceptual roots and was not "adjustable" with appropriate models. It made it
inevitable to abandon the current theoretical context.
It is very interesting to go through, very briefly, the starting point of the complex
work by Planck in a treatise on thermodynamics because it was his deep knowledge
of this discipline, applied to the available experimental data, that allowed him to
get the determination of the function F(ν/T ). After that, and thanks to the birth, in
those years, of statistical mechanics founded by Boltzmann, he gave his fundamental
contribution to the birth of quantum physics. In this last step, however, also the
contribution of Einstein with his works of 1905 and later was decisive.
12.6.1 The Situation at the End of the Nineteenth Century
and the Black-Body Radiation
At the end of the nineteenth century, the situation, with regard to the experimental
study of u (ν, T ), was the following:
(1) At high frequencies,⁹ various empirical formulae had been proposed, but by 1900
most physicists believed that Wien's proposal was definitely the best. It was
summarized by the relation
u(ν, T ) ∝ ν 3 exp(−βν/T ) ,
where β is a parameter whose value is derived from the “best-fit” with the
experimental data.
(2) Later, when new techniques for measurements in the far infrared¹⁰ were developed,
the situation appeared to be quite different. The spectral energy density, at
low frequencies, was well described by the relation

u(ν, T) = (8π/c^3) ν^2 k_B T .
This relation not only described well the experimental data at low frequencies but was
exactly what the theoretical context of the time foresaw. This part of the spectrum is
currently named "the Rayleigh–Jeans region" because Eq. (12.95) was previously
obtained by Lord Rayleigh and then developed by J.H. Jeans. As we have seen
in Sect. 12.2.5, the form of the Rayleigh–Jeans formula is a product of the requirement
of universality contained in Kirchhoff’s law and of the classical theoretical context.
Of course we have to consider that both the “Rayleigh–Jeans region” and the
“Wien’s region” constitute two branches of the same function, but while the former
(low frequency) was “welcome”, the latter (Wien’s at high frequencies) represented
a contradiction.
12.6.2 Planck and the Problem of Matter–Radiation
The radiation field within the cavity is created by the electric charges which constitute
the walls of the cavity and is in constant interaction with them. The spectral density
of the radiation must be correlated with the thermodynamic state of the walls and
⁹ In this context when we use the term high frequency or low frequency, reference must be made to
the fact that at ordinary temperatures the maximum emissivity of the black body is at about 10 µm,
that is, at a frequency of about 3 × 10^13 Hz.
¹⁰ In this context, far infrared means wavelengths of order λ ∼ 30–40 µm.
then Planck worked out a model in order to find this correlation. The model was
based on the following assumptions:
(1) Matter is constituted by an agglomeration of charged harmonic oscillators with
proper pulsation ω0 as indicated by the atomic theory of matter at that time; the
oscillators are put into forced oscillation by the radiation field of the cavity.
(2) The radiation field of the cavity is decomposed into mutually incoherent plane waves.
(3) If the electric fields are not too intense, the motion of the oscillators can be
well described by the usual equation of forced elastic oscillations, with a weak
damping due to the radiation of e.m. waves in the cavity.
Planck found a relationship between the oscillators' energy distribution and the spectral
energy density of the electromagnetic field that is established at equilibrium. The
relation found by Planck is

u(ν, T) = (8π/c^3) ν^2 ε ,
where ε is the oscillator’s energy. In the model developed by Planck that led to Eq.
(12.96), the following points should be highlighted: (i) the energy of a single oscillator
is the sum of kinetic and potential energy, and these are proportional to the square of
the amplitude of the harmonic motion; (ii) the frequency of the forced oscillation that
is established is given by the frequency of the incident e.m. wave but the amplitude
of the resulting motion is significant only for those oscillators that have a proper
frequency close to the frequency of the incident wave.
After having established the relation between specific energy density and the
energy of the corresponding material oscillators, Planck changes the point of view
and he uses the observed spectral energy density of the radiation as revealing the
distribution in energy of the material oscillators of the body in thermodynamic equilibrium at temperature T .
Then if we observe a specific density of radiation at the frequency ν, the amount
of energy of the oscillators associated to that frequency is given by

ε = (c^3 / 8π ν^2) u(ν, T) .
From this point of view, the above relation gives us the energy distribution of the
material oscillators of whatever body at thermodynamic equilibrium and then Kirchhoff’s law, initially formulated for radiation, implies a sort of “reversed Kirchhoff’s
law” according to which the distribution in energy of the material oscillators of a
body at thermodynamic equilibrium, does not depend on the nature of the material
and is, therefore, given by a universal function.
12.6.3 The Planck Solution (Through Thermodynamics)
How to find this universal function? At low frequencies, in the "Rayleigh–Jeans"
branch, if we use Eq. (12.97) together with Eq. (12.95), we immediately find

ε = k_B T ,
while at high frequencies, if we make use of the "Wien's branch" experimentally
determined, we find

ε ∝ ν exp(−βν/T) .
The point is the following: these two expressions for ε are the two branches, at low
frequencies and at high frequencies, of one single function that links the energy of
the oscillators to the frequency of the radiation at a given temperature.
It is at this point that thermodynamics allowed Planck to take the decisive step.
From the fundamental equations of thermodynamics, we know that for every system,
entropy and internal energy, at a given temperature, are related by

(∂S/∂ε)_V = 1/T ,

which, if we take a further derivative, leads to

(∂²S/∂ε²)_V = ∂(1/T)/∂ε .
Planck considered the fundamental relation of the material body in the entropy
representation, written in the form S = S(ε, T). About this function, we have two pieces
of partial information, in the zones at low and high frequencies. More precisely, we know
the expression of the second derivative (∂²S/∂ε²)_V in these two parts. Planck aims at
finding the form of the function (∂²S/∂ε²)_V in the whole energy spectrum.
As regards the first part, that is, in the "Rayleigh–Jeans branch", he can obtain
1/T from Eq. (12.98) and, deriving further, he gets

(∂²S/∂ε²)_V = − k_B / ε² .
As regards the relatively high-frequency zone, i.e., the "Wien's zone," he operates in
a similar manner starting from Eq. (12.99) and he obtains

(∂²S/∂ε²)_V = − 1 / (βν ε) .
The problem, at this point, was to find one complete expression for the function
(∂²S/∂ε²)_V with the requirement that it must coincide with Eq. (12.102) and with Eq.
(12.103) at low and high frequencies, respectively.
The simplest interpolation that Planck could try at this point was to write

(∂²S/∂ε²)_V = − k_B / {ε [ε + (β k_B) ν]} .
By introducing
h = βk B ,
the tentative interpolated expression takes the following form:

(∂²S/∂ε²)_V = − k_B / [ε (ε + hν)] .

Indeed, Eq. (12.106) gives Eq. (12.102) when hν ≪ ε, while it gives Eq. (12.103)
in the region where hν ≫ ε.
This is the famous “interpolation formula which resulted from a lucky guess”
and is just the starting point for the development of a line of thought that would
revolutionize, with the participation of other great physicists, the basic foundations
of physics.
Having in mind to integrate the interpolated formula Eq. (12.106) once, it is
convenient to write it in the following form:

(∂²S/∂ε²)_V = − (k_B / hν) [1/ε − 1/(ε + hν)] .

This equation can easily be integrated to obtain

1/T = (k_B / hν) ln[(ε + hν)/ε] ,

that is,

exp(hν/k_B T) = (ε + hν)/ε ,

and the above expression can be put in the familiar form:

ε = hν / [exp(hν/k_B T) − 1] .
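The two limiting behaviors of this average energy can be verified numerically; a small Python sketch (the trial frequencies are illustrative choices):

```python
import math

# Checks the two limits of the average oscillator energy
# eps(nu, T) = h nu / (exp(h nu / kT) - 1): it tends to kT for h nu << kT
# (Rayleigh-Jeans branch) and to h nu exp(-h nu / kT) for h nu >> kT (Wien branch).
h = 6.626e-34    # J s
kB = 1.381e-23   # J/K
T = 300.0        # K

def eps(nu):
    x = h * nu / (kB * T)
    return h * nu / math.expm1(x)   # expm1 keeps the low-x limit accurate

nu_low, nu_high = 1e9, 1e14   # h nu / kT ~ 1.6e-4 and ~16
print(eps(nu_low) / (kB * T))                                        # ~1
print(eps(nu_high) / (h * nu_high * math.exp(-h * nu_high / (kB * T))))  # ~1
```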
Combining Eq. (12.110) with Eq. (12.96), we obtain the expression for the spectral
density of the black-body radiation:

u(ν, T) = (8π h / c^3) ν^3 / [exp(hν/k_B T) − 1] .
12.6.4 The Dawn of Quantum Physics
In his fundamental paper, very briefly described in the previous sections, Planck
adopts a model in which the charges responsible for the absorption/production of the
radiation are electrons with well-defined charge e and mass m, elastically bound to
their equilibrium positions. The expression given in Eq. (12.96), which concisely
summarizes his calculation, does not depend on the charge and the mass of the
electrons, nor on the proper frequency of the oscillator. It does not depend on ε₀
(the vacuum dielectric permittivity) on which the equations of electromagnetic theory
are based.¹¹ This is a
result of great importance because it gives the right interpretation of the demand for
universality expressed by Kirchhoff’s Law. As already pointed out in the previous
section, also the distribution of energy of the oscillators (that is Eq. (12.110)) is a
universal function and the fundamental equations of thermodynamics were applied
to this system.
After obtaining the two fundamental Eqs. (12.110) and (12.111), the scientific
contribution by L. Boltzmann emerges.
For reasons totally independent of the black-body problem, Boltzmann had
founded the new branch of physics, named statistical mechanics, and, in particular, had established a fundamental principle which is summarized in the famous
formula attributed to him (and carved on his grave):
S = k B ln W ,
where W is the, so-called, thermodynamical probability of an equilibrium state whose
entropy will be S.
With reference to this, Planck had recognized that "if the energy of the oscillators,
instead of assuming continuous values, could only take values multiple of a fundamental
amount, then the average value of their energy would be compatible with Eq. (12.110)".
At the end of his fundamental article of 1900, however, he does not yet dare the
conclusion that he had at hand.
¹¹ We recall that thermodynamical equilibrium between matter and radiation is assumed.
Fig. 12.5 The Carnot cycle for radiation is represented in a (T, V ) plane (a) and in a (T, S)
plane (b)
12.7 Exercises
12.1 A cavity with volume V0 = 20 m^3 at temperature T = 10^3 K is expanded
isothermally and quasi-statically until its volume is doubled. Determine the quantity
of heat supplied to the cavity.
12.2 Calculate the efficiency of a Carnot cycle performed with radiation as working
fluid. The transformations are represented in Fig. 12.5.
Chapter 13
Third Law of Thermodynamics
Abstract The third law of thermodynamics is discussed starting from the Nernst–
Planck formulation. Some observational consequences and experimental confirmations
are summarized, and the formulation concerning the unattainability of absolute
zero is briefly commented upon.
Keywords Third law · Nernst–Planck formulation · Attainability of absolute zero
13.1 The Third Law of Thermodynamics
The first and second principles of thermodynamics lead, respectively, to the definition
of energy and entropy with all the consequences.
We have seen that both the definitions of energy and entropy contain one arbitrary
additive constant because for both potentials only the changes in a process are defined.
This does not create any difficulty because, in practice, in applications we are always
involved in using variations.
Things are not so simple for the F and G potentials because their definitions
contain the term (−T S). It is clear that the indetermination of entropy by the additive
constant S0 has, as a consequence, an indetermination of the free energy and of the
Gibbs potential, of the type

−S(O) T + constant ,
where O is an arbitrarily selected reference state. Once again, if we are interested in
isothermal transformations, this indetermination of the F and G potentials is irrelevant but for transformations between states at different temperatures the variations
of free energy and of Gibbs potential are undetermined.
Therefore, the absolute determination of entropy is needed to make all the thermodynamic
potentials defined and "usable" in any transformation (the energy indeterminacy has
fundamental connections with other issues which, however, we do not want to deal
with here).
© Springer Nature Switzerland AG 2019
A. Saggion et al., Thermodynamics, UNITEXT for Physics,
The third law of thermodynamics does not lead to the definition of any new
thermodynamic potential but makes already defined potentials fully defined, except
energy whose “zero-point” problem must find the solution in another context.
13.1.1 Formulation According to Nernst and Planck
The third law has found its first formulation by Nernst as part of his studies, together
with Berthelot, of the electrochemical reactions.
By studying the properties of these processes at gradually decreasing temperatures,
he compared the values of the "affinity A of the reaction," which corresponded
in his language to the maximum work obtainable (what we now call the free energy
change), with the values of the variation of energy ΔU that, at constant volume,
corresponded to the amount of heat developed in the reaction.
Nernst commented on the general trend of the experimental data in this way:
" . . . I observe here an asymptotic law as the difference between A and ΔU is
very small. It seems that at absolute zero A and ΔU are not only equal but also
asymptotically tangent to each other so as to have:

lim_{T→0} dA/dT = lim_{T→0} d(ΔU)/dT = 0 . . ." .
Taking into account that A = ΔF = ΔU − T ΔS, it followed that ΔS → 0 as
T → 0.
This proposition can be formulated as follows: let us consider an isothermal
transformation between two equilibrium states A′ and A′′ (they are at the same
temperature). The variation of entropy in this transformation tends to zero as the
temperature goes to zero:

lim_{T→0} (S_{A′} − S_{A′′}) = 0 .
Planck has given the following formulation of the third law: "When the temperature
of a physical system tends to zero, its entropy tends to a value S0 which is independent
of pressure, of the state of aggregation, of the presence of external fields, and of
any other state variable."
We can agree to put this constant equal to zero so the entropy for all systems is
determined absolutely.
13.1.2 Some Observational Consequences
If the entropy tends to a value which is independent of all other state variables as
the temperature approaches zero, we have

lim_{T→0} (∂S/∂p)_T = 0 ,
lim_{T→0} (∂S/∂V)_T = 0 .
Thermal Expansion Coefficient
If we recall the general Maxwell's relation (5.15), we conclude that the coefficient
of thermal expansion vanishes when approaching T = 0:

α(T) = (1/V) (∂V/∂T)_p → 0 as T → 0 .

Indeed from Eq. (5.15), we see that this is equivalent to saying that as the temperature
tends to zero the entropy of the system is independent of pressure.
Tension Coefficient
We define the tension coefficient as the quantity

(∂p/∂T)_V .

From Eq. (5.12), it is immediate to see that the tension coefficient also tends to zero
close to 0 K:

lim_{T→0} (∂p/∂T)_V = lim_{T→0} (∂S/∂V)_T = 0 .
Specific Heats
In principle, we can calculate the entropy of a system in a generic state (V, T) by
integrating dS in a transformation at constant volume from T = 0 to the final temperature
T. Since

dS = (C_V / T) dT ,

we may write

S(V, T) = ∫_0^T (C_V / T) dT + S(V, 0) .
13 Third Law of Thermodynamics
Similarly, if we describe the state in terms of pressure and temperature, we may write

S(p, T) = ∫_0^T (C_p / T) dT + S(p, 0) .

The two constants of integration S(V, 0) and S(p, 0) must have the same value and
may be put equal to zero according to Planck's or Nernst's statements of the third
law. We may then write

S(V, T) = ∫_0^T (C_V / T) dT ,
S(p, T) = ∫_0^T (C_p / T) dT .
Both specific heats must tend to zero when the temperature tends to zero. If it were
not so, the integrals would diverge.
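The convergence argument can be illustrated numerically: with a heat capacity vanishing as T³ (a Debye-like form; the coefficient b below is an arbitrary illustrative value), the entropy integral converges to b T³/3, whereas a constant C_V would make the integrand C_V/T diverge at T = 0. A Python sketch:

```python
# Illustrates why C_V must vanish as T -> 0: with C_V = b T^3 (Debye-like,
# b an arbitrary illustrative coefficient) the entropy integral converges,
# S(T) = b T^3 / 3, while a constant C_V would give a divergent integrand C_V/T.
b = 1.0e-3   # hypothetical coefficient, J K^-4

def S_numeric(T, n=100000):
    # midpoint-rule integration of C_V(T')/T' = b T'^2 from 0 to T
    dT = T / n
    return sum(b * ((i + 0.5) * dT)**2 * dT for i in range(n))

T = 10.0
print(S_numeric(T), b * T**3 / 3)   # the two values agree
```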
Solidification of Liquid Helium at Low Temperatures
This is the only known substance that remains liquid “at absolute zero” but can be
made to solidify by increasing the pressure. If we consider, therefore, the solid–liquid
equilibrium and denote by p the melting pressure of solid helium, the Clapeyron’s
equation connects it to the entropy difference as is well known:
Experimental evidence is shown in Fig. 13.1 and we can see that
T →0
= 0.
Since V maintains a non-zero value, the evidence on the melting pressure shows
lim Sliq − Ssol = 0 .
T →0
¹ Remember the definition Eqs. (5.18) and (5.19) for the heat capacities.
Fig. 13.1 The solid–liquid equilibrium curve of Helium at very low temperatures. Experimental
data show that the equilibrium pressure tends to a constant value as temperature decreases. Figure
reproduced with permission from Guggenheim, Thermodynamics: An Advanced Treatment for
Chemists and Physicists [3]
Unattainability of Absolute Zero
Some authors propose a formulation of the third law based on a new principle named
“the Principle of unattainability of absolute zero.” More precisely, the formulation
they give is as follows:
It is not possible, with a finite number of transformations, to reach the temperature
T = 0.
The issue of the attainability of the zero value of the temperature is a very delicate
one because, as we can well imagine, the error with which we can measure the
temperatures may make the question meaningless.
A correct formulation is the following: Starting from a state at T > 0 it is not
possible to realize a reversible adiabatic transformation which brings the system to
T = 0.
Let us briefly discuss how this proposition can be derived from Nernst's or
Planck's formulation of the third law. Let us go back to the proof given in Sect. 13.1.2
to justify the fact that the specific heats tend to zero as T → 0.
The argument can be put in a more general form. We express the entropy in a
generic state as follows:

S(α, T) = ∫_0^T (C_α / T) dT + S(α, 0) ,

where the symbol α is a control parameter which represents the particular transformation
along which we calculate the integral of the entropy (in the two previous
examples α indicated a transformation either at constant volume or at constant pressure).
Another control parameter might be (and this is precisely the case when we want
to reach the lowest temperatures) the intensity of an external magnetic field.
In this case, the transformation connecting the two temperatures is realized at
constant field and, in this situation, the specific heat must be defined as
C_B = T \left( \frac{\partial S}{\partial T} \right)_B .
In Fig. 13.2, we describe, in a qualitative way, the entropy as a function of temperature for two different values of the field kept constant. The magnetic field is indicated by the value of the magnetizing field, which is more easily controlled by the experimenter. The curve marked by the value H = H1 ≠ 0 shows that entropy must be an increasing function of temperature owing to the ordering effect of the magnetic field: indeed, as temperature increases the ordering effect becomes less and less effective.
Furthermore, for any given temperature, the curve with H = 0 is expected to be
above the corresponding value with H = H1 because of the partial ordering effect.
If the third law is valid, as temperature tends to zero the two curves must tend
to the same value which can be assumed to be zero. If the third law is violated, the
curve with H = 0, extrapolated at T = 0, must cross the vertical axis at an entropy
S0 > 0, as shown by the dashed line, because it cannot cross the H = H1 curve at
any T > 0.
The argument concerning the unattainability of absolute zero is based on the fact
that the most powerful technique we know for reaching low temperatures consists
in the process of adiabatic demagnetization already described in Sect. 11.7. Let T1
be the lowest temperature available with any method but adiabatic demagnetization
and consider the diagram (S, T ) shown in Fig. 13.2 for magnetized materials. Let
us take our system to T1 when it is magnetized by a magnetizing field H1 . If we
operate reversibly and adiabatically and reduce the intensity of the magnetizing
field from H1 to H = 0, we perform a transformation which can be represented by
the horizontal line in Fig. 13.2 and then we reach a temperature T2 lower than the
initial temperature T1 .
Fig. 13.2 Qualitative behavior of the entropy as a function of temperature at constant magnetic field
for a paramagnetic salt. The intensity of the magnetic field is labeled by the value of the magnetizing
field H = |H| which is more easily controlled. Solid lines describe the situation according to the
provisions of the third law. Dashed line shows a possible behavior in violation of the third law. The
change of curvature at H = 0 accounts for the spontaneous magnetization typical of all paramagnetic materials when approaching T → 0
If the third law holds, the two solid lines converge to the same point as T → 0 (and only in this limit), and hence T2 > 0. It follows that the approach to “absolute zero” is a process constituted by an infinite number of single steps of adiabatic demagnetization.
If the third law is not valid (dashed line case), there will be one initial temperature such that the absolute zero will be reached by means of just one adiabatic
demagnetization process.
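The geometric character of this cascade can be sketched numerically. The model below is an assumption made only for illustration: an ideal two-level paramagnet whose per-spin entropy depends on B/T alone, with a made-up residual internal field b standing in for the H = 0 curve of Fig. 13.2:

```python
import math

# Sketch with assumed numbers: ideal two-level paramagnet; the per-spin
# entropy depends only on x = mu*B/(k*T). The residual field b is a toy-model
# assumption that plays the role of the "H = 0" curve in Fig. 13.2.

def s_over_k(x):
    """Per-spin entropy S/k of a two-level system, x = mu*B/(k*T)."""
    return math.log(2.0 * math.cosh(x)) - x * math.tanh(x)

def demagnetize(T1, B1, b):
    """Reversible adiabatic step B1 -> b: S depends on B/T only, so B1/T1 = b/T2."""
    return T1 * (b / B1)

B1, b = 1.0, 0.1
# One step conserves entropy: x = B/T is the same before and after the step.
T2 = demagnetize(1.0, B1, b)
print(abs(s_over_k(B1 / 1.0) - s_over_k(b / T2)) < 1e-12)   # True

# Iterating (re-magnetizing to B1 at each new temperature): T is multiplied
# by b/B1 < 1 at every step, so it decreases geometrically and would reach
# zero only after infinitely many steps.
T = 1.0
for _ in range(10):
    T = demagnetize(T, B1, b)
print(T > 0.0)   # True: T = (b/B1)**10 here, tiny but still nonzero
```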
In this sense the third law, as formulated by Nernst and Planck, has the consequence that the absolute zero can be approached only asymptotically.
The Third Law and Statistical Mechanics
The formulations of the third law given by Nernst and Planck originate from some
experimental evidence and constitute an abstract statement capable of making various
predictions. These predictions are well confirmed by experiments as we have seen in
the preceding discussion; however, the formulation acquires a fundamental character
in the light of its statistical interpretation, and thanks to the development of quantum mechanics.
The statistical interpretation of the concept of entropy was given by Boltzmann
by means of the famous relation
S = k ln w ,
in which the entropy S of any equilibrium state is related to the number w of microstates that “correspond” to the macroscopic state. In this context, establishing the value S0 = 0 for the entropy of a system at absolute zero is equivalent to establishing w = 1. This is equivalent to saying that, as the temperature decreases, the number of microstates decreases as well and tends to the value w = 1 as the temperature tends to zero.²
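A trivial numerical restatement of Boltzmann's relation (the value of k is the exact 2019 SI one; the degenerate case is added for contrast):

```python
import math

# Sketch: Boltzmann's relation S = k ln w. A unique ground microstate (w = 1)
# gives S0 = 0, which is Planck's form of the third law; a doubly degenerate
# ground state would instead leave a residual entropy of k ln 2.
k_B = 1.380649e-23   # J/K (exact value in the 2019 SI)

def boltzmann_entropy(w):
    return k_B * math.log(w)

print(boltzmann_entropy(1) == 0.0)   # True: w = 1 gives S0 = 0
print(boltzmann_entropy(2) / k_B)    # ln 2 = 0.693..., residual entropy in units of k
```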
Whether this may seem “intuitive” or not is a meaningless question; instead, it is of paramount interest to ask ourselves whether our theoretical framework is compatible with, or better, does or does not provide for, such a limit.
For this reason, the answer must be sought among the laws governing quantum mechanics.
A thorough discussion of this topic is far beyond the scope of this book, but we still need to focus on the key point, which lies in the process of moving to the limit of zero temperature.
Consider a system at a very low temperature T and subject to certain external
constraints (such as volume, pressure, external field, etc.). Let us subtract energy,
and by doing so we decrease the temperature. In the atomic–molecular description
of the system, each elementary constituent will gradually be at lower and lower energy
values and its quantum nature will manifest itself in a dominant way. This process of
energy subtraction will lead the system toward a limit configuration in which all the
elementary constituents will have minimal energy (which is not necessarily zero).
All models to which we are led according to the laws of quantum mechanics and
of statistical mechanics, lead to configurations with w = 1, but this is obtained by
assuming that during the energy subtraction the macroscopic system always remains
in a state of thermodynamic equilibrium.
This is a very delicate problem because as the average energy of each elementary
constituent decreases, the relaxation time (i.e., the time required by the elementary
constituents to redistribute their energy among themselves in order to create a thermodynamic equilibrium situation) increases very rapidly.
The result is that we can bring to states of minimum energy (in the sense that no more energy can be extracted) systems that are frozen in nonequilibrium configurations. In general, we call these systems vitreous.
One common example is given precisely by silica glass (SiO2 ). At very low
temperatures, it does not have time to transform into a crystalline solid, maintains
its glass configuration, and also maintains a large number of possible microscopic
configurations with the same energy level.
² Hastily, for brevity, someone says that at “absolute zero” there is only one possible microscopic state.
Part III
Irreversible Processes
Chapter 14
Irreversible Processes: Fundamentals
Abstract Within the approximation of discontinuous systems, the entropy production is calculated in a variety of nonequilibrium situations in open/closed systems
and for chemical and electrochemical reactions. The definition of generalized fluxes
and forces is widely discussed. The dependence of fluxes on forces is explored and, for near-equilibrium configurations, this dependence is linearized. The linearization leads to the Onsager relations, which give the quantitative characterization of the
cross-interference of different irreversible processes. The non-unique determination
of the fluxes and of the relative forces is widely discussed and the limits of validity for
the linear relations between fluxes and forces are examined with particular reference
to chemical reactions.
Keywords Irreversible processes · Discontinuous systems · Chemical reactions ·
Affinity · Reaction rate · Open systems · Electrochemical reactions · Generalized
fluxes · Generalized forces · Onsager relations · Linearity · Equivalent systems ·
Relaxation time
14.1 Introduction
Irreversible processes occur either within systems not in internal equilibrium or when
systems, in internal equilibrium but not in mutual equilibrium, are allowed to interact. Chemical reactions are one example of the former type and will be extensively
studied throughout the whole part. Just as equilibrium states are, in the first instance,
defined by a set of extensive quantities, so interactions consist in the exchange of one
or more extensive quantities between interacting systems; indeed in various fields
of physics, specific natural processes take their name from the flows of appropriate
extensive quantities like, for instance, mass, energy1 or electric charge. This part
is devoted to the study of the dynamics of natural, or irreversible, processes, how
they evolve over time and how they interfere with each other. The generalization
of the formalism to continuous systems will be done in Chap. 16 but, before this,
¹ The very common denomination “heat flux” seems to contradict this rule because “heat” is not an
extensive property of a system. The point has been already discussed in Sect. 4.3.
© Springer Nature Switzerland AG 2019
A. Saggion et al., Thermodynamics, UNITEXT for Physics,
many things can be understood in the present chapter and Chap. 15, in the context of the approximation of discontinuous systems, which is formally simpler and
more intuitive. The approximation of discontinuous systems was already discussed
in Sect. 3.6 for closed systems. Here, we recall the most important points and prepare
the necessary generalizations to open systems with variable chemical composition.
Individual systems are supposed to be in internal equilibrium and, when they
are put into interaction, the region where intensive properties vary in a continuous
manner (transition region) is neglected. The time scales of the interaction processes
are long with respect to the relaxation times of the individual systems so they can
be assumed, in every instant, to be in internal equilibrium. Before proceeding with
the development of the formalism, a general consideration should be made clear.
Interactions consist in exchanges, between individual systems, of extensive quantities²; it is then necessary, for each of them, to split its variation, in an infinitesimal process, into the sum of two contributions:
dE = d̂ i E + d̂ e E , (14.1)
where d̂ e E is the part due to the interaction with external world and d̂ i E is the part
due to processes taking place within the system.
Here, E is any extensive quantity and is a thermodynamic potential (function of
state) and its variation in an infinitesimal process is described as the exact differential
of a function while both d̂ i E and d̂ e E describe infinitesimal quantities but are not
exact differentials.
When we affirm that a certain quantity E is a conserved quantity, we mean that
d̂ i E = 0 ,
for every infinitesimal process. This is equivalent to say that the value of the quantity
E can vary only by interaction with the external world.
14.1.1 Rephrasing the First Principle
Since we are going to deal with processes in open systems, it is useful to rewrite the
first principle in the following form:
dU = d̂ − pdV ,
2 This statement should not be understood as being limited to the thermodynamic context but must be
considered to be of general validity. For example, in mechanics, the interaction consists of a transfer
of extensive quantities possessed by the interacting systems, such as, for example, momentum,
angular momentum, or energy. See also [1].
where d̂ represents the variation of energy of the system in an infinitesimal process
after removing the infinitesimal quantity of work − pdV done on the system in the
same time interval dt. This is written in the case of a simple system but in the general
case we have to subtract all the similar terms due to the other work parameters.
With this notation, the first principle states that the energy of a system varies in two
ways: because work is done on the system (− pdV plus other work parameters) and
because energy is transferred by other means and the latter contribution is represented
by d̂. The term d̂ becomes d̂ Q in the case of closed systems and takes the familiar
name of quantity of heat transferred from the external world to the system.
It would be nonsense, at this stage, to think that d̂ contains a contribution due to the “transfer of heat” and a contribution due to the “transfer of matter.” It is one term; the concept of “heat” or of “heat transfer” becomes meaningless and only the concept of “energy transfer” remains.
In the following, we shall consider various situations which will be treated in
the discontinuous systems approximation and each situation may be described in
different ways. The selected description will give the names to the various irreversible processes.
14.2 Heat Exchange
Let us refer to the example discussed in Sect. 3.6. On that occasion, it was used as
a paradigmatic example with the purpose of showing the way in which the propositions that define the second principle are necessary and sufficient for the operational
definition of entropy. Here, we want to go ahead and use the same configuration
as a paradigmatic example of how the second principle constitutes the fundamental
evolutionary criterion.
If we divide both sides of Eq. (3.14) by the infinitesimal time interval dt, we may write

\frac{\hat{d}_i S}{dt} = \left( \frac{1}{T^{II}} - \frac{1}{T^{I}} \right) \frac{\hat{d}^{II} Q}{dt} > 0 . \qquad (14.4)
The first term d̂ i S/dt measures how much entropy is produced, per second, by the
process taking place within the overall system. Let us underline that this term is not
a sort of time derivative of a function but it is just the ratio of two infinitesimals. It
is called entropy production (but, more precisely, the meaning is that of velocity of
entropy production).
When the system is in an equilibrium configuration (and we have seen that this is
the case when T I = T II ), the entropy production is zero. If the two temperatures are
different but close to one another, the amount of entropy production will be small. If
the two temperatures are very different from each other, we expect that the amount
given by Eq. (14.4) will be relatively high. This leads us to think of P as an indicator
of the “level of irreversibility” of a nonequilibrium configuration.
Obviously, we have to remember that the entropy production is an extensive
quantity and, in general, it would be of no use to compare its amount between different systems; but this problem will not arise when we adopt the perspective of continuous systems, where the greater or lesser distance from equilibrium will be described by the entropy production per unit volume, and then point by point.
The term d̂ II Q/dt has an evident physical meaning. It measures the amount of
energy that is transferred, per second, between the two systems and more exactly
d̂ II Q/dt measures the power passing from I to II. In many other previously encountered situations, we called this term “flow of energy” or, in particular contexts, “heat
flow.” The flux describes the dynamics of the irreversible process at stake: in our
example, the speed with which the energy is transferred and in which direction
(from I to II if the flux is positive in our choice). The quantity,
\frac{1}{T^{II}} - \frac{1}{T^{I}} , \qquad (14.5)
depends on the configuration at that moment and “pilots” the development of the
process: if its value is zero the flow is zero, if it has a positive value also the flow
of energy will have a positive direction and, conversely, if it is negative, the flow of
energy will be negative. It plays, therefore, a role similar to the one that the force
has in mechanics: it determines both the direction and the intensity of the process.
We shall call the quantity X , given by Eq. (14.5), a generalized force. The role
of generalized force is assigned by the second principle because it is the latter that
determines the direction of the process. The expression given by Eq. (14.4) may be
written as
P = J X > 0,
in which we may adopt either of the two signs in the definition of the flux and, consequently, of the force, corresponding to the two possible directions of flow. We adopt the following:

J = - \frac{\hat{d}^{I} Q}{dt} = \frac{\hat{d}^{II} Q}{dt} ,

which corresponds to considering as positive the energy flow from I to II; hence, the corresponding generalized force will be

X = \frac{1}{T^{II}} - \frac{1}{T^{I}} .
Consider now the case in which the overall system can exchange heat with thirdparty systems. In this case, d̂ Q = d̂ I Q + d̂ II Q will no longer be zero. In order to
proceed, we must increase the level of complexity, i.e., we must be able to distinguish,
in each of the two component systems, the amount of heat that it receives from
the other and the amount of heat that it receives from third-party systems. Let us
distinguish the former and the latter with the subscripts i and e, respectively,
\hat{d}^{I} Q = \hat{d}_i^{I} Q + \hat{d}_e^{I} Q ,

\hat{d}^{II} Q = \hat{d}_i^{II} Q + \hat{d}_e^{II} Q ,

with

\hat{d}_i^{I} Q = - \hat{d}_i^{II} Q .
Then, for the overall system, we can write
dS = \hat{d}_i S + \hat{d}_e S ,

with

\hat{d}_e S = \frac{1}{T^{I}}\, \hat{d}_e^{I} Q + \frac{1}{T^{II}}\, \hat{d}_e^{II} Q ,

\hat{d}_i S = \left( \frac{1}{T^{I}} - \frac{1}{T^{II}} \right) \hat{d}_i^{I} Q ,
and for the entropy production we will meet exactly Eq. (14.4). For the overall system,
the entropy change per unit of time will be written as
\frac{dS}{dt} = \frac{\hat{d}_e S}{dt} + \frac{\hat{d}_i S}{dt} = \frac{1}{T^{I}} \frac{\hat{d}_e^{I} Q}{dt} + \frac{1}{T^{II}} \frac{\hat{d}_e^{II} Q}{dt} + J X .
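A minimal numerical sketch of this heat-exchange entropy production; the temperatures and power below are assumed values, not data from the text:

```python
# Sketch (assumed numbers): entropy production for heat exchange between two
# subsystems in the discontinuous approximation, P = J * X, with
# J = energy flow I -> II per second and X = 1/T_II - 1/T_I (cf. Eq. (14.4)).

def entropy_production(J, T_I, T_II):
    X = 1.0 / T_II - 1.0 / T_I      # generalized force
    return J * X

# Heat flows spontaneously from hot (I) to cold (II): J > 0 and X > 0.
P = entropy_production(J=100.0, T_I=400.0, T_II=300.0)   # J in watts, T in K
print(P > 0)                                             # True: second principle satisfied
print(entropy_production(J=0.0, T_I=350.0, T_II=350.0))  # 0.0 at equilibrium
```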
14.3 Chemical Reactions
Let us now consider a class of irreversible processes in which the chemical composition varies with time. This can happen for two different reasons: either because some
components γ are exchanged with the outside world (that is, the system is open with
respect to the component γ ) or because the component γ is involved in some internal
processes that, if the system were closed, would change the chemical concentration.
In general, these two causes are present simultaneously and this will be covered
in Sect. 14.4. In order to provide the necessary descriptive tools let us deal, for the
moment, with them separately. Let us start with closed systems.
The processes in question will be called “chemical reactions” meaning by this
term, not only the chemical reactions in the traditional sense but also any process
which can be treated with the same formalism by analogy.
14.3.1 The Rate of Reaction
As the name makes us understand, this physical quantity describes the speed with
which each process develops over time.
Case of One Reaction
Suppose, at the beginning, that in the system one single chemical reaction is taking
place, and this means that the variations in time of the number of moles n γ of all the
components can be correlated by one linear equation only. For example, consider the
synthesis of ammonia:
N2 + 3 H2 → 2 NH3 . (14.17)
The observer will see that the abundances of the different components vary in time
and that the variations are related by very precise quantitative ratios as described
by Eq. (14.17). If we denote by dn γ the variation of the number of moles of the
various components in the infinitesimal time interval dt, we will observe that
\frac{dn_{N_2}}{1} = \frac{dn_{H_2}}{3} = - \frac{dn_{NH_3}}{2} .
Now consider a generic chemical reaction described by the stoichiometric equation:
νA A + νB B + · · · → νM M + νN N + · · · (14.19)
where A, B, M, N, . . . are the chemical components which take part to the reaction
and νA , νB , νM , νN , . . . the relative stoichiometric coefficients. If we adopt the convention of giving the stoichiometric coefficients the negative sign for the components
that are to the left of Eq. (14.19) and the positive sign to stoichiometric coefficients of
the components that are on the right, we see that, for all components involved in the
chemical reaction, the ratio dn γ /νγ has the same value. Hence, this ratio is no longer
referred to a particular component but describes the evolution of the process as such.
We may, then, define a dimensionless parameter ξ, called the degree of advancement of the chemical reaction, so that its variation dξ, in the time interval dt, is, for any γ:

\frac{dn_\gamma}{\nu_\gamma} = d\xi . \qquad (14.20)
Since only its variations are defined, ξ is undetermined up to an arbitrary additive constant. As already mentioned, it describes
the evolution of the process, and its variation per unit of time,

v = \frac{d\xi}{dt} ,

is called the velocity of the reaction, or reaction rate. It is a dynamic variable and, as the
flow of energy in the previous example, adequately expresses the speed with which
the process proceeds and its direction. By analogy with the previous case, we will
call it flux in a generalized sense.
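The bookkeeping dn_γ = ν_γ dξ can be sketched for the ammonia example; the initial mole numbers and the advancement below are assumptions made for illustration:

```python
# Sketch: degree of advancement for N2 + 3 H2 -> 2 NH3.
# Signed stoichiometric coefficients: negative for reagents, positive for products.
nu = {"N2": -1, "H2": -3, "NH3": 2}

def advance(moles, d_xi):
    """Update mole numbers for an advancement d_xi: dn_gamma = nu_gamma * d_xi."""
    return {g: n + nu[g] * d_xi for g, n in moles.items()}

moles = {"N2": 1.0, "H2": 3.0, "NH3": 0.0}
moles = advance(moles, 0.25)
print(moles)   # {'N2': 0.75, 'H2': 2.25, 'NH3': 0.5}

# For every component, dn_gamma / nu_gamma equals the same d_xi = 0.25:
print({g: (moles[g] - n0) / nu[g] for g, n0 in {"N2": 1.0, "H2": 3.0, "NH3": 0.0}.items()})
```

The second print shows the defining property of ξ: the ratio dn_γ/ν_γ is independent of the component.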
General Case with Many Chemical Reactions
Suppose that, in our closed system, r chemical reactions are active. This means that to have a complete description of the variations in time of the abundances of all the components, it is necessary and sufficient to write r independent stoichiometric equations.
We denote by the symbol νγρ the stoichiometric coefficient with which the component γ participates in the ρth chemical reaction. Of course, we maintain the agreement
to assign the negative or the positive sign to the stoichiometric coefficient depending
on whether the component is on the left or on the right of the arrow which denotes the
assumed direction for the reaction. If the component γ does not take part in reaction
ρ, the corresponding νγρ will be νγρ = 0.
For each chemical reaction, the degree of advancement ξρ will be defined and for
each component, we have
dn_\gamma = \sum_{\rho=1}^{r} \nu_\gamma^{\rho}\, d\xi_\rho , \qquad (14.22)
then for each process (chemical reaction), we define the speed with which it develops
by the quantity vρ :
v_\rho = \frac{d\xi_\rho}{dt} ,
vρ being called velocity of the reaction, or reaction rate, of the ρth reaction. With
reference to Eq. (14.19), it is clear that the choice on the direction of the arrow
that distinguishes the “reagents” from “products of the reaction” (and therefore that
distinguishes the positive and negative stoichiometric numbers) can be made in an
arbitrary manner and it is in no way binding for the correct description of the process.
It will be the analysis of the thermodynamic process that will give us an indication
of whether it is in equilibrium (and thus with zero rate) or if the chemical reaction is
active and what will be its effective direction. We must then find the expression of
the “generalized force” that is of the quantity that is responsible for equilibrium, or
for the development of the considered process and its direction.
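The superposition dn_γ = Σ_ρ ν_γ^ρ dξ_ρ can be sketched numerically; the two reactions (steam reforming and water–gas shift) and the advancements below are chosen only for illustration:

```python
# Sketch (assumed reactions and advancements): with r simultaneous reactions
# the mole changes add up, dn_gamma = sum_rho nu_gamma^rho * dxi_rho.
# Reaction 1: CH4 + H2O -> CO + 3 H2    Reaction 2: CO + H2O -> CO2 + H2
nu = {
    "CH4": (-1,  0),
    "H2O": (-1, -1),
    "CO":  ( 1, -1),
    "CO2": ( 0,  1),
    "H2":  ( 3,  1),
}
d_xi = (0.10, 0.05)   # advancements of the two reactions in some time interval

dn = {g: sum(nu[g][rho] * d_xi[rho] for rho in range(2)) for g in nu}
print({g: round(v, 12) for g, v in dn.items()})
# {'CH4': -0.1, 'H2O': -0.15, 'CO': 0.05, 'CO2': 0.05, 'H2': 0.35}
# CO appears in both reactions: produced by (1), consumed by (2).
```

Note that for a component shared by several reactions (CO here) no single ratio dn_γ/ν_γ^ρ exists anymore: only the full set of ξ_ρ describes the evolution.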
14.3.2 Entropy Production and the Chemical Affinity
As both the condition of equilibrium and the direction of development of natural processes are governed by the second principle, we have to start from the fundamental equation for an infinitesimal process written in the entropy representation:
dS = \frac{1}{T}\, dU + \frac{p}{T}\, dV - \sum_\gamma \frac{\mu_\gamma}{T}\, dn_\gamma . \qquad (14.24)
We start with the simple case where the description requires the presence of a single
chemical reaction and we express the variation of the number of moles of each
component using Eq. (14.20)
dS = \frac{1}{T}\, dU + \frac{p}{T}\, dV - \frac{1}{T} \sum_\gamma \mu_\gamma \nu_\gamma\, d\xi .
We define the affinity of the chemical reaction, denoted by the symbol A, as the quantity

A = - \sum_\gamma \nu_\gamma \mu_\gamma ,
and hence the infinitesimal entropy variation will be

dS = \frac{1}{T}\, dU + \frac{p}{T}\, dV + \frac{A}{T}\, d\xi ,

with

\hat{d}_e S = \frac{1}{T}\, dU + \frac{p}{T}\, dV ,

\hat{d}_i S = \frac{A}{T}\, d\xi > 0 . \qquad (14.29)
By dividing each side by the duration of the infinitesimal time interval, we obtain

P = \frac{\hat{d}_i S}{dt} = \frac{A}{T}\, v > 0 . \qquad (14.30)
From Eq. (14.29), we see that the condition which ensures the equilibrium of the
chemical reaction (that is that the infinitesimal process is quasi-static) is d̂ i S = 0
and hence
A = 0.
If the affinity is non-zero, the chemical reaction proceeds and the direction is the one
that makes Eq. (14.30) positive, that is,
A > 0 \;\Rightarrow\; v > 0 ,
A < 0 \;\Rightarrow\; v < 0 .
In other words, the sign of the affinity determines the sign of the velocity. Here, we
see why the arbitrariness of the choice on the direction when we write the formal
expression of a chemical reaction is irrelevant: if we make the opposite choice, the
affinity will change sign but also the velocity changes sign and then the entropy
production maintains the positive value.
In the simple case in which only one process (chemical reaction) is taking
place, the relationship of cause and effect between the affinity and velocity emerges
clearly in the sense of what we have already seen for the heat flow. The expression
in Eq. (14.30) may be written as

P = J X ,

with J = v the physical quantity which describes the speed and the direction of the process and which, by analogy, will be called generalized flux, and

X = \frac{A}{T}

will be called generalized force. Notice that the velocity of a chemical reaction is
an extensive quantity (as in the previous example of the flow of energy) in fact it is
proportional, through a stoichiometric coefficient, to the variation of the number of
moles of a given component. Conversely, the force associated to the velocity, namely,
the affinity divided by the temperature, is an intensive quantity because so are the
chemical potentials.
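The sign rule A v > 0 can be sketched numerically; the chemical potentials below are made-up numbers, not physical data:

```python
# Sketch (made-up chemical potentials, J/mol): the affinity
#   A = -sum_gamma nu_gamma * mu_gamma
# vanishes at equilibrium; its sign fixes the direction of the reaction.
nu = {"N2": -1, "H2": -3, "NH3": 2}

def affinity(mu):
    return -sum(nu[g] * mu[g] for g in nu)

# If the reagents' chemical potentials outweigh the products', A > 0 and v > 0:
mu = {"N2": -10000.0, "H2": -5000.0, "NH3": -20000.0}
A = affinity(mu)
print(A)        # -((-1)(-10000) + (-3)(-5000) + 2(-20000)) = 15000.0
print(A > 0)    # True: the reaction proceeds toward NH3 (v > 0)
```

Reversing the arrow in the stoichiometric equation flips every ν_γ, hence flips the sign of both A and v, leaving the product A v (and the entropy production) unchanged, as the text remarks.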
Generalization to Several Chemical Reactions: Interference
Consider now the case in which there are, simultaneously and in the same place,
several chemical reactions. In order to calculate the entropy production, we have
certainly to restart from Eq. (14.24) but, in this case, the terms dn γ will be substituted
by their expressions given in Eq. (14.22). Reversing the order of the summations, we obtain

dS = \frac{1}{T}\, dU + \frac{p}{T}\, dV - \frac{1}{T} \sum_{\rho=1}^{r} \sum_\gamma \nu_\gamma^{\rho} \mu_\gamma\, d\xi_\rho ,
and hence, after defining the affinity Aρ of the ρth reaction as
A_\rho = - \sum_\gamma \nu_\gamma^{\rho} \mu_\gamma ,
the entropy variation will be
dS = \frac{1}{T}\, dU + \frac{p}{T}\, dV + \frac{1}{T} \sum_{\rho=1}^{r} A_\rho\, d\xi_\rho ,
where the first two terms of the second member form the contribution to the change
of entropy due to the interaction with the outside, while the contribution due to the
internal processes will be given by

\hat{d}_i S = \frac{1}{T} \sum_{\rho=1}^{r} A_\rho\, d\xi_\rho , \qquad (14.40)

and for the entropy production:

P = \frac{\hat{d}_i S}{dt} = \frac{1}{T} \sum_{\rho=1}^{r} A_\rho\, v_\rho > 0 .
Indicating the generalized force for each chemical reaction with

X_\rho = \frac{A_\rho}{T} ,

we finally get

P = \sum_{\rho=1}^{r} X_\rho\, v_\rho > 0 .
As we know, the second principle requires that
A1 v1 + A2 v2 + · · · + Ar vr > 0 , (14.43)
but tells nothing about the sign of each product Aρ vρ . We could have, for example, for some chemical reactions:

Aρ vρ < 0 . (14.44)
Then we say that the ρth reaction proceeds in the direction opposite to that intended
by its own affinity.
This is permitted because the second principle requires only that the sum given
by Eq. (14.43) over all the reactions, be positive.
If, for some reactions, Eq. (14.44) holds, we see in a striking way that there is
interference among the various chemical reactions. In this case, we speak of “coupled reactions.”
This does not mean that if we have Aρ vρ > 0 for every reaction, there is no
interference. Normally, the numerical value of the speed of each individual reaction depends on the presence of the others, and then interference between different reactions is present even though the sign of each reaction velocity is concordant with the sign of the respective affinity.
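A numerical sketch of coupled reactions; the affinities and rates are assumed numbers chosen so that one product A_ρ v_ρ is negative while the total entropy production stays positive:

```python
# Sketch (assumed numbers): with several reactions the second principle only
# constrains the total, P = (1/T) * sum_rho A_rho * v_rho > 0; a single term
# may be negative (a reaction "driven" against its own affinity by the others).
T = 300.0
A = [2000.0, -500.0]    # affinities (J/mol)
v = [0.010,   0.004]    # reaction rates (mol/s); note A[1]*v[1] < 0
terms = [a * w / T for a, w in zip(A, v)]
P = sum(terms)
print(terms[1] < 0)   # True: reaction 2 runs against its own affinity
print(P > 0)          # True: the total entropy production is still positive
```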
14.4 Open Systems
We return to the approximation of discontinuous systems and consider two phases,
which we will denote by the indices I and II, each in a state of internal equilibrium
but not in mutual equilibrium. We allow a weak interaction (in the sense already
discussed in Sect. 14.2) but we now drop the constraint that they should be closed.
Also, we assume that each one can interact with third-party systems, but without
exchanging matter, then the overall system is closed. We will consider, of course, an
infinitesimal transformation and the infinitesimal variations of the extensive quantities will be separated, as already stated in Eq. (14.1) into one contribution due
to internal processes and one due to the exchange with the external world. Further
on, it will be necessary to subdivide the latter term into the contribution due to the
interaction with the other system and what is left to the interaction with third-party
systems. We write, first, the infinitesimal entropy change for system I, then simply
changing the index, we will do the same for system II and we add up to find the
entropy change of the overall system:
dS^{I} = \frac{1}{T^{I}}\, \hat{d}^{I} - \sum_\gamma \frac{\mu_\gamma^{I}}{T^{I}}\, dn_\gamma^{I} , \qquad (14.45)
where d̂ I stands for the total amount of energy that the system I receives from the
outside world in the infinitesimal time interval dt. Let us separate it in the following
d̂ I = d̂ eI Q + d̂ iI , (14.46)
where the contribution coming from third-party systems has been written in the form
of heat transfer because in this part of the interaction, no matter is exchanged while
the remaining part, d̂ iI is the energy coming from system II and has been denoted
with the suffix i which stands for “internal to the overall system.” As far as the dn γI
term is concerned, it will be separated according to the general criterion Eq. (14.1):
dn γI = d̂ e n γI + d̂ i n γI , (14.47)
but in this case the suffix i denotes the contribution due to chemical reactions in
system I, while the suffix e denotes the transfer of matter with system II.
Now if we insert Eqs. (14.46) and (14.47) into Eq. (14.45), we shall obtain four
terms. Let us analyze them one by one. The first will be written in the following form:

d̂ eI Q/T I , (14.48)
and it simply describes the injection of entropy in I coming from third-party systems.
The second,

d̂ iI /T I , (14.49)
describes the injection of entropy in system I because of the energy that is provided
by the system II. The third term,

- \sum_\gamma \frac{\mu_\gamma^{I}}{T^{I}}\, \hat{d}_e n_\gamma^{I} , \qquad (14.50)

describes the amount of entropy supplied to system I by system II due to the transfer of matter, and finally the last term,

- \sum_\gamma \frac{\mu_\gamma^{I}}{T^{I}}\, \hat{d}_i n_\gamma^{I} , \qquad (14.51)

describes the contribution of the chemical reactions that take place in system I.
Reworking the last term (14.51) as we have done in the previous section (see Eq. (14.40)), we may write

- \sum_\gamma \frac{\mu_\gamma^{I}}{T^{I}}\, \hat{d}_i n_\gamma^{I} = \sum_\rho \frac{A_\rho^{I}}{T^{I}}\, d\xi_\rho^{I} .
The total variation of entropy of system I will be

dS^{I} = \frac{\hat{d}_e^{I} Q}{T^{I}} + \frac{\hat{d}_i^{I}}{T^{I}} - \sum_\gamma \frac{\mu_\gamma^{I}}{T^{I}}\, \hat{d}_e n_\gamma^{I} + \sum_\rho \frac{A_\rho^{I}}{T^{I}}\, d\xi_\rho^{I} .
We will repeat the same analysis also for system II getting the same above expression but with the index II replacing I, and finally it is necessary to note that, given
the assumptions made at the beginning, we have the following relations:
d̂ iI = −d̂ iII ,
d̂ e n γI = −d̂ e n γII ,
and hence for the overall system we shall write the entropy change in the following form:

dS = \frac{\hat{d}_e^{I} Q}{T^{I}} + \frac{\hat{d}_e^{II} Q}{T^{II}} + \left( \frac{1}{T^{I}} - \frac{1}{T^{II}} \right) \hat{d}_i^{I} - \sum_\gamma \left( \frac{\mu_\gamma^{I}}{T^{I}} - \frac{\mu_\gamma^{II}}{T^{II}} \right) \hat{d}_e n_\gamma^{I} + \sum_\rho \frac{A_\rho^{I}}{T^{I}}\, d\xi_\rho^{I} + \sum_\rho \frac{A_\rho^{II}}{T^{II}}\, d\xi_\rho^{II} .
From this expression, it is immediate to separate the two contributions: that due to
the interaction with the outside d̂ e S and that due to internal processes d̂ i S and obtain,
\hat{d}_e S = \frac{\hat{d}_e^{I} Q}{T^{I}} + \frac{\hat{d}_e^{II} Q}{T^{II}} ,

\hat{d}_i S = \left( \frac{1}{T^{I}} - \frac{1}{T^{II}} \right) \hat{d}_i^{I} - \sum_\gamma \left( \frac{\mu_\gamma^{I}}{T^{I}} - \frac{\mu_\gamma^{II}}{T^{II}} \right) \hat{d}_e n_\gamma^{I} + \sum_\rho \frac{A_\rho^{I}}{T^{I}}\, d\xi_\rho^{I} + \sum_\rho \frac{A_\rho^{II}}{T^{II}}\, d\xi_\rho^{II} ,
and, finally, dividing by the infinitesimal time interval dt we obtain the expression for the entropy production:

P = \left( \frac{1}{T^{I}} - \frac{1}{T^{II}} \right) \frac{\hat{d}_i^{I}}{dt} - \sum_\gamma \left( \frac{\mu_\gamma^{I}}{T^{I}} - \frac{\mu_\gamma^{II}}{T^{II}} \right) \frac{\hat{d}_e n_\gamma^{I}}{dt} + \sum_\rho \frac{A_\rho^{I}}{T^{I}}\, v_\rho^{I} + \sum_\rho \frac{A_\rho^{II}}{T^{II}}\, v_\rho^{II} > 0 . \qquad (14.59)
This expression for the entropy production is formed by the sum of several terms.
The latter two describe the sets of chemical reactions that are active in system I and in
system II. We must remember that the coupling as shown in Eq. (14.44) or even a mere
interference among different reactions can occur within each system separately but
not between reactions that take place in distinct regions. More generally, the second
principle must be satisfied point by point and not only for the overall system. The
other two terms that form the second member should be analyzed carefully.
They are all formed by the product of a term that describes the speed and the
direction of an irreversible process times a function of the state. The latter certifies
a situation of nonequilibrium, between the two internal parts of the overall system,
with respect to the transfer of some extensive quantities, while the former expresses
precisely the amount of extensive quantity transferred per second.
In the first term, the dynamic variable represents the transfer of energy and if we
choose to consider as positive the fluxes of energy from I to II, we shall define the
flux as

J_U = - \frac{\hat{d}_i^{I}}{dt} . \qquad (14.60)

It follows that, in order to reproduce the term which appears in Eq. (14.59), we have to define the associated force as

X_U = \frac{1}{T^{II}} - \frac{1}{T^{I}} .
Similarly, regarding the second term, it is formed by the sum, for each component,
of products of terms of type d̂ e n γI /dt, times a function of the state given by the
difference, between the two phases, of the ratios of the chemical potential over
temperature. If we choose as positive the fluxes going from system I to system II,
we define as flux for each component γ the quantity:
J_\gamma = - \frac{\hat{d}_e n_\gamma^{I}}{dt} ,
and the significance of these fluxes is obvious: each of them describes the amount
(measured in number of moles) of component γ transferred per second from zone I
to zone II. We shall call them flows of matter. Each of them, in the expression of P,
is multiplied by the corresponding intensive term (generalized force):
X γ = −
μ γ
Let us note that, in the case in which there is only one flow of matter (say, relative to the component γ), all the other fluxes being null, the state variables must be combined in such a way that

  T^I = T^II ,   (14.64)
  μ_ρ^I − μ_ρ^II = 0 ,   ρ ≠ γ .   (14.65)

Then, the entropy production reduces to

  P = − ( ( μ_γ^II − μ_γ^I ) / T ) J_γ = − ( Δμ_γ / T ) J_γ .   (14.66)

Therefore, the flux of γ is multiplied by a term which depends essentially on the difference of the chemical potentials. As we have seen in Chap. 7, this is exactly the quantity which governs the phase transition of component γ; hence − Δμ_γ / T
deserves the name of generalized force. The role of force emerges clearly when we
deal with one single process, while in the case of several concurrent processes mutual
interference makes the cause–effect relationship less evident.
In conclusion, let us rewrite the entropy production in the following form:
  P = J_U X_U + Σ_γ J_γ X_γ + v^I X_ch^I + v^II X_ch^II > 0 .   (14.67)
In this expression, the suffix “ch” denotes the forces relative to the chemical reactions
which have been summarized, for brevity, with two compact expressions that refer
briefly to the processes which take place separately in phases I and II. The flux JU
denotes the flux of energy between the two systems and X U the associated force.
14.5 Electrochemical Reactions
We can generalize the relations obtained in the previous section to processes which
take place in the presence of external fields.
We consider the case of an electrolytic cell formed by two containers, each at uniform temperature, homogeneous, and maintained at uniform electrostatic potential
ψ I and ψ II . Each of the two cells is an open system with respect to the other but the
set of the two (the overall system) is closed.
For simplicity, we assume that the temperatures have the same value in the two
parts. In each part, or phase owing to their homogeneity, there are various components, some electrically neutral some bearing positive or negative charges as, for
instance, in the case of ionic solutions, and let us denote with z γ the electrovalency
of component γ . The electrovalency of a component is an integer number which
may be positive, negative, or null and represents its degree of ionization. For
instance, in a solution containing sulfuric acid, we shall have hydrogen ions having
electrovalency z H+ = +1, (SO4 ) ions with electrovalency z SO4 = −2 and (OH) ions
with electrovalency z OH = −1 together with other possible components with z = 0.
We can describe the processes with the same formalism of phase transitions or
that of chemical reactions. If in the solution there are r components, we can treat
them as r chemical reactions of the following type:
  γ^I → γ^II .   (14.68)

The dynamics of each of these processes will be described by the respective degree of advancement ξ_γ:

  − dn_γ^I = dn_γ^II = dξ_γ ,   (14.69)

and hence by the respective rates:

  v_γ = dξ_γ / dt .   (14.70)
Suppose that the two phases are separated by a membrane so as to allow the maintenance of different pressures and assume that for each phase we can write the
fundamental equation in the form of Eq. (14.24), to which it is sufficient to add the term
relative to the electrostatic work that the observer has to do, in an infinitesimal process, to keep the two phases at constant potentials ψ I and ψ II , respectively. This
is equivalent to assuming that the entropy does not depend explicitly on the value of
the electrostatic field and then neglect the effects due to electrostatic polarization of
the material (see Sect. 10.4.1). If we write directly the fundamental equation for the
overall system, we obtain

  dS = ( d̂^I Q + d̂^II Q ) / T + ( ( ψ^I − ψ^II ) / T ) I dt − (1/T) Σ_γ ( μ_γ^I dn_γ^I + μ_γ^II dn_γ^II ) ,   (14.71)
where I is the intensity of the electric current observed at the instant t and this is
related to the rates defined in Eq. (14.70), by the relation,
  I = Σ_γ z_γ F v_γ ,   (14.72)

F being the Faraday, i.e., the electric charge of one mole of ions bearing a unit electrovalency. If we denote with e the charge of the proton, e = 1.6 × 10^−19 coulomb, and with N_A = 6.023 × 10^23 mol^−1 the Avogadro number, we have

  F = e N_A ≃ 0.96485 × 10^5 coulomb .   (14.73)
After separating in Eq. (14.71) the terms that describe the interaction with the outside
from those that are due to internal processes, we get
  d̂_i S = (1/T) Σ_γ ( ψ^I − ψ^II ) z_γ F dξ_γ + (1/T) Σ_γ ( μ_γ^I − μ_γ^II ) dξ_γ ,   (14.74)

where we used Eq. (14.69). For the entropy production, we have

  P = (1/T) Σ_γ [ ( ψ^I − ψ^II ) z_γ F + ( μ_γ^I − μ_γ^II ) ] v_γ .   (14.75)
This relation can be written in the following form:

  P = (1/T) Σ_γ Ã_γ v_γ ,   (14.76)

in which we set

  Ã_γ = ( μ_γ^I + z_γ F ψ^I ) − ( μ_γ^II + z_γ F ψ^II )   (14.77)
      = μ̃_γ^I − μ̃_γ^II ,   (14.78)

with

  μ̃_γ = μ_γ + z_γ F ψ .   (14.79)
We point out that the expression Eq. (14.79) shows the modification to the expression of the chemical potential due to the presence of the external electrostatic field
and Eq. (14.77) represents the generalization, in our case, of the definition of affinity.
14.5 Electrochemical Reactions
They are called, respectively, electrochemical potential and electrochemical affinities. The equilibrium condition becomes

  Ã_γ = 0 ,   (14.80)

which becomes

  μ_γ^I − μ_γ^II = − z_γ F ( ψ^I − ψ^II ) .   (14.81)
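Equations (14.79)–(14.81) can be checked with a short numerical sketch; all numerical values below (electrovalency, chemical potentials) are arbitrary illustrative choices, not data from the text:

```python
# Electrochemical equilibrium for one ionic component, Eqs. (14.79)-(14.81):
# the electrochemical potentials mu~ = mu + z*F*psi of the two phases are
# equal when mu_I - mu_II = -z*F*(psi_I - psi_II). Values are illustrative.
F = 96485.0                  # the Faraday, C/mol
z = 2                        # electrovalency of the ion
mu_I, mu_II = -5000.0, 0.0   # chemical potentials, J/mol

# potential difference that establishes equilibrium, from Eq. (14.81)
dpsi = -(mu_I - mu_II) / (z * F)     # psi_I - psi_II, in volts

psi_II = 0.0
psi_I = psi_II + dpsi
mu_t_I = mu_I + z * F * psi_I        # electrochemical potential, phase I
mu_t_II = mu_II + z * F * psi_II     # electrochemical potential, phase II

# A~ = mu~_I - mu~_II = 0, Eq. (14.80)
assert abs(mu_t_I - mu_t_II) < 1e-9
```

The compensating potential difference plays here the role that the chemical-potential difference plays in an ordinary phase transfer.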
14.6 Generalized Fluxes and Forces
In the previous subsections, we have computed the entropy production in different
situations but always in the context of the approximation of discontinuous systems.
We have seen that, in all generality, the entropy production appears to be given by
the sum of n terms:

  P = J_1 X_1 + J_2 X_2 + · · · + J_n X_n ,   (14.82)
each of which is constituted by the product of a term that we called generalized
flow or generalized flux, Jρ , times the associated generalized force X ρ . The fluxes
describe the rate at which extensive quantities are exchanged, while the generalized
forces determine, all together, the onset of the irreversible processes described by
the generalized fluxes. The cause–effect relationship between these two classes of
physical quantities requires a more in-depth analysis of their functional dependence.
14.6.1 Determination of Generalized Fluxes and Forces
A very important question in order to understand more deeply the nature of generalized flows and in view of some concrete applications is the following: given
a certain configuration of nonequilibrium thermodynamics, are the fluxes uniquely
determined or is there a substantial indetermination that leaves a large discretion
in the identification of the processes? And in the latter case, what is the physical
quantity objectively determined by the given physical situation?
The examination of two paradigmatic examples will clarify how things stand.
The Case of Chemical Reactions
Let us consider, as the first example, the case of a closed system in which two chemical
reactions are active. For instance, let us consider the combustion of carbon in which
both carbon monoxide and carbon dioxide are produced while carbon and oxygen
are consumed. The processes can be described by the two chemical reactions:
  2C + O_2 → 2CO ,   (14.83)
  C + O_2 → CO_2 .   (14.84)
In order to write the expression of the entropy production, we need to determine
the two reaction rates and the two affinities. If we denote by v_1 and v_2 the rates of reactions (14.83) and (14.84), respectively, and by A_1 and A_2 their affinities, we have

  dn_C / dt = − 2 v_1 − v_2 ,   (14.85)
  dn_O2 / dt = − v_1 − v_2 ,   (14.86)
  dn_CO / dt = 2 v_1 ,   (14.87)
  dn_CO2 / dt = v_2 ,   (14.88)

  A_1 = 2 μ_C + μ_O2 − 2 μ_CO ,   (14.89)
  A_2 = μ_C + μ_O2 − μ_CO2 ,   (14.90)

and the entropy production is

  T P = A_1 v_1 + A_2 v_2 .   (14.91)
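The bookkeeping of Eqs. (14.85)–(14.88) is a linear map from the reaction rates to the mole-number derivatives, which can be sketched as follows (the rate values are arbitrary illustrative numbers):

```python
# Stoichiometric bookkeeping for the two reactions
#   (1) 2C + O2 -> 2CO,   (2) C + O2 -> CO2.
# Columns of N are reactions, rows are species (C, O2, CO, CO2);
# dn/dt = N @ v reproduces Eqs. (14.85)-(14.88).
import numpy as np

N = np.array([[-2.0, -1.0],   # C
              [-1.0, -1.0],   # O2
              [ 2.0,  0.0],   # CO
              [ 0.0,  1.0]])  # CO2

v = np.array([0.3, 0.1])      # arbitrary rates v1, v2 (mol/s)
dn_dt = N @ v

# dn_C/dt = -2*v1 - v2, dn_O2/dt = -v1 - v2, dn_CO/dt = 2*v1, dn_CO2/dt = v2
assert np.allclose(dn_dt, [-0.7, -0.4, 0.6, 0.1])
```

Writing the map as a matrix makes the later change of reaction basis transparent: an equivalent description is simply another factorization of the same dn/dt.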
It is important to recall that the thermodynamic point of view has to provide a full
description of the evolution of the macroscopic state of the system and this, in our
example, means to give a correct description of the evolution of the macroscopic
state variables n C , n O2 , n CO and n CO2 .
For this purpose, we could adopt a different pair of chemical reactions like, for instance,

  2C + O_2 → 2CO ,   (14.92)
  2CO + O_2 → 2CO_2 .   (14.93)
With this new description, we shall have new rates, new affinities, and a new entropy production. If we denote with A′_1 and A′_2 the affinities of reactions (14.92) and (14.93), respectively, it is easy to demonstrate that they are related to the affinities A_1 and A_2 relative to the first description by

  A′_1 = A_1 ,   (14.94)
  A′_2 = 2 A_2 − A_1 .   (14.95)
As far as the new rates are concerned, their relations with the previous rates can easily be found by requiring that the new ones give the same time derivatives of the mole numbers dn_γ/dt. The latter quantities are the observables of the problem and are, therefore, objective; then it is easy to show that the new rates can be expressed as functions of the old rates by

  v′_1 = v_1 + (1/2) v_2 ,   (14.96)
  v′_2 = (1/2) v_2 .   (14.97)
As for the entropy production P′, in the new description, its formal expression is

  T P′ = A′_1 v′_1 + A′_2 v′_2 .   (14.98)

If we use Eqs. (14.94)–(14.97), we obtain

  P′ = P .   (14.99)
The essential point in this discussion is the following: the "objective" facts, that is, what the observer really observes, are the changes in the composition in the time interval dt, namely dn_C, dn_O2, dn_CO, and dn_CO2; the observer does not observe which specific chemical reactions are taking place. The latter are just a descriptive instrument
adopted by the observer in order to account for the observations. The two sets of
chemical reactions adopted in the example constitute two different descriptions of
the same physical situation and are called, therefore, equivalent systems. In conclusion, the observer has a certain arbitrariness in assuming which processes (in
our example which chemical reactions) are taking place as long as the different
descriptions are equivalent which means that they provide the same description of
the variations of the state variables. The latter requirement provides Eqs. (14.96)
and (14.97), i.e., the relations between the generalized fluxes in the two descriptions.
As a consequence we proved in Eq. (14.99) that the entropy production is invariant
with respect to the change of the adopted description. In this sense, we may affirm
that entropy production is “objective.”
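The invariance just proved can be verified numerically. In the following sketch the chemical potentials and rates are arbitrary illustrative values; the transformation rules are those of Eqs. (14.94)–(14.97):

```python
# Two equivalent descriptions of carbon combustion: affinities and rates
# transform as in Eqs. (14.94)-(14.97), and the entropy production
# T*P = A1*v1 + A2*v2 is invariant. Chemical potentials (J/mol) and
# rates (mol/s) are arbitrary illustrative numbers.
import numpy as np

mu = {"C": -10.0, "O2": -50.0, "CO": -120.0, "CO2": -300.0}
v1, v2 = 0.4, 0.2

A1 = 2*mu["C"] + mu["O2"] - 2*mu["CO"]        # Eq. (14.89)
A2 = mu["C"] + mu["O2"] - mu["CO2"]           # Eq. (14.90)

# Second description: 2C + O2 -> 2CO and 2CO + O2 -> 2CO2
A1p, A2p = A1, 2*A2 - A1                      # Eqs. (14.94), (14.95)
v1p, v2p = v1 + v2/2, v2/2                    # Eqs. (14.96), (14.97)

# Direct check of A2' from its own stoichiometry:
assert np.isclose(A2p, 2*mu["CO"] + mu["O2"] - 2*mu["CO2"])
# Invariance of the entropy production, Eq. (14.99):
assert np.isclose(A1*v1 + A2*v2, A1p*v1p + A2p*v2p)
```

Any rate assignment reproducing the same dn_γ/dt would pass the same check, which is exactly the sense in which only P, not the individual reactions, is objective.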
In the second example, we will make a different choice of flows and will use the
required invariance of the entropy production to find, quickly, the expression of the
new forces as a function of the forces of the first choice.
The Case of Open Systems
Let us return to consider the situation which led to the calculation of the entropy
production Eq. (14.59) and, for simplicity of writing we put to zero the terms
related to chemical reactions. One possible choice of flows and forces is illustrated
by Eq. (14.60) with the related Eq. (14.61) as the relative force and the fluxes shown
in Eq. (14.62) with their relative forces Eq. (14.63). The value of the latter depends
on both the temperature difference between the two phases and the difference of the
14 Irreversible Processes: Fundamentals
chemical potential for each component. It may happen, however, that it is preferable
to think in terms of temperature and pressure differences separately which might be
the state variables that the experimenter easily controls. If we assume that between the two phases the temperature difference is small, ΔT ≪ T^I, T^II, and similarly for the pressures, Δp ≪ p^I, p^II, we can approximate Eq. (14.63) to the linear terms:

  X_γ = − Δ( μ_γ / T ) ≃ − ( ∂/∂T )( μ_γ / T ) ΔT − (1/T) ( ∂μ_γ/∂p )_T Δp
      = ( ( μ_γ + T S_mγ ) / T^2 ) ΔT − ( V_mγ / T ) Δp = ( H_mγ / T^2 ) ΔT − ( V_mγ / T ) Δp ,
in which we used the relation Hmγ = μγ + T Smγ for the molar enthalpy of the
γ component with Smγ and Vmγ being its molar entropy and the molar volume,
respectively. While the force Eq. (14.61) depends only on the temperature difference,
the one relative to the flow of matter depends on both ΔT and the pressure difference. Let us look for a different description, in terms of a different choice of flows and related forces, such that each force depends only on ΔT or only on Δp, so that their effects are clearly disentangled.
Consider the following choice of generalized flows:

  J′_th = J_U − Σ_γ H_mγ J_γ ,   (14.104)
  J′_γ = J_γ .   (14.105)
We now want to find the expression of the related generalized forces. To do this,
we impose that the equality

  J_U X_U + Σ_γ J_γ X_γ = J′_th X′_th + Σ_γ J′_γ X′_γ   (14.106)

is an identity, that is, it must be verified for any value of the flows.
If we consider the case in which all fluxes J_γ are zero, from Eq. (14.104) we obtain J′_th = J_U and from Eq. (14.106) we get

  X′_th = X_U .   (14.107)

Similarly, if we put to zero all the fluxes but one, J_γ, we obtain J′_th = − H_mγ J_γ and, in these cases, Eq. (14.106) will give

  J_γ X_γ = − H_mγ J_γ X′_th + J_γ X′_γ   (14.108)

and hence

  X′_γ = X_γ + H_mγ X_U .   (14.109)
In conclusion, by dropping the prime from Eqs. (14.104) to (14.109), the new fluxes and the corresponding associated forces are written as

  J_th = J_U − Σ_γ H_mγ J_γ ,   (14.110)
  X_th = − ΔT / T^2 ,   (14.111)
  J_γ = − d̂_e n_γ^I / dt ,   (14.112)
  X_γ = − ( V_mγ / T ) Δp .   (14.113)
In this way, we reached the goal of separating the effects of the temperature and pressure differences, but it is useful to observe that the new flow J_th, while maintaining the dimensions of a flow of energy, no longer represents the energy transferred per unit time. We will resume this issue later, when we deal with the quantities of transport.
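The identity (14.106), together with the transformed forces, can be checked numerically; all numbers below are arbitrary illustrative values in SI-like units:

```python
# Check that the two descriptions give the same entropy production:
# J_U*X_U + sum J_g*X_g = J_th*X_th + sum J_g*X'_g, with
# J_th = J_U - sum(H_mg*J_g) and X'_g = X_g + H_mg*X_U (Eqs. 14.104-14.113).
# All numerical values are arbitrary illustrative choices.
import numpy as np

T, dT, dp = 300.0, 2.0, 50.0          # mean T and small differences
Hm = np.array([2.0e4, 3.5e4])         # molar enthalpies, J/mol
Vm = np.array([1.8e-5, 2.5e-5])       # molar volumes, m^3/mol

X_U = -dT / T**2                      # Eq. (14.61), linearized
X_g = Hm * dT / T**2 - Vm * dp / T    # Eq. (14.63), linearized

J_U = 0.7                             # energy flux (illustrative)
J_g = np.array([0.01, 0.02])          # matter fluxes, mol/s

J_th = J_U - Hm @ J_g                 # Eq. (14.110)
X_th = X_U                            # Eq. (14.111)
X_gp = X_g + Hm * X_U                 # Eq. (14.109)

# The new matter forces depend on dp only, Eq. (14.113):
assert np.allclose(X_gp, -Vm * dp / T)
P_old = J_U * X_U + J_g @ X_g
P_new = J_th * X_th + J_g @ X_gp
assert np.isclose(P_old, P_new)       # Eq. (14.106) holds identically
```

The equality holds for any choice of the fluxes, since the transformation of the forces was constructed precisely to make (14.106) an identity.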
In conclusion, we have seen that the choice of flows and their respective forces is not unique but depends on what the observer believes more convenient, and this, in turn, depends on the specific problem and on the quantities we want to measure. Of course, also the names of the irreversible processes taking place in a given nonequilibrium situation are not objective but the result of a particular choice.
14.7 Onsager Relations
We have seen that, in all generality, the entropy production in a nonequilibrium system is given by the sum of the products of the flows times the relative generalized forces:

  P = Σ_ρ J_ρ X_ρ .   (14.114)

The generalized forces depend on the difference of the state parameters between the two phases, and the equilibrium situation is guaranteed by the condition that for all the forces we have

  X_ρ = 0 .   (14.115)

In such a case, for all the fluxes, it will be

  J_ρ = 0 .   (14.116)

We shall assume that the values of the flows in a given nonequilibrium configuration depend on the values, in that instant, of all the generalized forces,^3 and, letting n be the number of fluxes and forces, we write

  J_ρ = J_ρ ( X_1, X_2, . . . , X_n )   (14.117)

and we have J_ρ(0, 0, . . . , 0) = 0 for every ρ. Hence, for small values of X_ρ, we can linearize the previous relation:

  J_ρ ( X_1, X_2, . . . , X_n ) = J_ρ ( 0, 0, . . . , 0 ) + Σ_{ρ′=1}^{n} ( ∂J_ρ / ∂X_ρ′ )_eq X_ρ′ ,   (14.118)

that is,

  J_ρ = Σ_{ρ′=1}^{n} L_ρρ′ X_ρ′ .   (14.119)
The coefficients L_ρρ′ are called linear phenomenological coefficients and, as is clear from their definition, their value depends on the equilibrium state close to which we have altered the equilibrium conditions.
The coefficients L_ρρ′ form an n × n matrix, called the linear phenomenological matrix, in which the diagonal elements describe the proportionality of a generalized flux to its own generalized force, while the nondiagonal elements describe the possible interference between different irreversible processes. Consider, as an example, two chemical reactions simultaneously present, for which

  P = ( A_1 / T ) v_1 + ( A_2 / T ) v_2 .   (14.120)

Let us write the linear relations between flows and forces near equilibrium:

  v_1 = L_11 ( A_1 / T ) + L_12 ( A_2 / T ) ,   (14.121)
  v_2 = L_21 ( A_1 / T ) + L_22 ( A_2 / T ) .   (14.122)
The diagonal terms L_11 and L_22 describe how strongly each chemical reaction is driven by its own affinity, while the terms L_12 and L_21 describe, quantitatively,
the effect of each of the two forces on the evolution of the other reaction. These
nondiagonal terms describe the interference between the two reactions.
^3 Let us assume that the values of the fluxes do not depend on the preceding history of the system, as instead happens in those situations in which hysteresis is important.
In general, if we have two irreversible processes (i.e., the production of entropy
is given by the sum of two terms), we will have
  P = J_1 X_1 + J_2 X_2 .   (14.123)
The linear relations between fluxes and forces are
  J_1 = L_11 X_1 + L_12 X_2 ,   (14.124)
  J_2 = L_21 X_1 + L_22 X_2 ,   (14.125)
and the entropy production becomes
  P = L_11 X_1^2 + ( L_12 + L_21 ) X_1 X_2 + L_22 X_2^2 > 0 .   (14.126)
The second principle requires this quadratic form to be positive definite. This implies
that the diagonal coefficients are positive and also
  ( L_12 + L_21 )^2 < 4 L_11 L_22 .   (14.127)
The phenomenological matrix is the matrix of a positive definite quadratic form and
this imposes some conditions on the minors of the matrix. The most obvious is that
it must be L ρρ > 0 for every ρ.
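The positivity conditions can be verified numerically for an illustrative, non-symmetric phenomenological matrix (the coefficient values are arbitrary):

```python
# The quadratic form P = L11*X1^2 + (L12+L21)*X1*X2 + L22*X2^2 is positive
# definite iff L11 > 0, L22 > 0, and (L12+L21)^2 < 4*L11*L22; equivalently,
# the symmetric part of L must be positive definite. Illustrative numbers.
import numpy as np

L = np.array([[2.0, 0.8],
              [1.2, 1.5]])           # phenomenological matrix (L12 != L21)

sym = 0.5 * (L + L.T)                # only the symmetric part enters P
assert L[0, 0] > 0 and L[1, 1] > 0
assert (L[0, 1] + L[1, 0])**2 < 4 * L[0, 0] * L[1, 1]
assert np.all(np.linalg.eigvalsh(sym) > 0)

# P > 0 for any non-zero force vector:
rng = np.random.default_rng(0)
for _ in range(100):
    X = rng.normal(size=2)
    assert X @ L @ X > 0
```

Note that only the symmetric part L + L^T enters the entropy production, which is why the second-principle constraint involves L_12 + L_21 rather than each coefficient separately.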
The next step, the most important one, is to formulate the symmetry properties of the phenomenological matrix. This is expressed by the condition

  L_ρρ′ = L_ρ′ρ .   (14.128)

This condition, also named the reciprocity condition, states, in practice, that if the phenomenon (i.e., the flow) J_ρ is influenced by the presence of the force X_ρ′ relative to the phenomenon J_ρ′, then the ρ′-th phenomenon is likewise influenced by the force X_ρ relative to the ρ-th phenomenon, and the coefficient which measures the strength of this mutual interference takes the same value.
These symmetry relations provided by Eq. (14.128) are also called Onsager reciprocity (symmetry) relations in honor of Lars Onsager, Nobel Prize in Chemistry in 1968. Some authors refer to them as the Onsager theorem (see for instance [21])
because of the considerable theoretical work which, primarily due to the contribution
of L. Onsager, has been developed to demonstrate these symmetry relations in the
study on the decay of fluctuations of a system at equilibrium. The fact that the same
reciprocal relations are applied among the irreversible processes in thermodynamic
nonequilibrium configurations represents a generalization of Onsager theorem and
constitutes a sort of fourth postulate.
It is precisely through the application of these reciprocity conditions that the thermodynamic study of irreversible processes, in the linear region, provides qualitative and quantitative results of fundamental importance, not achievable otherwise.
14.7.1 The Curie Symmetry Principle
In general, if the entropy production is formed by n terms, and hence if we have n irreversible processes, the Onsager theory requires us to consider each flow as coupled, through the phenomenological coefficients L_ρρ′, to all the n generalized forces.
Let us examine a particular example in which the entropy production is formed by
only two terms, one that describes the dynamics of a chemical reaction (A/T ) v and
another describing a flow of heat in the presence of a temperature gradient. As we
will see in detail in the chapter devoted to the study of thermodynamics in continuous
systems (see Chap. 16), the contribution of the latter process to the entropy production
can be described by a “vectorial” flux in the sense that for a complete determination
of the flux we have to specify also the direction along which heat flows. Obviously,
also its generalized force will be of “vectorial” nature, in the same sense, and, as we
shall demonstrate in Eqs. (16.67) and (16.68), it will depend on the gradient of 1/T .
As far as the chemical reaction is concerned, let us remember that the rate v is an extensive quantity and then, for a small volume δV, it will be written as v ≃ j_ch δV (this will be shown more clearly in Chap. 16). This relation defines j_ch, which is the rate per unit volume of the chemical reaction, point by point. Suppose that at a certain point the temperature gradient is parallel to the x-axis; then, for an isotropic material, the production of entropy per unit volume will be

  ( A / T ) j_ch + J_x ( ∂/∂x )( 1/T ) .   (14.129)
The corresponding linear relationship between fluxes and forces will be of the type [21]:

  j_ch = L_11 ( A / T ) + L_12 ( ∂/∂x )( 1/T ) ,   (14.130)
  J_x = L_21 ( A / T ) + L_22 ( ∂/∂x )( 1/T ) .   (14.131)

If we consider the particular case of a system at uniform temperature, the heat flux reduces to

  J_x = L_21 ( A / T ) .   (14.132)
This result forces us to require that

  L_21 = 0 ,   (14.133)

because there is no reason that a "scalar cause" such as the affinity of a chemical reaction produces a vector effect such as the heat flow in a particular direction (the x-axis in our example).
This particular result can be generalized by requiring that the nondiagonal coefficients, which describe the coupling between different irreversible phenomena, may
be non-zero only when they couple phenomena with the same degree of symmetry,
that is, scalar forces with scalar flows, vector forces with vector flows, and tensorial
forces with tensorial flows. This is the so-called Curie symmetry principle.
As a consequence, in the case of the entropy production shown by Eq. (14.67)
discussed in Sect. 14.4, we shall, in complete generality, write the linear relationships
between flows and forces by coupling only the flows of energy and matter, while the
chemical reactions will only be coupled to each other (and in this example separately
in each phase).
14.8 The Approximation of Linearity
Here, we want to discuss, briefly, the condition of linearity between fluxes and forces,
which is the basis of the reciprocity relations. The question of how small a force
should be in order to be allowed to linearize the flux–force dependence cannot be
answered in general but must be examined case by case.
We will examine here the case of chemical reactions: we will see that, in the very
frequent cases where it is possible to have a suitable approximation for the chemical potentials of the components, we can easily calculate the affinity on one side and the rate on the other, express the latter as a function of the affinity, and then see under
what assumptions this relationship can be linearized.
14.8.1 Chemical Affinity
Consider as an example the general chemical reaction:
  |ν_M| M + |ν_N| N → ν_R R + ν_S S ,   (14.134)
where M, N, R, and S are chemical components and the relative stoichiometric coefficients are written with the convention on the signs already adopted in Sect. 14.3. Let
us suppose that the chemical potentials μγ may be written, with good approximation,
in the form already adopted for ideal gases or for ideal solutions in Eq. (6.52) , and
in which their dependence on molar concentration Cγ is evidenced:
14 Irreversible Processes: Fundamentals
  μ_γ = η_γ( p, T ) + R T ln C_γ .   (14.135)

The expression for the affinity A = − Σ_γ ν_γ μ_γ can be written in the form

  A = − Σ_γ ν_γ η_γ( p, T ) − R T Σ_γ ν_γ ln C_γ .   (14.136)

Let us define the function K = K( p, T ) so that

  R T ln K( p, T ) = − Σ_γ ν_γ η_γ( p, T ) .   (14.137)

The quantity K( p, T ) is called the equilibrium constant of the chemical reaction at given pressure and temperature (the justification will be given in Eq. (14.139)), and the affinity will be written in the useful form:

  A = R T ln [ K( p, T ) / Π_γ C_γ^{ν_γ} ] .   (14.138)
From this expression, it appears that at given pressure and temperature the chemical reaction will be in a state of equilibrium if (and only if) the concentrations of the components satisfy the relation

  Π_γ C_γ^{ν_γ} = K( p, T ) ,   (14.139)

and this relation justifies the name given to K( p, T ). It is a constant with respect to different choices of the concentrations, but its value depends on p and T. The expression in Eq. (14.139) is sometimes called the law of mass action.
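Equations (14.138) and (14.139) translate directly into a short computation; the equilibrium constant, stoichiometric numbers, and concentrations below are arbitrary illustrative values:

```python
# Affinity from Eq. (14.138), A = R*T*ln(K / prod C^nu), with the sign
# convention of Sect. 14.3 (negative stoichiometric numbers for reactants).
# K and the concentrations are arbitrary illustrative values.
import math

R = 8.314            # J/(mol K)
T = 298.0
K = 50.0             # equilibrium constant at this (p, T)
nu = {"M": -1, "N": -2, "R": 1, "S": 1}

def affinity(C):
    quotient = math.prod(C[g] ** nu[g] for g in nu)
    return R * T * math.log(K / quotient)

# Out of equilibrium: reaction quotient below K gives a positive affinity
C = {"M": 1.0, "N": 1.0, "R": 0.1, "S": 0.1}
assert affinity(C) > 0

# At equilibrium the law of mass action (14.139) holds and A = 0:
C_eq = {"M": 1.0, "N": 1.0, "R": 5.0, "S": 10.0}
quotient = math.prod(C_eq[g] ** nu[g] for g in nu)
assert math.isclose(quotient, K)
assert abs(affinity(C_eq)) < 1e-9
```

The affinity is positive when the products are in defect with respect to equilibrium, driving the reaction to the right, and vanishes exactly when the mass-action relation is satisfied.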
14.8.2 Reaction Rate
Let us refer to the molecular theory of matter. From the microscopic point of view,
we may consider the elementary events taking place when two, or more, molecules
interact. We are interested in those events in which the outcome of the interaction
consists in the formation of those new bound states as described in the reaction
formula. The probability of this outcome will depend primarily on the energy of
the reacting molecules and on the geometry of the collision. If we integrate over
geometrical factors and over the energies, we obtain a sort of average probability for
each final outcome, whose value will depend on pressure and on temperature.
It will be useful to define two partial rates: the "forward rate" denoted by v_+ and the "backward rate" denoted by v_−. They express, respectively, the number of forward
events and of backward events taking place in the whole system per second. It is
clear that their value will be proportional to the concentrations in the form
  v_+ ∝ C_M^{|ν_M|} C_N^{|ν_N|} ,   (14.140)
  v_− ∝ C_R^{ν_R} C_S^{ν_S} ,   (14.141)

while the proportionality factors depend on a sort of average cross section for forward/backward events and will depend on temperature and pressure. These factors are called forward and backward kinetic constants and are denoted, respectively, with k^+( p, T ) and k^−( p, T ). Then we may write

  v_+ = C_M^{|ν_M|} C_N^{|ν_N|} k^+( p, T ) ,   (14.142)
  v_− = C_R^{ν_R} C_S^{ν_S} k^−( p, T ) .   (14.143)
The reaction rate, by definition, will be

  v = v_+ − v_− .   (14.144)

Hence, by substituting Eqs. (14.142) and (14.143) in Eq. (14.144), we have^4

  v = C_M^{|ν_M|} C_N^{|ν_N|} k^+( p, T ) − C_R^{ν_R} C_S^{ν_S} k^−( p, T )   (14.145)

and factorizing v_+ we obtain

  v = v_+ [ 1 − ( C_R^{ν_R} C_S^{ν_S} / ( C_M^{|ν_M|} C_N^{|ν_N|} ) ) ( k^− / k^+ ) ] = v_+ [ 1 − Π_γ C_γ^{ν_γ} ( k^− / k^+ ) ] .   (14.146)
In order to obtain the desired relation between the reaction rate and the affinity, it is
necessary to express the ratio k − /k + as a function of the macroscopic parameters.
Let us consider one possible set of concentrations, that we denote by C*_γ, which give an equilibrium configuration for the chemical reaction (see the law of mass action). By inserting this set of concentrations into the expression of the affinity and in that of the rate, we obtain two relations: one which expresses that the affinity is zero and the other expressing that the rate is zero. By considering Eq. (14.138), the former condition leads to

  Π_γ ( C*_γ )^{ν_γ} = K( p, T ) ,   (14.147)

while the condition of zero rate leads to (see Eq. (14.146))

  Π_γ ( C*_γ )^{ν_γ} ( k^− / k^+ ) = 1 .   (14.148)

From these two relations, we easily obtain the following general result:

  k^− / k^+ = 1 / K( p, T ) .   (14.149)

Finally, from Eqs. (14.149) and (14.138), we obtain the desired relation between rate and affinity:

  v = v_+ [ 1 − exp( − A / ( R T ) ) ] .   (14.150)

^4 If a chemical reaction is in equilibrium, this does not mean that "nothing happens" but it means that the number of forward events per second and the number of backward events per second are equal.
14.8.3 Linear Relations Between Rates and Affinities
We can now, with some awareness, tell under what conditions we can speak of linear
relationship between flow (rate) and force (affinity/temperature).
With reference to Eq. (14.150), we can approximate the function

  1 − exp( −x ) ≃ x   (14.151)

for small x, which means

  A / ( R T ) ≪ 1 .   (14.152)

The linear relation becomes

  v = ( v_+ / ( R T ) ) A   (14.153)

and the phenomenological coefficient L_ch for the chemical reaction is

  L_ch = v_+ / R .   (14.154)
In the opposite situation, namely, for

  A / ( R T ) ≫ 1 ,   (14.155)

we have v ≈ v_+, i.e., the reaction rate is independent of the affinity and is equal to the "forward rate." If we go back to Eq. (14.138), we see that at a given temperature, that is, for a given value of the equilibrium constant, the affinity assumes very large values (with respect to RT) when

  Π_γ C_γ^{ν_γ} → 0 ,   (14.156)

and this happens if the concentration of at least one component with positive stoichiometric number (and then on the right-hand side of the chemical reaction) tends to zero. This means that the reaction is at the "beginning," meaning by this that all or some of the reaction products are absent. In this case, the number of "backward events" per second is practically zero.
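The two regimes of Eq. (14.150) can be sketched numerically (the forward rate is an arbitrary illustrative value):

```python
# Rate-affinity relation, Eq. (14.150): v = v_plus*(1 - exp(-A/(R*T))).
# For A/(R*T) << 1 this linearizes to v = L_ch*(A/T) with L_ch = v_plus/R,
# Eq. (14.154); for A/(R*T) >> 1, v saturates at v_plus. Illustrative values.
import math

R, T = 8.314, 300.0
v_plus = 2.0                       # forward rate, mol/s (illustrative)
L_ch = v_plus / R                  # phenomenological coefficient

def v_exact(A):
    return v_plus * (1.0 - math.exp(-A / (R * T)))

def v_linear(A):
    return L_ch * A / T

A_small = 0.01 * R * T             # A/(R*T) = 0.01: linear regime
assert math.isclose(v_exact(A_small), v_linear(A_small), rel_tol=1e-2)

A_large = 20.0 * R * T             # A/(R*T) = 20: saturation at v_plus
assert math.isclose(v_exact(A_large), v_plus, rel_tol=1e-8)
```

At A/(RT) = 0.01 the linear formula overestimates the exact rate by about half a percent, while at A/(RT) = 20 the backward events have effectively disappeared.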
Chain of Reactions
It is interesting, at this point, to consider the case of consecutive chemical reactions
meaning, with this expression, a situation that can, schematically, be described by a sequence of reactions of the following type:

  M → X_1 → X_2 → · · · → X_{n−1} → N .
In general, the entropy production is written as
T P = A1 v1 + A2 v2 + · · · + An vn .
In some cases of considerable importance, we might have to deal with situations
in which the concentrations of the “intermediate” components can quickly become
constant in time. Then,
v1 = v2 = · · · = vn = v .
In parallel, the entropy production becomes
T P = (A1 + A2 + · · · + An ) v .
To some extent, it is as if we had only one global reaction:

  M → N ,   (14.163)

whose affinity is

  A = A_1 + A_2 + · · · + A_n ,

whose rate is v, and with entropy production:

  T P = A v .
It is easy to conceive that we may violate the linearity condition for the global reaction Eq. (14.163), i.e.,

  A / ( R T ) > 1 ,

and, at the same time, for each "partial" reaction, the linearity condition can be saved:

  A_i / ( R T ) ≪ 1 ,

where A_i is the affinity of the ith reaction. So it may happen that even for reactions that, overall, cannot be linearized, we can still make use of relations in the linear regime if the overall process can be thought of as the cumulative effect of various partial processes, each of which proceeds in a near-equilibrium condition.
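A numerical sketch of this situation, with n identical steps and arbitrary illustrative values:

```python
# A chain of n consecutive reactions with a common rate v: the global
# affinity A = A_1 + ... + A_n can violate the linearity condition,
# A/(R*T) > 1, while every step still satisfies A_i/(R*T) << 1, so each
# step may be treated in the linear regime. Illustrative values.
import math

R, T = 8.314, 300.0
n = 50
A_i = [0.05 * R * T] * n           # each step: A_i/(R*T) = 0.05

A_total = sum(A_i)
assert A_total / (R * T) > 1       # global reaction is far from equilibrium
assert all(a / (R * T) < 0.1 for a in A_i)   # each step is near equilibrium

# entropy production with a common rate v: T*P = (A_1 + ... + A_n)*v
v = 1.0e-3
assert math.isclose(A_total * v, sum(a * v for a in A_i))
```

Here the global ratio A/(RT) is 2.5, well outside the linear regime, even though each partial affinity is only 5% of RT.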
14.8.4 Relaxation Time for a Chemical Reaction
After having examined more closely the linearity condition in a chemical reaction, it is interesting to consider in more detail the dynamics of an irreversible process near equilibrium.
In general, the linearity condition can be written in the form

  dξ / dt = ( L_ch / T ) A ,

and, in turn, the affinity will be a function of the state of the system, and hence of the degree of advancement ξ. This function must be zero at equilibrium and then we can write, in all generality,

  A = ( ∂A / ∂ξ )_eq ( ξ − ξ^eq ) + · · · ,

and then the dynamics will be described by the differential equation

  dξ / dt = ( L_ch / T ) ( ∂A / ∂ξ )_eq ( ξ − ξ^eq ) ,

whose solution is

  ξ − ξ^eq = ( ξ − ξ^eq )_{t=0} exp( − t / τ ) ,

where τ, called the relaxation time of the process, is given by

  τ = − T / [ L_ch ( ∂A / ∂ξ )_eq ] .
It can be proved (see discussion in Sect. 15.3.1 and Eq. (15.153)) that the derivative
(∂A/∂ξ )eq is negative.
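The relaxation dynamics can be sketched with a simple Euler integration; all parameter values are arbitrary illustrative choices:

```python
# Relaxation of the degree of advancement: d(xi)/dt = (L_ch/T) *
# (dA/dxi)_eq * (xi - xi_eq), whose solution decays as exp(-t/tau) with
# tau = -T / (L_ch * (dA/dxi)_eq) > 0 since (dA/dxi)_eq < 0. A simple
# Euler integration (illustrative parameter values) matches the solution.
import math

T, L_ch = 300.0, 0.5
dA_dxi_eq = -20.0                  # negative, as required (Sect. 15.3.1)
tau = -T / (L_ch * dA_dxi_eq)      # relaxation time, here 30 s
assert tau > 0

xi_eq, xi0 = 1.0, 1.2
dt, steps = 1e-3, 30000            # integrate up to t = 30 s = tau
xi = xi0
for _ in range(steps):
    xi += dt * (L_ch / T) * dA_dxi_eq * (xi - xi_eq)

exact = xi_eq + (xi0 - xi_eq) * math.exp(-steps * dt / tau)
assert abs(xi - exact) < 1e-4
```

The negativity of (∂A/∂ξ)_eq is what makes τ positive and the equilibrium state attracting: a positive derivative would give an exponentially growing deviation instead.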
The development of the dynamical theory of a system with several, mutually
interfering, chemical reactions goes beyond the scope of this book and will not be
treated here.
Chapter 15
Irreversible Processes: Applications
Abstract The Onsager symmetry relations are applied to the study of electrokinetic effects and of thermomechanical effects. In the latter case, the relation between the thermomolecular pressure difference and the heat of transfer is calculated, for comparison, also for Knudsen gases in a classical kinetic model. The characterization of stationary states as states of minimum entropy production is studied. The determination of stationary states, their stability, and the principles of Le Chatelier and of Le Chatelier–Braun find their correct explanation within the context of the thermodynamical theory of stationary states. The model by Prigogine and Wiame is presented as an example. Within the theory of fluctuations in an isolated thermodynamical system, the decay of fluctuations is treated with the formalism of linear irreversible processes, and the symmetry properties of the linear phenomenological matrix are derived from the postulate of time-reversal symmetry for microscopic dynamics.
Keywords Thermomechanical effects · Thermomolecular pressure difference ·
Heat of transfer · Knudsen gases · Electrokinetic effects · Stationary states ·
Minimum entropy production · Le Chatelier · Le Chatelier–Braun · Fluctuations ·
Mean values · Correlations · Decay of fluctuations · Microscopic reversibility ·
Onsager’s relations
15.1 Introduction
In order to understand the importance of the methods of thermodynamics and their predictive power, it is necessary to show some examples. In this chapter, in which we adopt the perspective of discontinuous systems, we will see, as examples, the thermomechanical effects and the electrokinetic effects.
15.1.1 Thermomechanical Effects
Let’s go back to Sect. 14.4 and adopt the description in terms of the fluxes and
forces given by Eqs. (14.110)–(14.113) because, as we have seen, with this choice,
© Springer Nature Switzerland AG 2019
A. Saggion et al., Thermodynamics, UNITEXT for Physics,
in the expressions of the forces, the effects of the pressure difference and of the
temperature difference are disentangled. For simplicity, we put to zero the chemical
reactions (whose dynamics would, in any case, be decoupled at the level of Onsager
relations) and suppose that there is only one component so that the various fluxes
of matter reduce to one that we denote with the symbol Jm (and similarly for the
notation of the associated generalized force).
With these simplifications, the entropy production becomes

  P = J_th X_th + J_m X_m   (15.1)

and the linear relations will be written as

  J_th = − L_11 ( ΔT / T^2 ) − L_12 ( V_m Δp / T ) ,   (15.2)
  J_m = − L_21 ( ΔT / T^2 ) − L_22 ( V_m Δp / T ) .   (15.3)
If we act on the pressure and temperature differences we obtain various configurations. Two situations are of particular interest.
Thermomolecular Pressure Difference
If we set the temperature difference between the two containers at a constant value
there is a particular value of the pressure difference which brings to zero the value of
the flow of matter. In the linear approximation, this difference in pressure is proportional to the temperature difference. The pressure difference which compensates for
a unit difference of temperature is called thermomolecular pressure difference and
its value, in terms of the phenomenological coefficients, turns out to be
\[
\left(\frac{\Delta p}{\Delta T}\right)_{J_m=0} = -\frac{L_{21}}{L_{22}}\,\frac{1}{V_m T}. \tag{15.4}
\]
As we shall discuss in the following, this configuration is a stationary state configuration and it will constitute the starting point for a general discussion of the theory
of stationary states.
Heat of Transfer
The other relevant quantity which characterizes transport phenomena with this choice
of fluxes and forces, is the so-called heat of transfer. Let us set the two containers
to have the same constant temperature. The pressure difference will cause a flux of
matter and, owing to interference phenomena, also a flux of energy. Since the force
relative to J_th is zero, we can say that its non-zero value can be entirely ascribed to the flow of matter; in other words, it is transported by the flux of matter. We call heat of transfer, and denote by the symbol Q̃, the value of the flux J_th transported by the unit flux of matter J_m when ΔT = 0.
From the linear relations written in Eqs. (15.2) and (15.3) we obtain
\[
\tilde{Q} = \left(\frac{J_{th}}{J_m}\right)_{\Delta T=0} = \frac{L_{12}}{L_{22}}. \tag{15.5}
\]
From the Onsager reciprocity relation L_{21} = L_{12} we obtain the general relation
\[
\left(\frac{\Delta p}{\Delta T}\right)_{J_m=0} = -\frac{\tilde{Q}}{V_m T}. \tag{15.6}
\]
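These relations are easy to probe numerically. A minimal sketch, assuming an arbitrary symmetric pair L12 = L21 and illustrative values of Vm and T (none of these numbers come from the text):

```python
# Numerical sketch of Eqs. (15.2)-(15.6).  The phenomenological coefficients,
# the molar volume Vm and the temperature T are arbitrary illustrative values;
# Onsager reciprocity sets L21 = L12.
L11, L12, L22 = 4.0, 1.5, 2.0
L21 = L12
Vm, T = 0.024, 300.0

def J_m(dT, dp):
    # Linear law (15.3): J_m = -L21*dT/T**2 - L22*Vm*dp/T
    return -L21 * dT / T**2 - L22 * Vm * dp / T

dT = 1e-3
dp_star = -(L21 / L22) * dT / (Vm * T)     # Eq. (15.4)
assert abs(J_m(dT, dp_star)) < 1e-12       # the flow of matter vanishes

Q_tilde = L12 / L22                        # Eq. (15.5), heat of transfer
assert abs(dp_star / dT + Q_tilde / (Vm * T)) < 1e-12   # Eq. (15.6)
```

The second assertion is exactly the content of Eq. (15.6): it holds only because L21 = L12.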
The linear relations written in Eqs. (15.2) and (15.3) show very clearly how interference between different irreversible phenomena works.
The pressure difference, which we consider the force responsible for the flow of
matter, also affects the value of the energy flow whose associated force is related to the
temperature difference and the same can be said for the influence of the temperature
difference on the flow of matter. The fact that the two processes mutually interfere
is certainly qualitatively predictable having in mind the atomic-molecular model of
matter but in order to have a quantitative estimate we must go to a specific modeling
as we will see in the next section. On that occasion, we shall appeal to a specific
statistics and to a specific modeling of the septum that allows the passage of matter
from one container to the other.
The result given in Eq. (15.6) is of general validity because it does not depend on
the adopted assumptions except for the linearity between fluxes and forces.
It is useful to obtain Eq. (15.6) also in the particular case of perfect gases, using classical Maxwellian statistical mechanics together with the hypothesis of molecular flow.
15.1.2 Knudsen Gases
Consider the same situation discussed in the previous subsection but suppose, now, that an ideal gas is present in both tanks. Suppose, for the moment, that we are dealing with a monoatomic gas but, as we shall see in the end, this assumption will easily be removed.
In the two tanks (for example, two cylinders that communicate through a tube of small cross section) the gas will be maintained at temperatures and pressures T^I, T^II, p^I, p^II, respectively, and in each part we assume the gas to obey the Maxwell–Boltzmann statistics.
Boltzmann statistics. Let us denote by F (v) the velocity distribution function which
is defined by the relation
\[
dN = F(\mathbf{v})\, d\mathbf{v}, \tag{15.7}
\]
where dN represents the number of particles per unit volume with velocity within
the infinitesimal interval (v, v + dv). As is well known, the distribution function is proportional to
\[
F(\mathbf{v}) \propto \exp\left(-\frac{\varepsilon}{k_B T}\right), \tag{15.8}
\]
ε being the energy of a single particle which, in our case, for a monoatomic molecule, is
\[
\varepsilon = \frac{1}{2}\, m \left(v_x^2 + v_y^2 + v_z^2\right), \tag{15.9}
\]
and the normalization constant C will be determined by the requirement that the integral of Eq. (15.7) over the entire set of allowed velocities (in our case over the interval (−∞, +∞) for each component of the velocity) gives the following result:
\[
C \int \exp\left(-\frac{\varepsilon}{k_B T}\right) dv_x\, dv_y\, dv_z = \frac{n N_A}{V}, \tag{15.10}
\]
where N_A = 6.022 × 10²³ mol⁻¹ is Avogadro's number, n is the total number of moles contained in the container at the given instant and V is the volume.
Let us calculate separately the integral in Eq. (15.10). It can be written as a product of three independent integrals in the form
\[
\int \exp\left(-\frac{\varepsilon}{k_B T}\right) dv_x\, dv_y\, dv_z = \int_{-\infty}^{+\infty} \exp\left(-\frac{m v_x^2}{2 k_B T}\right) dv_x \int_{-\infty}^{+\infty} \exp\left(-\frac{m v_y^2}{2 k_B T}\right) dv_y \int_{-\infty}^{+\infty} \exp\left(-\frac{m v_z^2}{2 k_B T}\right) dv_z. \tag{15.11}
\]
If we put
\[
\zeta = \frac{m}{2 k_B T}, \tag{15.12}
\]
it is easy to show that
\[
\int_{-\infty}^{+\infty} \exp\left(-\zeta x^2\right) dx = \sqrt{\frac{\pi}{\zeta}}, \tag{15.13}
\]
and the normalization condition Eq. (15.10) leads to
\[
C = \left(\frac{\zeta}{\pi}\right)^{3/2} \frac{n N_A}{V}, \tag{15.14}
\]
so that
\[
F(\mathbf{v}) = \left(\frac{\zeta}{\pi}\right)^{3/2} \frac{n N_A}{V}\, \exp\left(-\frac{\varepsilon}{k_B T}\right). \tag{15.15}
\]
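The Gaussian integral (15.13) and the normalization (15.14) can be checked with a few lines of quadrature; the values of ζ and of the number density below are illustrative assumptions, not data from the text:

```python
import numpy as np

# Numerical check of the Gaussian integral (15.13) and normalization (15.14).
zeta = 2.5                                   # plays the role of m/(2 kB T), Eq. (15.12)
v, h = np.linspace(-50.0, 50.0, 400_001, retstep=True)
I = np.sum(np.exp(-zeta * v**2)) * h         # trapezoid ~ plain sum: integrand ~ 0 at ends
assert abs(I - np.sqrt(np.pi / zeta)) < 1e-9          # Eq. (15.13)

density = 3.7e2                              # plays the role of n*NA/V (arbitrary units)
C = (zeta / np.pi) ** 1.5 * density          # Eq. (15.14)
assert abs(C * I**3 - density) < 1e-6        # the 3D integral factorizes, Eq. (15.11)
```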
Now we can proceed to calculate the flux of matter and then that of energy.
Suppose that the communication between the two containers occurs via a small tube of constant cross-sectional area Σ with √Σ ≪ λ, where λ is the mean free path of a molecule in the gas.
Furthermore, we must assume that in the transfer from one container to the other a molecule does not undergo, with significant probability, collisions with other molecules that are in transit as well. Moreover, the collisions with the boundaries of the pipe connecting the two tanks are supposed to be elastic. The reason for this assumption will be made clear when we calculate the energy transferred.
The whole of these conditions produces a transfer mode of the gas that is called
molecular flow. This condition is necessary in order to be able to consider that if a molecule enters the opening that allows communication between the two tanks with a certain velocity, it can be considered acquired by the other container with the same kinetic energy.
Flux of Matter
Consider tank I and let us calculate the number of molecules with velocity in the interval (v, v + dv) impinging on the surface of area Σ per second, as shown in Fig. 15.1. Let us take as reference system a Cartesian frame with the x axis normal to Σ and going from vessel I to vessel II, and let us consider a cylindroid with base Σ, height v_x and sides parallel to the direction of v. The volume of this cylindroid is Σ v_x and the molecules with velocity in the given interval which are contained in this cylindroid will be the only ones which enter the pipe in one second. Their number is
\[
dN = \Sigma\, F(\mathbf{v})\, v_x\, dv_x\, dv_y\, dv_z. \tag{15.16}
\]
Let us indicate with J + the flux of matter passing from I to II defined as the number
of moles which leave tank I and pass to tank II in one second.
Since we assume that all the molecules impinging on the surface Σ in one second may be considered transferred to the other tank in the same time interval, the flux J⁺ may be obtained by integration of Eq. (15.16) over all the possible values of the molecular velocity allowing a molecule to enter the area Σ, i.e., with positive v_x component. The domains of integration will be
\[
0 \le v_x < +\infty, \qquad -\infty < v_y < +\infty, \qquad -\infty < v_z < +\infty. \tag{15.17}
\]
After dividing Eq. (15.16) by Avogadro's number N_A we obtain
Fig. 15.1 Σ is the area of the opening between the two vessels. The dashed lines depict a cylindroid with volume Σ v_x. Among the molecules with velocity within (v, v + dv), the only ones that will enter the aperture in one second are those inside the cylindroid
\[
J^{+} = \Sigma \left(\frac{\zeta}{\pi}\right)^{3/2} \frac{n}{V} \int_{-\infty}^{+\infty} e^{-\zeta v_y^2}\, dv_y \int_{-\infty}^{+\infty} e^{-\zeta v_z^2}\, dv_z \int_{0}^{+\infty} v_x\, e^{-\zeta v_x^2}\, dv_x. \tag{15.18}
\]
As far as the first and the second integrals are concerned, we obtain the same result as given in Eq. (15.13), while for the third integral, the one over the variable v_x, the result is quite different:
\[
\int_0^{+\infty} v_x\, e^{-\zeta v_x^2}\, dv_x = \frac{1}{2\zeta}\int_0^{+\infty} e^{-\zeta v_x^2}\, d\!\left(\zeta v_x^2\right) = \frac{1}{2\zeta}. \tag{15.19}
\]
Summarizing the calculation for J⁺ and making use of the equation of state for the ideal gas in tank I, we get
\[
J^{+} = \Sigma \left(\frac{\zeta^{I}}{\pi}\right)^{3/2} \frac{n^{I}}{V^{I}}\,\frac{\pi}{\zeta^{I}}\,\frac{1}{2\zeta^{I}} = \Sigma\,\frac{n^{I}}{V^{I}}\,\sqrt{\frac{k_B T^{I}}{2\pi m}}, \tag{15.20}
\]
\[
J^{+} = \frac{\Sigma}{R}\,\sqrt{\frac{k_B}{2\pi m}}\;\frac{p^{I}}{\sqrt{T^{I}}}. \tag{15.21}
\]
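The effusion coefficient √(k_B T/2πm) appearing in Eqs. (15.20)–(15.22) can be recovered by a small Monte Carlo experiment over a Maxwellian velocity distribution; the following is an illustrative sketch in units with k_B = 1 and arbitrary m and T:

```python
import numpy as np

# Monte Carlo sketch of the effusion coefficient in Eqs. (15.20)-(15.22):
# the one-sided average of v_x over a Maxwellian equals sqrt(kB*T/(2*pi*m)).
# Units with kB = 1; m and T are arbitrary illustrative values.
rng = np.random.default_rng(0)
m, T = 3.0, 2.0
vx = rng.normal(0.0, np.sqrt(T / m), 2_000_000)   # Maxwellian v_x component

# flux per unit area and unit number density: <v_x> restricted to v_x > 0
flux_per_density = np.mean(np.where(vx > 0.0, vx, 0.0))
exact = np.sqrt(T / (2.0 * np.pi * m))            # the factor in Eq. (15.20)
assert abs(flux_per_density - exact) / exact < 5e-3
```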
In a similar way, and recalling that the tube is cylindrical so that the area of the inlet section will be Σ on both sides, we obtain for the flow of matter in the opposite direction (which we will obviously indicate with J⁻):
\[
J^{-} = \frac{\Sigma}{R}\,\sqrt{\frac{k_B}{2\pi m}}\;\frac{p^{II}}{\sqrt{T^{II}}}. \tag{15.22}
\]
The flux of matter J_m, defined in the thermodynamical treatment in Eq. (15.1) (which in turn derives from the definition in Eq. (14.112) for the case of one single component), represents the time variation of the total amount of matter contained in vessel I and depends on the fluxes J⁺ and J⁻ just calculated according to the obvious relation
\[
J_m = J^{+} - J^{-}, \tag{15.24}
\]
and then it is given by
\[
J_m = \frac{\Sigma}{R}\,\sqrt{\frac{k_B}{2\pi m}}\left(\frac{p^{I}}{\sqrt{T^{I}}} - \frac{p^{II}}{\sqrt{T^{II}}}\right). \tag{15.25}
\]
Thermomolecular Pressure Difference in a Knudsen Gas
From Eq. (15.25) we have the value of the flow of matter in a perfect gas in the conditions considered before. In particular, we can obtain the relationship between the pressure difference and the temperature difference in the case in which the flow of matter is zero. We find
\[
J_m = 0 \;\;\Longrightarrow\;\; \frac{p^{I}}{\sqrt{T^{I}}} = \frac{p^{II}}{\sqrt{T^{II}}}, \tag{15.26}
\]
that is,
\[
\frac{p^{I}}{p^{II}} = \sqrt{\frac{T^{I}}{T^{II}}}, \tag{15.27}
\]
and for small pressure and temperature differences we obtain
\[
\left(\frac{\Delta p}{\Delta T}\right)_{J_m=0} = \frac{1}{2}\,\frac{p}{T}, \tag{15.28}
\]
where we made use of the linear term approximation in the Taylor expansion for a small number δ:
\[
\sqrt{1+\delta} \simeq 1 + \frac{1}{2}\,\delta. \tag{15.29}
\]
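The zero-flux condition can be explored numerically; the following sketch uses illustrative temperatures and pressures and drops the overall positive prefactor of Eq. (15.25):

```python
import numpy as np

# Sketch of the thermomolecular pressure difference, Eqs. (15.25)-(15.28).
# The positive prefactor (Sigma/R)*sqrt(kB/(2*pi*m)) of Eq. (15.25) is dropped;
# temperatures and pressures are arbitrary illustrative values.
T1, T2 = 300.0, 301.0
p1 = 1.0e5

def jm(p2):
    # J_m up to the positive prefactor, Eq. (15.25)
    return p1 / np.sqrt(T1) - p2 / np.sqrt(T2)

p2_star = p1 * np.sqrt(T2 / T1)                   # Eq. (15.27)
assert abs(jm(p2_star)) < 1e-8                    # flux of matter vanishes

slope = (p2_star - p1) / (T2 - T1)                # finite-difference Δp/ΔT
assert abs(slope - p1 / (2.0 * T1)) / (p1 / (2.0 * T1)) < 5e-3   # Eq. (15.28)
```

The last assertion shows that, for a 1 K difference at 300 K, the exact square-root condition and the linearized form (15.28) agree to better than one percent.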
Average Energy Transported in a Molecular Flux
In order to calculate the energy carried by the flow of matter we must proceed in the following way: when a molecule belonging to vessel I enters the opening of area Σ, the energy content of this vessel will decrease by the amount δU equal to the energy possessed by that molecule, and the energy contained in vessel II will increase by the same amount (given the hypothesis of molecular flow).
Of course, for each molecule passing from vessel II to vessel I the same argument
will apply but it will represent the reverse energy flow. The total flow of energy, the
macroscopic flow, will be the difference between these two molecular flows.
Initially, we suppose that the energy possessed, and hence transferred, by each molecule is ½ m(v_x² + v_y² + v_z²) (monoatomic case). If we consider, once again, the molecules with velocity within the small interval (v, v + dv), they will transfer to vessel II, in unit time, the infinitesimal amount of energy
\[
dJ_U^{+} = \frac{1}{2}\, m \left(v_x^2 + v_y^2 + v_z^2\right) \Sigma\, F(\mathbf{v})\, v_x\, dv_x\, dv_y\, dv_z \tag{15.30}
\]
and, after integrating over the three components of the velocity (with limits of integration given by Eq. (15.17)), we shall obtain the partial energy flux toward vessel II. Then, after changing index I into II, we shall obtain the reverse partial flux. As before, the macroscopic energy flux shall be given by the difference of the two. We need to compute the sum of three integrals, each of them representing the contribution to the flow of energy by each of the three degrees of freedom.1
\[
\frac{1}{2}\, m v_x^2, \qquad \frac{1}{2}\, m v_y^2, \qquad \frac{1}{2}\, m v_z^2. \tag{15.31}
\]
The three integrals, coming from Eq. (15.30), are
\[
\frac{1}{2}\, m\, \Sigma \left(\frac{\zeta}{\pi}\right)^{3/2} \frac{n N_A}{V} \int_0^{+\infty} v_x^3\, e^{-\zeta v_x^2}\, dv_x \int_{-\infty}^{+\infty} e^{-\zeta v_y^2}\, dv_y \int_{-\infty}^{+\infty} e^{-\zeta v_z^2}\, dv_z, \tag{15.32}
\]
\[
\frac{1}{2}\, m\, \Sigma \left(\frac{\zeta}{\pi}\right)^{3/2} \frac{n N_A}{V} \int_0^{+\infty} v_x\, e^{-\zeta v_x^2}\, dv_x \int_{-\infty}^{+\infty} v_y^2\, e^{-\zeta v_y^2}\, dv_y \int_{-\infty}^{+\infty} e^{-\zeta v_z^2}\, dv_z, \tag{15.33}
\]
\[
\frac{1}{2}\, m\, \Sigma \left(\frac{\zeta}{\pi}\right)^{3/2} \frac{n N_A}{V} \int_0^{+\infty} v_x\, e^{-\zeta v_x^2}\, dv_x \int_{-\infty}^{+\infty} e^{-\zeta v_y^2}\, dv_y \int_{-\infty}^{+\infty} v_z^2\, e^{-\zeta v_z^2}\, dv_z; \tag{15.34}
\]
they have to be calculated separately and then the results summed. Some integrals appearing in Eqs. (15.32)–(15.34) have already been calculated before, while some others appear for the first time. Let us now calculate them, starting from the second integral in Eq. (15.33).
1 The “degrees of freedom” are the terms that make up the expression of the total energy of the system. In the case of polyatomic molecules we should add more degrees of freedom that account for the rotational and vibrational motions.
We use the identity
\[
\int_{-\infty}^{+\infty} v_y^2\, e^{-\zeta v_y^2}\, dv_y = -\frac{d}{d\zeta}\int_{-\infty}^{+\infty} e^{-\zeta v_y^2}\, dv_y, \tag{15.35}
\]
and then we obtain
\[
\int_{-\infty}^{+\infty} v_y^2\, e^{-\zeta v_y^2}\, dv_y = \frac{1}{2}\sqrt{\frac{\pi}{\zeta^3}}. \tag{15.36}
\]
The same result holds for the third term in Eq. (15.34), for the v_z component. With regard to the first term in Eq. (15.32) we have
\[
\int_0^{+\infty} v_x^3\, e^{-\zeta v_x^2}\, dv_x = \frac{1}{2}\int_0^{+\infty} v_x^2\, e^{-\zeta v_x^2}\, d\!\left(v_x^2\right) = \frac{1}{2}\int_0^{+\infty} \eta\, e^{-\zeta \eta}\, d\eta = -\frac{1}{2}\frac{d}{d\zeta}\int_0^{+\infty} e^{-\zeta \eta}\, d\eta, \tag{15.37}
\]
and finally we obtain
\[
\int_0^{+\infty} v_x^3\, e^{-\zeta v_x^2}\, dv_x = \frac{1}{2\zeta^2}. \tag{15.38}
\]
Remembering that (see Eq. (15.13))
\[
\int_{-\infty}^{+\infty} e^{-\zeta v_z^2}\, dv_z = \sqrt{\frac{\pi}{\zeta}}, \tag{15.39}
\]
we may write the partial energy fluxes I → II for each of the three degrees of freedom. If we denote them by J_Ux⁺, J_Uy⁺ and J_Uz⁺ we obtain
\[
J_{Ux}^{+} = \frac{1}{2}\, m N_A\, \frac{n}{V}\, \Sigma \sqrt{\frac{\zeta}{\pi}}\; \frac{1}{2\zeta^2}, \tag{15.43}
\]
\[
J_{Uy}^{+} = \frac{1}{2}\, m N_A\, \frac{n}{V}\, \Sigma \sqrt{\frac{\zeta}{\pi}}\; \frac{1}{4\zeta^2}, \tag{15.44}
\]
and
\[
J_{Uz}^{+} = \frac{1}{2}\, m N_A\, \frac{n}{V}\, \Sigma \sqrt{\frac{\zeta}{\pi}}\; \frac{1}{4\zeta^2}. \tag{15.45}
\]
After summing up the contributions relative to the three degrees of freedom we write the expression of the partial (forward) energy flux I → II in the form
\[
J_U^{+} = \frac{1}{2}\, m N_A\, \frac{n}{V}\, \Sigma \sqrt{\frac{\zeta}{\pi}} \left(\frac{1}{2\zeta^2} + \frac{1}{4\zeta^2} + \frac{1}{4\zeta^2}\right), \tag{15.46}
\]
that is,
\[
J_U^{+} = \frac{1}{2}\, m N_A\, \frac{n}{V}\, \Sigma \sqrt{\frac{\zeta}{\pi}}\; \frac{1}{\zeta^2}. \tag{15.47}
\]
From Eqs. (15.43)–(15.45) we see that the contribution to the energy transfer associated with the v_x component of the kinetic energy is twice that relative to the other two components v_y and v_z. It is then interesting to calculate the mean value of the energy transported by the positive (J⁺) flux of molecules. To do this we have to consider the three partial energy flows Eqs. (15.43)–(15.45) and divide each one by the number of molecules passing in one second. The latter is easily obtained by multiplying the partial flow given by Eq. (15.21) by Avogadro's number N_A. Let us drop, for the moment, the index that refers to the specific vessel and let us denote the three mean values of the energy along the x, y and z directions with the symbol ε̄. We get:
\[
\bar{\varepsilon}_x = k_B T, \tag{15.48}
\]
\[
\bar{\varepsilon}_y = \frac{1}{2}\, k_B T, \tag{15.49}
\]
\[
\bar{\varepsilon}_z = \frac{1}{2}\, k_B T, \tag{15.50}
\]
and, for the mean value of the total transported energy, we have
\[
\bar{\varepsilon} = 2\, k_B T, \tag{15.51}
\]
while the equipartition value of the energy in the gas is
\[
\bar{\varepsilon}_{eq} = \frac{3}{2}\, k_B T. \tag{15.52}
\]
It is worth noticing that the average energy delivered by the flux of matter and associated with the y and z components of the velocity is equal to their equipartition value, while the average energy transported in association with the x component is twice the equipartition value.
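The selector effect of the hole, and the resulting mean values (15.48)–(15.51), can be reproduced by weighting a sampled Maxwellian with v_x; the following is an illustrative sketch in units with k_B = m = 1 and an arbitrary T:

```python
import numpy as np

# Monte Carlo sketch of Eqs. (15.48)-(15.51): molecules cross the hole with
# probability proportional to v_x, so the crossing molecules carry kB*T in the
# x degree of freedom and the equipartition value kB*T/2 in y and z.
# Units with kB = m = 1; T is an arbitrary illustrative value.
rng = np.random.default_rng(1)
T = 1.7
v = rng.normal(0.0, np.sqrt(T), (2_000_000, 3))   # Maxwellian velocities
fwd = v[v[:, 0] > 0.0]                            # molecules moving toward the hole
w = fwd[:, 0]                                     # crossing probability ∝ v_x

def flux_avg(q):                                  # flux-weighted mean of q
    return np.sum(w * q) / np.sum(w)

ex = flux_avg(0.5 * fwd[:, 0] ** 2)               # ~ kB*T,   Eq. (15.48)
ey = flux_avg(0.5 * fwd[:, 1] ** 2)               # ~ kB*T/2, Eq. (15.49)
ez = flux_avg(0.5 * fwd[:, 2] ** 2)               # ~ kB*T/2, Eq. (15.50)
assert abs(ex - T) / T < 1e-2
assert abs(ey - T / 2) / (T / 2) < 1e-2
assert abs(ez - T / 2) / (T / 2) < 1e-2
assert abs(ex + ey + ez - 2 * T) / (2 * T) < 1e-2 # total = 2 kB*T, Eq. (15.51)
```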
That the latter should be larger than the average values of the energy associated with the y and z components is easy to envisage. Indeed, the probability that a molecule having a particular velocity v passes through the hole per unit time is not simply proportional to the number density of such molecules in the gas (e.g., see Eq. (15.7)), but also depends on the value of the x component of the velocity. This dependence is such that the molecules having larger values of v_x are favored with respect to those having low v_x. One might say that the hole acts as a speed selector “preferring” the molecules with larger v_x, while the probability of crossing the hole per unit time is independent of the values of the y or z components of the velocity.
Summarizing, we can say that the probability that a molecule passes through the hole in the unit time does not depend on the value of its energy in the y or z components, while it does depend on the value of its energy in the x direction, and this probability is larger for the molecules with larger ε_x. From this it follows that the average value of the energy transported in the y and z components depends only on the concentration of the molecules, so that ε̄_y and ε̄_z will be equal to the equipartition value, while the average value of the energy transported in the x component is larger.
Of course, we have to repeat the same calculations also for molecules that migrate
from container II to container I in order to find the macroscopic flow of energy. Notice
that each of the two partial energy flows would cause a decrease in the temperature of
the respective vessel because, in this process, each one is expelling its most energetic
particles. A consequence of this is that if the two vessels are maintained at the same
temperature, the one that loses matter will absorb heat from the thermostat, and, vice
versa, the reservoir which gradually fills must be cooled.
Energy of Transfer and Heat of Transfer in a Knudsen Gas
Let us now calculate, in the context of this modeling, the macroscopic flux of energy
JU . We refer to Eq. (15.47), and we express the energy flux JU+ after substituting
the values for pressure and temperature relative to container I. In order to obtain
the reverse flux JU− of the energy transported per second by the molecules moving
from II to I we have just to replace, in the same relation, the values for pressure and
temperature relative to container II.
The macroscopic flow of energy, as that defined in Eq. (14.60), will be
\[
J_U = J_U^{+} - J_U^{-}. \tag{15.53}
\]
We will not take care of the general structure of this flow of energy but shall only consider its expression in the particular case in which the two containers are maintained at the same constant temperature (ΔT = 0), in order to find, in this particular case, the relationship between the thermomolecular pressure difference and the heat of transfer, Eq. (15.6). With this limitation, we can write the expression for the flow of energy:
\[
J_U(\Delta T = 0) = 2RT\;\frac{\Sigma}{R}\,\sqrt{\frac{k_B}{2\pi m T}}\,\left(p^{I} - p^{II}\right). \tag{15.54}
\]
In the same condition ΔT = 0 let us write the expression for the flow of matter:
\[
J_m(\Delta T = 0) = \frac{\Sigma}{R}\,\sqrt{\frac{k_B}{2\pi m T}}\,\left(p^{I} - p^{II}\right). \tag{15.55}
\]
We can now calculate the energy transported by a unit flux of matter (one mole per second). Let us call it energy of transfer and denote this quantity by the symbol Ũ:
\[
\tilde{U} = \frac{J_U(\Delta T = 0)}{J_m(\Delta T = 0)} = 2RT. \tag{15.56}
\]
In order to calculate the heat of transfer we must make the appropriate change of
flows, already examined, which led to Eq. (14.111) (for simplicity the sum over the
index γ extends to one component only).
From the definition Eq. (15.5) of the heat of transfer we easily obtain its relation to the energy of transfer:
\[
\tilde{Q} = \tilde{U} - H_m, \tag{15.57}
\]
where H_m is the molar enthalpy. The latter, by definition, is
\[
H_m = U_m + p V_m, \tag{15.58}
\]
where U_m and V_m are, respectively, the molar energy and the molar volume. Within the ideal gas approximation we shall write, for monoatomic molecules,
\[
p V_m = RT, \tag{15.59}
\]
\[
U_m = \frac{3}{2}\, RT, \tag{15.60}
\]
and from these relations:
\[
\tilde{Q} = 2RT - \frac{5}{2}\, RT = -\frac{1}{2}\, RT, \tag{15.61}
\]
and combining this result with the thermomolecular pressure difference given by Eq. (15.28) we get
\[
\left(\frac{\Delta p}{\Delta T}\right)_{J_m=0} = \frac{1}{2}\,\frac{p}{T} = -\frac{\tilde{Q}}{V_m T}. \tag{15.62}
\]
The detailed examination of this example (in the ideal gases approximation) clearly
shows the power of thermodynamics.
As a final comment, we want to mention the general case of a polyatomic gas. In this case, the energy of a single molecule will be given by the summation of the several terms discussed in Eq. (6.58):
\[
\varepsilon = \frac{1}{2}\, m \left(v_x^2 + v_y^2 + v_z^2\right) + \varepsilon_{rot} + \varepsilon_{vib} + \cdots, \tag{15.63}
\]
where v = (v_x, v_y, v_z) is the velocity of the center of mass of the molecule and the two terms ε_rot and ε_vib are the contributions to the energy of the molecule due to the rotational and to the vibrational degrees of freedom, respectively. As the selective effect of the hole only affects the contribution relative to the ε_x component of the energy, all the other terms will contribute to the energy transported by the flux of molecules with their equipartition values. Then, if we assume that all the degrees of freedom are harmonic, the energy of transfer will be
\[
\tilde{U} = \frac{1}{2}\, RT + \frac{f}{2}\, RT, \tag{15.64}
\]
where f is the number of degrees of freedom of the molecule. Likewise, the molar enthalpy becomes
\[
H_m = \frac{f}{2}\, RT + RT, \tag{15.65}
\]
and then the value of the heat of transfer will remain unchanged.
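The bookkeeping behind Eqs. (15.56)–(15.65) can be condensed into a few lines, all quantities being measured in units of RT; the values of f below are illustrative:

```python
# Bookkeeping for Eqs. (15.56)-(15.61) and the polyatomic generalization
# (15.64)-(15.65), with all energies in units of R*T.  The values of f
# (quadratic, "harmonic" degrees of freedom) are illustrative.
for f in (3, 5, 6):
    U_transfer = 0.5 + 0.5 * f        # Eq. (15.64): (1/2)RT + (f/2)RT
    H_molar = 0.5 * f + 1.0           # Eq. (15.65): (f/2)RT + RT
    Q_transfer = U_transfer - H_molar # Eq. (15.57): Q = U_transfer - H_m
    assert Q_transfer == -0.5         # heat of transfer is -(1/2)RT for any f

assert 0.5 + 0.5 * 3 == 2.0           # monoatomic case recovers 2RT, Eq. (15.56)
```

The invariance of the last assertion inside the loop is exactly the closing statement of the section: the extra (f − 3)/2 units of RT carried by the internal degrees of freedom are cancelled by the same terms in the molar enthalpy.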
15.1.3 Electrokinetic Effects
We return to the case examined in Sect. 14.5 and to the expression of the entropy production:
\[
P = \sum_\gamma \frac{A_\gamma}{T}\, v_\gamma. \tag{15.66}
\]
Going back to Eqs. (14.75), (14.70) and (14.69) for the definition of the rates v_γ, we obtain
\[
P = -\sum_\gamma \frac{dn_\gamma^{I}}{dt}\,\frac{1}{T}\left[\left(\psi^{I} - \psi^{II}\right) z_\gamma F + \left(\mu_\gamma^{I} - \mu_\gamma^{II}\right)\right]. \tag{15.67}
\]
Let us limit ourselves to the case in which the entire system, consisting (as described in Sect. 14.5) of two phases which communicate through a capillary or a porous septum, is maintained at a uniform and constant temperature.
There remain then the two “free parameters” that we can vary independently of
each other, namely the potential difference and the pressure difference. The former
determines the value of the generalized force that we associate with the observed
electric current while the pressure difference governs the value of the force associated
to the flow of matter. Our purpose is to study the interference between these two
processes but in order to do this we need to define the flows (which give the name
to the processes) properly and their related forces. It is then necessary to start with
the production of entropy Eq. (15.67). Let us agree to choose as positive the fluxes
from system I to system II.
One flux is precisely the electric current intensity, which we shall denote by I; it is written in the form
\[
I = -\sum_\gamma z_\gamma F\,\frac{dn_\gamma^{I}}{dt}, \tag{15.68}
\]
and hence the associated generalized force will be
\[
X_I = -\frac{\Delta\psi}{T}, \tag{15.69}
\]
where the symbol Δ indicates the variation of any quantity as its value in system II minus its value in system I. In a similar way we define the “flux of matter” that we shall indicate with the symbol J_m. In the second term of Eq. (15.67) we write the variation of the chemical potentials up to the linear term in Δp (the two phases are at the same temperature):
\[
\mu_\gamma^{II} \simeq \mu_\gamma^{I} + V_{m\gamma}\,\Delta p. \tag{15.70}
\]
Then we choose as flux of matter the quantity
\[
J_m = -\sum_\gamma V_{m\gamma}\,\frac{dn_\gamma^{I}}{dt}, \tag{15.71}
\]
and as the corresponding force the expression
\[
X_m = -\frac{\Delta p}{T}. \tag{15.72}
\]
With these determinations, the entropy production becomes
\[
P = I\,X_I + J_m\,X_m \ge 0. \tag{15.73}
\]
It should be noted, at this point, that the flow Eq. (15.71), that we somehow improperly
called “flow of matter”, should be more properly referred to as “flow of volume”, in
fact, it corresponds exactly to the decrease of volume of phase I per unit time (this
shows that the two fluxes so defined are physically independent of each other). We
now want to study how the two fluxes interfere, i.e., in which way the two forces,
depending separately on the pressure difference and on the potential difference,
combine to determine the overall effects. These results will be of general validity, that is, regardless of the specific assumptions on the capillary or on the statistical mechanics which regulates the microscopic behavior.
The linear relations between fluxes and forces are
\[
I = -L_{11}\,\frac{\Delta\psi}{T} - L_{12}\,\frac{\Delta p}{T}, \tag{15.74}
\]
\[
J_m = -L_{21}\,\frac{\Delta\psi}{T} - L_{22}\,\frac{\Delta p}{T}. \tag{15.75}
\]
Let us define some relevant coefficients that characterize the electrokinetic processes.
Two of these concern transport phenomena. More precisely, given one flux we
measure the intensity of the other flux in the condition in which its own force is zero.
In this sense, we speak of a flux “transported” by the former.
The other two effects which give a quantitative idea of the strength of the interference between different processes, concern the measure of the “mutual neutralization”
between the two forces p and ψ in the following sense: given one of the two
forces, there may exist an appropriate value of the other which brings to zero its own
associated flux. In this sense, we say that the former force “neutralizes” the effect of
the latter.
Starting with the first two effects, we have two typical situations. The first effect, called streaming current, is defined by the measurement of the intensity of the electrical current dragged by the flow of matter in the condition Δψ = 0, that is, in the condition in which the force related to the electrical current is zero. For this effect, given the linear relations written in Eqs. (15.74) and (15.75), we obtain
\[
\left(\frac{I}{J_m}\right)_{\Delta\psi=0} = \frac{L_{12}}{L_{22}}. \tag{15.76}
\]
Symmetrically, we define as “electro-osmotic coefficient” the measure of the flow of matter (volume) dragged by a unit electric current in the condition in which its own force Δp = 0:
\[
\left(\frac{J_m}{I}\right)_{\Delta p=0} = \frac{L_{21}}{L_{11}}. \tag{15.77}
\]
As regards the other two effects, if we fix a certain pressure difference Δp between the two phases, we shall observe, for different values of the potential difference, different values of the electric current. We define the streaming potential as the potential difference that must be established between the two phases, per unit of pressure difference, in order to bring the electric current to zero:
\[
\left(\frac{\Delta\psi}{\Delta p}\right)_{I=0} = -\frac{L_{12}}{L_{11}}. \tag{15.78}
\]
Finally, we fix the value of the potential difference and determine for what value of the pressure difference the flow of matter will be zero. This value, per unit potential difference, will be called the electro-osmotic pressure:
\[
\left(\frac{\Delta p}{\Delta\psi}\right)_{J_m=0} = -\frac{L_{21}}{L_{22}}. \tag{15.79}
\]
The Onsager reciprocity relation
\[
L_{12} = L_{21} \tag{15.80}
\]
allows us to determine, in all generality, the following two relationships between these effects:
\[
\left(\frac{\Delta\psi}{\Delta p}\right)_{I=0} = -\left(\frac{J_m}{I}\right)_{\Delta p=0}, \tag{15.81}
\]
\[
\left(\frac{\Delta p}{\Delta\psi}\right)_{J_m=0} = -\left(\frac{I}{J_m}\right)_{\Delta\psi=0}. \tag{15.82}
\]
This relationship is known as the Saxen relation; it was first obtained by Saxen through statistical-mechanical calculations, in a way similar to the one we have seen in the example of the thermomechanical effects, and also in that derivation it is necessary to make restrictive assumptions about the nature of the porous separator. Here, instead, we have proved that these relationships are quite general. It is true that thermodynamics does not allow us to find the values of these coefficients (for those we have to go back to statistical models), but it allows us to “…find connections between effects that, at first glance, seem to be completely independent…” [22].
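The Saxen relations lend themselves to a direct numerical check with an assumed symmetric, positive-definite matrix of coefficients; the numbers below are illustrative:

```python
import numpy as np

# Numerical sketch of the Saxen relations (15.81)-(15.82) for an assumed
# symmetric matrix of phenomenological coefficients.
L = np.array([[3.0, 0.8],
              [0.8, 2.0]])       # L12 = L21 (Onsager), arbitrary values
T = 300.0

def fluxes(dpsi, dp):
    # Linear laws (15.74)-(15.75): (I, J_m) driven by -Δψ/T and -Δp/T
    X = np.array([-dpsi / T, -dp / T])
    return L @ X

# streaming potential (Δψ/Δp at I = 0) vs electro-osmotic flow (J_m/I at Δp = 0)
dpsi_per_dp_at_I0 = -L[0, 1] / L[0, 0]            # Eq. (15.78)
I0, Jm0 = fluxes(1.0, 0.0)                        # unit Δψ, Δp = 0
assert abs(dpsi_per_dp_at_I0 - (-Jm0 / I0)) < 1e-12       # Eq. (15.81)

# electro-osmotic pressure (Δp/Δψ at J_m = 0) vs streaming current (I/J_m at Δψ = 0)
dp_per_dpsi_at_Jm0 = -L[1, 0] / L[1, 1]           # Eq. (15.79)
I1, Jm1 = fluxes(0.0, 1.0)                        # unit Δp, Δψ = 0
assert abs(dp_per_dpsi_at_Jm0 - (-I1 / Jm1)) < 1e-12      # Eq. (15.82)
```

Both assertions fail if the off-diagonal entries of L are made unequal, which is the point: the Saxen relations are a consequence of reciprocity alone.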
15.2 Stationary States
In this section we will study, briefly, the properties of the stationary states: their
characterization and, within certain limits, their stability properties.
We say that a physical system is in a stationary state when the values of all its
state parameters are constant in time.
The concept of state of equilibrium was introduced, at the beginning, from an
empirical point of view as the thermodynamic configuration, constant in time, that
each isolated system spontaneously reaches. The state of equilibrium, then, is an
example of a stationary state but, as we see in ordinary life, the most frequent situation
is that of thermodynamic nonequilibrium configurations that are constant in time.
A trivial example is that of a resistor (an old electric heater) to which a constant potential difference is applied. Initially it dilates, its color changes and its temperature increases. After a while, it is in a stationary state: its color does not change anymore, nor does its temperature; its volume remains constant, the potential is constant point by point, and so on. It is in a stationary state but it is not in a state of equilibrium.
Also, all living organisms appear, on a short time scale, to be in a stationary state, but they are certainly not in an equilibrium state.
The most evident property of nonequilibrium stationary states is that they cannot
exist in isolated systems but they can be maintained only if the system is in constant
interaction with the external world.
In fact, like all the other state variables, entropy too must have a constant value; its variation in a time interval dt must then be dS = 0. Therefore
\[
dS = \hat{d}_i S + \hat{d}_e S = 0. \tag{15.83}
\]
If the thermodynamical configuration is a nonequilibrium configuration, then d̂_iS > 0 and hence
\[
\hat{d}_e S = -\hat{d}_i S < 0, \tag{15.84}
\]
that is, the system must absorb a negative amount of entropy or in other words, must
eject outwards the entropy that it produces in the time interval dt because of its
internal processes.
In isolated systems it would always be d̂_eS = 0 and then, for stationarity, d̂_iS = 0, which is equivalent to saying that the state is an equilibrium state. Therefore the difference between a stationary state of equilibrium and one of nonequilibrium lies in having, in the former case, a null value and, in the latter case, a positive value of the entropy production:
\[
P > 0. \tag{15.85}
\]
One significant example of nonequilibrium stationary state has been examined in
some detail when, dealing with the thermomechanical effects, we defined the thermomolecular pressure difference.
In that example, the two vessels can exchange matter between them through a
capillary or a porous septum, and are kept at constant temperatures T I and T II . With
this constraint the stationary state is reached when the flow of matter is zero. The
flow of energy can be non-zero because the requirement that the energy should be
constant is satisfied thanks to the intervention of the thermostats which supply and
take away, in the form of heat, the amount of energy that is transferred, per unit time,
between the two vessels.2 As we have seen this occurs when the pressure difference is
adjusted to the value Eq. (15.6). Let us write the expression of the entropy production
in the case, for simplicity, in which only one component is present.
\[
P = J_{th} X_{th} + J_m X_m. \tag{15.86}
\]
If we limit ourselves to the case of linear relationships between fluxes and forces and recall the Onsager symmetry relations, the entropy production is expressed by a positive-semidefinite quadratic form:
\[
P = L_{11} X_{th}^2 + 2 L_{12} X_{th} X_m + L_{22} X_m^2 \ge 0. \tag{15.87}
\]
2 Energy can be transferred between the two vessels even if the flow of matter is zero.
Suppose that process 1 is the transport of energy and process 2 the transport of matter. We set the value of the two temperatures, and this means having fixed the value of X_th; the entropy production will then be a function of the variable X_m. It is easy to recognize that such a function (a second-order polynomial) has a minimum in correspondence with the steady-state configuration. In fact, if we look for the extrema of this polynomial, we get
\[
\frac{\partial P}{\partial X_m} = 2\left(L_{12} X_{th} + L_{22} X_m\right) = 0, \tag{15.88}
\]
which is equivalent to
\[
J_m = 0, \tag{15.89}
\]
which is the stationary state condition. Moreover
\[
\frac{\partial^2 P}{\partial X_m^2} = 2 L_{22} > 0, \tag{15.90}
\]
and this establishes that the extremum point is a point of minimum.
The conclusion, if it could be generalized, would be very important: having fixed the value of one force (in our case X_th), the stationary state is realized when the other flux (in our case J_m) is zero, and this is equivalent to the condition that the entropy production has a minimum in the steady-state configuration.
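A quick numerical sketch of this two-variable argument, with an assumed positive-definite L and an arbitrarily fixed X_th:

```python
import numpy as np

# Sketch of Eqs. (15.87)-(15.90): with X_th fixed, the entropy production
# P(X_m) is minimized exactly where J_m = 0.  L is an assumed symmetric
# positive-definite matrix; all numbers are illustrative.
L11, L12, L22 = 2.0, 0.7, 1.5
X_th = 0.3                                   # constrained force

def P(X_m):
    return L11 * X_th**2 + 2 * L12 * X_th * X_m + L22 * X_m**2   # Eq. (15.87)

# Stationarity: dP/dX_m = 2*(L12*X_th + L22*X_m) = 0
X_m_star = -L12 * X_th / L22
J_m = L12 * X_th + L22 * X_m_star            # linear law with L21 = L12
assert abs(J_m) < 1e-12                      # stationary state: J_m = 0

# The extremum is a minimum: P is larger at neighbouring values of X_m
grid = X_m_star + np.linspace(-0.1, 0.1, 201)
assert np.all(P(grid) >= P(X_m_star))
```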
15.2.1 Configurations of Minimal Entropy Production
The example discussed above introduces an important general theorem due to de Groot and Mazur [23] concerning stationary states. The idea of characterizing the stationary states as the thermodynamic configurations corresponding to a minimum value of the entropy production consistent with the externally fixed constraints, suggested by the study of the thermomechanical effects, is confirmed3 in the general case of stationary nonequilibrium configurations.
Consider a discontinuous system composed of two homogeneous subsystems (phases) I and II. Suppose that each phase has n degrees of freedom4 and denote by E_ρ^I and E_ρ^II, with ρ varying in the interval 1 ≤ ρ ≤ n, the values, at some time instant t, of the n extensive quantities we chose to define the states of phases I and II, respectively.
3 For isolated systems the property is obviously verified: at equilibrium (stationary state), the entropy production is zero (minimum).
4 See Sect. 4.3.1.
The entropy production will be written, as always, in the form
\[
P = \sum_\rho J_\rho X_\rho. \tag{15.91}
\]
For every extensive quantity, in each phase, we can express its variation per unit time as
\[
\frac{dE_\rho^{I,II}}{dt} = \frac{\hat{d}_e E_\rho^{I,II}}{dt} + \frac{\hat{d}_i E_\rho^{I,II}}{dt}, \tag{15.92}
\]
where d̂_e/dt and d̂_i/dt have, respectively, the meaning of variation per unit time due to the interaction with the external world (third-party systems) and due to the exchanges within the overall two-phase system.5 In this perspective, the generalized flows are defined by6
\[
J_\rho = -\frac{\hat{d}_i E_\rho^{I}}{dt} = \frac{\hat{d}_i E_\rho^{II}}{dt}. \tag{15.93}
\]
To each of the n fluxes Jρ is associated the corresponding force X ρ which, as we
have seen in all the previous examples, is given by the difference, between the two
phases, of a suitable intensive quantity that we shall denote by xρ :
\[
X_\rho = -\left(x_\rho^{II} - x_\rho^{I}\right). \tag{15.94}
\]
As we have seen previously, the values of the flows, at a certain instant, can be considered as functions of the values of the forces in that instant: Jρ = Jρ (X 1 , X 2 , . . . , X n )
and, at this point, we make two assumptions:
1. k out of the n forces (with k < n) are constrained, by external interaction, to maintain constant values (for instance, by keeping the temperatures of the two systems at constant values, we fix the value of Δ(1/T)). Suppose, for convenience, that this happens for 1 ≤ ρ ≤ k.
2. As far as the other (n − k) degrees of freedom are concerned, we suppose that the overall system is closed with respect to these E_ρ. This is equivalent to imposing that the following condition holds:
\[
\frac{\hat{d}_e E_\rho}{dt} = 0 \qquad \text{for } (k+1) \le \rho \le n. \tag{15.95}
\]
5 Chemical reactions are excluded and, more generally, we treat the n extensive quantities E_ρ as a kind of “conserved quantities”. If not, the expression for the entropy production would contain additional terms not coupled with the terms that describe the exchanges between the two phases. The changes due to interactions with the outside world are due to interactions with “third systems”.
6 For the fluxes we choose as positive the direction I → II.
The theorem proves that if:
1. We may write linear relations between fluxes and forces;
2. The phenomenological coefficients are constant in time;
3. The Onsager reciprocity relations L i j = L ji apply,
then, at the stationary state, the entropy production is at a relative minimum.
Suppose that the system has evolved into a steady state. This means
$$\frac{dE_\rho^{I,II}}{dt} = 0$$
for every ρ. We then have
$$J_\rho = \frac{\hat{d}_e E_\rho^{I}}{dt} = -\frac{\hat{d}_e E_\rho^{II}}{dt} \qquad \text{for } 1 \le \rho \le k,$$
$$J_\rho = 0 \qquad \text{for } (k+1) \le \rho \le n.$$
In other words, the intensities of the fluxes associated with constrained forces are
fully determined by the interaction of the overall system with the external world.7
On the other hand the fluxes associated with the free forces (i.e., the unconstrained
forces) have to be zero because the overall system is closed with respect to the
respective state variables E_ρ with (k + 1) ≤ ρ ≤ n.
We write, now, the linear relations between fluxes and forces and express, consequently, the production of entropy as a bilinear form in the forces:
$$J_\rho = \sum_{\rho'=1}^{n} L_{\rho\rho'}\, X_{\rho'} \qquad \text{for } 1 \le \rho \le n, \tag{15.99}$$
$$P = \sum_{\rho,\rho'=1}^{n} L_{\rho\rho'}\, X_\rho X_{\rho'}.$$
If we calculate the first derivatives with respect to the free forces and we make use of the symmetry properties of the phenomenological coefficients, we obtain
$$\frac{\partial P}{\partial X_\rho} = \sum_{\rho'=1}^{n} \left(L_{\rho\rho'} + L_{\rho'\rho}\right) X_{\rho'} = 2 J_\rho \qquad (k+1) \le \rho \le n.$$
If the system is in a stationary state the fluxes relative to the "free forces" must be zero, and then we have
$$\frac{\partial P}{\partial X_\rho} = 0 \qquad \text{for } (k+1) \le \rho \le n.$$
7 We see that, in the stationary case, the flows represent the transfer, between the two external
constraining systems, of a specific physical quantity the meaning of which depends, of course, on
the choice of the state variables EρI,II .
15.2 Stationary States
So, the stationary state constitutes a configuration in which the entropy production exhibits an extremum. Since the bilinear form is positive semidefinite, at the stationary-state configuration the entropy production possesses a minimum value. It is, obviously, a conditioned minimum, since we imposed on the function P the k constraints that fix the first k forces.
15.2.2 Determination of the Stationary State
In the n linear relations given by Eq. (15.99) we have, for a general configuration, n² phenomenological coefficients, of which only n(n + 1)/2 are independent if the reciprocity relations hold.
If we start from a generic initial configuration in which the first k forces are fixed
at their constrained values and the remaining (n − k) (with (k + 1) ≤ ρ ≤ n) free
forces have arbitrary initial values (but small enough to ensure the linearization of
fluxes), we will see that the values of unconstrained forces will evolve in time until
a steady-state configuration is reached.8
If we could calculate the values of the unconstrained forces at the stationary state,
we would completely determine the final, stationary state, because the set of the n
forces would be completely defined.
This is possible if we can assume that the values of the phenomenological coefficients remain constant during the evolution from the state initially prepared and the
steady state that will be reached.
We have already seen that at the stationary state the (n − k) fluxes relative to the
unconstrained force have to be zero.
If we take into account the (n − k) equations in Eq. (15.99) with (k + 1) ≤ ρ ≤ n,
and require that the fluxes be zero, we obtain a linear system of (n − k) equations in
the (n − k) unknowns X ρ with (k + 1) ≤ ρ ≤ n.
The solution of this system gives us the (n − k) “free forces” as a function of the
k constrained forces X ρ with 1 ≤ ρ ≤ k and of all (in general) the coefficients L ρρ .
Thus the stationary configuration is uniquely determined (but remember that this
is true if all the assumptions about linear coefficients are verified).
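The determination of the stationary state described above can be sketched numerically. In the following illustration the matrix L, the values of n and k, and the constrained forces are all invented for the example: the free forces are obtained by solving the linear system that sets the free fluxes to zero, and the entropy production is then checked to be a conditioned minimum there.

```python
import numpy as np

# Sketch of the stationary-state determination: n = 4 forces, the first
# k = 2 constrained. All numerical values are illustrative.
rng = np.random.default_rng(0)
B = rng.normal(size=(4, 4))
L = B @ B.T + 4 * np.eye(4)       # symmetric, positive-definite L_{rho rho'}

k = 2
X_c = np.array([1.0, -0.5])       # constrained forces, fixed by the external world

# Stationary state: J_rho = sum_rho' L[rho, rho'] X[rho'] = 0 for the free rho.
# Split L into blocks and solve  L_ff X_f = -L_fc X_c.
L_ff, L_fc = L[k:, k:], L[k:, :k]
X_f = np.linalg.solve(L_ff, -L_fc @ X_c)

X = np.concatenate([X_c, X_f])
J = L @ X
print(np.round(J[k:], 10))        # free fluxes vanish at the stationary state

# The entropy production P = X^T L X is a conditioned minimum there:
def P(free):
    x = np.concatenate([X_c, free])
    return x @ L @ x

for _ in range(200):
    trial = X_f + rng.normal(scale=0.1, size=2)
    assert P(trial) >= P(X_f) - 1e-12
```

Under the stated assumptions no nearby choice of the free forces gives a smaller P, in agreement with the minimum-entropy-production theorem.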
15.2.3 Stability of Stationary States and the Principles of Le
Chatelier and of Le Chatelier–Braun
Consider the situation examined in the previous subsections, i.e., that of a discontinuous system in a stationary state. As we have already proved this is a configuration
of minimum entropy production consistent with the imposed constraints that is, in
our case, the values of the first k forces.
8 This statement will be proved in Sect. 15.2.3.
Let us denote with $J_\rho^0$ and $X_\rho^0$ the values of the fluxes and of the forces in this stationary configuration.
As we have seen in Sect. 15.2.2, the remaining (n − k) forces can be calculated while, as regards the fluxes, the first k are determined by the interactions with the external world, the remaining (n − k) being null.
Suppose that, due to some internal fluctuation or to some perturbation produced
either by some external noise or intentionally by the observer, a small change in the
value of one of the non-constrained forces takes place.
Denote by δX_η this small change in the value of the force X_η, η being a particular value in the interval (k + 1) ≤ η ≤ n. The new set of forces will be
$$X_\rho = X_\rho^0 \qquad \text{for } \rho \ne \eta,$$
$$X_\eta = X_\eta^0 + \delta X_\eta.$$
Also the fluxes will suffer alterations and, in particular, for the flux J_η, the use of Eq. (15.99) will provide the value
$$J_\eta = J_\eta^0 + L_{\eta\eta}\, \delta X_\eta$$
and, since $J_\eta^0 = 0$ at the stationary state,
$$J_\eta = L_{\eta\eta}\, \delta X_\eta. \tag{15.106}$$
Since we know that L_ηη > 0, Eq. (15.106) shows that the value of the perturbation of the force and the flux that this perturbation generates always have the same sign:
$$J_\eta\, \delta X_\eta > 0. \tag{15.107}$$
This means that the flux which is activated, and which is directly connected to the perturbation, always works in the direction of decreasing the intensity of the disturbance.
For instance, if the perturbation results in a temperature difference, the heat flow that is triggered is in the direction of decreasing the temperature difference itself, as we can see if we remember that the force is equal to
$$X = \Delta\!\left(\frac{1}{T}\right) \simeq -\frac{\Delta T}{T^2}.$$
Similarly, the fluxes of matter are in the direction of decreasing the gradients of chemical potential and, hence, the concentration gradients.
The expression in Eq. (15.107) provides an important indication: it seems that stationary
states are stable configurations as the deviations that may arise in the value of one
free force seem to be reabsorbed by internal processes.
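A minimal numerical sketch of this reabsorption follows. Note that the relaxation law dδX/dt = −cJ used here is an assumption introduced only for the illustration; the text establishes only the sign relation of Eq. (15.107).

```python
# Toy model of the reabsorption of a perturbation delta_X of one free force.
# The relaxation law dX/dt = -c*J is an illustrative assumption; the text
# only guarantees J*delta_X > 0 (Eq. 15.107) because L_eta_eta > 0.
L_eta = 0.8        # diagonal phenomenological coefficient, positive
c = 1.0            # assumed coefficient linking the flux to the force decay
dX, dt = 1.0, 0.01
for _ in range(1000):
    J = L_eta * dX             # flux excited by the perturbation
    assert J * dX >= 0.0       # same sign as the perturbation, Eq. (15.107)
    dX -= c * J * dt           # the flux works to reduce the disturbance
print(dX)                      # decayed to a small fraction of the initial value
```

With a positive diagonal coefficient the perturbation decays monotonically, which is the single-degree-of-freedom content of Le Chatelier's principle.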
Le Chatelier Principle (1884)
Le Chatelier's principle was formulated in relation to equilibrium states. It states that if, in a system in a state of equilibrium, an inhomogeneity in a uniform extensive quantity is produced for any reason, internal processes are excited that tend to reabsorb the inhomogeneity and to bring the system back to the original equilibrium state. This inhomogeneity may be, for instance, in the density, concentration, energy density, and so on.
The principle of Le Chatelier was formulated on an empirical basis but now it finds
an explanation within the theory of Thermodynamics as it has evolved to the present
day. Two points should be emphasized. The first is that the Le Chatelier principle is substantially equivalent to the stability criteria of equilibrium states already examined in Sect. 4.3.6, which lead to the positivity of the specific heat and of the coefficients of isothermal and adiabatic compressibility.
The second point is that the Le Chatelier principle, which was conceived for equilibrium states, appears to be just a particular case of a more general law regarding
stationary states, i.e., thermodynamical configurations possessing a (local) minimum
for the entropy production. As will be proved in Sect. 15.2.3 the internal processes
will steer the system toward states with lower entropy production (under certain
conditions). If we start from a configuration of minimum entropy production the
perturbation will be reabsorbed and the initial configuration shall be restored.
In conclusion, we see that the development of thermodynamics gives a satisfactory
explanation to an old empirical law.
Le Chatelier–Braun Principle
Le Chatelier’s principle finds its justification in the theory of irreversible processes as
we saw obtaining Eq. (15.107), but this constitutes only a "partial result". Even within the limits of validity that we have recalled, it refers only to the case of a system with a single degree of freedom (n = 1), because only in this case can it ensure that the disturbance is reabsorbed by the flux that was excited.
In the general case, more degrees of freedom are present and the problem is more
complicated. The fact remains that the effect produced by the perturbation δ X η on
the flux linked to it is governed by Eq. (15.107) but this does not guarantee that the
disturbance is reabsorbed. For a full explanation of the principle of Le Chatelier we should also look at the changes that the same disturbance δX_η produces on the other fluxes.
In this more general view, Le Chatelier’s principle is modified into Le Chatelier–
Braun principle, which generalizes the former to the fluxes that are not directly coupled to the occurring perturbation but that can be excited as side effects.
In the more rigorous language of thermodynamics, we will refer to the flows that are coupled to the force δX_η through non-diagonal coefficients of the phenomenological matrix. For example, a variation in the pressure difference may produce a "volume flow" (the flow directly connected to the pressure variation), but it can also produce a variation in the temperature, and the latter can excite a flow of heat. According to the principle of Le Chatelier–Braun, in systems with several degrees of freedom all the fluxes excited by one perturbation cooperate to reabsorb it.
A useful discussion of Le Chatelier–Braun principle can be found in [4]. Here,
we want to take the opportunity to introduce an important theorem which provides
an important evolutionary criterion. It is within the latter that both the principle of Le Chatelier and that of Le Chatelier–Braun find their full explanation.
Time Derivative of Entropy Production
We have seen that, given a discontinuous system described by n degrees of freedom
(the values of n forces mutually independent), if we constrain k out of these n forces
and for the remaining (n − k) the system is closed toward the outside, then the steady
state which is realized is characterized by a minimum of entropy production.
It can be shown that if, after the constraints on the first k forces have been fixed
and the system is initially prepared in a generic configuration, it will evolve in such
a way that
$$\frac{dP}{dt} < 0. \tag{15.109}$$
This implies the generalization of the stability properties that we have discussed in
Sect. 15.2.3 limited to the case in which the perturbation, for a system already in a
stationary state, concerns the value of one free force.
This theorem establishes an overall evolutionary criterion, that is, one formulated without going into the details of what happens to individual fluxes when they are modified by some changes in the individual forces.
For this reason, it is appropriate to state that Eq. (15.109) includes the principle
of Le Chatelier–Braun in a wider perspective.
The validity of this theorem is based on the same assumptions that we have seen
previously, namely:
1. Linear relations between fluxes and forces;
2. Symmetry of the phenomenological coefficients;
3. The coefficients L ρρ are constant in time.
The proof is based on the properties of the matrix which defines the bilinear form
that expresses the entropy production and that will be examined in Sect. 15.3.1. For
a detailed discussion of Eq. (15.109) see de Groot–Mazur [23] or de Groot [24].
The result of their demonstration is as follows: under the assumptions of linearity, symmetry and constancy in time of the phenomenological coefficients it can be shown that
$$\frac{dP}{dt} = -\sum_{\rho,\rho'=k+1}^{n} g^{-1}_{\rho\rho'}\, \frac{dx_\rho}{dt}\, \frac{dx_{\rho'}}{dt} \le 0, \tag{15.110}$$
in which $g^{-1}_{\rho\rho'}$ is the inverse of the matrix $g_{\rho\rho'}$ of the positive-definite quadratic form, as it appears in Eq. (15.153). The expression in Eq. (15.110) expresses the time derivative of the entropy production as a quadratic form in the variables $\dot{x}_\rho$ and $\dot{x}_{\rho'}$ and, no matter how they vary in time, the entropy production always decreases until it reaches its minimum value. In this sense, the principle of Le Chatelier–Braun is immersed in a more general formulation.
It is clear that the theorem in Eq. (15.109) has as a consequence that stationary states
are stable configurations. Indeed if some perturbations occur (obviously consistent
with the imposed constraints) the triggered internal processes (all together) tend to
bring the configuration toward the one with minimum entropy production.
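The evolutionary criterion can be visualized with a small numerical sketch. The relaxation law dX_free/dt = −J_free is an assumed dynamics chosen for illustration; what the sketch checks is that, along such an evolution with L symmetric, positive definite and constant in time, P never increases and the motion ends at the minimum-P stationary state.

```python
import numpy as np

# Sketch of dP/dt < 0 (Eq. 15.109). The relaxation law dX_free/dt = -J_free
# is an illustrative assumption consistent with the fluxes reabsorbing
# deviations; L is symmetric, positive definite and constant, as required.
rng = np.random.default_rng(1)
B = rng.normal(size=(3, 3))
L = B @ B.T + 3 * np.eye(3)

k = 1
X = np.array([1.0, 0.7, -0.4])       # X[0] constrained, X[1:] free and arbitrary
P_prev = X @ L @ X
for _ in range(2000):
    J = L @ X
    X[k:] -= 0.01 * J[k:]            # only the free forces evolve
    P_now = X @ L @ X
    assert P_now <= P_prev + 1e-12   # entropy production never increases
    P_prev = P_now
print(np.round((L @ X)[k:], 6))      # free fluxes ~0: minimum-P state reached
```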
One Simple Example: The Prigogine and Wiame Model
Prigogine and Wiame [25] have developed a simplified thermodynamic model of a
biological system.
The model, briefly sketched in Fig. 15.2, considers an open system that absorbs
from outside a substance M, which is processed through a series of consecutive
chemical reactions to produce a final substance F which is returned outside. The
external environment is considered at a constant temperature and such as to keep at
constant values the concentrations of all the substances present.
Let us denote with the superscript I and II respectively the values of the properties
in the system and in the external world and suppose that, within the system, we have
k consecutive chemical reactions that we indicate with an index ρ with 1 ≤ ρ ≤ k.
All together we have k + 2 processes taking place and more precisely k
chemical reactions and 2 transport phenomena. We describe the processes with the
following formalism:
Fig. 15.2 An open system, denoted by the superscript I, absorbs from the outside world (denoted
by the superscript II) a substance indicated by M. This substance is “metabolized” within system
I through a sequence of consecutive chemical reactions until the final product F is produced. Then
the substance F is transferred to the external world. In the external world pressure, temperature and
the concentrations are kept constant
$$M^{II} \to M^{I},$$
$$M^{I} \to N^{I},$$
$$N^{I} \to O^{I},$$
$$\ldots \to \ldots,$$
$$R^{I} \to F^{I},$$
$$F^{I} \to F^{II}.$$
For each of these k + 2 processes, which are expressed here with the formalism used
for “chemical reactions”, we define the degree of advancement whose time derivative
defines the rate (the generalized flux) of the process. Let us denote with vρ the rates
of the k chemical reactions, with J M the flux of the transfer process M II → M I and
with J F the flux of the other transfer process F I → F II . The flux J M is defined as
positive when component M is transferred from outside to the system while the flux
J F is set as positive when component F is transported from inside the system to the
external world. The state variables n γ are the numbers of moles of each component
γ within the system. The production of entropy will be written as
$$T P = A_M J_M + \sum_{\rho=1}^{k} A_\rho v_\rho + A_F J_F,$$
where A M and A F are the affinities of the transport phenomena and Aρ the affinities
of the k chemical reactions. The variations per unit time of the state variables can be
written as a function of the rates of the different processes:
$$\frac{dn_M}{dt} = J_M - v_1,$$
$$\frac{dn_N}{dt} = v_1 - v_2,$$
$$\vdots$$
$$\frac{dn_F}{dt} = v_k - J_F.$$
At the stationary state, i.e., for dn_γ/dt = 0, we will have
$$J_M = v_1 = v_2 = \ldots = v_k = J_F. \tag{15.122}$$
If we denote with v the common value of the rates in Eq. (15.122), the entropy production becomes
$$T P = \left(A_M + \sum_{\rho=1}^{k} A_\rho + A_F\right) v > 0.$$
The situation that is determined at the steady state is the following: the state of
system I does not change (stationary state) and the k + 2 processes are equivalent
to one irreversible process operating within the external world and it can be treated,
formally, as one "chemical reaction":
$$M^{II} \to F^{II} \tag{15.124}$$
while the entropy production is
$$T P = A\, v,$$
and comparing the last two expressions, we see that the affinity of the overall process Eq. (15.124) proves to be the sum of the affinities of the k + 2 processes:
$$A = A_M + \sum_{\rho=1}^{k} A_\rho + A_F. \tag{15.126}$$
The affinity of the overall process Eq. (15.124) may be written as
$$A = \mu_M^{II} - \mu_F^{II}$$
and in many instances the chemical potential can be expressed in a form that makes the concentration dependence explicit:
$$\mu = \eta + RT \ln C,$$
so that the affinity of the overall process may be written in the form
$$A = RT \ln \left[ K(T)\, C_M^{II} \left(C_F^{II}\right)^{-1} \right],$$
where K(T) collects the concentration-independent terms.
From this expression, it is clear that the condition of having constant temperature
and concentrations in the external world has the consequence that the generalized
force of the overall process is fixed.
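The approach to the stationary condition of Eq. (15.122) can be illustrated with a toy kinetic model. The linear rate laws, the rate constants, and the shortened chain (only two internal reactions) are assumptions made for the sketch, not part of the model in the text.

```python
# Illustrative kinetics for the chain M^II -> M^I -> N -> F^I -> F^II with
# k = 2 internal reactions. Linear rate laws and all constants are assumed.
k0, k1, k2, k3 = 1.0, 0.6, 0.8, 1.2   # transport and reaction coefficients
C_M_ext = 2.0                          # external concentration, held constant
nM = nN = nF = 0.0                     # moles inside the system
dt = 0.01
for _ in range(20_000):                # integrate to the steady state
    JM = k0 * (C_M_ext - nM)           # transport M^II -> M^I
    v1 = k1 * nM                       # reaction M -> N
    v2 = k2 * nN                       # reaction N -> F
    JF = k3 * nF                       # transport F^I -> F^II
    nM += (JM - v1) * dt
    nN += (v1 - v2) * dt
    nF += (v2 - JF) * dt
rates = [k0 * (C_M_ext - nM), k1 * nM, k2 * nN, k3 * nF]
print([round(r, 4) for r in rates])    # J_M = v_1 = v_2 = J_F, all ~0.75
```

Whatever the (positive) rate constants, the fixed point of these equations satisfies Eq. (15.122): every process in the chain proceeds at the same rate.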
Let us now reverse the perspective and write the expression of the entropy production for a generic configuration, provided that we are allowed to use linear relations between fluxes and forces. We shall obtain a quadratic form in the forces $A_\rho/T$:
$$P = \sum_{\rho,\rho'=1}^{k+2} L_{\rho\rho'}\, \frac{A_\rho}{T}\, \frac{A_{\rho'}}{T}. \tag{15.130}$$
The problem we want to solve can be formulated in this way: we impose the constraint
that the sum of the affinities of the (k + 2) processes (see Eq. (15.126)) has a fixed
value and ask for which thermodynamical configurations the entropy production
given in Eq. (15.130) has an extremum value consistent with the imposed constraint.
We can solve the problem by using the method of Lagrange multipliers, and this is equivalent to finding the extrema of the unconditioned expression
$$\sum_{\rho,\rho'=1}^{k+2} L_{\rho\rho'}\, \frac{A_\rho}{T}\, \frac{A_{\rho'}}{T} - 2\lambda \sum_{\rho=1}^{k+2} \frac{A_\rho}{T};$$
in other words we require that, for the above expression,
$$\frac{\partial}{\partial (A_\rho/T)} \left[\, \sum_{\rho',\rho''=1}^{k+2} L_{\rho'\rho''}\, \frac{A_{\rho'}}{T}\, \frac{A_{\rho''}}{T} - 2\lambda \sum_{\rho'=1}^{k+2} \frac{A_{\rho'}}{T} \right] = 0.$$
Making use of the Onsager reciprocity Eq. (14.128), we are led to
$$2 \sum_{\rho'=1}^{k+2} L_{\rho\rho'}\, \frac{A_{\rho'}}{T} - 2\lambda = 0.$$
Then we find that, for each one of the k + 2 processes, the rate has one common value λ:
$$v_\rho = \lambda \qquad \rho = 1, 2, \ldots, (k+2),$$
and this is the condition of the stationary configuration.
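The Lagrange-multiplier computation can be verified numerically; the phenomenological matrix and the fixed total affinity below are invented for the example.

```python
import numpy as np

# Check of the Lagrange-multiplier result: minimizing the quadratic form P
# (Eq. 15.130) at fixed total affinity makes all k+2 rates equal to the
# multiplier lambda. L is illustrative, symmetric, positive definite.
rng = np.random.default_rng(2)
B = rng.normal(size=(4, 4))
L = B @ B.T + 4 * np.eye(4)          # k + 2 = 4 processes

total = 2.0                          # fixed value of sum_rho A_rho / T
ones = np.ones(4)
lam = total / (ones @ np.linalg.solve(L, ones))   # from the constraint
x = lam * np.linalg.solve(L, ones)   # forces A_rho/T at the extremum
v = L @ x                            # rates v_rho = sum_rho' L x
print(np.round(v, 10))               # all components equal lambda
```

Setting the gradient to zero gives x = λ L⁻¹·1, the constraint fixes λ, and the rates v = Lx all collapse to the common value λ, exactly the stationary condition derived above.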
15.3 Fluctuations
In 1826, the botanist Robert Brown described, officially for the first time, a very curious phenomenon which he witnessed regularly when he observed, under the microscope, small colloidal particles, pollen or simply dust particles suspended in liquids or gases. He observed zig-zag movements, completely random and of short duration, whose origin remained, for many decades, a mystery. Indeed it was not understood how, in a state of stable equilibrium, i.e., in the perfectly homogeneous configuration in which the liquid or the gas were expected to be, such rapid and disorderly interactions with the pollen particles could develop. The clarification of the phenomenon was given by A. Einstein in one of the four articles he
published in the memorable year 1905 and is based on the recognition that perfectly
homogeneous configurations that characterize the states of stable equilibrium, are
“average configurations” that appear uniform and static to an observer (which we
called “macroscopic observer”) when he is observing with relatively low spatial and
temporal resolution instruments. However, when the observation relies on instruments with higher spatial and temporal resolution, the static and homogeneous configurations expected in an equilibrium state turn out to be rapidly changing, both spatially and temporally, on a small scale.
The reader must not be tempted to automatically connect this type of observation to the atomic-molecular structure of matter. We are still dealing with macroscopic observations; rather, we could say that two types of macroscopic observer emerge, differing in the high or low spatial and temporal resolution of their instruments of observation. Nevertheless, the discovery of such new phenomena, on a small macroscopic scale, forces us to substantially revise the fundamental concepts of thermodynamics, starting from the concept of equilibrium state.
The equilibrium state that we represent in the space of thermodynamical configurations will no longer correspond to the observations performed by an Accurate Macroscopic Observer (AMO) but will represent the average point around which a large number of observations by the AMO will condense with a certain probability distribution. The whole thermodynamics of equilibrium states will keep unchanged
its validity but with the caveat that we are referring to average states from the point
of view of an accurate observer or to just “the state” from the point of view of a
low-accuracy macroscopic observer (LMO).9
Another comment concerns the Second Principle. As we have seen, it classifies as natural or unnatural the changes of state occurring between equilibrium states in the sense of an LMO, that is, between average states in the sense of an AMO; and when we speak of infinitesimal transformations we refer to extrapolations to zero of transformations between equilibrium states that get gradually closer and closer.
This should not be confused with the changes of state, also considered infinitesimal, observed by an AMO on the scale that is proper to him.
It should be understood, however, that this double figure of Macroscopic Observer
contains a potential conflict between LMO and AMO as regards the meaning of the
Second Principle and, therefore, the very foundation of Thermodynamics. If we think
of the latter in its original meaning of “impossibility of achieving perpetual motion”,
we see that the theory developed by the macroscopic observer (in the version of
LMO) and based on the definition of entropy, does not protect us from the risk of
using observations on a “small scale” as a possible way of getting perpetual motion.
This paradox known as the problem of the “Maxwell Demon” is the essential example
of how AMO may take advantage from his accurate observations.
This conflict has been solved with the formulation of "Landauer's principle" within the theory of information, which will be discussed in Chap. 17.
9 The reader should avoid the mistake of attributing to the classifications "accurate" and "less accurate" any hierarchy of merit. It is not true that to be more accurate (in this scientific context) is "better"
than being less accurate. These are two different perspectives that, of course, must be integrated
but which lead to different representations. By analogy, we can observe the details of the single
stones (fossils and minerals) but if we did not consider the shape, altitude and the distribution of
the mountain ranges, we would not have conceived the theory of continental collision and then of
plate tectonics.
It is clear that the first observations, reported in 1826 and universally known as "Brownian motions", reveal the existence of an inhomogeneous and rapidly variable structure under a static and homogeneous appearance, but all this happens well within that world which, in the first chapter, we have called the "macroscopic world". It is within classical physics that the theory of fluctuations was set by Einstein in 1905, when the bridge with quantum physics (i.e., with the atomic-molecular theory of matter) had yet to be built.
With this premise, it is also natural that a hinge between the macroscopic and microscopic observations finds a suitable tool in the development of Statistical Mechanics and, in particular, in that part of the theory that seeks to explain the macroscopic observations as average results of microscopic phenomena. This is beyond the scope of this discussion and, in accordance with the views stated in the introduction of this book, we will discuss the issue from a purely macroscopic point of view, that is, without regard to any hypothesis about the discrete structure of matter.10
The explanation of Brownian motions as due to collisions that are symmetrical on a large scale but strongly asymmetrical on a small scale leads us to consider all the state parameters as subject to rapid random changes, on a small scale, that will be called random fluctuations.
If we start considering the existence of fluctuations in the distribution of matter it
is natural to expect that all of the extensive quantities, which represent the state from
the thermodynamic point of view, are subject to rapid variations (fluctuations) in a
manner which depends on the nature of the constraints. Consequently, even intensive
variables such as density, pressure, and temperature will be subject to fluctuations.
For example, if we consider a system in thermal contact with a thermostat (say the
external environment) we can see, with accurate observations, that also the energy
contained in the system is subject to rapid changes around the equilibrium value.
Similarly, if the system is constituted by a cylinder confined by a piston free to move
against a constant pressure (for example of the environment), we will notice that
its position is subject to small variations and hence also the volume of the system
fluctuates around a mean value.
If, then, the piston is diathermic, we will see fluctuations both in the volume and in the energy. These fluctuations may be correlated or uncorrelated with each other.
Similarly, we can imagine that a chemical reaction at equilibrium is such only on average, but that a detailed observation would show fluctuations in the degree of advancement around the equilibrium value, that is, rapid and small variations of the concentrations of the components around the respective equilibrium values.
We will say that “the state of the system fluctuates around a reference equilibrium
state” meaning with this expression the fact that the state parameters, both intensive
and extensive, are subject to rapid and random variations around their equilibrium
values and these rapid variations are called fluctuations.
Returning to the description of the thermodynamic state by determining the extensive parameters (therefore considering intensive variables as derived quantities), if we denote with ξ a generic extensive state parameter and with $\xi^e$ its value corresponding to the equilibrium state, we will define as fluctuation of the state parameter ξ at the instant t the quantity:
$$\alpha(t) = \xi(t) - \xi^e.$$
10 Indeed the presence of fluctuations can be conceived equally well in a continuum.
More generally, if we assume that the system under consideration is defined by n
independent state parameters ξρ with 1 ≤ ρ ≤ n we consider the fluctuation of each
state parameter at time t:
αρ (t) = ξρ (t) − ξρe ,
where the functions αρ (t) are rapidly varying functions of time with zero mean value.
By rapidly varying we mean that changes take place in a timescale which is short
if compared to the timescale characterizing the macroscopic observer in its LMO
version but still very long compared to the timescale of the microscopic phenomena.
Given a state of equilibrium, the mean value of any state variable of the system
will be denoted with the symbol between brackets. For instance, the mean value of the ρth fluctuation will be
$$\langle \alpha_\rho(t) \rangle = 0.$$
The mean value is zero by definition of equilibrium state, but we shall have also
$$\langle \alpha_\rho^2 \rangle \ge 0.$$
The mean value of any function of the time, say g(t), is defined by the relation:
$$\langle g \rangle = \lim_{T \to \infty} \frac{1}{2T} \int_{-T}^{+T} g(t)\, dt.$$
15.3.1 Theory of Fluctuations in an Isolated System
If we denote by S_0 the value of the entropy of the system in the equilibrium state, and with S(α) the value of the entropy in the time instant in which a fluctuation α_ρ of the state parameters ξ_ρ (1 ≤ ρ ≤ n) occurs, we may write the series expansion:
$$S(\alpha) = S_0 + \sum_{\rho=1}^{n} \left(\frac{\partial S}{\partial \xi_\rho}\right)_{eq} \alpha_\rho + \frac{1}{2} \sum_{\rho,\rho'=1}^{n} \left(\frac{\partial^2 S}{\partial \xi_\rho\, \partial \xi_{\rho'}}\right)_{eq} \alpha_\rho \alpha_{\rho'} + \cdots$$
Since we are dealing with fluctuations around an equilibrium state it will be
$$\left(\frac{\partial S}{\partial \xi_\rho}\right)_{eq} = 0,$$
and hence:
$$S(\alpha) = S_0 + \frac{1}{2} \sum_{\rho,\rho'=1}^{n} \left(\frac{\partial^2 S}{\partial \xi_\rho\, \partial \xi_{\rho'}}\right)_{eq} \alpha_\rho \alpha_{\rho'} + \cdots \tag{15.142}$$
Let us define the matrix:
$$g_{\rho\rho'} = -\left(\frac{\partial^2 S}{\partial \xi_\rho\, \partial \xi_{\rho'}}\right)_{eq}; \tag{15.143}$$
then Eq. (15.142) may be written as
$$S(\alpha) = S_0 - \frac{1}{2} \sum_{\rho,\rho'=1}^{n} g_{\rho\rho'}\, \alpha_\rho \alpha_{\rho'}, \tag{15.144}$$
where $g_{\rho\rho'}$ is the matrix of a positive-definite quadratic form.11 Notice, also, that $g_{\rho\rho'}$ is a symmetric matrix:
$$g_{\rho\rho'} = g_{\rho'\rho}.$$
We now introduce, explicitly, an additional hypothesis: we treat the temporal evolution of the fluctuations with the same formalism with which we describe, from a macroscopic viewpoint, irreversible processes. With this in mind, the interactions between parts in internal equilibrium (those interactions which we describe as irreversible processes) produce entropy at the rate (recall Sect. 14.7) $P = \sum_\rho J_\rho X_\rho$. From the latter relation we find that the entropy variation in the time interval dt will be
$$\hat{d}_i S = \sum_{\rho=1}^{n} X_\rho\, d\xi_\rho. \tag{15.146}$$
We assume, therefore, that the same description can be adopted when we consider the change (infinitesimal transformation) of a fluctuating system from the configuration
$$\{\alpha_1, \alpha_2, \ldots, \alpha_n\}$$
to the nearby configuration
$$\{(\alpha_1 + d\alpha_1), (\alpha_2 + d\alpha_2), \ldots, (\alpha_n + d\alpha_n)\}.$$
In addition we consider small fluctuations around the equilibrium state, so that also the forces X_ρ can be expressed, with sufficient accuracy, as linear functions of the fluctuations:
$$X_\rho(\alpha_1, \alpha_2, \ldots, \alpha_n) = X_\rho(0, 0, \ldots, 0) + \sum_{\rho'=1}^{n} \left(\frac{\partial X_\rho}{\partial \alpha_{\rho'}}\right)_{eq} \alpha_{\rho'},$$
11 We are dealing with deviations of the state around a state of equilibrium in an isolated system. In this case the entropy is maximum in the equilibrium configuration.
with $X_\rho(0, 0, \ldots, 0) = 0$ for every ρ, because the configuration $\{0, 0, \ldots, 0\}$ corresponds to the equilibrium state. Since we are considering a thermally isolated system, $\hat{d}_e S = 0$ and $d\xi_\rho = d\alpha_\rho$, so that Eq. (15.146) may be written in the form
$$dS = \sum_{\rho,\rho'=1}^{n} \left(\frac{\partial X_\rho}{\partial \alpha_{\rho'}}\right)_{eq} \alpha_{\rho'}\, d\alpha_\rho. \tag{15.148}$$
Since the n variables are mutually independent, the differential form Eq. (15.148) can be easily integrated from $\{0, 0, \ldots, 0\}$ to $\{\alpha_1, \alpha_2, \ldots, \alpha_n\}$ and we obtain
$$S(\alpha_1, \alpha_2, \ldots, \alpha_n) = S(0, 0, \ldots, 0) + \frac{1}{2} \sum_{\rho,\rho'=1}^{n} \left(\frac{\partial X_\rho}{\partial \alpha_{\rho'}}\right)_{eq} \alpha_\rho \alpha_{\rho'} \tag{15.149}$$
and, if we compare the latter with Eq. (15.142), we obtain the identity
$$\left(\frac{\partial X_\rho}{\partial \alpha_{\rho'}}\right)_{eq} = \left(\frac{\partial^2 S}{\partial \xi_\rho\, \partial \xi_{\rho'}}\right)_{eq}$$
for every ρ and ρ', and then we have
$$X_\rho = \frac{\partial S}{\partial \alpha_\rho}. \tag{15.151}$$
From this relation we may write the expression for the force and for the amplitude of the fluctuations as a function of the fluctuation matrix Eq. (15.143). If we take the derivative of Eq. (15.144), we obtain
$$X_\rho = -\sum_{\rho'=1}^{n} g_{\rho\rho'}\, \alpha_{\rho'}. \tag{15.152}$$
This relation can be reversed to give
$$\alpha_\rho = -\sum_{\rho'=1}^{n} g^{-1}_{\rho\rho'}\, X_{\rho'}, \tag{15.153}$$
where $g^{-1}_{\rho\rho'}$ is the inverse matrix of $g_{\rho\rho'}$.
15.3.2 Fluctuations Distribution Function
Remaining in the case of thermally isolated systems, we can derive the distribution function for the fluctuations, which is defined in this way: the infinitesimal probability dP that, in a system at equilibrium, the fluctuations of the state parameters lie within the infinitesimal intervals $[\alpha_\rho;\, \alpha_\rho + d\alpha_\rho]$ with 1 ≤ ρ ≤ n, is
$$dP = P(\alpha_1, \alpha_2, \ldots, \alpha_n)\, d\alpha_1\, d\alpha_2 \ldots d\alpha_n.$$
According to Einstein's theory of fluctuations the fluctuation distribution function is related to the entropy as follows:
$$dP \propto \exp\left(\frac{\Delta_i S}{k_B}\right) d\alpha_1\, d\alpha_2 \ldots d\alpha_n;$$
hence, the distribution function is
$$P = \frac{\exp\left(\Delta_i S / k_B\right)}{\displaystyle\int_{-\infty}^{+\infty} \exp\left(\Delta_i S / k_B\right) d\alpha_1\, d\alpha_2 \ldots d\alpha_n} \tag{15.156}$$
because of the normalization condition. Here $\Delta_i S$ is
$$\Delta_i S = S(\alpha_1, \alpha_2, \ldots, \alpha_n) - S(0, 0, \ldots, 0).$$
As we have previously discussed, $\Delta_i S < 0$, and the exponential dependence justifies the approximation of considering only small fluctuations.
15.3.3 Mean Values and Correlations
Now that we have established the form of the distribution function of the fluctuations,
we can calculate the mean values of all the macroscopic quantities of interest. In
particular, it will be interesting to calculate the mean value of a fluctuation, the mean
value of the decrease in entropy due to fluctuations and the correlation between a
fluctuation and a generalized force. Let us start with the latter term and, as will be shown below, all the other mean values will be obtained from it using Eqs. (15.152) and (15.153):
$$\langle \alpha_\rho X_{\rho'} \rangle = \int \alpha_\rho X_{\rho'}\, P\, d\alpha_1\, d\alpha_2 \ldots d\alpha_n.$$
From Eqs. (15.156) and (15.151) we have immediately
$$X_\rho = k_B\, \frac{1}{P}\, \frac{\partial P}{\partial \alpha_\rho}$$
and then the required mean value is
$$\langle \alpha_\rho X_{\rho'} \rangle = k_B \int \alpha_\rho\, \frac{\partial P}{\partial \alpha_{\rho'}}\, d\alpha_1\, d\alpha_2 \ldots d\alpha_n.$$
This integral may be calculated by parts, starting with the integration over the variable α_ρ':
$$\int \alpha_\rho\, \frac{\partial P}{\partial \alpha_{\rho'}}\, d\alpha_1\, d\alpha_2 \ldots d\alpha_n = \int d\alpha_1 \ldots d\alpha_{\rho'-1}\, d\alpha_{\rho'+1} \ldots d\alpha_n \int_{-\infty}^{+\infty} \alpha_\rho\, \frac{\partial P}{\partial \alpha_{\rho'}}\, d\alpha_{\rho'},$$
$$\int_{-\infty}^{+\infty} \alpha_\rho\, \frac{\partial P}{\partial \alpha_{\rho'}}\, d\alpha_{\rho'} = \Big[\alpha_\rho P\Big]_{-\infty}^{+\infty} - \int_{-\infty}^{+\infty} \frac{\partial \alpha_\rho}{\partial \alpha_{\rho'}}\, P\, d\alpha_{\rho'}.$$
The first term on the right side is zero because P goes rapidly to zero for high values of the fluctuations, and for the second term we have
$$\frac{\partial \alpha_\rho}{\partial \alpha_{\rho'}} = \delta_{\rho\rho'},$$
δ_{ρρ'} being the Kronecker delta symbol, due to the mutual independence of the two variables, and to the normalization of the distribution function. Then we have:
$$\langle \alpha_\rho X_{\rho'} \rangle = -k_B\, \delta_{\rho\rho'}. \tag{15.165}$$
Second Moments
From Eq. (15.165) we can calculate other mean values. Using Eq. (15.153) we can obtain the correlation between two fluctuations:
$$\langle \alpha_\rho \alpha_{\rho'} \rangle = \left\langle \alpha_\rho \left(-\sum_{\rho''=1}^{n} g^{-1}_{\rho'\rho''}\, X_{\rho''}\right)\right\rangle = -\sum_{\rho''=1}^{n} g^{-1}_{\rho'\rho''}\, \langle \alpha_\rho X_{\rho''} \rangle = k_B \sum_{\rho''=1}^{n} g^{-1}_{\rho'\rho''}\, \delta_{\rho\rho''} = k_B\, g^{-1}_{\rho'\rho}. \tag{15.169}$$
In particular the mean quadratic value is
$$\langle \alpha_\rho^2 \rangle = k_B\, g^{-1}_{\rho\rho}.$$
Average Entropy Decrease
We have denoted by $S_0 = S(0, 0, \ldots, 0)$ the value of the entropy in the equilibrium
state. If we imagine making an accurate observation on a short time scale, we
would not find the system in the equilibrium state $(0, 0, \ldots, 0)$ but in a state more
or less close to it (a fluctuation), to which a lower value of the entropy will correspond.
Let us calculate the mean value of the deficit of entropy caused by the fluctuations
of the state variables. If we go back to Eq. (15.144) and to Eq. (15.169) we find
$$ \langle S - S_0 \rangle = -\frac{n}{2}\, k_B \,. $$
This can be interpreted as a kind of equipartition theorem for the decrease in entropy:
all (mutually independent) degrees of freedom contribute to the average decrease in
entropy due to fluctuations by the same amount $k_B/2$.
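The quadratic-entropy results above lend themselves to a quick numerical check. The sketch below is an illustration of mine, not from the text: it samples Gaussian fluctuations distributed as $P \propto \exp(\Delta_i S / k_B)$ with $\Delta_i S = -\tfrac{1}{2}\,\alpha^{\mathsf T} g\, \alpha$, in units where $k_B = 1$, and verifies both $\langle \alpha_\rho \alpha_{\rho'} \rangle = k_B g^{-1}_{\rho\rho'}$ and $\langle S - S_0 \rangle = -n k_B / 2$.

```python
import numpy as np

# Units with k_B = 1; a fixed symmetric positive-definite "stiffness" matrix g.
kB = 1.0
g = np.array([[3.0, 0.5, 0.2],
              [0.5, 2.0, 0.3],
              [0.2, 0.3, 4.0]])
n = g.shape[0]

# P ∝ exp(Δ_i S / k_B) with Δ_i S = -½ αᵀ g α is a Gaussian with covariance k_B g⁻¹.
cov = kB * np.linalg.inv(g)
rng = np.random.default_rng(0)
alpha = rng.multivariate_normal(np.zeros(n), cov, size=400_000)

# Sampled second moments against the theoretical k_B g⁻¹.
C_sample = alpha.T @ alpha / len(alpha)
cov_ok = np.allclose(C_sample, cov, atol=0.02)

# Mean entropy deficit ⟨Δ_i S⟩ = -½ ⟨αᵀ g α⟩; the theorem predicts -n k_B / 2.
dS = -0.5 * np.einsum('ij,jk,ik->i', alpha, g, alpha)
mean_dS = dS.mean()
print(cov_ok, mean_dS)   # True, ≈ -1.5 for n = 3
```

Each of the three degrees of freedom contributes $k_B/2$ to the average entropy deficit, regardless of the entries of $g$.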
15.3.4 Onsager Relations and the Decay of Fluctuations in
Isolated Systems
In this subsection we want to study, in some detail, the dynamics of the decay of
fluctuations. Consider a thermally isolated system in a state of equilibrium, and
remember that the equilibrium state is characterized by a point of maximum entropy.

Fluctuations and Microscopic Reversibility
The term “microscopic reversibility” refers to the property of the laws of microscopic
physics to be invariant under inversion of the time axis. More precisely, we want to
allude to the fact that the laws of motion, at the microscopic level, are invariant under
the transformation t → −t.
This means, in a problem of point mechanics, that if, in some instant of time, we
operate the change of variable t → −t in the equations of motion, the velocity of
the point in that instant changes sign (v → −v) and we will see the point retrace the
same trajectory that it had traveled so far and with the same temporal law but in the
“reversed” direction, i.e., backwards.
This requires that the interaction forces between the point and the rest of the
universe, do not depend on the velocity of the particles and thus remain unchanged.
We know a situation in which this property of the forces seems not to be true: it is the
case of the Lorentz force for the part that concerns the interaction with the magnetic
field B. The invariance of the force under the inversion of the velocity is resumed if,
at the same time, we also reverse the direction of magnetic field.
This must not be understood as a mere expedient invented to save a symmetry
law but, on the contrary, it is intrinsic to the time reversal symmetry in a theoretical
context that does not provide for the existence of magnetic monopoles. The magnetic
field is not generated by charges but only by currents and so if all the velocities are
reversed also the magnetic field changes sign.
The particular evolution of the fluctuations in a system at equilibrium, in a classical
(non-quantum) context, is determined by the initial conditions and by the laws of
interaction between the microsystems. Two distinct macroscopic systems but in the
same macroscopic state of equilibrium, will show two sequences of fluctuations
certainly different from each other but with two fundamental properties in common:
(a) the mean value of the fluctuations of each state parameter will be zero; (b) the
distribution function of the fluctuations will be precisely the same and will be given
by Eq. (15.156).
A succession of fluctuations which has these two characteristics is called a “normal
sequence of fluctuations”. An infinite number of systems, “equal” from the macroscopic point of view, will all have different successions of fluctuations, but each one
will be a normal succession of fluctuations, and then all mean values will have the
same result.
The principle of symmetry under time reversal of the laws of microscopic physics,
applied to the thermodynamic context, means:
“If we consider a thermodynamic system in a state of equilibrium, the succession
of fluctuations that we would observe if, at a certain instant, we reversed the values
of the velocities of all the particles, would still be a normal sequence of fluctuations”.
Therefore, any mean value calculated along the development of the observed
fluctuations would have the same result as the one calculated on the succession of
fluctuations that we would observe after applying the time reversal operation t → −t.
For our purposes, this conclusion implies the following result: consider the time
evolutions of the fluctuations of two degrees of freedom, say $\alpha_\rho(t)$ and $\alpha_{\rho'}(t)$. We
want to calculate the correlation between the fluctuation of the parameter $\rho$ and the
fluctuation of the parameter $\rho'$, not at the same instant but taken with a constant delay $\tau$, with $\tau > 0$.
Formally we want to calculate:
$$ \langle \alpha_\rho(t)\, \alpha_{\rho'}(t+\tau) \rangle = \lim_{T \to \infty} \frac{1}{2T} \int_{-T}^{+T} \alpha_\rho(t)\, \alpha_{\rho'}(t+\tau)\, dt \,. $$
As a consequence of the ergodic hypothesis, any mean value, defined in the form
of time average in a very long (∞) time interval, may be equivalently calculated by
averaging the measurements of the quantity of interest observed in a large number
(∞) of “equal” systems and at the same instant. This implies that, in order to calculate
the above mean value, we observe, in many equal systems, the value of the $\rho$
parameter at the instant $t$ and the value of the $\rho'$ parameter $\tau$ seconds later.
Subsequently, with the same observation conditions, we measure the product of
the value of $\alpha_{\rho'}$ taken at the instant $t$ and the value of $\alpha_\rho$ taken $\tau$ seconds later.
In its expression as a time integral, the second mean value differs from the first only
in the temporal order of the observations of the two fluctuations. In other words, the
second mean value is obtained from the time integral which defines the first mean
value by operating the change of variable $t \to -t$.
In conclusion owing to the principle of symmetry under time reversal formulated
for a thermodynamical system, the two time averages must be equal:
$$ \langle \alpha_\rho(t)\, \alpha_{\rho'}(t+\tau) \rangle = \langle \alpha_{\rho'}(t)\, \alpha_\rho(t+\tau) \rangle \,. $$
If we subtract from both members the mean value of the product of the two fluctuations
taken at the same instant, take into account that the difference of two average values
is equal to the average value of the difference, and divide both sides by $\tau$, we obtain
$$ \frac{1}{\tau} \Big[ \langle \alpha_\rho(t)\, \alpha_{\rho'}(t+\tau) \rangle - \langle \alpha_\rho(t)\, \alpha_{\rho'}(t) \rangle \Big] = \frac{1}{\tau} \Big[ \langle \alpha_{\rho'}(t)\, \alpha_\rho(t+\tau) \rangle - \langle \alpha_{\rho'}(t)\, \alpha_\rho(t) \rangle \Big] \,, $$
that is,
$$ \Big\langle \alpha_\rho(t)\, \frac{\alpha_{\rho'}(t+\tau) - \alpha_{\rho'}(t)}{\tau} \Big\rangle = \Big\langle \alpha_{\rho'}(t)\, \frac{\alpha_\rho(t+\tau) - \alpha_\rho(t)}{\tau} \Big\rangle \,, $$
and passing to the limit of increasingly smaller values of $\tau$ we get
$$ \langle \alpha_\rho(t)\, \dot{\alpha}_{\rho'}(t) \rangle = \langle \alpha_{\rho'}(t)\, \dot{\alpha}_\rho(t) \rangle \,. $$
Fluctuation Decay in the Irreversible Processes Formalism
Let us now apply the Onsager theorization for the treatment of irreversible processes
to the study of the time evolution of fluctuations. We identify the time derivative of
a state variable with a generalized flux:
$$ J_\rho = \dot{\xi}_\rho = \dot{\alpha}_\rho \,, $$
and suppose that the deviations from equilibrium are small enough to be able to
express the fluxes as linear functions of the $n$ forces at play, namely:
$$ J_\rho = \sum_{\rho'=1}^{n} L_{\rho\rho'}\, X_{\rho'} \,. $$
Then if we use Eq. (15.177) and take into account the fact that the mean value of a
sum is the sum of the mean values and that the mean value of a quantity multiplied
by a constant is equal to the product of the constant times the mean value of the
quantity, we can write
$$ \sum_{\rho''=1}^{n} L_{\rho'\rho''}\, \langle \alpha_\rho\, X_{\rho''} \rangle = \sum_{\rho''=1}^{n} L_{\rho\rho''}\, \langle \alpha_{\rho'}\, X_{\rho''} \rangle \,, $$
and with reference to Eq. (15.165) we shall obtain
$$ \sum_{\rho''=1}^{n} L_{\rho'\rho''}\, \delta_{\rho\rho''} = \sum_{\rho''=1}^{n} L_{\rho\rho''}\, \delta_{\rho'\rho''} \,, $$
that is,
$$ L_{\rho'\rho} = L_{\rho\rho'} \,. $$
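The link between the reciprocal relations and the $\tau$-shifted correlations can be checked numerically. In the illustrative sketch below (mine, not from the text) the regression of fluctuations is taken as $\dot{\alpha} = -(Lg)\,\alpha$, so that $\langle \alpha(t)\,\alpha(t+\tau)^{\mathsf T} \rangle = C\, e^{-(Lg)^{\mathsf T}\tau}$ with stationary covariance $C = k_B g^{-1}$; this matrix is symmetric, as the time-reversal argument requires, exactly when $L$ is symmetric.

```python
import numpy as np
from scipy.linalg import expm

kB, tau = 1.0, 0.3
g = np.array([[4.0, 1.0],
              [1.0, 2.0]])           # entropy "stiffness" matrix (symmetric, pos. def.)
C = kB * np.linalg.inv(g)            # stationary covariance ⟨α αᵀ⟩ = k_B g⁻¹

def lagged_corr(L):
    """⟨α(t) α(t+τ)ᵀ⟩ for the linear regression law dα/dt = -(L g) α."""
    M = L @ g
    return C @ expm(-M.T * tau)

L_sym = np.array([[2.0, 1.0],
                  [1.0, 3.0]])       # obeys L_{ρρ'} = L_{ρ'ρ}
L_asym = np.array([[2.0, 1.5],
                   [0.5, 3.0]])      # violates reciprocity

C_sym = lagged_corr(L_sym)
C_asym = lagged_corr(L_asym)
sym_ok = np.allclose(C_sym, C_sym.T)     # symmetric L → symmetric lagged correlations
asym_ok = np.allclose(C_asym, C_asym.T)  # broken reciprocity → asymmetric correlations
print(sym_ok, asym_ok)  # True False
```

The symmetric case reproduces $\langle \alpha_\rho(t)\,\alpha_{\rho'}(t+\tau) \rangle = \langle \alpha_{\rho'}(t)\,\alpha_\rho(t+\tau) \rangle$ for every $\tau$, while the asymmetric $L$ breaks it already at first order in $\tau$.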
Making use of Einstein’s theory of fluctuations and of the invariance of the laws of
microscopic physics under time reversal, Onsager proved the symmetry property of
the linear phenomenological coefficients for the decay of fluctuations in systems in
a state of equilibrium.
The validity of this theorem is assumed also for systems that are maintained in
nonequilibrium configurations, as in the examples that we have examined up to this
point. In that case the symmetry property is not demonstrated but constitutes, to
a certain extent, a sort of “fourth postulate” of Thermodynamics; experimental
observations will show whether there are contradictions or not.
In the case where we consider irreversible processes in the presence of magnetic
fields, it is possible that the linear phenomenological coefficients (or some of them)
depend on the value of the applied magnetic field. In this case, the symmetry condition
must be applied in the form
$$ L_{\rho\rho'}(\mathbf{B}) = L_{\rho'\rho}(-\mathbf{B}) \,. $$
This is to be expected as was seen from the discussion on microscopic reversibility
but the formal proof is more complicated and will not be pursued here.
Chapter 16
Thermodynamics of Continua
Abstract The extension of the fundamental equations of Thermodynamics to a
description of macroscopic systems in terms of continuous state variables is developed. All the basic relations must be reformulated: mass conservation in the presence of chemical reactions, the equation of motion, and the equations for energy and for entropy, which express, in the new formalism, the First and the Second Principles, respectively. The correct expression for the entropy production and the
consequent expressions for the fluxes and the corresponding generalized forces are
obtained. In the linear regime, the general relation between the mobility of ionic
species and the coefficient of diffusion (Einstein relation) is demonstrated. The
thermoelectric phenomena (Seebeck, Peltier, and Thomson effects) are discussed
together with the thermodiffusion processes. An appendix concerning the Gibbs–
Duhem relation closes the Chapter.
Keywords Continuous systems · Entropy balance · Entropy production ·
Mechanical equilibrium · Mobility · Diffusion coefficient · Einstein relation ·
Thermoelectric phenomena · Seebeck effect · Thermoelectric power · Peltier
coefficient · Thomson effect · Galvanomagnetic and thermomagnetic effects ·
Thermodiffusion processes · Dufour effects · Gibbs–Duhem relation
16.1 Introduction
We have seen how, in many practical situations, it is useful to treat nonequilibrium
systems as a collection of two or more portions, each in a state of internal equilibrium
but not in equilibrium with each other, assuming that the contribution attributable
to the zones of transition between the various portions can be neglected in the
balance of the variations of the extensive variables.
We called this way of describing the processes the “discontinuous systems approximation”; its utility lies in being able to treat a variety of irreversible processes with
the same formalism developed for systems in equilibrium, applying the additivity
property of extensive thermodynamic potentials and cutting away the contribution
of the transition zones (boundaries).
© Springer Nature Switzerland AG 2019
A. Saggion et al., Thermodynamics, UNITEXT for Physics,
This cut-off is not always allowed, and so it is necessary to rewrite the basic
equations of thermodynamics to adapt the theory to the need of describing systems
from the “perspective of continua”.
We will call “continuous systems” the systems in which we will consider all
intensive state parameters defined point by point r in a certain instant t, and described
by continuous and differentiable functions:
$$ x = x(\mathbf{r}, t) \,. $$
For instance
$$ \rho(\mathbf{r}, t) = \frac{dm}{dV} \,, \qquad \rho_\gamma(\mathbf{r}, t) = \frac{dm_\gamma}{dV} \,, \qquad c_\gamma(\mathbf{r}, t) = \frac{dm_\gamma}{dm} $$
are, respectively, the density, the density of component $\gamma$ and the mass concentration
of component $\gamma$.¹
16.2 Definition of System
As we have seen in the study of irreversible processes in the discontinuous approximation, all the processes consist of the transfer of extensive quantities between
interacting systems.
Consider a volume V delimited by a closed surface Σ. This volume and this
surface define the system and its extensive properties are
$$ E(t) = \int_V e(\mathbf{r}, t)\, dV \,, $$
where $e(\mathbf{r}, t)$ is the density of $E$ and, in general, it will depend on the coordinates and time. Likewise
we can define:
$$ E(t) = \int_V \rho(\mathbf{r}, t)\, E^*(\mathbf{r}, t)\, dV \,, $$
¹ In general, intensive quantities such as pressure, temperature and the chemical potential
have been defined for systems in thermodynamic equilibrium; therefore the condition of Local
Thermodynamic Equilibrium (LTE) must be verified point by point, see Sect. 16.6.
where $E^*(\mathbf{r}, t)$ is the specific $E$, i.e., the amount of $E$ per unit mass. Obviously,
between the specific amount and its density the relation is
$$ e = \rho\, E^* \,. $$
The time variation of $E$ will be written as
$$ \frac{dE}{dt} = \int_V \frac{\partial e}{\partial t}\, dV \,. $$
This expression will always be brought to the following form in which it is expressed
as the sum of a volume integral plus a surface integral:
$$ \frac{dE}{dt} = \int_V \pi[E]\, dV - \oint_\Sigma \mathbf{J}[E] \cdot d\mathbf{\Sigma} \,, $$
with $d\mathbf{\Sigma} = d\Sigma\, \hat{\mathbf{n}}$, where $d\Sigma$ is the area of the surface element and $\hat{\mathbf{n}}$ is the unit vector
perpendicular to the surface at that point and oriented outwards. Writing the integral
in this general form, the term $\pi[E]$ will have the meaning of density of production of
the quantity $E$, i.e., it denotes the amount of $E$ produced per unit volume and per unit
time at the point $(\mathbf{r}, t)$ by processes occurring within the volume $V$. The vector $\mathbf{J}[E]$
is the flux density of the quantity $E$ through the surface $\Sigma$: with the outward normal
and the minus sign, it represents the amount of $E$ leaving the volume which defines
the system, per unit area and per unit time.
This is the term which correctly describes the interaction, as regards the exchange
of E, between the system and the external world. In this way we have formalized,
in this new context, the expression dE = d̂ i E + d̂ e E that we have postulated from
the beginning, for the change of any extensive quantity. Making use of the Gauss
dV =
π [E] dV −
∇ · J [E] dV
V ∂t
and hence:
= π [E] − ∇ · J [E] .
We define as conserved quantities those extensive quantities for which
$$ \pi[E] = 0 \,. $$
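The balance law $\partial e / \partial t = \pi[E] - \nabla \cdot \mathbf{J}[E]$ can be illustrated with a one-dimensional finite-volume sketch (my illustration, not the book’s): with $\pi[E] = 0$ and the flux written in conservation form on a periodic grid, the total amount of $E$ stays constant to rounding error, exactly as the definition of a conserved quantity demands.

```python
import numpy as np

# 1D periodic grid; E transported by a constant velocity, flux J = v e, no production.
N, Lbox, v = 100, 1.0, 0.7
dx = Lbox / N
dt = 0.5 * dx / v                       # CFL-stable time step
x = (np.arange(N) + 0.5) * dx
e = np.exp(-((x - 0.5) ** 2) / 0.01)    # initial density of E

total_before = e.sum() * dx
for _ in range(200):
    F = v * e                           # upwind flux through each cell's right face
    e -= dt / dx * (F - np.roll(F, 1))  # e_t = -(F_right - F_left)/dx, i.e. π[E] = 0
total_after = e.sum() * dx
print(abs(total_after - total_before))  # ≈ 0 (rounding error only)
```

Because each face flux appears once with a plus sign and once with a minus sign in neighbouring cells, the discrete sum telescopes: only the production term $\pi[E]$ could change the total.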
16.3 Mass Conservation
The mass of the system is written as
$$ m = \int_V \rho\, dV $$
and the conservation of mass is expressed by the condition:
$$ \pi[m] = 0 $$
and hence:
$$ \frac{\partial \rho}{\partial t} = -\nabla \cdot \mathbf{J}[m] = -\nabla \cdot (\rho \mathbf{v}) \,, $$
$\rho \mathbf{v}$ being the flux density of mass if we consider, for the moment, the presence of
only one component.
If there are various components which may have different velocities and take part
in chemical reactions, the conservation of mass is no longer valid for each component.
Let us suppose that n chemical components are present and are denoted by the index
$\gamma$. Let us denote by $\delta m_\gamma$ the small amount of mass of component $\gamma$ in the small
volume element $\delta V$ at a certain instant. If $\gamma$ takes part in one chemical reaction
within the volume element, the change of mass per unit time due to this internal
process is
$$ \frac{\hat{d}_i\, \delta m_\gamma}{dt} = \bar{\nu}_\gamma\, v \,, $$
where $\bar{\nu}_\gamma = \nu_\gamma M_\gamma$, $\nu_\gamma$ being the stoichiometric number (with sign) with which the
component $\gamma$ takes part in the chemical reaction, $M_\gamma$ its molecular weight, and $v = v(\mathbf{r}, t)$
the rate of the chemical reaction in the region occupied by the small volume
under consideration (remember that the rate of a chemical reaction is an extensive
quantity). The term $\pi[\rho_\gamma]$, describing the production of component $\gamma$ per unit
volume, can be written as
$$ \frac{\hat{d}_i \rho_\gamma}{dt} = \pi[\rho_\gamma] = \bar{\nu}_\gamma\, j_{ch} \,, $$
where $j_{ch}$ is the rate per unit volume of the chemical reaction. The continuity equation
for the component $\gamma$ becomes
$$ \frac{\partial \rho_\gamma}{\partial t} = -\nabla \cdot \left( \rho_\gamma \mathbf{v}_\gamma \right) + \bar{\nu}_\gamma\, j_{ch} \,, $$
where $\mathbf{v}_\gamma$ is the (macroscopic) velocity of the component $\gamma$ at that point and in that
instant.
of the individual components by the well known relation:
$$ \rho\, \mathbf{v} = \sum_{\gamma=1}^{n} \rho_\gamma\, \mathbf{v}_\gamma \,, $$
with $\rho = \sum_{\gamma=1}^{n} \rho_\gamma$. The conservation of mass for each chemical reaction is written
$$ \sum_{\gamma=1}^{n} \nu_\gamma\, M_\gamma = 0 \qquad \text{or} \qquad \sum_{\gamma=1}^{n} \bar{\nu}_\gamma = 0 \,, $$
and if we sum Eq. (16.20) over all components we obtain Eq. (16.17).
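The mass-conservation condition $\sum_\gamma \nu_\gamma M_\gamma = 0$ is easy to check for a concrete reaction. The sketch below is an illustration of mine, using textbook molar masses (not data from the source), for $2\,\mathrm{H_2} + \mathrm{O_2} \rightarrow 2\,\mathrm{H_2O}$.

```python
# Stoichiometric numbers (negative for reactants) and molar masses in g/mol
# for the reaction 2 H2 + O2 -> 2 H2O (illustrative values).
nu = {"H2": -2, "O2": -1, "H2O": +2}
M = {"H2": 2.016, "O2": 31.998, "H2O": 18.015}

# Σ ν̄_γ = Σ ν_γ M_γ must vanish: a chemical reaction redistributes mass among
# the components but neither creates nor destroys it.
nu_bar_sum = sum(nu[s] * M[s] for s in nu)
print(nu_bar_sum)  # ≈ 0 (up to rounding of the molar masses)
```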
In many applications, for example, for nonviscous fluids, the entropy production
is not dependent on the velocity of the whole fluid (i.e., the center of mass velocity),
but rather on the velocities of each component relative to the others.
It is, therefore, appropriate to define the fluxes of each individual component
relative to the motion of the center of mass:
$$ \mathbf{J}_\gamma = \rho_\gamma \left( \mathbf{v}_\gamma - \mathbf{v} \right) \,, \qquad \rho_\gamma\, \mathbf{v}_\gamma = \mathbf{J}_\gamma + \rho_\gamma\, \mathbf{v} \,. $$
Substituting in Eq. (16.20) we get
$$ \frac{\partial \rho_\gamma}{\partial t} + \mathbf{v} \cdot \nabla \rho_\gamma = -\nabla \cdot \mathbf{J}_\gamma - \rho_\gamma\, \nabla \cdot \mathbf{v} + \bar{\nu}_\gamma\, j_{ch} \,. $$
The Lagrangian or Substantial Derivative
Let us introduce the Lagrangian or substantial derivative operator:
$$ \frac{d}{dt} = \frac{\partial}{\partial t} + \mathbf{v} \cdot \nabla \,. $$
This operator, applied to any function of the space and time coordinates, expresses
the variation per unit time of that quantity as it would be measured by an observer
who moves with the barycentric velocity along a flow line of the fluid.
It is necessary to highlight one important point: the equations which will be
discussed in the following sections in order to formulate the fundamental equation
of thermodynamics for continuous systems, will all be written as seen by such an
observer (let us call him Lagrangian, or substantial, observer). Then the equation of
motion in Sect. 16.4, the equation for the energy balance in Sect. 16.5 and that for
entropy in Sect. 16.6 must be written in terms of Lagrangian derivatives.
For this (Lagrangian) observer, the continuity equation is written as
$$ \frac{d\rho_\gamma}{dt} = -\nabla \cdot \mathbf{J}_\gamma - \rho_\gamma\, \nabla \cdot \mathbf{v} + \bar{\nu}_\gamma\, j_{ch} \,. $$
Summing Eq. (16.27) over all the components and remembering that $\rho = \sum_\gamma \rho_\gamma$ and
therefore $\sum_\gamma \mathbf{J}_\gamma = 0$, we obtain
$$ \frac{d\rho}{dt} = -\rho\, \nabla \cdot \mathbf{v} \,, $$
which is a form entirely equivalent to the mass conservation law expressed in the
form of Eq. (16.17) for a Eulerian observer.
Using the definition of substantial derivative it is useful to derive a relation of
general validity for the substantial derivative of the densities and of the specific
values of extensive quantities, which will be used further on.
Consider any extensive quantity E and denote by e and by E∗ its density and its
specific value, respectively. Recalling Eqs. (16.8) and (16.28) we may write
$$ \frac{d(\rho E^*)}{dt} = \rho\, \frac{dE^*}{dt} + E^*\, \frac{d\rho}{dt} = \rho\, \frac{dE^*}{dt} - \rho\, E^*\, \nabla \cdot \mathbf{v} \,, $$
but considering the definition of substantial derivative we have
$$ \frac{d(\rho E^*)}{dt} = \frac{\partial (\rho E^*)}{\partial t} + \mathbf{v} \cdot \nabla (\rho E^*) \,, $$
therefore, equating the two expressions, we obtain
$$ \rho\, \frac{dE^*}{dt} = \frac{\partial (\rho E^*)}{\partial t} + \nabla \cdot \left( \rho\, E^*\, \mathbf{v} \right) \,. $$
The relevance of this equation is in the physical meaning of the last term, in which
the divergence of the vector describing the convective transport of the quantity E
appears. We will meet this term when the convective energy and entropy transport
will have to be computed.
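The identity $\rho\, dE^*/dt = \partial(\rho E^*)/\partial t + \nabla \cdot (\rho E^* \mathbf{v})$ holds only by virtue of the continuity equation, and this can be verified symbolically. The sketch below is a one-dimensional check of mine, not from the text: it imposes $\partial \rho / \partial t = -\partial(\rho v)/\partial x$ and confirms that the difference between the two sides vanishes identically.

```python
import sympy as sp

x, t = sp.symbols('x t')
rho = sp.Function('rho')(x, t)   # mass density
v = sp.Function('v')(x, t)       # barycentric velocity (1D)
E = sp.Function('E')(x, t)       # specific value E* of an extensive quantity

# Substantial derivative d/dt = ∂/∂t + v ∂/∂x in one dimension.
def d_dt(f):
    return sp.diff(f, t) + v * sp.diff(f, x)

lhs = rho * d_dt(E)
rhs = sp.diff(rho * E, t) + sp.diff(rho * E * v, x)

# Impose the continuity equation ∂ρ/∂t = -∂(ρv)/∂x and simplify the difference.
continuity = {sp.Derivative(rho, t): -sp.diff(rho * v, x)}
residual = sp.simplify(sp.expand(lhs - rhs).subs(continuity))
print(residual)  # 0
```

Without the continuity substitution the difference equals $-E^*\,[\partial\rho/\partial t + \partial(\rho v)/\partial x]$, which is exactly why the identity is specific to a conserved mass density.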
16.4 Equation of Motion
As we aim to reformulate the equations of thermodynamics for continuous systems
it is necessary to recall some fundamental notions in fluid mechanics, in order to
characterize the terms representing the work done on the portions of fluid that we
will consider.
Let us consider a small mass $\delta m$ located in the point $\mathbf{r}$ at the time $t$, and having
the volume $\delta V = \delta m / \rho$. This small mass will be subject to various kinds of forces.
First, let’s consider the forces on the surface of the small mass δm, that is, the
forces acting on the surface of the little volume δV . If the material is isotropic, nonviscous (and if we can assume the condition of local thermodynamic equilibrium to
hold) these forces can be completely described by the pressure from the surrounding
environment. In this case they will have as a resultant:
δF p = (−∇ p) δV .
In addition to pressure forces, we assume that there are also forces distributed through
the volume of the small mass. These forces are due to the interactions of the various
components present in δm with external fields. The resultant will be
$$ \delta \mathbf{F}_{vol} = \sum_{\gamma=1}^{n} \rho_\gamma\, \delta V\, \mathbf{F}_\gamma \,, $$
$\mathbf{F}_\gamma$ being the force per unit mass acting on component $\gamma$ in the point $(\mathbf{r}, t)$. Remembering
that $\delta m = \rho\, \delta V$ and denoting by $\mathbf{v}$ the velocity of the center of mass of $\delta m$ at
time $t$ in the point $\mathbf{r}$, the Newtonian equation of motion may be written as
$$ \rho\, \frac{d\mathbf{v}}{dt} = -\nabla p + \sum_{\gamma=1}^{n} \rho_\gamma\, \mathbf{F}_\gamma \,. $$
γ =1
16.5 The Equation for Energy
Consider a volume V bounded by a closed surface Σ. We write the total energy
contained in the volume V at time t (that is, the total energy of the system at time t)
in the form
$$ U(t) = \int_V \rho\, U^*\, dV \,, $$
where $U^*$ is the specific energy.
The work done per unit time on the system at the instant $t$ is:
$$ \frac{\hat{d}W}{dt} = \int_V \sum_{\gamma=1}^{n} \rho_\gamma\, \mathbf{v}_\gamma \cdot \mathbf{F}_\gamma\, dV - \oint_\Sigma p\, \mathbf{v} \cdot d\mathbf{\Sigma} \,. $$
The volume integral represents the work per unit time done by the volume forces on
the various components; the second term, the integral over the surface, represents
the work done by the pressure forces on the boundary of the system.
In addition to the work done by external forces, in order to determine the change
in the energy of the system, we must also take into account the amount of energy
transferred per unit time from the outside through the boundary. This contribution
had already been defined in Sect. 14.1.1 by Eq. (14.3). Dividing the infinitesimal
amount of transferred energy $\hat{d}Q$ by the infinitesimal time interval $dt$, we define
the total energy flux density vector $\mathbf{J}_u$ by the relation:
$$ \frac{\hat{d}Q}{dt} = -\oint_\Sigma \mathbf{J}_u \cdot d\mathbf{\Sigma} \,. $$
The vector Ju is defined point by point at the surface Σ of the volume V which defines
the system. The First Principle of Thermodynamics will, therefore, be written as the
balance of these two contributions, which becomes
$$ \frac{d}{dt} \int_V \rho\, U^*\, dV = \int_V \sum_{\gamma=1}^{n} \rho_\gamma\, \mathbf{v}_\gamma \cdot \mathbf{F}_\gamma\, dV - \oint_\Sigma p\, \mathbf{v} \cdot d\mathbf{\Sigma} - \oint_\Sigma \mathbf{J}_u \cdot d\mathbf{\Sigma} \,. $$
By making use of the Gauss’ theorem to transform the surface integrals into
volume integrals, we obtain
$$ \frac{\partial (\rho U^*)}{\partial t} = \sum_{\gamma=1}^{n} \rho_\gamma\, \mathbf{v}_\gamma \cdot \mathbf{F}_\gamma - \nabla \cdot (p \mathbf{v}) - \nabla \cdot \mathbf{J}_u \,, $$
and making use of Eq. (16.31), in which the specific quantity is the specific energy
$U^*$, we have
$$ \rho\, \frac{dU^*}{dt} = \sum_{\gamma=1}^{n} \rho_\gamma\, \mathbf{v}_\gamma \cdot \mathbf{F}_\gamma - \nabla \cdot (p \mathbf{v}) - \nabla \cdot \mathbf{J}_u + \nabla \cdot \left( \rho\, U^*\, \mathbf{v} \right) \,. $$
Recalling the definition Eq. (16.23) for the diffusion flows, $\rho_\gamma \mathbf{v}_\gamma = \mathbf{J}_\gamma + \rho_\gamma \mathbf{v}$, the
above relation becomes
$$ \rho\, \frac{dU^*}{dt} = \sum_{\gamma=1}^{n} \rho_\gamma\, \mathbf{F}_\gamma \cdot \mathbf{v} + \sum_{\gamma=1}^{n} \mathbf{J}_\gamma \cdot \mathbf{F}_\gamma - \nabla \cdot (p \mathbf{v}) - \nabla \cdot \left( \mathbf{J}_u - \rho\, U^*\, \mathbf{v} \right) \,. $$
From the equation of motion Eq. (16.34) we have
$$ \left( \sum_{\gamma=1}^{n} \rho_\gamma\, \mathbf{F}_\gamma \right) \cdot \mathbf{v} = \mathbf{v} \cdot \left( \rho\, \frac{d\mathbf{v}}{dt} + \nabla p \right) = \rho\, \frac{d}{dt}\!\left( \frac{v^2}{2} \right) + \mathbf{v} \cdot \nabla p \,, $$
and then, after substitution and a little algebra, we obtain
$$ \rho\, \frac{dU^*}{dt} = \rho\, \frac{d}{dt}\!\left( \frac{v^2}{2} \right) + \sum_{\gamma=1}^{n} \mathbf{J}_\gamma \cdot \mathbf{F}_\gamma - p\, \nabla \cdot \mathbf{v} - \nabla \cdot \left( \mathbf{J}_u - \rho\, U^*\, \mathbf{v} \right) \,. $$
Equation (16.44) gives the Lagrangian time derivative of the specific energy, and it is
important to highlight two points.
The first is the following: the term $v^2/2$ on the right-hand side is the specific
kinetic energy due to the overall motion, i.e., that of the center of mass. Since the
entropy does not depend on the system being at rest or in motion, it will be necessary
to define the specific internal energy as the total specific energy minus the specific
center-of-mass kinetic energy. The necessity of this new definition comes from the
requirement that in the Fundamental Relation per unit mass, Eq. (16.55), the center
of mass kinetic energy does not influence the value of the entropy: only the internal
energy must be relevant. Then Eq. (16.44) may be written in the form
$$ \rho\, \frac{d}{dt}\!\left( U^* - \frac{v^2}{2} \right) = \sum_{\gamma=1}^{n} \mathbf{J}_\gamma \cdot \mathbf{F}_\gamma - p\, \nabla \cdot \mathbf{v} - \nabla \cdot \left( \mathbf{J}_u - \rho\, U^*\, \mathbf{v} \right) \,. $$
It is necessary to comment on the last term in Eq. (16.44). It expresses the divergence
of a new vector resulting from the difference between the total energy flux and the
convective energy flux. This new vectorial quantity will be denoted by $\mathbf{J}_q$ and will
be called, by convention, the heat flux density in a generalized sense. It is defined as
$$ \mathbf{J}_q = \mathbf{J}_u - \rho\, U^*\, \mathbf{v} \,. $$
This designation is justified if we consider the meaning of the vector
$$ \mathbf{J}_{u,conv} = \rho\, U^*\, \mathbf{v} \,. $$
It is the energy carried by the convective motion of matter that crosses the border.
As it is evident, for closed systems in which the transport of matter is inhibited, Jq
coincides with the already well-known flux density of heat.
Furthermore, Eq. (16.45) needs another comment. It expresses the total time derivative
of the specific energy from which the term describing the kinetic energy of the bulk
motion has been subtracted: this is what defines the internal specific energy. It is
precisely this term which enters in the equation for the entropy variations (as will be
discussed in the following section); from now on, therefore, the notation $U^*$ will refer
to the internal specific energy. With this warning in mind, the new expression for the
relation which summarizes the First Principle for the specific internal energy is
$$ \rho\, \frac{dU^*}{dt} = \sum_{\gamma=1}^{n} \mathbf{J}_\gamma \cdot \mathbf{F}_\gamma - p\, \nabla \cdot \mathbf{v} - \nabla \cdot \mathbf{J}_q \,. $$
16.6 The Equation for Entropy
Consider a phase of volume V in a state of internal equilibrium. All intensive properties like density, pressure, and temperature are uniform throughout the system and
if some instantaneous perturbation destroys this configuration, internal processes are
activated in order to reestablish a new homogeneous state. The time interval required
to reabsorb the perturbation is of the order of:
$$ \tau_r \simeq \frac{V^{1/3}}{v_{sound}} \,, $$
where $v_{sound}$ is the speed of sound and $\tau_r$ is called the relaxation time of the system. If
we consider a system of volume $V$ as composed of a large number of still macroscopic
subsystems of much smaller volume, we expect that, normally, each subsystem
achieves a state of local equilibrium in a much shorter time lapse than the system as
a whole. In other words, on its way toward equilibrium the large system can be well
described by a collection of small parts instantaneously in local equilibrium. When
this condition is achieved we will talk about Local Thermodynamic Equilibrium
(LTE) and, in this case, it will be possible to consider every intensive parameter as
described by continuous functions of the space-time coordinates. Therefore, in order
to proceed, we must bear in mind the following points:
1. All state variables are defined locally;
2. The entropy depends on the other state variables in the same way as in a state of
equilibrium, that is, at every point the same fundamental equation that we have
observed at equilibrium, holds;
3. In particular, as a consequence of the preceding statement, in every point the same
equation of state that we have found for equilibrium states, applies.
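As an order-of-magnitude illustration of the relaxation-time estimate $\tau_r \simeq V^{1/3}/v_{sound}$ (the numerical values below are my own illustrative choices, not from the text): for one cubic centimetre of air the local-equilibration time is tens of microseconds, far shorter than typical macroscopic process times, which is what makes the LTE assumption workable.

```python
# Order-of-magnitude estimate of the relaxation time tau_r ~ V^(1/3) / v_sound
# for 1 cm^3 of air (illustrative values).
V = 1e-6          # volume in m^3 (1 cm^3)
v_sound = 340.0   # speed of sound in air, m/s

tau_r = V ** (1.0 / 3.0) / v_sound
print(f"{tau_r:.2e} s")  # ≈ 2.9e-05 s
```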
To formalize the preceding requirements, consider a small mass $\delta m$ located in the
position $\mathbf{r}$ at the time $t$. Equation (4.23), rewritten in the form
$$ T\, dS = dU + p\, dV - \sum_{\gamma=1}^{n} \mu_\gamma\, dn_\gamma \,, $$
must be applied to the small mass, for which all extensive quantities are written in
the general form
$$ \delta E = E^*\, \delta m \,, $$
where E∗ (r, t) is the specific value of the state parameter E in the point (r, t).
For our purpose let us define, respectively, the specific entropy, the specific energy
and the specific volume:
$$ S^* = \frac{\delta S}{\delta m} \,, \qquad U^* = \frac{\delta U}{\delta m} \,, \qquad V^* = \frac{\delta V}{\delta m} = \frac{1}{\rho} \,. $$
Further, let $c_\gamma$ be the mass concentration of component $\gamma$ ($c_\gamma = \delta m_\gamma / \delta m$) and define
the symbol $\mu^*_\gamma$, called the specific chemical potential of component $\gamma$, by the relation:
$$ \mu^*_\gamma = \frac{\mu_\gamma}{M_\gamma} \,, $$
then the fundamental equation becomes
$$ T\, dS^* = dU^* + p\, dV^* - \sum_{\gamma=1}^{n} \mu^*_\gamma\, dc_\gamma \,. $$
The Fundamental Relation given in Eq. (16.54), which appears above in differential
form, will be written, in finite terms, as:
$$ S^* = S^*\!\left( U^*, V^*, c_\gamma \right) \,. $$
16.6.1 Entropy Balance in Continuous Systems
Consider the fundamental equation applied to the unit mass (Eq. (16.54)) and consider
the rate at which the changes occur:
$$ T\, \frac{dS^*}{dt} = \frac{dU^*}{dt} + p\, \frac{dV^*}{dt} - \sum_{\gamma=1}^{n} \mu^*_\gamma\, \frac{dc_\gamma}{dt} \,. $$
Inserting Eq. (16.48) we obtain
$$ \rho\, T\, \frac{dS^*}{dt} = \sum_{\gamma=1}^{n} \mathbf{J}_\gamma \cdot \mathbf{F}_\gamma - \nabla \cdot \mathbf{J}_q - p\, \nabla \cdot \mathbf{v} + p\, \rho\, \frac{dV^*}{dt} - \rho \sum_{\gamma=1}^{n} \mu^*_\gamma\, \frac{dc_\gamma}{dt} \,. $$
Recalling the definition of specific volume $V^* = 1/\rho$ and the continuity equation in
the form Eq. (16.28), it is straightforward to show that
$$ \rho\, \frac{dV^*}{dt} = \nabla \cdot \mathbf{v} \,, $$
and then the entropy equation simplifies to
$$ \rho\, \frac{dS^*}{dt} = \frac{1}{T} \sum_{\gamma=1}^{n} \mathbf{J}_\gamma \cdot \mathbf{F}_\gamma - \frac{1}{T}\, \nabla \cdot \mathbf{J}_q - \frac{\rho}{T} \sum_{\gamma=1}^{n} \mu^*_\gamma\, \frac{dc_\gamma}{dt} \,. $$
A further step is to transform the last term on the right-hand side into a form in which
the contribution of chemical reactions appears explicitly. For this purpose we use
Eq. (16.31), where $E = m_\gamma$ and therefore the associated specific quantity is the
mass concentration $c_\gamma$, so that
$$ \rho\, \frac{dc_\gamma}{dt} = \frac{\partial \rho_\gamma}{\partial t} + \nabla \cdot \left( \rho_\gamma\, \mathbf{v} \right) \,. $$
With regard to the first term on the right-hand side we use Eq. (16.20) and, recalling
the definition of the diffusion fluxes Eq. (16.23), we obtain
$$ \rho\, \frac{dc_\gamma}{dt} = -\nabla \cdot \mathbf{J}_\gamma + \bar{\nu}_\gamma\, j_{ch} \,. $$
Equation (16.59) for the entropy can now be written in the form
$$ \rho\, \frac{dS^*}{dt} = \frac{1}{T} \sum_{\gamma=1}^{n} \mathbf{J}_\gamma \cdot \mathbf{F}_\gamma - \frac{1}{T}\, \nabla \cdot \mathbf{J}_q + \frac{1}{T} \sum_{\gamma=1}^{n} \mu^*_\gamma\, \nabla \cdot \mathbf{J}_\gamma - \frac{j_{ch}}{T} \sum_{\gamma=1}^{n} \bar{\nu}_\gamma\, \mu^*_\gamma \,. $$
The last term on the right-hand side can be recognized immediately through the affinity
of the reaction:
$$ -\sum_{\gamma=1}^{n} \bar{\nu}_\gamma\, \mu^*_\gamma = -\sum_{\gamma=1}^{n} \nu_\gamma\, \mu_\gamma = A \,. $$
As regards the other terms, which appear in the form $f\, \nabla \cdot \mathbf{b}$ where $f$ is a scalar and $\mathbf{b}$
is a generic vector, they must be brought to the form
$$ f\, \nabla \cdot \mathbf{b} = \nabla \cdot (f\, \mathbf{b}) - \mathbf{b} \cdot \nabla f \,. $$
Working in this way we arrive at the formula:
$$ \rho\, \frac{dS^*}{dt} = -\nabla \cdot \mathbf{J}_s + \pi \,, $$
with the entropy flux
$$ \mathbf{J}_s = \frac{1}{T} \left( \mathbf{J}_q - \sum_{\gamma=1}^{n} \mu^*_\gamma\, \mathbf{J}_\gamma \right) $$
and the entropy production density
$$ \pi = \mathbf{J}_q \cdot \mathbf{X}_q + \sum_{\gamma=1}^{n} \mathbf{J}_\gamma \cdot \mathbf{X}_\gamma + j_{ch}\, X_{ch} \,. $$
In this equation it is useful to report the expressions for the individual generalized
forces:
$$ \mathbf{X}_q = \nabla\!\left( \frac{1}{T} \right) \,, \qquad \mathbf{X}_\gamma = \frac{1}{T} \left[ \mathbf{F}_\gamma - T\, \nabla\!\left( \frac{\mu^*_\gamma}{T} \right) \right] \,, \qquad X_{ch} = \frac{A}{T} \,. $$
The vector defined in Eq. (16.66) undoubtedly has the meaning of an entropy flux,
that is, of entropy transmitted per unit time through the border $\Sigma$ of the system; but,
as we shall see now, it does not represent the whole entropy which flows through the
border, only one part of it. To understand this very important point we need to go
back to the definition of the entropy of the system:
$$ S = \int_V \rho\, S^*\, dV $$
and calculate the variation per unit time:
$$ \frac{dS}{dt} = \frac{\hat{d}_i S}{dt} + \frac{\hat{d}_e S}{dt} = \int_V \frac{\partial (\rho S^*)}{\partial t}\, dV \,. $$
If, once again, we make use of Eq. (16.31):
$$ \frac{\partial (\rho S^*)}{\partial t} = \rho\, \frac{dS^*}{dt} - \nabla \cdot \left( \rho\, S^*\, \mathbf{v} \right) \,, $$
we obtain
$$ \frac{dS}{dt} = \int_V \pi\, dV - \int_V \nabla \cdot \mathbf{J}_s\, dV - \int_V \nabla \cdot \left( \rho\, S^*\, \mathbf{v} \right) dV = \int_V \pi\, dV - \oint_\Sigma \mathbf{J}_{s,tot} \cdot d\mathbf{\Sigma} \,. $$
From this we get
$$ \frac{\hat{d}_i S}{dt} = \int_V \pi\, dV \,, \qquad \frac{\hat{d}_e S}{dt} = -\oint_\Sigma \mathbf{J}_{s,tot} \cdot d\mathbf{\Sigma} \,, $$
where
$$ \mathbf{J}_{s,tot} = \frac{1}{T} \left( \mathbf{J}_q - \sum_{\gamma=1}^{n} \mu^*_\gamma\, \mathbf{J}_\gamma \right) + \rho\, S^*\, \mathbf{v} \,. $$
As we see, in order to obtain the total flux of entropy we had, in addition, to take into
account the convective transport term $\rho S^* \mathbf{v}$.
16.6.2 The Entropy Production
From the previous relations it is clear that the quantity π defined in Eq. (16.67)
together with Eqs. (16.68)–(16.70) has exactly the meaning of entropy production
per unit volume. This means that π δV is the production of entropy in the system
defined by the small volume δV .
The production of entropy is still given by the sum of various terms each of which
is the product of a generalized flux, which gives the name to the irreversible process,
times an associated generalized force.
The term that comes closest to its counterpart already studied in the discontinuous
systems approximation, is the term related to the chemical reaction: j ch δV is exactly
the rate of the chemical reaction within the volume considered and X ch is exactly
the relative generalized force already seen.
The first term on the right-hand side of Eq. (16.67) describes a heat flow, in the
generalized sense, and the corresponding force $\nabla(1/T)$ is virtually identical to the
corresponding term already seen previously in the discontinuous systems approximation.
The second term in Eq. (16.67) describes the diffusion motions of the individual
components with respect to the velocity of the center of mass (bulk motion of the
fluid). The corresponding fluxes are
$$ \mathbf{J}_\gamma = \rho_\gamma \left( \mathbf{v}_\gamma - \mathbf{v} \right) \,, $$
and the corresponding generalized forces are given in Eq. (16.69). Various causes
contribute to determining the force related to the diffusion of each component: external
volume forces that act directly on the component itself, a possible temperature
gradient and possible concentration gradients (the latter are contained in the gradients
of the chemical potentials); but it is necessary to note that these causes concur, in
a well-determined way, to compose a unique force.
This result allows us to study the correlations between the effects of each partial
cause in absolute generality, that is, regardless of the modeling. A very clear and
interesting example is provided by the relationship between the diffusion coefficient
in a two-component system and the mobility coefficient under an external volume
force. This relationship is known as Einstein relation.
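The Einstein relation anticipated here connects the diffusion coefficient to the mobility. A minimal numerical sketch follows; note that the text derives the relation only later, so the mechanical form $D = \mu\, k_B T$ used below (with $\mu$ defined by $v_{drift} = \mu F$ under an external force per particle) and all numerical values are conventional, illustrative choices of mine, not taken from the source.

```python
# Einstein relation in its mechanical form D = mu * k_B * T, where mu is the
# mobility defined by v_drift = mu * F under an external force F per particle.
# Illustrative values: a small singly charged ion in water at room temperature.
kB = 1.380649e-23       # Boltzmann constant, J/K
T = 300.0               # temperature, K
q = 1.602176634e-19     # elementary charge, C

# Electrical mobility u (v_drift = u * E) of a typical small ion in water:
u = 5e-8                # m^2 / (V s), illustrative
mu_mech = u / q         # mechanical mobility: F = q E, hence mu = u / q
D = mu_mech * kB * T    # diffusion coefficient from the Einstein relation
print(f"D = {D:.2e} m^2/s")  # ≈ 1.3e-09 m^2/s, a typical ionic diffusivity
```

The relation ties a purely dissipative transport coefficient ($D$) to the response to an external volume force ($\mu$), independently of any microscopic model of the solution, which is exactly the generality claimed in the text.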
Before addressing this problem we need to study some general properties that
arise in the condition known as mechanical equilibrium.
16.6.3 Mechanical Equilibrium
The expression “mechanical equilibrium” denotes a situation in which the motion of
the fluid is point-by-point stationary:
$$ \frac{d\mathbf{v}}{dt} = 0 \,. $$
For example, if we have an ionic solution and, at a given instant, we apply a constant
electric field, after a short initial phase a configuration is established in which the
various components flow with constant speed. This is the situation named the
condition of mechanical equilibrium.
The same thing happens when we apply a potential difference in a conductor:
Ohm’s law refers, in fact, to a configuration of “mechanical equilibrium”. This
approximation is justified by the fact that the timescale for the mechanical equilibrium to be reached is very short compared with the timescales of the thermodynamic
processes we are studying and, accordingly, represents the typical situation in which
the thermodynamical relations will be applied.
Prigogine (1947) showed that in this condition and in the absence of temperature
gradients, the following relation applies [25]:
$$\sum_{\gamma=1}^{n} \rho_\gamma \mathbf{X}_\gamma = 0 . \qquad (16.82)$$
This relation is important because, in the expression of the entropy production
Eq. (16.67), it allows us to replace the fluxes Jγ , which describe the diffusion with
respect to the center of mass motion, with fluxes describing the motion of the different components with respect to any other reference velocity. The utility of this will
be shown in Sect. 16.6.4.
In order to prove Eq. (16.82), let us refer to the form given in Eq. (16.90) for the forces and obtain the following expression:
$$\sum_{\gamma=1}^{n} \rho_\gamma \mathbf{X}_\gamma = \frac{1}{T}\left(\sum_{\gamma=1}^{n} \rho_\gamma \mathbf{F}_\gamma - \sum_{\gamma=1}^{n} \rho_\gamma \nabla \mu^*_\gamma\right) .$$
In general we can write the chemical potentials as functions of temperature, pressure
and concentrations, that is μ∗γ = μ∗γ (T, p, c1 , c2 , . . . , cn ) and then the last term of
the above equation can be rewritten as
$$\sum_{\gamma=1}^{n} \rho_\gamma \nabla \mu^*_\gamma = \sum_{\gamma,\gamma'=1}^{n} \rho_\gamma \frac{\partial \mu^*_\gamma}{\partial c_{\gamma'}} \nabla c_{\gamma'} + \sum_{\gamma=1}^{n} \rho_\gamma \frac{\partial \mu^*_\gamma}{\partial T} \nabla T + \sum_{\gamma=1}^{n} \rho_\gamma \frac{\partial \mu^*_\gamma}{\partial p} \nabla p .$$
16 Thermodynamics of Continua
The first term on the right-hand side is zero by the Gibbs–Duhem relation, as shall be proved in Sect. 16.9; the second vanishes because the temperature is uniform. As regards the third term, recall that
$$\frac{\partial \mu^*_\gamma}{\partial p} = V^*_\gamma ,$$
Vγ∗ being the partial specific volume of component γ; then we obtain
$$\sum_{\gamma=1}^{n} c_\gamma V^*_\gamma = V^* = \frac{1}{\rho} ,$$
V∗ being the specific volume of the fluid, hence we have
$$\sum_{\gamma=1}^{n} \rho_\gamma \nabla \mu^*_\gamma = \nabla p .$$
From the equation of motion Eq. (16.34) and requiring the condition of mechanical
equilibrium, we obtain
$$-\nabla p + \sum_{\gamma=1}^{n} \rho_\gamma \mathbf{F}_\gamma = 0 .$$
In conclusion we find
$$\sum_{\gamma=1}^{n} \rho_\gamma \mathbf{X}_\gamma = \frac{1}{T}\left(\sum_{\gamma=1}^{n} \rho_\gamma \mathbf{F}_\gamma - \sum_{\gamma=1}^{n} \rho_\gamma \nabla \mu^*_\gamma\right) = 0 .$$
In this demonstration, we have assumed that the temperature is uniform; it may be
extended to a more general situation but the issue will not be further pursued.
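The structure of this result can be illustrated with a short numerical sketch (not from the text; all values are assumptions chosen for illustration): for a dilute NaCl solution in a uniform applied field, with uniform concentrations and pressure, each force reduces to Xγ = q*γ E/T. The ionic Xγ are individually nonzero, yet electroneutrality makes the mass-weighted sum vanish.

```python
# Illustrative check (assumed values) of Prigogine's relation
# sum_gamma rho_gamma X_gamma = 0 at mechanical equilibrium.
# Uniform concentration and pressure: grad(mu*) = 0 and grad(p) = 0,
# so X_gamma = F_gamma / T = q*_gamma E / T for each component.

T = 298.15            # K
E = 1.0e3             # V/m, applied electric field (assumed)
F = 96485.0           # C/mol, Faraday constant

M_Na, M_Cl = 22.99e-3, 35.45e-3      # kg/mol
c_salt = 100.0                        # mol/m^3 of dissolved NaCl (assumed)

# components: Na+, Cl-, water
rho = [c_salt * M_Na, c_salt * M_Cl, 997.0]      # kg/m^3
q_star = [F / M_Na, -F / M_Cl, 0.0]              # C/kg, charge per unit mass

X = [q * E / T for q in q_star]                  # forces X_gamma

total = sum(r * x for r, x in zip(rho, X))
print(total)   # ~0, although the ionic X_gamma are individually large
```

The cancellation here comes entirely from electroneutrality, ρ+ q*+ + ρ− q*− = 0, which is the content of the mechanical-equilibrium balance −∇p + Σ ργ Fγ = 0 when the pressure is uniform.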
16.6.4 The Einstein Relation Between Mobility and Diffusion
Let us go back to the entropy production density expressed by Eq. (16.67) and restrict ourselves to the case of uniform temperature and no chemical reactions. Then the entropy production density reduces to
$$\pi = \sum_{\gamma=1}^{n} \mathbf{J}_\gamma \cdot \mathbf{X}_\gamma$$
and the forces Xγ will have the form
$$\mathbf{X}_\gamma = \frac{1}{T}\left(\mathbf{F}_\gamma - \nabla \mu^*_\gamma\right) . \qquad (16.90)$$
It is important to underline that the form Eq. (16.90) shows clearly how volume forces
and concentration gradients combine to give one resultant force.
The two particular situations in which one of these causes is absent lead us to consider, on the one hand, the differential motion of one component relative to the others due to the presence of an external force but with uniform concentration and, on the other, the diffusion flow of one component relative to the others in the absence of an external force but in the presence of a concentration gradient. The phenomenological study of these two situations leads us to define two phenomenological coefficients: the mobility coefficient in the former case and the diffusion coefficient in the latter.
Owing to Eq. (16.82) the density of entropy production gets the form
$$\pi = \sum_{\gamma=1}^{n} \rho_\gamma \left(\mathbf{v}_\gamma - \mathbf{v}\right) \cdot \mathbf{X}_\gamma = \sum_{\gamma=1}^{n} \rho_\gamma \mathbf{v}_\gamma \cdot \mathbf{X}_\gamma . \qquad (16.91)$$
If we select an arbitrary reference velocity a and we define new fluxes with respect
to it as
$$\mathbf{J}^{(a)}_\gamma = \rho_\gamma \left(\mathbf{v}_\gamma - \mathbf{a}\right) ,$$
we see that, owing to Eq. (16.82), the entropy production density, in the condition of
mechanical equilibrium, may be expressed as a function of these new fluxes:
$$\pi = \sum_{\gamma=1}^{n} \mathbf{J}^{(a)}_\gamma \cdot \mathbf{X}_\gamma .$$
Consider now, for simplicity, a fluid composed of two components only and let’s
write the entropy production density. According to Eq. (16.91):
$$\pi = \rho_1 \mathbf{v}_1 \cdot \mathbf{X}_1 + \rho_2 \mathbf{v}_2 \cdot \mathbf{X}_2 .$$
We choose now as the reference velocity, the value a = v2 . The entropy production
density becomes
$$\pi = \rho_1 \left(\mathbf{v}_1 - \mathbf{v}_2\right) \cdot \mathbf{X}_1 .$$
Here the flux $\mathbf{J}_1 = \rho_1 (\mathbf{v}_1 - \mathbf{v}_2)$ now describes the diffusion motion of the component
1 relative to component 2. The corresponding force will be written as
$$\mathbf{X}_1 = \frac{1}{T}\left(\mathbf{F}_1 - \nabla \mu^*_1\right) = \frac{1}{T M_1}\left(M_1 \mathbf{F}_1 - \nabla \mu_1\right) .$$
If we now denote by C1 the molar concentration of component 1 (moles per unit volume, and therefore ρ1 = M1 C1) and denote by Fm,1 = M1 F1 the molar force, i.e., the force acting on one mole of component 1, the linear relation between flux and force will be written as
$$C_1 \left(\mathbf{v}_1 - \mathbf{v}_2\right) = \frac{L}{T}\left(\mathbf{F}_{m,1} - \nabla \mu_1\right) .$$
In order to go further, let us resort to the expression for the chemical potential already seen for ideal solutions, in which the concentration dependence is made explicit:
$$\mu(p, T, C) = \eta(p, T) + RT \ln C .$$
In this approximation the linear relationship between flux and force becomes
$$C_1 \left(\mathbf{v}_1 - \mathbf{v}_2\right) = -\frac{L R}{C_1} \nabla C_1 + \frac{L}{T} \mathbf{F}_{m,1} . \qquad (16.99)$$
In the following, we consider two particular situations.
Mobility Coefficient
Consider first a system at uniform concentration (as well as at uniform T and p); then Eq. (16.99) reduces to
$$\mathbf{v}_1 - \mathbf{v}_2 = \frac{L}{T C_1} \mathbf{F}_{m,1} .$$
Experimentally it is seen that, for fields not too intense, a constant rate of migration
of a component relative to the other settles quickly and that the relative velocity
is proportional to the intensity of the applied volume force. We define the mobility, or mobility coefficient, of component 1 relative to component 2 as the parameter μd defined by the equation:
v1 − v2 = μd Fm,1 .
Diffusion Coefficient
Consider now the other particular situation, the one in which there is a nonuniform
concentration (∇C = 0) of component 1 but the external volume forces are absent
(Fm,1 = 0). The diffusion coefficient describes how quickly an inhomogeneous concentration tends to be reabsorbed into a uniform configuration. Fick’s law defines
the diffusion coefficient D, with the relation:
C1 (v1 − v2 ) = −D ∇C1 ,
in which the diffusion velocity is related to the concentration gradient.
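A minimal numerical sketch (not from the text; the value of D, the domain size and the initial profile are all assumptions) of what Fick's law implies: integrating the diffusion equation ∂C/∂t = D ∂²C/∂x² with zero-flux boundaries, an initial cosine disturbance decays toward the uniform configuration, which is the "reabsorption" described above.

```python
import math

# Explicit finite-difference integration of the diffusion equation
# dC/dt = D d2C/dx2 implied by Fick's law (illustrative parameters).
D = 1.0e-9            # m^2/s, assumed diffusion coefficient
L = 1.0e-4            # m, domain length
N = 50                # grid points
dx = L / (N - 1)
dt = 0.4 * dx * dx / D              # satisfies the explicit stability limit

# uniform background plus a cosine disturbance (zero slope at both ends)
C = [1.0 + math.cos(math.pi * i / (N - 1)) for i in range(N)]
spread0 = max(C) - min(C)

for _ in range(5000):
    Cn = C[:]
    for i in range(N):
        left = C[i - 1] if i > 0 else C[1]          # zero-flux boundary
        right = C[i + 1] if i < N - 1 else C[N - 2]
        Cn[i] = C[i] + D * dt / (dx * dx) * (left - 2.0 * C[i] + right)
    C = Cn

print(max(C) - min(C))   # far below spread0: the gradient has been reabsorbed
```

The decay rate of the cosine mode is Dπ²/L², so D indeed controls "how quickly" the inhomogeneity disappears.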
Now we can express each of these two coefficients as a function of the same
phenomenological coefficient L that appears in Eq. (16.99). This demonstrates the
power of the thermodynamic approach: two phenomena which are generally treated
as independent, appear as two particular cases of one phenomenon in which the force has a well-determined overall expression.
As regards the mobility coefficient, from the comparison we have
$$\mu_d = \frac{L}{T C_1} , \qquad (16.103)$$
while, by analogy, for the diffusion coefficient we get
$$D = \frac{L R}{C_1} . \qquad (16.104)$$
Einstein Relation
Comparing the Eqs. (16.103) and (16.104) we find a very general relationship
between the mobility coefficient and the diffusion coefficient:
D = RT μd .
This relation was first established by Einstein.
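A one-line numerical cross-check (all values below are assumed, for illustration only): computing μd and D from the same phenomenological coefficient L, as in Eqs. (16.103) and (16.104), reproduces D = RT μd identically, whatever the values of L, T and C1.

```python
R = 8.314462618     # J/(mol K), gas constant
T = 310.0           # K (assumed)
C1 = 50.0           # mol/m^3 (assumed)
L_coeff = 2.5e-7    # phenomenological coefficient L (assumed)

mu_d = L_coeff / (T * C1)     # mobility coefficient, Eq. (16.103)
D = L_coeff * R / C1          # diffusion coefficient, Eq. (16.104)

print(D - R * T * mu_d)   # zero up to rounding: the Einstein relation
```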
16.7 Thermoelectric Phenomena
This section is devoted to the study of thermoelectric phenomena and to the fundamental role played by the Onsager theory of irreversible processes. We know that a
temperature gradient in a material gives rise to a heat flux. More precisely, if only
a temperature gradient is present the heat flux density is locally described by the
Fourier theory of heat conduction according to which
Jq = −λ∇T .
This relation defines the heat conductivity λ of the material. It must be remembered that the heat conductivity is defined in the condition when no other effect is present and in particular, as regards the matters dealt with in this section, in the absence of electric currents.
Likewise, the electric conductivity σel of a material is defined by the relation:
j = σel E
for isotropic materials and for an isothermal configuration. The same relation for
stationary or slowly varying fields can be written as
j = −σel ∇ψ
Fig. 16.1 A thermocouple is a circuit formed by two wires A and B made of different metals. They are welded at the two junctions G1 and G2. Wire A is continuous from G1 to G2, while wire B is made of two sections coming out of the junctions G1 and G2 and ending in points P and Q, respectively
ψ being the electrostatic potential. When temperature gradients and electric fields are simultaneously present, interference effects become dominant, the single-process theories no longer apply, and we have to refer to the Onsager theory of irreversible processes, developed in this and in the previous chapters, for a deep, model-independent understanding.
Before entering into the study of the dynamic equations that govern the thermoelectric effects it is appropriate to define the parameters that describe the behavior
of the different conductors when particular conditions are realized. In order to measure the relevant quantities we want to deal with, we refer to a thermocouple as
described in Fig. 16.1. The two junctions G 1 and G 2 are in thermal contact with two
heat reservoirs at temperatures T and T + ΔT, respectively. They may ensure either a temperature gradient or an isothermal condition in wire A. The two free ends P and Q of wire B are kept at the same temperature T0 and may be used either to measure the potential difference between them or to inject the desired electric current into the circuit.
16.7.1 Seebeck Effect—Thermoelectric Power
When the temperature in a conductor is nonuniform an electric field appears. This
effect is to be expected since the chemical potential of the electrons depends both on
their number density and on the temperature, so to compensate for a nonuniformity in
the latter it is necessary to realize nonuniformity in the distribution of the charges. We
define the absolute thermoelectric power of the conductor A as the quantity ωA defined by
$$E = -\omega_A \frac{dT}{dx} , \qquad (16.109)$$
where, for simplicity, we have adopted a one-dimensional formalism.
A potential difference between points P and Q of the thermocouple is expected and
can be evaluated in terms of the absolute thermoelectric powers of the two metals
and as a function of the temperature difference of the thermocouple T .
Suppose that P and Q are at the same temperature (for instance T0 ). In order to
calculate the potential difference we shall integrate the electric field Eq. (16.109)
along the path P-G 1 -G 2 -Q as seen in Fig. 16.1.
The path integral is changed into an integral over the temperature variation along the circuit; then we have
$$V_P - V_Q = -\left[\int_{T_0}^{T} \omega_B \, dT + \int_{T}^{T+\Delta T} \omega_A \, dT + \int_{T+\Delta T}^{T_0} \omega_B \, dT\right]$$
and hence
$$V_{PQ} = V_P - V_Q = \int_{T}^{T+\Delta T} \left(\omega_B - \omega_A\right) dT = \int_{T}^{T+\Delta T} \omega_{AB} \, dT , \qquad (16.111)$$
where
$$\omega_{AB} = \omega_B - \omega_A$$
is called thermoelectric power of the thermocouple. It is important to emphasize that
the potential difference V P Q , sometimes called thermoelectric emf, of the thermocouple, does not depend on the temperature T0 and, therefore, Eq. (16.111) gives the
following operational definition of the thermoelectric power of the thermocouple:
$$\omega_{AB} = \frac{dV_{PQ}}{d \, \Delta T} . \qquad (16.113)$$
This effect was first studied by Seebeck in 1821. If we construct a thermocouple combining wire A with a superconductor as wire B, Eq. (16.113) gives us the way to measure the absolute thermoelectric power of metal A. This is because the absolute thermoelectric power of a superconductor (below its critical temperature) is substantially zero.
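The operational content of Eq. (16.113) can be sketched numerically (the ωAB(T) curve below is hypothetical, a linear fit used only for illustration): integrating ωAB between the junction temperatures gives the emf, and differentiating that emf with respect to ΔT recovers ωAB at the hot-junction temperature.

```python
def omega_AB(T):
    # hypothetical thermoelectric power of the couple, V/K (assumed)
    return 1.0e-6 + 4.0e-8 * (T - 273.15)

def emf(T, dT, steps=1000):
    # trapezoidal integration of omega_AB from T to T + dT
    h = dT / steps
    s = 0.5 * (omega_AB(T) + omega_AB(T + dT))
    for k in range(1, steps):
        s += omega_AB(T + k * h)
    return s * h

T, dT = 300.0, 50.0
V_PQ = emf(T, dT)

# numerical derivative dV_PQ/d(DeltaT) recovers omega_AB(T + dT)
slope = (emf(T, dT + 0.01) - emf(T, dT - 0.01)) / 0.02
print(V_PQ, slope, omega_AB(T + dT))
```

Note that, as the text emphasizes, nothing in this construction depends on the temperature T0 of the free ends.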
16.7.2 Peltier Coefficient—Phenomenology
Consider the thermocouple in Fig. 16.1 and let us keep it at uniform temperature
( T = 0). If we apply at points P and Q an emf and we circulate an electric current
I , we observe that in one of the two heat reservoirs at the two junctions, heat must
be supplied (the junction tends to cool) while at the other we have to take away the
same amount of heat per second (the junction tends to warm). For current intensities
not too high the power exchanged by each junction is proportional to I . If we invert
the direction of the electric current the effects change sign, i.e., the junction that
previously cooled will tend to warm and vice versa.
Consider the junction traversed by the current when it flows from metal A to metal
B. The amount of heat absorbed by this junction is written in the form
$$\frac{\hat{d} Q}{dt} = \Pi_{AB} \, I ,$$
where Π AB is named the Peltier coefficient of the thermocouple and depends on the
temperature of the junction.
16.7.3 Thomson Effect—Phenomenology
Let us consider the case in which electric currents and temperature gradients occur
simultaneously. For simplicity let us refer to the same thermocouple depicted
in Fig. 16.1 where the two junctions are maintained at different temperatures as
in Sect. 16.7.1. Along wire A a temperature gradient will be established and will be
maintained by means of a suitable series of heat reservoirs at the same local temperature point by point. Let us, now, inject an electric current in the circuit. We observe
that, in order to maintain unaltered the temperature distribution, the heat reservoirs
must exchange heat with each portion of the wire. The amount of heat exchanged
is proportional to the length dx of the wire element but is not simply proportional to I 2 (electric current intensity squared) as would be expected from Ohm's law. It turns out to be composed of two terms: one, as expected from Ohm's law, proportional to I 2, and another proportional to the intensity of the flowing electric current. The latter term changes sign under inversion of the direction of the current, so that the total amount of heat delivered to the reservoir may be larger or smaller than the Joule term. The experimental study of this component shows that the amount of heat delivered by the small
segment of wire to the local reservoir is proportional to dx and to the temperature
gradient ∇T :
$$\frac{\hat{d} Q}{dt} = \tau^{Th}_A \, I \, dT ,$$
where dT is the temperature variation along the wire segment dx, and $\tau^{Th}_A$ is called the Thomson coefficient of metal A and depends on temperature.
In order to study the interference between heat flux and electric current we shall
refer to Eq. (16.67) for the entropy production density and Eqs. (16.68) and (16.69)
for the forces. In the case of a metallic wire we consider the presence of just two
components: the ionic structure which will be considered at rest (diffusion velocity
vion = 0) and the electronic component which diffuses with respect to ions with drift
velocity vel .
Let us, for convenience, rewrite these relations with the following assumptions:
chemical reactions are absent and only one diffusion flux Jγ is present describing
the motion of the electronic component and for this reason all quantities referring to
it will be denoted by the index “el”. We have
$$\pi = \mathbf{J}_q \cdot \mathbf{X}_q + \mathbf{J}_{el} \cdot \mathbf{X}_{el} , \qquad \mathbf{X}_q = \nabla \frac{1}{T} , \qquad \mathbf{X}_{el} = \frac{1}{T} \mathbf{F}_{el} - \nabla \frac{\mu^*_{el}}{T} ;$$
the expression for Fel, which refers to the unit mass, in the absence of magnetic fields shall be written as
Fel = qel∗ E ,
where qel∗ = −e/m el is the charge per unit mass of the electronic component, being
e the absolute value of the electron charge and m el the electron mass. The diffusion
density flux Jel of the electrons represents, in a particular form, the current density
in the wire and in order to make this more explicit let us rewrite its expression as
$$\mathbf{J}_{el} = \rho_{el} \mathbf{v}_{el} = N_{el} m_{el} \mathbf{v}_{el} = \frac{\mathbf{j}}{q^*_{el}} ,$$
where Nel is the number density of electrons and j the free current density. The
contribution of the electric current to the entropy production density is, then, made
explicit in the form
$$\pi = \mathbf{J}_q \cdot \nabla \frac{1}{T} + \mathbf{j} \cdot \frac{1}{q^*_{el} T}\left[q^*_{el} \mathbf{E} - T \, \nabla \frac{\mu^*_{el}}{T}\right] . \qquad (16.120)$$
In Eq. (16.120) the two fluxes have clear meanings: one describes the energy flux
minus a very small correction due to the center of mass motion. The other one has
the clear meaning of current density. On the other hand, of the two conjugate forces, the former depends only on the temperature gradient while the latter depends on all the state variables. It is convenient to proceed adopting, as basic
forces, temperature gradients and electrochemical potential gradients separately, as
will be shown below in Eq. (16.128). This can be easily obtained by operating a
change in the definition of the fluxes similarly to what was shown in Sect. 14.6.1. In
that case, the aim was to separate the effects of temperature differences from those
of pressure differences, here we separate the role of temperature gradient from that
of chemical potential gradient.
This can be easily obtained by splitting the term $\frac{1}{q^*_{el}} \nabla \frac{\mu^*_{el}}{T}$ in the following way:
$$\frac{1}{q^*_{el}} \nabla \frac{\mu^*_{el}}{T} = \frac{1}{q^*_{el} T} \nabla \mu^*_{el} + \frac{\mu^*_{el}}{q^*_{el}} \nabla \frac{1}{T} ,$$
and then we substitute this expression into Eq. (16.120); the entropy production density becomes
$$\pi = \mathbf{J}_q \cdot \nabla \frac{1}{T} + \frac{1}{T} \, \mathbf{j} \cdot \left[\mathbf{E} - \frac{1}{q^*_{el}} \nabla \mu^*_{el}\right] - \frac{\mu^*_{el}}{q^*_{el}} \, \mathbf{j} \cdot \nabla \frac{1}{T} .$$
The last term on the right gives a contribution equal to −μ∗el Jel · ∇ (1/T ) which is
put together with Jq · ∇ (1/T ) and the two terms give the expression of the new flux
and, consequently, of the new forces.
The heat flux Jq will be changed into
Jq → Jq − μ∗el Jel .
It is useful to interpret the new flux remembering the definition of the entropy flux density Js as given in Eq. (16.66) (here the notation for the flux has been changed, dropping the prime). The new flux, Eq. (16.123), while still maintaining the dimension of a flux density of energy, is more closely related to the entropy flux density, as is clear from the relation:
$$\mathbf{J}_q - \mu^*_{el} \mathbf{J}_{el} = (T \mathbf{J}_s) . \qquad (16.124)$$
The flux Jel will remain unchanged and, consequently, its new force, say X′el, will be
$$\mathbf{X}'_{el} = \frac{1}{T}\left(q^*_{el} \mathbf{E} - \nabla \mu^*_{el}\right)$$
and then the entropy production density, in the new representation, becomes:
$$\pi = \left(\mathbf{J}_q - \mu^*_{el} \mathbf{J}_{el}\right) \cdot \nabla \frac{1}{T} + \mathbf{J}_{el} \cdot \mathbf{X}'_{el} .$$
For static or slowly varying fields we may substitute E = −∇ψ, where ψ is the electrostatic potential, and the density of entropy production may be written in the form
$$\pi = (T \mathbf{J}_s) \cdot \nabla \frac{1}{T} - \frac{1}{T} \, \mathbf{J}_{el} \cdot \nabla \left(\mu^*_{el} + q^*_{el} \psi\right) , \qquad (16.127)$$
$$\pi = (T \mathbf{J}_s) \cdot \nabla \frac{1}{T} - \frac{1}{T} \, \mathbf{J}_{el} \cdot \nabla \tilde{\mu}^*_{el} . \qquad (16.128)$$
In Eq. (16.128), a new thermodynamic potential appears, called specific electrochemical potential. Its definition is
μ̃∗el = μ∗el + qel∗ ψ .
We see clearly how the chemical potential, which is a function of temperature and
particle density, combines with the electrostatic potential in order to constitute one
force associated directly with the electric current in perfect analogy with the example
studied in Sect. 16.6.4.
The linear phenomenological equations connecting these fluxes and forces are then written as
$$(T \mathbf{J}_s) = L_{11} \nabla \frac{1}{T} - L_{12} \frac{1}{T} \nabla \tilde{\mu}^*_{el} , \qquad (16.130)$$
$$\mathbf{J}_{el} = L_{21} \nabla \frac{1}{T} - L_{22} \frac{1}{T} \nabla \tilde{\mu}^*_{el} , \qquad (16.131)$$
with the condition:
$$L_{12} = L_{21} . \qquad (16.132)$$
The diagonal phenomenological coefficients L 11 and L 22 describe the proportionality
of a flux with its own force and, therefore are connected, respectively, with the thermal
and electrical conductivities. First, let us consider the case of isothermal metals.
In Eq. (16.131) the chemical potential μ∗el is also uniform (∇μ∗el = 0) because we
consider homogeneous, isotropic metals. Then Eq. (16.131) becomes
$$\mathbf{J}_{el} = -L_{22} \frac{1}{T} \nabla \left(q^*_{el} \psi\right) = L_{22} \frac{q^*_{el}}{T} \mathbf{E} ,$$
hence
$$\mathbf{j} = q^*_{el} \mathbf{J}_{el} = L_{22} \frac{q^{*\,2}_{el}}{T} \mathbf{E} = \sigma_{el} \mathbf{E} ,$$
then the electrical conductivity gives the value of the L22 coefficient according to the following relation:
$$L_{22} = \frac{\sigma_{el} T}{q^{*\,2}_{el}} . \qquad (16.135)$$
Recalling Eq. (16.106), where the thermal conductivity of a medium is defined by
the proportionality relation between the flux of heat and the temperature gradient in
a medium with no electric current, if we set Jel = 0 we get
$$\frac{1}{T} \nabla \tilde{\mu}^*_{el} = \frac{L_{21}}{L_{22}} \nabla \frac{1}{T} \qquad (16.136)$$
and then:
$$(T \mathbf{J}_s) = \left(L_{11} - \frac{L_{12} L_{21}}{L_{22}}\right) \nabla \frac{1}{T} ,$$
and since (T Js) = Jq − μ∗el Jel = Jq we obtain
$$\mathbf{J}_q = -\frac{1}{T^2} \, \frac{L_{11} L_{22} - L_{12}^2}{L_{22}} \, \nabla T .$$
We see that the thermal conductivity, defined in Eq. (16.106), is bound to the phenomenological coefficients Lij by the relation:
$$\lambda = \frac{1}{T^2} \, \frac{L_{11} L_{22} - L_{12}^2}{L_{22}} . \qquad (16.139)$$
The expressions (16.135) and (16.139) connect experimentally measured parameters to the phenomenological coefficients. If we can connect one more important physical parameter to the phenomenological equations (16.130) and (16.131), then, remembering the symmetry condition (16.132), we can completely determine the dynamical equations (16.130) and (16.131) and describe all thermoelectric effects as functions of three measured parameters.
Let us consider a thermocouple like the one treated in Sect. 16.7.1 and let us describe the Seebeck effect in the language of the dynamical equations (16.130) and (16.131). Since the circuit is open, Eq. (16.136) holds and may conveniently be written in the form
$$\nabla \tilde{\mu}^*_{el} = -\frac{L_{21}}{L_{22}} \frac{1}{T} \nabla T . \qquad (16.140)$$
Let us restrict ourselves to a one-dimensional formalism and integrate Eq. (16.140) along the path P-G1, G1-G2, G2-Q (as in Sect. 16.7.1); we obtain the following:
$$\tilde{\mu}^*_{el}(G_1) - \tilde{\mu}^*_{el}(P) = -\int_{T_0}^{T} \left(\frac{L_{21}}{L_{22}}\right)_B \frac{dT}{T} , \qquad (16.141)$$
$$\tilde{\mu}^*_{el}(G_2) - \tilde{\mu}^*_{el}(G_1) = -\int_{T}^{T+\Delta T} \left(\frac{L_{21}}{L_{22}}\right)_A \frac{dT}{T} , \qquad (16.142)$$
$$\tilde{\mu}^*_{el}(Q) - \tilde{\mu}^*_{el}(G_2) = -\int_{T+\Delta T}^{T_0} \left(\frac{L_{21}}{L_{22}}\right)_B \frac{dT}{T} . \qquad (16.143)$$
If we sum the left-hand sides of Eqs. (16.141)–(16.143) we obtain the difference of the electrochemical potential between points Q and P. These two points are at the same temperature and are made of the same metal, so the chemical potential of the electrons has the same value at both. It follows that
$$\tilde{\mu}^*_{el}(Q) - \tilde{\mu}^*_{el}(P) = q^*_{el} \left(\psi_Q - \psi_P\right) . \qquad (16.144)$$
As for the right-hand side integrals, note that
$$\int_{T_0}^{T} \left(\frac{L_{21}}{L_{22}}\right)_B \frac{dT}{T} + \int_{T}^{T+\Delta T} \left(\frac{L_{21}}{L_{22}}\right)_B \frac{dT}{T} + \int_{T+\Delta T}^{T_0} \left(\frac{L_{21}}{L_{22}}\right)_B \frac{dT}{T} = 0 . \qquad (16.145)$$
Finally, summing up Eqs. (16.141)–(16.143) and using Eqs. (16.144) and (16.145), we obtain
$$\psi_Q - \psi_P = -\frac{1}{q^*_{el}} \int_{T}^{T+\Delta T} \left[\left(\frac{L_{21}}{L_{22}}\right)_A - \left(\frac{L_{21}}{L_{22}}\right)_B\right] \frac{dT}{T} . \qquad (16.146)$$
Let us denote by T the temperature at junction G1 and by T + ΔT that at junction G2. For small values of ΔT, Eq. (16.146) can be written as follows:
$$\psi_Q - \psi_P = -\frac{1}{q^*_{el} T} \left[\left(\frac{L_{21}}{L_{22}}\right)_A - \left(\frac{L_{21}}{L_{22}}\right)_B\right] \Delta T . \qquad (16.147)$$
If we put ψQ − ψP = Δψ, then Eq. (16.147) shows that if we increase by an infinitesimal amount the temperature of the hot junction (G2 if ΔT > 0), the increment in Δψ is given by
$$\frac{d \, \Delta\psi}{d \, \Delta T} = \frac{1}{q^*_{el} T} \left[\left(\frac{L_{21}}{L_{22}}\right)_B - \left(\frac{L_{21}}{L_{22}}\right)_A\right] = \omega_{AB} , \qquad (16.148)$$
where ωAB is called the thermoelectric power of the thermocouple. If we put
$$\omega_A = \frac{1}{q^*_{el} T} \left(\frac{L_{21}}{L_{22}}\right)_A \qquad (16.149)$$
and refer to ωA as the absolute thermoelectric power of metal A (and similarly for metal B), we may write the thermoelectric power of the thermocouple as the difference of the absolute thermoelectric powers of the two metals:
$$\omega_{AB} = \omega_B - \omega_A .$$
From Eqs. (16.132), (16.135), (16.139) and (16.149) we can determine the four phenomenological coefficients as functions of the electrical conductivity σel, the thermal conductivity λ and the absolute thermoelectric power ω of a conductor. We find
$$L_{11} = \lambda T^2 + \sigma_{el} \omega^2 T^3 ,$$
$$L_{22} = \sigma_{el} T \left(q^*_{el}\right)^{-2} ,$$
$$L_{12} = L_{21} = \omega \, \sigma_{el} T^2 \left(q^*_{el}\right)^{-1} ,$$
and then we may obtain the dynamical equations as follows:
$$(T \mathbf{J}_s) = \left(\lambda T^2 + \sigma_{el} \omega^2 T^3\right) \nabla \frac{1}{T} - \frac{\sigma_{el} \omega T}{q^*_{el}} \nabla \tilde{\mu}^*_{el} , \qquad (16.154)$$
$$\mathbf{J}_{el} = \frac{\sigma_{el} \omega T^2}{q^*_{el}} \nabla \frac{1}{T} - \frac{\sigma_{el}}{q^{*\,2}_{el}} \nabla \tilde{\mu}^*_{el} . \qquad (16.155)$$
From Eqs. (16.154) and (16.155) any other thermoelectric effect can be quantitatively
explained as a function of the three selected basic properties of the conductor.
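The closed loop between (σel, λ, ω) and the Lij can be verified numerically (the conductor data below are assumed, copper-like values; the expressions for L11, L22 and L12 = L21 are the ones just derived): building the coefficients and inverting through Eqs. (16.135), (16.149) and (16.139) returns the original three parameters.

```python
T = 300.0             # K
sigma = 5.9e7         # S/m, electrical conductivity (assumed)
lam = 400.0           # W/(m K), thermal conductivity (assumed)
omega = 1.8e-6        # V/K, absolute thermoelectric power (assumed)
q = -1.75882e11       # C/kg, charge per unit mass of the electron component

# phenomenological coefficients from (sigma, lam, omega)
L11 = lam * T**2 + sigma * omega**2 * T**3
L22 = sigma * T / q**2
L12 = L21 = omega * sigma * T**2 / q

# invert: Eqs. (16.135), (16.149) and (16.139)
sigma_back = L22 * q**2 / T
omega_back = L21 / (L22 * q * T)
lam_back = (L11 * L22 - L12**2) / (L22 * T**2)
print(sigma_back, omega_back, lam_back)
```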
16.7.4 Peltier Effect—Explanation
Consider two wires A and B made of different metals and welded at the junction G. They are kept at the same, uniform temperature T and, for simplicity, we suppose that their cross-sectional area is the same.
From Eqs. (16.154) and (16.155) we can write the following relation between the
entropy flux density and the current density for each metal:
$$(T \mathbf{J}_s) = \omega T q^*_{el} \mathbf{J}_{el} = \omega T \, \mathbf{j} .$$
Since the current density is the same in the two conductors, we see that the flux (T Js) has different values in the two wires. Their difference is
$$(T \mathbf{J}_s)_{wire\,B} - (T \mathbf{J}_s)_{wire\,A} = \left(\omega_B - \omega_A\right) T \, \mathbf{j} .$$
Since also the electrochemical potential is continuous when traversing the junction, the change in (T Js) corresponds to a variation in the heat flux density Jq and this variation amounts to
$$\left(\mathbf{J}_q\right)_{wire\,B} - \left(\mathbf{J}_q\right)_{wire\,A} = \left(\omega_B - \omega_A\right) T \, \mathbf{j} . \qquad (16.158)$$
This difference corresponds to the amount of heat absorbed or given off at the junction
per second. More precisely, if we integrate Eq. (16.158) over the cross section of the
wires, the amount of heat exchanged, per second, with the outside per unit current
intensity is
$$\Pi_{AB} = \left(\omega_B - \omega_A\right) T . \qquad (16.159)$$
From Eq. (16.158) we see, for instance, that if ω B > ω A and the current is flowing
from A to B, the junction will tend to cool and then heat must be supplied to the wire
in order to keep it at constant temperature T . If the current flows from B into A the
junction must be cooled by an external thermostat because the junction will tend to
become hotter.
Let us consider, as an example, a thermocouple formed by Bi and Sb traversed by an electric current of intensity I = 20 A. The Peltier coefficient of this couple is ΠBiSb ≈ 40 mV, and we can use the junctions which tend to cool in order to produce refrigeration effects. If we consider, for example, 20 wires of antimony and 20 of bismuth welded in an alternate configuration and we put together the junctions which absorb heat, the power subtracted from the environment is
$$\frac{\hat{d} Q}{dt} \simeq 20 \times 4 \times 10^{-2} \times 20 \; \text{W} \simeq 16 \; \text{W} .$$
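The arithmetic of this worked example in compact form (same numbers as in the text: each absorbing junction removes ΠAB I watts):

```python
n_junctions = 20
Pi_AB = 4.0e-2       # V, Peltier coefficient of the Bi-Sb couple (~40 mV)
I = 20.0             # A

P = n_junctions * Pi_AB * I   # dQ/dt = Pi_AB * I per absorbing junction
print(P)   # 16.0 W
```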
16.7.5 Thomson Effect
In order to discuss the interference between temperature gradients and electrochemical potential gradients2 in metals, it is necessary to refer to the dynamical
Eqs. (16.154) and (16.155). If we consider a small segment of wire of length dx the
increment of heat flux along dx must be compensated by the equivalent amount of
heat absorbed from the external world per second. Then the effect we want to discuss
is directly linked to the divergence of Jq and this can be easily calculated from the
dynamical equations taking care to connect the heat flux density Jq to the electric
current density j and to the temperature gradient. Let us write the expression of ∇ μ̃∗el
from Eq. (16.155) and substitute it into Eq. (16.154); after some calculations we obtain:
$$(T \mathbf{J}_s) = \lambda T^2 \nabla \frac{1}{T} + q^*_{el} \omega T \, \mathbf{J}_{el}$$
and, remembering Eq. (16.124), we have
$$\mathbf{J}_q = \lambda T^2 \nabla \frac{1}{T} + q^*_{el} \omega T \, \mathbf{J}_{el} + \tilde{\mu}^*_{el} \mathbf{J}_{el} .$$
The calculation of the divergence of the heat flux density gives rise to several terms which must be analyzed one by one. Recalling that $\mathbf{J}_{el} = (q^*_{el})^{-1} \mathbf{j}$ is a divergenceless (solenoidal) vector because of charge conservation, we obtain
$$\nabla \cdot \mathbf{J}_q = \nabla \cdot \left(\lambda T^2 \nabla \frac{1}{T}\right) + q^*_{el} T \, \nabla\omega \cdot \mathbf{J}_{el} + q^*_{el} \omega \, \nabla T \cdot \mathbf{J}_{el} + \nabla \tilde{\mu}^*_{el} \cdot \mathbf{J}_{el} . \qquad (16.163)$$
From Eq. (16.155) we obtain the expression for ∇μ̃∗el:
$$\nabla \tilde{\mu}^*_{el} = -\frac{q^{*\,2}_{el}}{\sigma_{el}} \mathbf{J}_{el} + q^*_{el} \omega T^2 \nabla \frac{1}{T} \qquad (16.164)$$
and substituting in Eq. (16.163) we have
$$\nabla \cdot \mathbf{J}_q = \nabla \cdot \left(\lambda T^2 \nabla \frac{1}{T}\right) + q^*_{el} T \, \nabla\omega \cdot \mathbf{J}_{el} - \frac{q^{*\,2}_{el}}{\sigma_{el}} J^2_{el} . \qquad (16.165)$$
As far as the first term on the right is concerned, we have to remember that the temperature distribution we are dealing with is the one that was determined in the condition of zero electric current, and in that case ∇ · Jq = 0. From Eq. (16.165) this implies that
$$\nabla \cdot \left(\lambda T^2 \nabla \frac{1}{T}\right) = 0 .$$

2 Electrochemical potential gradients reduce to electrostatic potential gradients (i.e., electric fields) in homogeneous, isothermal conductors, as we have seen in the preceding examples. In more general contexts it is the variation of the electrochemical potential that acts as the emf in metals.
We know that the absolute thermoelectric power ω depends only on temperature; then its gradient may be expressed in the form
$$\nabla \omega = \frac{d\omega}{dT} \nabla T$$
and, finally, Eq. (16.165) may be written as
$$\nabla \cdot \mathbf{J}_q = T \frac{d\omega}{dT} \, \nabla T \cdot \mathbf{j} - \frac{1}{\sigma_{el}} \, j^2 . \qquad (16.168)$$
The divergence of the heat flux density gives the amount of heat exchanged with the external reservoirs in order to maintain the given temperature distribution. We see that it is composed of two terms. The second on the right is proportional to the current density squared and to the electrical resistivity σel−1 and clearly describes the heat given off by the Joule effect.
The first term on the right depends on the scalar product between the current density and the temperature gradient. This term describes the Thomson effect, and the Thomson coefficient τTh is linked to the absolute thermoelectric power of the metal by the relation:
$$\tau^{Th} = T \frac{d\omega}{dT} .$$
Notice that, given one temperature distribution, the scalar product changes sign if the
electric current is inverted and then the Thomson heat may be absorbed or expelled
by the wire accordingly.
If we consider two metals A and B joined at the junction G, we may relate the Peltier coefficient of the junction to the Thomson coefficients of the two metals. The Peltier coefficient defined in Eq. (16.159) depends only on the temperature (and on the nature of the two metals, of course); hence, taking the temperature derivative of the Peltier coefficient ΠAB, we obtain
$$\frac{d\Pi_{AB}}{dT} = \left(\omega_B - \omega_A\right) + \tau^{Th}_B - \tau^{Th}_A .$$
This relation was first obtained by Lord Kelvin and is called “the first Kelvin relation”.
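The first Kelvin relation holds for any smooth thermoelectric-power curves; the ωA(T) and ωB(T) below are hypothetical linear fits used only for illustration, with ΠAB = (ωB − ωA) T from Eq. (16.159) and τ = T dω/dT.

```python
def omega_A(T):          # assumed absolute thermoelectric power of A, V/K
    return 2.0e-6 + 3.0e-9 * T

def omega_B(T):          # assumed absolute thermoelectric power of B, V/K
    return 5.0e-6 - 1.0e-9 * T

def Pi_AB(T):            # Peltier coefficient, Eq. (16.159)
    return (omega_B(T) - omega_A(T)) * T

T, h = 350.0, 1.0e-3
dPi_dT = (Pi_AB(T + h) - Pi_AB(T - h)) / (2 * h)   # numerical derivative

# Thomson coefficients tau = T * domega/dT (numerical derivatives)
tau_A = T * (omega_A(T + h) - omega_A(T - h)) / (2 * h)
tau_B = T * (omega_B(T + h) - omega_B(T - h)) / (2 * h)

rhs = (omega_B(T) - omega_A(T)) + tau_B - tau_A
print(dPi_dT - rhs)   # ~0: the two sides of the Kelvin relation agree
```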
16.7.6 Galvanomagnetic and Thermomagnetic Effects
This brief discussion of the thermoelectric effects clearly shows the power of Thermodynamics in establishing general connections between different phenomena.
It should be stressed that everything derives from the expression of the entropy
production and from the form of the generalized force connected to the diffusion flows
(in this latter case, the electric current) of the various components. In this expression
the role of the chemical potential and that of the volume forces are appropriately
combined to form “one force”, and an example of the consequences of this has already been seen in the derivation of Einstein’s relation between the diffusion
coefficient and mobility.
Many other effects can be studied by modifying the expression of the volume
force acting on the various components such as, for example, adding magnetic or
centrifugal fields. In the former case, the volume force per unit of mass changes to
Fel = qel∗ [E + vel × B] .
In this case the phenomenological coefficients Lij depend, in general, on the magnetic field intensity and direction; therefore the Onsager symmetry relations defined in Eq. (16.132) must be changed into
$$L_{ij}(\mathbf{B}) = L_{ji}(-\mathbf{B}) ,$$
as was discussed in Sect. 15.3.4. It is clear that many effects arise from the simultaneous presence of temperature gradients, electric fields and magnetic fields and their different mutual orientations.
The Nernst effect takes place in a metal in which a heat flux flows along one direction (say the x-axis) and a magnetic field is applied, for instance, perpendicular (for simplicity) to the heat flux. In this case a potential difference will appear in the direction perpendicular to the previous two. The Ettingshausen effect consists in the appearance of a temperature gradient along, for instance, the z-axis when an electric current flows along the x-direction and a magnetic field is imposed along the y-axis. When a magnetic field is applied normal to the current density in a conductor, a potential difference appears in the direction normal to both the current and field directions; this effect is well known as the Hall effect. All these phenomena are deeply entangled with each other and can be explained if we start from the principle of the impossibility of perpetual motion, as was done for thermoelectric phenomena. We shall not go further into the discussion of this subject.
16.8 Thermodiffusion Processes
In this section, we discuss briefly the interference between temperature gradients
and concentration gradients. It is experimentally observed that in a mixture initially at
uniform temperature, the presence of a concentration gradient will cause the diffusion
of a substance relative to the others and this gives rise to the establishment of a
temperature gradient. Conversely if a temperature gradient is created in an initially
homogeneous system, this will give rise to a concentration gradient. The former is
called Dufour effect and the latter is called Thermodiffusion or, in condensed phases,
Soret effect. The Onsager theory of irreversible processes establishes a deep (model
independent) correlation between these two phenomena and for this reason, they are
said to be “reciprocal effects” as we shall see below.
Let us refer to the general Eq. (16.67) for the entropy production density and
to Eqs. (16.68), (16.69) for the generalized forces. We consider the case in which
chemical reactions and external volume forces are absent. Then the entropy production density becomes
$$\pi = \mathbf{J}_q \cdot \mathbf{X}_q + \sum_{\gamma=1}^{n} \mathbf{J}_\gamma \cdot \mathbf{X}_\gamma , \qquad (16.172)$$
where the forces conjugated to the diffusion fluxes Jγ are
$$\mathbf{X}_\gamma = -\nabla \frac{\mu^*_\gamma}{T} . \qquad (16.173)$$
Since we want to study the interference between flows originating from temperature
and concentration gradients, it is convenient that in the expression for the density
of production of entropy, the forces depend separately on these two gradients. For
this reason we shall consider the chemical potentials as functions of temperature,
pressure and of the concentrations in the form
μ∗γ = μ∗γ (T, p, c1 , c2 , . . . , cn ) ,
where it must be remembered that
$$\sum_{\gamma=1}^{n} c_\gamma = 1 .$$
Moreover, the pressure in the fluid will be considered uniform, because we shall consider only situations in which mechanical equilibrium holds, as already discussed in Sect. 16.6.4. It is, then, convenient to arrange the gradient in Eq. (16.173) in the following way:
$$\nabla \frac{\mu^*_\gamma}{T} = -\frac{H^*_\gamma}{T^2} \nabla T + \frac{1}{T} \nabla_T \, \mu^*_\gamma , \qquad (16.176)$$
where H∗γ is the enthalpy per unit mass and the operator ∇T denotes the gradient of a quantity keeping the temperature constant. With this modification of the form of the forces, the entropy production density in Eq. (16.172) may conveniently be written as
$$\pi = \mathbf{J}'_q \cdot \mathbf{X}_q - \frac{1}{T} \sum_{\gamma=1}^{n} \mathbf{J}_\gamma \cdot \nabla_T \, \mu^*_\gamma \qquad (16.177)$$
and the new flux J′q, which still has the dimension of a “heat flux density”, is defined by3
$$\mathbf{J}'_q = \mathbf{J}_q - \sum_{\gamma=1}^{n} H^*_\gamma \mathbf{J}_\gamma . \qquad (16.178)$$
Before proceeding to examine, briefly, an application to the phenomena of thermodiffusion, it is appropriate to carry out some considerations on the formal structure
of Eq. (16.177) that provides the density of production of entropy.
One comes from the definition Eq. (16.23) of the barycentric fluxes, which leads to the condition:

Σ_{γ=1}^{n} Jγ = 0 .    (16.179)
As a consequence of Eq. (16.179) one flux, for instance Jn may be eliminated
from Eqs. (16.177) and (16.178) because only n − 1 fluxes are independent. Then
the density of entropy production may be written as
π = J′q · Xq − (1/T ) Σ_{γ=1}^{n−1} Jγ · (∇_T μ∗γ − ∇_T μ∗n) .    (16.180)
Similarly, for the n forces appearing in the expression of the entropy production density Eq. (16.177), we may prove that only n − 1 are independent. This is obtained from the application of the Gibbs–Duhem equation, which will be discussed in Sect. 16.9 and summarized in Eq. (16.220). Here it will be rearranged in order to be written in terms of the specific chemical potentials and, as a consequence, of the mass concentrations. It results in the form

Σ_{γ=1}^{n} cγ ∇_T μ∗γ = 0 .    (16.181)
Equation (16.181) allows us to eliminate one force, for instance ∇_T μ∗n , so that Eq. (16.180) may easily be written as a function of the new heat flux plus n − 1 diffusion fluxes and n − 1 relative forces, which result as linear combinations of the ones appearing in Eq. (16.177). More precisely

π = J′q · Xq − (1/T ) Σ_{γ,γ′=1}^{n−1} Jγ · (δγγ′ + cγ′ /cn) ∇_T μ∗γ′ .    (16.182)
The new form of the expression of the density of entropy production Eq. (16.182)
is necessary in order to give an exhaustive discussion of diffusion phenomena in
3 This flux of energy is analogous to the flux defined in Eq. (14.110) for the case of discontinuous systems and which leads to the definition of the heat of transfer.
16 Thermodynamics of Continua
multicomponent systems. Here we shall limit ourselves to binary systems; for a complete treatment the reader is referred to [23, 26].
Since the diffusion phenomena and the relative coefficients are described with
reference to concentration gradients, it is appropriate to change further the form of
the forces in Eq. (16.177) in order to make the role of concentrations explicit.
Let us define the symbol:

μ∗γ,γ′ ≡ (∂μ∗γ /∂cγ′)_{T, p, cγ′′≠γ′} ,    (16.183)

so the gradients of the specific chemical potentials become

∇_T μ∗γ = Σ_{γ′=1}^{n−1} μ∗γ,γ′ ∇cγ′ .    (16.184)
We can now write an expression for the entropy production density in such a way that one force depends on the temperature gradient and the others only on the concentration gradients. If we insert Eq. (16.184) into Eq. (16.182), after some simple algebra we obtain

π = J′q · Xq − (1/T ) Σ_{γ,γ′,γ′′=1}^{n−1} Jγ · (δγγ′ + cγ′ /cn) μ∗γ′,γ′′ ∇cγ′′ .    (16.185)
16.8.1 Binary Systems
Let us consider, for simplicity, the case of a binary system, i.e., the case in which
only two components are present (n = 2) as we did in Sect. 16.6.4.
Let us go back to Eq. (16.185) and make it explicit for the case of a two-component
system. The expression for the entropy production density becomes

π = J′q · Xq − (1/T ) (μ∗1,1 /c2 ) J1 · ∇ (c1 ) .    (16.186)
With reference to Eq. (16.186) for the entropy production density, let us write the linear relations between fluxes and forces:

J′q = −L11 (1/T ²) ∇ (T ) − L12 (1/T ) (μ∗1,1 /c2 ) ∇ (c1 ) ,    (16.187)

J1 = −L21 (1/T ²) ∇ (T ) − L22 (1/T ) (μ∗1,1 /c2 ) ∇ (c1 ) ,    (16.188)
with the symmetry condition:
L 12 = L 21 .
The four phenomenological coefficients are simply related to the various coefficients
measured experimentally when different situations are realized.
First, we may consider the case of homogeneous systems in which only heat conduction is active. If we write the expression of the heat flux obtained from Eq. (16.187) in this particular configuration and compare it with Fourier’s law of heat conduction (see Eq. (16.106)) we obtain

J′q = Jq = −(L11 /T ²) ∇ (T ) = −λ ∇ (T )

and then the phenomenological coefficient L11 is related to the heat conductivity by

L11 = λ T ² .
The symmetrical situation occurs when we consider the presence of diffusion flows in a system maintained at uniform temperature. In this case the diffusion coefficients with respect to a generic reference velocity a are defined by a general relation of the type:

x1 (v1 − a) = −D ∇ (x1 ) ,

where a is the reference velocity (in the case we are considering here a = v is the c.m. velocity) and x1 the diffusing quantity: for instance ϱ1 , C1 or c1 . In the case we are discussing we observe the diffusion of the mass concentration c1 with respect to the c.m. motion and then Eq. (16.188), for isothermal systems, gives

ϱ c1 (v1 − v) = −L22 (1/T ) (μ∗1,1 /c2 ) ∇ (c1 ) = −ϱ D ∇ (c1 ) .
The phenomenological coefficient L22 is related to the diffusion coefficient by

L22 = ϱ D c2 T / μ∗1,1 .
The other two phenomenological coefficients describe the cross effects originating from temperature gradients and concentration gradients.
16.8.2 Thermodiffusion
One consists of the onset of a diffusion flow generated by a temperature gradient in
the absence of a concentration gradient and is called Thermodiffusion.
The phenomenon of thermodiffusion was discovered by Ludwig in 1856 [27], but it was studied in a systematic way by the Swiss scientist Charles Soret in 1879 (see [28]), who observed that in a tube containing an initially homogeneous saline solution a concentration difference was established, in the stationary state, when the two ends were kept at different temperatures (in particular, the concentration was higher at the cold end).
Today the effect is generally referred to as the Soret effect. Starting from a homogeneous configuration, Eq. (16.188) shows that a temperature gradient gives rise to a diffusion flux proportional to it:

J1 = ϱ c1 (v1 − v) = −L21 (1/T ²) ∇ (T ) .    (16.194)
The thermodiffusion coefficient describes the proportionality between the diffusion flux of component 1 and the temperature gradient, as expressed in Eq. (16.194). It is denoted by DT and is defined by the relation:

J1 = −ϱ c1 c2 DT ∇ (T ) .    (16.195)
The reason for this definition, in which the product of the two concentrations is put into evidence, is the following: it is experimentally observed that thermodiffusion phenomena occur only in mixtures and are absent in systems composed of one component. The coefficient is expected to depend on temperature and also on the concentrations, and the flux must vanish for either
c1 = 0 or c2 = 0. Definition Eq. (16.195) satisfies this requirement but this does not
mean that DT does not depend on the concentration of one of the two components. Of
course DT depends also on temperature. If we compare Eqs. (16.194) and (16.195)
we find that the phenomenological coefficient L 21 is related to the thermodiffusion
coefficient by
L21 = ϱ c1 c2 T ² DT .    (16.196)
In general, when both concentration and temperature gradients are present, the flux of matter results:

J1 = −ϱ c1 c2 DT ∇ (T ) − ϱ D ∇ (c1 ) ,    (16.197)
hence in the stationary state4 , that is J1 = 0, the ratio between the variation in concentration and the variation in temperature is

dc1 /dT = −c1 c2 (DT /D) = −c1 c2 ST ,    (16.198)
4 For
the use of the term “stationary state” remember, by analogy, the case of the thermomolecular
pressure difference in Eq. (15.6) and its role in Sect. 15.2.
where ST ≡ DT /D is called the Soret coefficient. It describes quantitatively the relative importance of thermal diffusion and of diffusion due to concentration gradients in determining the resulting flux of matter. For organic mixtures or aqueous solutions it is of the order of 10−3 –10−2 K−1 .
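As a rough numerical illustration of the stationary-state relation dc1/dT = −c1 c2 ST (all numerical values below are assumed for the sake of example, not taken from the text):

```python
# Order-of-magnitude estimate of the Soret separation in a closed cell.
# All numerical values are illustrative assumptions.
S_T = 1e-3   # Soret coefficient [1/K], typical order for aqueous solutions
c1 = 0.10    # mass fraction of component 1
c2 = 1.0 - c1
dT = 10.0    # temperature difference between the two ends [K]

# Stationary state: dc1/dT = -c1*c2*S_T (coefficients treated as constant)
dc1 = -c1 * c2 * S_T * dT
print(f"concentration change across the cell: {dc1:.1e}")
# negative dc1: for a positive S_T component 1 accumulates at the cold end
```

Even a temperature difference of 10 K thus sustains a concentration change of order 10−3 only, which is why careful measurements are needed to determine ST.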
16.8.3 Dufour Effect
The effect symmetrical to thermodiffusion is the onset of a heat flow caused by a concentration gradient in an isothermal system and is called the Dufour effect. If we consider Eq. (16.187) and set ∇ (T ) = 0, we obtain:

J′q = −L12 (1/T ) (μ∗1,1 /c2 ) ∇ (c1 ) .    (16.199)
The Dufour coefficient DF is defined by the relation:

J′q = −ϱ1 μ∗1,1 T DF ∇ (c1 )    (16.200)

and hence, the phenomenological coefficient L12 is related to the Dufour coefficient by

L12 = ϱ c1 c2 T ² DF .    (16.201)
Remembering that ϱ c1 = ϱ1 , we may now write the dynamical equations for the thermodiffusion phenomena as a function of three coefficients taken as independent:

J′q = −λ ∇ (T ) − ϱ1 μ∗1,1 T DF ∇ (c1 ) ,    (16.202)

J1 = −ϱ c1 c2 DT ∇ (T ) − ϱ D ∇ (c1 ) ,    (16.203)

DT = DF ,    (16.204)

where Eq. (16.204) comes from the comparison of Eq. (16.196) with Eq. (16.201) and the application of the Onsager symmetry requirement for the phenomenological coefficients.
Finally let us go back to the phenomenological Eqs. (16.187) and (16.188). The Second Principle requires that the entropy production be positive; then, if the fluxes are written as linear functions of the forces, the phenomenological coefficients must satisfy the following conditions:

L11 > 0 ,    (16.205)

L22 > 0 ,    (16.206)

(L11 L22 − L12 L21 ) > 0 .    (16.207)
These conditions come from Eq. (14.127), when the symmetry property of the phenomenological coefficients is applied. The requirements of Eq. (16.207) have important consequences for the various coefficients defined in this section. Following the order of presentation we have

L11 > 0 =⇒ λ > 0 ,    (16.208)

L22 > 0 =⇒ μ∗1,1 > 0 ,    (16.209)

DT DF = DT² < λ D /(ϱ c1² c2 T μ∗1,1 ) .    (16.210)

Conditions given in Eqs. (16.204), (16.208), (16.209), and (16.210) are of general validity, i.e., model independent. They follow from the impossibility of perpetual motion.
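These positivity conditions can be checked numerically: with the Onsager symmetry L12 = L21, the entropy production is a quadratic form in the forces, and the conditions above make it non-negative. A minimal sketch, with arbitrary illustrative coefficients:

```python
import random

# Illustrative phenomenological coefficients (arbitrary values) satisfying
# L11 > 0, L22 > 0 and L11*L22 - L12*L21 > 0, with L21 = L12 (Onsager)
L11, L22, L12 = 3.0, 2.0, 1.5
assert L11 > 0 and L22 > 0 and L11 * L22 - L12 * L12 > 0

random.seed(0)
for _ in range(1000):
    X1, X2 = random.uniform(-1, 1), random.uniform(-1, 1)
    # entropy production written as a quadratic form in the forces
    pi = L11 * X1**2 + 2 * L12 * X1 * X2 + L22 * X2**2
    assert pi >= 0.0
print("entropy production non-negative for all sampled forces")
```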
16.9 Appendix—The Gibbs–Duhem Relation
Let us consider a phase in which, for simplicity, one work parameter only is present
(say the volume) and let it be composed by n independent chemical components.
Recalling the discussion in Sect. 4.3.1 the Fundamental Relation in the Entropy representation will be written as
S = S (U, V, nγ ) , γ = 1, 2, . . . , n ,    (16.211)
and we see that the number of degrees of freedom is n + 2. It is clear, however, that
if we leave aside the amount of matter, the number of independent state parameters
will be n + 1. If we write Eq. (16.211) for one mole, and we denote by cγ = n γ /n tot
the molar fraction of component γ we obtain:
Sm = S Um , Vm , cγ , γ = 1, 2, . . . , n ,
where ntot is the total number of moles and the n mole fractions cγ = nγ /ntot satisfy the obvious relation:

Σ_{γ=1}^{n} cγ = 1 .
It follows that if we decide to describe the system as a function of T and p plus the n mole fractions (that is, if we do not take into account the size of the system), the number of degrees of freedom is n + 1. This means that among the n + 2 quantities T , p and μγ there must be one mutual dependence. This relation is called the Gibbs–Duhem Relation. Let us write it explicitly.
It is convenient to refer to the Fundamental Equation in the Gibbs representation Eq. (4.32) and to remember expression Eq. (4.78) of the Gibbs potential integrated at constant temperature and pressure. If we differentiate the latter we have:
dG = Σ_{γ=1}^{n} μγ dnγ + Σ_{γ=1}^{n} nγ dμγ    (16.214)

and if we substitute Eq. (4.32) we obtain:

Σ_{γ=1}^{n} nγ dμγ = −S dT + V dp .    (16.215)
This is the Gibbs–Duhem relation. In particular, for processes at constant T and
p, Eq. (16.215) becomes:
Σ_{γ=1}^{n} nγ dμγ = 0 .    (16.216)
Let us go back to Eq. (16.84) and in particular let us consider the first term of the second member. The first step is to recognize that the summation over the mass-concentration gradients is equivalent to a summation over the gradients of the specific chemical potentials, since, at uniform temperature and pressure,

∇μ∗γ = Σ_{γ′=1}^{n} (∂μ∗γ /∂cγ′ ) ∇cγ′ .
Hence we may write

Σ_{γ,γ′=1}^{n} cγ (∂μ∗γ /∂cγ′ ) ∇cγ′ = Σ_{γ=1}^{n} cγ ∇μ∗γ    (16.218)

and since μ∗γ = μγ /Mγ and cγ = mγ /m, we may replace cγ /Mγ with nγ /(ϱV ) so that, finally, Eq. (16.218) can be written as
Σ_{γ=1}^{n} cγ ∇μ∗γ = (1/ϱV ) Σ_{γ=1}^{n} nγ ∇μγ

with

Σ_{γ=1}^{n} nγ ∇μγ = 0 ,    (16.220)
by the Gibbs–Duhem relation for isothermal and isobaric processes as shown
in Eq. (16.216).
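Equation (16.216) can also be verified numerically for the ideal-mixture model μγ = μ°γ + RT ln cγ (a standard model assumed here purely for illustration): for any composition change at constant T and p, Σ nγ dμγ vanishes to first order.

```python
import math

R, T = 8.314, 298.15  # gas constant [J/(mol K)] and temperature [K]

def mu(c, mu0=0.0):
    """Ideal-mixture chemical potential at fixed T and p (mu0 arbitrary)."""
    return mu0 + R * T * math.log(c)

n_tot = 1.0
c = [0.2, 0.3, 0.5]          # mole fractions, summing to 1
dc = [1e-6, -4e-7, -6e-7]    # composition variation, summing to 0
n = [n_tot * x for x in c]

# Gibbs-Duhem at constant T, p: the sum of n_gamma * d(mu_gamma) is zero
# to first order (finite differences leave a tiny second-order residual)
s = sum(n[g] * (mu(c[g] + dc[g]) - mu(c[g])) for g in range(3))
print(f"sum n*dmu = {s:.1e} J")
```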
Part IV
Thermodynamics and Information
Chapter 17
Introduction to the Role of Information
in Physics
One must not think slightingly of the paradoxical…for the
paradox is the source of the thinker’s passion, and the thinker
without a paradox is like a lover without feeling: a paltry
mediocrity.
Søren Kierkegaard
Abstract The paradox first proposed by Maxwell and well known under the name
of “The Maxwell’s Demon paradox” is discussed starting from the contribution to the
issue given by L. Szilard in 1929. The accurate analysis of the paradox shows that,
in order to violate the Second Principle, Maxwell’s demon must perform observations
and adapt his own strategy accordingly. Performing observations involves creating
information. To save the Second Principle, it is required that the created information
should be deleted. The latter operation requires dissipation in the form of heat and
the Landauer Principle fixes the minimum amount of heat dissipated by erasing one
bit of information. The nonseparability of the observer from the observed constitutes
a new epistemological paradigm in which the theory of information, both classical
and quantum, ceases to be a purely technical discipline to become a fundamental
constituent of the theories of Physics. In this chapter, some elements of classical and
quantum information theory are briefly dealt with. Zurek’s theory, which explores the
possibility that thermodynamic potentials of thermodynamic systems should contain
additive contributions due to the description of the configuration by the observer
(algorithmic complexity), is reported as an example.
Keywords Information · Maxwell’s paradox · L. Szilard · Landauer principle ·
Classical information · Shannon and Weaver’s entropy · Entropy of information ·
Algorithmic complexity
17.1 The Maxwell’s Paradox
Maxwell considered a gas contained in a vessel divided into two parts by an adiabatic
septum, bearing a small opening provided with a very light cover which could be
easily opened or closed. Maxwell imagined the existence of a small being with
© Springer Nature Switzerland AG 2019
A. Saggion et al., Thermodynamics, UNITEXT for Physics,
particular visual acuity and great rapidity of movement, located near the opening, who could open and close the lid quickly whenever a very fast molecule was directed toward the opening from the side where he was. In this way this being, which for its extraordinary faculties was indicated as a devil (demon), would act selectively in order to favor the passage of the faster molecules and thus obtain a temperature difference from an initially isothermal situation.
This temperature difference could drive a thermal engine, allowing work to be produced outside. The vessel is immersed in an environment at constant temperature and, in the overall process, the work produced by the engine would be compensated by an equivalent quantity of heat supplied by the environment to the gas. In this way we could get a perpetual motion (of the second kind), that is, a violation of the Second Principle of Thermodynamics.
Various attempts were made to build some automatic mechanism, that is, one that would not require the presence of an imp (demon), to achieve this result: light doors that can be opened in one direction only under the thrust of a suitable fluctuation and that close again very quickly before an opposite fluctuation nullifies or even reverses the sign of the phenomenon; but with no success.
It is not forbidden that a favorable fluctuation gives us an occasional advantage: this would only show the probabilistic nature of the Second Principle. What the Principle prohibits is that this be done in a repetitive and predetermined manner.
The paradox remained unsolved because these attempts did not grasp its essential point. This step was taken in an article by Leo Szilard in 1929 [29]. It was a first step, necessary, but certainly not conclusive.
17.2 The Leo Szilard’s Article in 1929
L. Szilard pointed out a key fact: physics could not solve the paradox because
it was not able to deal with the fundamental point that the devil, in addition to
having exceptional physical gifts, had to be an intelligent being [29]. The essence
of Maxwell’s paradox was schematized by L. Szilard in the following way. A gas
consisting of only one molecule is contained in a cylindrical container fitted at each
base with a piston.
Each piston can, if necessary, be moved in either direction without appreciable
dissipation. At the cylinder centerline a septum may be introduced, where appropriate, which divides the volume of the entire cylinder into two halves; this operation too can, ideally, be accomplished without appreciable energy dissipation by the observer. The cylinder is constructed with a material which
is a good heat conductor and is held in a thermal bath (environment).
The observer (a more modern version of Maxwell’s demon) can get perpetual
motion by making the gas perform a cyclic transformation determined by the
following four equilibrium states:
1. At the starting state the molecule occupies the entire volume of the cylinder and
is at ambient temperature.
2. The partition is introduced: the molecule will occupy a volume equal to half of
the initial volume, but the observer does not know in which part the molecule is;
this operation can be accomplished with an expenditure of energy which can be
made arbitrarily small.
3. The observer performs an observation to learn on which side the molecule is,
then he moves the piston corresponding to the empty portion so as to bring it into
contact with the partition; subsequently, the latter is removed.
4. The piston that was brought in contact with the partition is slowly (reversibly)
moved to the initial position. In doing this the gas undergoes an isothermal expansion and produces work by absorbing an equivalent amount of energy from the
environment in the form of heat flow.
The various steps are summarized in Fig. 17.1. So we produced mechanical work with 100% efficiency (dissipations relative to the movement of mechanical parts are considered negligible). There are other ways we can produce work with a yield equal to 100%. One way is to expand isothermally and reversibly a gas, in conditions
Fig. 17.1 In (a) the initial state is represented: the molecule occupies the entire volume 2V and
is at ambient temperature T . In (b) a partition is introduced in the central part. In (c) the piston
corresponding to the empty part is moved to the center. In (d) the partition is removed and in (e) the
piston expands reversibly to the initial position. Finally, in (f) the molecule is again in the initial state.
close to those of ideal gas. Since the internal energy of the gas remains constant
the work done by the gas is compensated by an equivalent amount of heat absorbed
from the surrounding environment. Also in this case the complete conversion of
environmental heat energy into mechanical energy is achieved but this is not the only
result. In fact, the gas is in a state different from the initial state because it occupies a
larger volume and therefore this procedure cannot be repeated indefinitely. In order
to bring the thermal machine (the gas) back to the initial situation (in order to be
able to indefinitely repeat the operation) it becomes necessary to employ part of the
stored mechanical work during the expansion to bring the gas to the initial volume.
In the thought experiment by Szilard, instead, the gas at the end of the operations is exactly in the same initial state (that is, it occupies the entire volume of
the cylinder always at room temperature), and then this transformation of the gas
can be repeated endlessly. As regards the gas this is a cyclic transformation and the
work was produced by taking the corresponding amount of heat from the surrounding environment, and then with a 100% yield. So the Second Principle seems to be violated.
It is necessary, at this point, to quantify the amount of work obtained and the
amount of heat supplied from the outside, i.e. from the thermostat at temperature T .
The energy of the molecule, limited to the terms describing the motion of the center of mass, is

ε = (3/2) kB T ,    (17.1)

and then the energy contained in the expanding volume will have the same value, which will remain constant during the operation.
We know that, for a classical gas with perfectly reflecting walls, the pressure is related to the energy density by the relationship (see Appendix B.1, (B.13)):

p = kB T /V ,    (17.2)

and then the amount of work we will obtain after the isothermal expansion is

W = ∫_V^{2V} p dV = kB T ln 2 .    (17.3)

Consequently, as the energy content of the expanding gas is constant, the environment will provide the cylinder with the corresponding amount of heat:

Q = kB T ln 2 .    (17.4)
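Numerically, the work per cycle is minute. A minimal sketch, assuming room temperature for the bath:

```python
import math

k_B = 1.380649e-23  # Boltzmann constant [J/K] (exact SI value)
T = 300.0           # bath temperature [K], an assumed value

# Work extracted in one cycle of the Szilard engine, Eq. (17.3)
W = k_B * T * math.log(2)
print(f"W = {W:.3e} J per cycle")  # about 2.9e-21 J
```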
17.3 The Observer Creates Information
The key point in this “gedanken experiment” is the following: the operator must
know in which sector of the cylinder the molecule is in order to know which piston
corresponds to the empty part and, subsequently, start the operations as described
before. For this, we need an intelligent being. Intelligent in the sense that he has
to make an observation and then adjust his action according to the outcome.1 What
Szilard called “intelligent being” is, in a nutshell, what we call observer.
A more careful consideration of the role that the observer plays in this thought experiment, and of his real nature, will allow us to solve the paradox and save the Second Principle. On the one hand this will lead us to lay the foundations
for a definition of the concept of information as a physical quantity and on the other
we will be led to a generalization of the concept of entropy.
It is assumed that the observer, in order to determine the location of the molecule (is it to the right or to the left?), has designed, built and used an appropriate detection apparatus: for example he could illuminate the two chambers with a light beam of low frequency, waiting to see a diffused photon. In principle only the diffused photon energy would represent a cost for the observation, and this could be selected so as to possess a very small fraction of the kinetic energy of the molecule (the lower the frequency the lower the energy of the photon “dissipated” by the observation). In
Physics we call resolving power of an optical instrument the minimum distance d at which two point sources are declared as “distinct”. We know that the resolving power of an optical instrument which uses a radiation with wavelength λ is of order λ. Since the observer is interested in discovering whether the molecule is on the right or on the left, he may (in principle) make use of a radiation that has a wavelength of the order of the size of the two containers. Therefore the energy cost associated with the operation of the apparatus of observation decreases with the increase of the size of the cylinder, and hence this energy cost cannot “save” the Second Principle, because the work produced is independent of the size of the machine (see Eq. (17.3)) [30, 31].
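This scaling argument can be made concrete with a rough numerical sketch (the cylinder size and temperature below are assumed values): the energy of a photon whose wavelength is comparable with the container is compared with the work kB T ln 2 extracted per cycle.

```python
import math

h = 6.62607015e-34   # Planck constant [J s] (exact SI value)
c = 2.99792458e8     # speed of light [m/s] (exact SI value)
k_B = 1.380649e-23   # Boltzmann constant [J/K] (exact SI value)
T = 300.0            # bath temperature [K], assumed
L = 0.1              # linear size of each half of the cylinder [m], assumed

E_photon = h * c / L             # energy of a photon with wavelength ~ L
W_cycle = k_B * T * math.log(2)  # work extracted per cycle, Eq. (17.3)

print(f"E_photon/W_cycle = {E_photon / W_cycle:.1e}")
# the ratio shrinks as L grows: the observation cost alone
# cannot save the Second Principle
```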
There is an additional step to consider. If we associate the idea of intelligent being to
the figure of the observer intended as a human being, we have to take into account
the fact that, after having observed the diffused photon, his mental state has undergone a modification. He has reached an awareness he did not have before making the observation:
we can state that his mind, considered as a physical system, has undergone a change
of state.
Now he knows that the molecule is to the right (for instance). A process took place,
analogous to the collapse of the wave function which occurs in quantum mechanics
whenever you make an observation: in our case, at the beginning the state of the
1 Suppose we want to avoid the need of an intelligent being in the sense said before, i.e. we operate the engine always in the same manner: for instance, we always compress first the piston on the right, reducing its volume to one half. Then we remove the septum and expand. If the molecule was on the left we gain positive work, but if the molecule was on the right we have had to do work for the compression and the balance is negative. In one case out of two we gain, in the other we lose. It is easy to see that on the whole we lose.
molecule was represented by a linear combination with coefficients 1/√2 of the
states right and left. After the observation, the state collapses to one of the two. How
can we bring in the analysis of the phenomenon, the change of state that occurs in
the mind of the observer? Certainly, the observer must have stored the observation
datum in order to perform, as a consequence, the right procedure.
We suppose that the participation of the observer’s mind in this process can be
reduced to the setting of the datum in a memory, as is the case for example for the
memory of a computer.
We do not attribute to the original formulation of Szilard, about the necessity of
the presence of an intelligent being, any reference to the human mind.
We can, for the moment, limit ourselves to considering an automaton which first observes the position, then stores the datum in a memory and finally, instructed with a suitable program, performs the right procedure, depending on the result of the observation.
This should be considered an intelligent being, in the sense of Artificial Intelligence.
It should be noted that our emphasis on identifying the position of the molecule
before starting the operations, must not be overestimated.
Let us modify the apparatus in this way: the two pistons at the ends of the cylinder
are replaced by two end-walls that can be removed if desired. The central septum is
constructed in such a way as to be able to move without friction in either direction
once it has been introduced into the central position of the cylinder. In this way,
once introduced, it will move in the right direction depending on the position of the
molecule and the isothermal expansion will be realized.
To repeat the operation it will be necessary to remove the end-wall where the
expansion terminated and use it as the central partition in the new cycle.
It is at this point that the operator will have to perform a binary observation in order
to establish which of the two end-walls is to be removed, and hence the observer has
to create one bit of information in order to proceed correctly (this procedure could
also be considered a way to “observe” the position of the molecule even if someone
could, mistakenly in our opinion, call it “an indirect observation”). In conclusion,
sooner or later, a bit of information will be created and this is the fundamental point.
17.4 The Solution of Maxwell–Szilard Paradox
The need to store the result of an observation, suggested to Bennett [30] a possible
solution to the paradox “of Maxwell’s demon”.
As already noted, it is true that gas has made a cyclic transformation, but the
observer will now be, in a state different from his initial state.
The acquisition of the information about the position of the molecule has changed
the state of the memory used by the observer (for instance an automaton). In order to
store the datum, the observer had to change, or create, a physical system and, in so
doing, created information (in our example one bit). Therefore, the procedure used
by the observer may continue only until the entire available memory has been filled.
This situation is analogous to the example mentioned at the beginning in which a
gas, in ideal conditions, was made to expand at a constant temperature.
In the latter case the yield was maximum but, as we have noted, this result could
be obtained up to a certain amount of produced work, limited by the fact that the
expansion of the gas could not continue indefinitely. Similarly, in this case, we can
produce work with 100% efficiency until we have exhausted all the available memory.
In conclusion in both cases, we produced work with the maximum efficiency but
some change has been produced in the universe. In the isothermal expansion case
the change is in the engine “strictly speaking” and therefore easily recognizable. In
the case of the Szilard engine, the change is in the memory of the observer.
The possibility of creating perpetual motion requires that the operation can be
repeated indefinitely, i.e., the engine “in the broadest sense” must perform a cyclic
transformation. The analogy between the two situations shows that it is necessary to
consider the memory of the observer, or better the observer as a whole, as a part of
the engine.
In conclusion, if we want to perform a cyclic transformation we have to erase the
information that we had to store during the operation.
17.5 Landauer Principle
The solution came from an intuition of Rolf Landauer in 1961 [32] (see also [31, 33]), in a paper devoted to the study of the constraints imposed by Physics on information processing (by this we mean the realization of an observation, the setting of a result in memory and the processing of the data).
While the process of copying (classical) information can be done reversibly and without any cost, the process of erasing information requires a minimum, non-zero, production of entropy: for 1 bit of information this entropic cost Sbit is

Sbit = kB ln 2 .    (17.5)
In the case of the Szilard machine the entropic cost for the cancelation of the memory results in an energy cost equal to (at least):

Q = T Sbit = kB T ln 2 .    (17.6)
So to perform a completely cyclical process we should use, at least, all the work that
the gas has produced.
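A minimal numerical sketch of the Landauer bound (the temperature and memory size are assumed values chosen only for illustration):

```python
import math

k_B = 1.380649e-23  # Boltzmann constant [J/K] (exact SI value)
T = 300.0           # temperature of the thermal bath [K], assumed

# Minimum heat dissipated to erase one bit, Eq. (17.6)
Q_bit = k_B * T * math.log(2)

# Illustrative scale-up: erasing one gigabyte (8e9 bits) of memory
n_bits = 8 * 10**9
Q_total = n_bits * Q_bit
print(f"per bit: {Q_bit:.3e} J, per gigabyte: {Q_total:.3e} J")
```

The bound is far below the dissipation of present-day electronics, but it is a matter of principle: it is the erasure, not the measurement, that carries an unavoidable thermodynamic cost.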
17.6 On the Separation Observer–Observed
The preceding discussion has forced us to consider the overall system [gas + observer]
as the overall “isolated” system to which the requirement to perform a cyclic transformation applies and, in this new perspective, we have verified the need to take into account the changes of state of the observer.
The conclusion we reach is, at first glance, rather surprising: the saving of the
Second Principle of Thermodynamics is based on the impossibility of separating the
observer from the observed. Remaining within the perspective of a “macroscopic” observer that describes the system with the usual macroscopic variables volume, pressure and temperature, the two possible states (molecule to the right, molecule to the left) constitute, before the acquisition of the information, one ensemble of states described
by the same macroscopic state variables. In exactly the same sense, by adopting the
statistical mechanical description, we consider the macroscopic state as an ensemble
of a (large) number W , of equivalent microscopic states.
From the macroscopic system, we cannot get any work. We have seen, however,
that this becomes possible if the observer can distinguish between the two “microscopic” states.
Generalizing this result we can say that if the observer increases the degree of
complexity of observation, which in this case means to distinguish between some
microscopic states of the ensemble, new results become possible.
With the words used at the beginning of this section, we will say that “some
processes that are unnatural to a certain level of complexity, can become natural at a
higher level of complexity.”
This conceptual change in the observer–observed relationship opens the way to
new developments, in the direction of broader generalizations of Thermodynamics
and Statistical Mechanics.
Since the central point in this new conceptual development is the need to fix a given datum (that is, the result of an observation) in a memory, it is natural that the great development of information theory, initially motivated by technological and economic reasons connected with the transmission of messages, in fact constitutes a fundamental domain of knowledge.
17.6.1 Information as a Physical Quantity Which Acquires
a Physical Reality
This link between observer and observed is determined by the definition of a new physical quantity called, briefly, information or, more precisely, amount of information. In the above discussion we have highlighted a crucial point: if we want the observation to lead to the production of work, it is necessary that the datum becomes information, and this implies:
1. It must be fixed in a physical medium (memory);
2. It must be able to be transmitted to another observer; the latter can reconstruct
the observation and eventually proceed accordingly.
The observer’s ability to create and process information is determined, and
therefore also limited, by the laws of mechanics, electromagnetism, quantum mechanics,
and, more generally, by the laws of Physics.² This means that the theory of information
is not a purely mathematical theory but a physical theory, and that information is a
physical quantity (physical information) [34].
We will call information processing the set of two operations: (i) the deposit or
fixation of the information in some physical medium (the memory); (ii) the data
transmission.
It is important to emphasize the role of the data transmission process in the definition of information processing because, as we can easily understand, this part will
inevitably introduce some corruption of the encoded information due, for instance,
to what is called “noise”, present in all channels of data transmission. For example, to
encode 1 bit we can use a system with two states (a relay with on/off positions, spin
up/down, an atom with two energy levels, etc.), but during the “transport” (transmission
of the information) this system can interact with the environment in which it
is immersed and undergo a change of state. In this case, the information which
arrives might be the opposite one. In the cases in which the observer is not able to
control the phenomenon that generates the error, we call this uncontrolled interaction
an interaction with a “noisy thermal environment” (recall the definition of the amount of
heat as a descriptor of the interactions “not controlled”). In other cases, the observer
recognizes a precise cause of error but is not able to remedy it due to external
constraints that can be either of technical or economic nature (such as in some phenomena of crosstalk between transmission lines or between electronic components).
In all cases, the observer will have to measure or at least to estimate the probability
that the stored information “flips” that is, changes the state during transmission. The
observer will, then, be able to put into play suitable defence strategies against these
errors and, in principle, he will be able to make the error probability as small as he
wants. To do this, however, he will have to pay a price: he will send more “binary
words” to transmit only one bit of information.
For example, if he has to transmit the “datum” |1⟩ and he knows that during the
transmission it can switch, due to the external noise, to |0⟩ with probability p, the
observer can transmit a message composed of three binary words |a⟩ |b⟩ |c⟩
and agree with the recipient that the message |1⟩ |1⟩ |1⟩ should be considered as |1⟩
and that the message which arrives in the form |0⟩ |0⟩ |0⟩ should be considered as
|0⟩. With this agreement, the recipient will receive two types of messages
with certain identification (that is, those with three identical symbols) and six types
of messages with uncertain identification (i.e. those in which only two symbols are
equal). Attributing to each of these last six “mixed” events the meaning relative to the
datum that appears twice, he errs only in the cases in which, during transmission,
² Maybe, in the near future, also by other scientific domains, like neuroscience, if, for instance, a
reasonable modeling of the operation of the human mind can be put into play.
17 Introduction to the Role of Information in Physics
two words have suffered an inversion, but this event occurs with a probability that is
of order p².
For example, if the probability of error on the single word is 10⁻³, the recipient
will be in error once in a thousand (probabilistically speaking) if the transmission is
made with “one-symbol messages”, but if the message is made of three binary words
the receiver will be in error roughly three times in a million (the residual error
probability is of order 3p²).
Therefore, the error can be made smaller than any predetermined amount, but at the
price of transmitting less data for the same message length.
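The three-word repetition scheme described above is easy to check numerically. The following is a minimal sketch (the function names are ours, not from the text):

```python
import random

def majority_decode(words):
    """Decode a 3-word repetition code by majority vote."""
    return 1 if sum(words) >= 2 else 0

def exact_error(p):
    """Residual error probability: the decoded bit is wrong when two or
    three of the three transmitted words are flipped by the noise."""
    return 3 * p**2 * (1 - p) + p**3

def simulated_error(p, trials=200_000, seed=0):
    """Monte Carlo estimate of the same quantity: send the 'datum' 1 as the
    triple (1, 1, 1) through a channel that flips each word independently
    with probability p, then decode by majority."""
    rng = random.Random(seed)
    errors = 0
    for _ in range(trials):
        received = [0 if rng.random() < p else 1 for _ in range(3)]
        errors += majority_decode(received) != 1
    return errors / trials
```

For p = 10⁻³ the exact residual error is 3p²(1 − p) + p³ ≈ 3 × 10⁻⁶, at the cost of a threefold message length.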
This brief account is intended to show that the two “constitutive” parts of the
concept of information processing, i.e., the storage and the transmission, are separated
only for practical convenience of discussion, but the two parts should be considered
complementary to each other in the definition of the concept of information. The
study of the noisiness of the transmission channels is therefore a fundamental part
in information theory. For a detailed discussion see, for example, [35–37].
Measure of the Quantity of Information (Classical)
From the above discussion we have seen that the “datum” that the observer defines
as the simplest one, that is what can be described in the most economical way, is a
binary choice: the molecule is to the right or to the left and, in this case, the datum
to be fixed expresses a choice between two equally likely possibilities. The system
is located in one of two equally probable situations and we will consider this one
as the elementary situation. We will say that the information possessed by such
a simple system is the unit information and we will call this unit of information
bit abbreviation that stands for the English term binary digit. If the system we are
observing can be found in n different and equally likely states, our uncertainty will
be larger and therefore an observation that chooses one possibility among many will
provide “more valuable information”. We say that it will provide more information,
and we have to define, quantitatively, how much information is contained.
This is, by definition, equal to the amount of information contained in the message sent to a receiver to enable him to reconstruct the observation. The amount of
information contained in a message is defined in the Theory of Information (see for
instance the seminal book [35]).
The transmission of a “given observation” which requires the minimum amount of
information is then the one for a binary situation (right–left, open–closed, up–down,
etc.), i.e., we are transmitting one bit. When the observation selects one outcome
out of a larger number of possibilities it will require a longer message, that will be
composed by a higher number of bits. In this sense we will say that the system
which can be in a higher number of possibilities “contains” a larger “quantity”
of information. What we can say about the codification of observational data can
be extended, in the theory of communication, to the definition of the amount of
information contained in a message.
The elementary message is composed of one symbol selected from two possibilities: this is a message composed of one bit and contains a unitary information. If
the message is one choice among N equiprobable possibilities then we will say, by
definition, that the message contains an amount of information I given by
I = log2 N bits of information.
In general consider a message consisting of n symbols belonging to an alphabet
consisting of N different symbols (including spaces and punctuation).
If all the symbols were equiprobable in the composition of a message, then a
given message composed of n symbols would constitute one possibility out of N n
and therefore the amount of information I contained in such a message would be
equal to:
I = log2 Nⁿ = n log2 N bits of information. (17.8)
As we may have to compose messages of different length, it is useful to observe
that, in this simple case, we can define the information per symbol which is equal to
log2 N . The information contained in a message composed of n symbols is n times
the average information per symbol. This apparently trivial step is useful in view of
the generalization to the case of non-equiprobable symbols which is based on the
concept of average information per symbol.
Example 17.1 Still remaining in the case of equiprobable symbols we consider a
very simple example. In the case in which the “elementary choice” is between two
equally probable states, i.e., N = 2, the amount of information contained in a message
composed of n symbols can be calculated with Eq. (17.8) and we find, as the result, the
value of n bits. The average information per symbol is, obviously, 1 bit (log2 2 = 1).
If the system can be found in 8 equally probable states, that is if the “elementary
choice” is one among eight possibilities, using the same expression Eq. (17.8) we see
that a message composed by n symbols contains 3n bits of information and that the
information per symbol is 3 bits (log2 8 = 3). In fact, it is easy to see that with three
binary memories we can describe the eight possible states. In the case of a number
composed of 5 decimal digits (N = 10), each symbol contains log2 10 ≈ 3.32 bits of
information, so a 5-digit number contains about 16.6 bits.
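The counts in Example 17.1 follow directly from Eq. (17.8); a small sketch (assuming Python, with a function name of our choosing):

```python
import math

def information_bits(N, n=1):
    """I = n * log2(N): information carried by a message of n symbols
    drawn from an alphabet of N equiprobable symbols (Eq. 17.8)."""
    return n * math.log2(N)
```

Here `information_bits(2, n)` gives n bits, `information_bits(8)` gives 3 bits per symbol, and `information_bits(10, 5)` reproduces the ≈ 16.6 bits of the 5-digit decimal number.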
As we know, however, in the different languages that are used in communications,
the various symbols with which we construct the messages are not equally probable
as can be seen by a statistical analysis of the frequency with which each symbol
appears in a very long written text. Furthermore, the probability that a given symbol
occurs also depends on the occurrence of other symbols before it.
We are thus led to consider situations in which the probability of occurrence of
the various symbols in the message are different from one another. If we denote with
pi the probability with which a given symbol may occur, we define the amount of
information possessed by a message of n symbols, by the quantity:
H = −n Σ_i p_i log2 p_i . (17.9)
Here, we see that the average amount of information per symbol is

I = −Σ_i p_i log2 p_i . (17.10)
Therefore, the amount of information in a message composed of n symbols is n times
the average information per symbol. This is a generalized expression for the measure
of the amount of information contained in a message but, as we see, the notation to
indicate this amount has also been changed (I → H). The reason lies in the fact
that the expression for the amount of information coincides with the expression for
the entropy of a physical system when this is described in the mechanical-statistical
perspective as we shall see in the following subsection.
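The measure given by Eq. (17.9) and the average information per symbol can be checked with a few lines of code; a minimal sketch (the function names are ours, not from the text):

```python
import math

def info_per_symbol(probs):
    """I = -sum_i p_i log2 p_i: average information per symbol, in bits."""
    assert abs(sum(probs) - 1.0) < 1e-9, "probabilities must sum to 1"
    return -sum(p * math.log2(p) for p in probs if p > 0)

def message_information(probs, n):
    """H = n * I: amount of information in a message of n symbols (Eq. 17.9)."""
    return n * info_per_symbol(probs)
```

For equiprobable symbols the average reduces to log2 N, and any bias among the symbols lowers the information per symbol below that value.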
Entropy of Information (Alias “Shannon Entropy”)
In the formulation of the First Principle we have seen how the effect of all unknown
interactions is summed up in the so-called amount of heat Q. This step is sufficient
for the energy balance but cannot, by itself, constitute a measure of our degree
of ignorance with respect to a knowledge of the configuration that we consider
“complete”. The physical quantity to which we must attribute this function is entropy.
To understand why this represents the right choice, we need to define what we
mean by complete knowledge and what we mean by saying that the level of knowledge that we have about a configuration will increase or decrease as a result of a
change of state.
This step is made clear (within the atomic-molecular theory of matter) by the
basic relation due to L. Boltzmann in which entropy is linked to the number w of
microscopic configurations that are compatible with a given macroscopic state:
S = k_B ln w , (17.11)

where k_B = 1.38 × 10⁻²³ J K⁻¹ is the well-known Boltzmann constant. Our macroscopic observation defines the macroscopic state with a small number of parameters
(the macroscopic properties) but we know that, from the microscopic point of view,
this macroscopic state corresponds to a large number, w, of microscopic configurations. It makes sense to establish that if after a change in the macroscopic state, the
number of the corresponding microscopic states increases, our level of knowledge
of the system has decreased.
Further, it makes sense to say that we will have the complete knowledge if the
number of possible configurations is equal to 1. As we know this happens in the
asymptotic situation to which the system is brought when its temperature is made to
tend to zero.
Thus the thermodynamic entropy is the suitable quantity to measure our “level of
ignorance” of the system with respect to its microscopic description.
In the elementary mechanical-statistical perspective, all the w microstates that
correspond to a given macroscopic state are considered equally probable. We may
wonder what would be the correct generalization of Eq. (17.11) in the case of microscopic states with arbitrary probability p_i. It is found that the correct expression is

S = −k_B Σ_i p_i ln p_i . (17.12)
We see that in the case in which all microstates are equally probable, i.e., all
microstates occur with probability p_i = 1/w, the relation given by Eq. (17.12) becomes

S = −k_B ln(1/w) = k_B ln w . (17.13)
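The reduction of Eq. (17.12) to Boltzmann's formula for equiprobable microstates can be verified numerically. A sketch (the function names are ours; we use the CODATA value of k_B):

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def gibbs_entropy(probs):
    """S = -k_B sum_i p_i ln p_i (Eq. 17.12)."""
    return -K_B * sum(p * math.log(p) for p in probs if p > 0)

def boltzmann_entropy(w):
    """S = k_B ln w (Eq. 17.11), valid for w equiprobable microstates."""
    return K_B * math.log(w)
```

With all p_i = 1/w the two expressions coincide, while any non-uniform distribution over the same microstates gives a smaller entropy.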
The analogy between Eqs. (17.9) and (17.12) is evident and so is evident the analogy
of the meanings. First we rewrite the information per symbol contained in Eq. (17.9)
so that it is expressed as a function of the natural logarithm:
I = −K Σ_i p_i ln p_i , (17.14)

where we posed

K = log2 e = 1/ln 2 .
In this way both the amount of information Eq. (17.9) and the generalized Entropy Eq.
(17.12) are expressed in terms of natural logarithms and the different multiplicative
constants, in these two expressions, assume the meaning of a different choice of the
units of measure, being the bit for the amount of information and the joule/degree
for the thermodynamic entropy.
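The role of the constant K as a mere change of units can be checked directly; a sketch with illustrative probabilities (names ours):

```python
import math

K = 1 / math.log(2)  # K = log2(e) = 1/ln 2: converts nats to bits

def entropy_nats(probs):
    """-sum_i p_i ln p_i: entropy/information in natural-log units (nats)."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def entropy_bits(probs):
    """-sum_i p_i log2 p_i: the same quantity measured in bits."""
    return -sum(p * math.log2(p) for p in probs if p > 0)
```

Multiplying the natural-log expression by K reproduces the value in bits, which is the content of Eq. (17.14).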
The quantity introduced by Eq. (17.14) was proved by Shannon to be the right
expression for the measurement of the amount of information in a message. The
same quantity is also the proper expression for the entropy measure of a macroscopic
system when the latter is measured in bits.
The microscopic point of view, then, justifies the analogy between entropy and
amount of information possessed by a system. When we say that the observer performs the measurements and “increases his knowledge” of the system, we mean that,
as a result of the measurements, some microscopic states that were possible and
equally probable before are now excluded, or remain accessible but with decreased
probability.
We can verify that this corresponds to a decrease in the amount of information
possessed by the system, that is, to an increase in the knowledge of the system by the
observer.
Already L. Boltzmann had proposed entropy as a measure of our remoteness from a
complete knowledge or, to use his expression, he had highlighted the relationship
between entropy and the missing information. He noted that an increase in entropy
corresponded to an increase in “missing information”.
The Qbit
The concept of information discussed so far, originated and was developed in the
context of classical physics (non quantum). For this reason sometimes, we refer to it
as classical information when we want to distinguish it from the information theory
that is developed in the context of quantum physics.
The substantial difference between the classical and the quantum cases lies in the
concept of superposition of states and in the consequences which derive from it.
Remaining in the elementary case of the binary choice, or bit, the fundamental
property that characterizes the classical information is that the “object” we choose to
represent, and subsequently transmit, the information, can be prepared in one of its
two possible states that we denote with |↑⟩ or |↓⟩ (they could represent, for instance,
the states of spin up and spin down of an electron).
In the quantum context, the system can be prepared in any of the infinite number
of states expressed, in general, by the relation:
|ψ⟩ = α |↑⟩ + β |↓⟩ ,
where α and β are two arbitrary complex numbers subject only to the condition that
|α|2 + |β|2 = 1.
In the classical case, the information is encoded in one of two alternative states
while in the quantum case the information is encoded in a state which can be, in
some sense, in both the alternative states.
Since in the quantum description |α|² and |β|² give the probabilities that a system
which is in the state |ψ⟩, if observed, appears in the state |↑⟩ and in the state |↓⟩
respectively, we might think that the quantum theory of information constitutes just
a probabilistic generalization, we might say, of the classical theory. In other words,
it would be reasonable to think that the use of the state |ψ to encode information,
allows us to encode inherently probabilistic information: for instance, in our case it
would encode the information that the observation would show the state |↑⟩ with
probability p = |α|² and the state |↓⟩ with probability (1 − p).
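These Born-rule statistics can be simulated classically; a minimal sketch (function names ours; note that only |α|² and |β|² enter, not the phases):

```python
import random

def born_probabilities(alpha, beta):
    """p(up) = |alpha|^2, p(down) = |beta|^2 for a normalized qubit state."""
    p_up, p_down = abs(alpha) ** 2, abs(beta) ** 2
    assert abs(p_up + p_down - 1.0) < 1e-9, "state must satisfy |a|^2 + |b|^2 = 1"
    return p_up, p_down

def observed_frequency(alpha, beta, trials=50_000, seed=3):
    """Fraction of 'up' outcomes over many identically prepared states."""
    p_up, _ = born_probabilities(alpha, beta)
    rng = random.Random(seed)
    return sum(rng.random() < p_up for _ in range(trials)) / trials
```

Over many preparations the observed frequency of |↑⟩ converges to p = |α|².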
This would certainly be a useful generalization compared to our starting point
where, we recall, the observer encodes the result of his observation in a binary
system and in a deterministic way.
The point is that α and β are complex numbers, and thus possess an absolute
value (or modulus) and a phase. If we limit ourselves to states describing one particle,
only the relative phase between α and β, that is the difference of the two phases,
is relevant. Then for every value of |α|2 = p and of |β|2 = 1 − p (i.e. for given
probabilities) we have an infinite number of possible values for the relative phase,
which is any real number in the interval (0, 2π ).
We can say that while the classical information can be represented with an object
whose state is described by a unit vector which can point either upwards or downwards, the system that contains the elementary quantum information is described as
a complex vector in a two-dimensional Hilbert space. It can be visualized as well
by a unit vector in a three-dimensional space (the Bloch sphere). On the same Bloch
sphere, the bit is represented by a unit vector that can lie along only one direction,
with its two possible opposite orientations.
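The Bloch-sphere picture can be made concrete with the standard map from (α, β) to the unit vector; a sketch assuming the usual convention (not part of the original text):

```python
def bloch_vector(alpha, beta):
    """Map a normalized state alpha|up> + beta|down> to (x, y, z) on the
    Bloch sphere. A global phase drops out: only the moduli and the
    relative phase of alpha and beta survive in (x, y, z)."""
    ab = alpha.conjugate() * beta
    x, y = 2 * ab.real, 2 * ab.imag
    z = abs(alpha) ** 2 - abs(beta) ** 2
    return x, y, z
```

The classical bit corresponds only to the two poles (0, 0, ±1), while a Qbit can sit anywhere on the sphere.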
Either in the classical or in the quantum case the physical system chosen to encode
the minimum amount of information is a system of two states such as, for example,
an object of spin 1/2.
If this system is treated classically, it will constitute the classic representation of
the unit of information, the bit. If this system is treated according to the theoretical framework of quantum mechanics it constitutes, by definition, the basic unit of
quantum information that, by analogy to the classical case, will be called Qbit.
However, we must emphasize that the infinitely greater capacity of one Qbit to store information is strongly reduced by the information-processing operations.
After preparing the Qbit, Alice must send it and the recipient (Bob) must receive
and decode it, that is to say he has to reconstruct the original state (the transmitted
Qbit). Even if transmission occurred without loss of information, that is without
corruption during the transmission, Bob will not be able to completely rebuild the
original Qbit. In fact the only measurement that Bob can do is to choose an orientation
for his detector and determine whether the received Qbit (for example, a particle
of spin 1/2) will have positive or negative projection with respect to the direction
of the detector. In other words, when we want to extract information from one Qbit
we extract a classical bit. So, much of the information contained in the Qbit can be
processed but remains inaccessible.
The major difference between the quantum and classical information appears
completely when we use and send n Qbits.
Suppose that we want to transmit an object composed of n particles (for example
n particles of spin 1/2). In the classical context the complete knowledge of a system
composed of n particles is obtained if we know the state of each particle. Since each
particle is described by a vector in a two-dimensional space, the state of n particles
will be represented in a 2n-dimensional space.
In the quantum description, the perspective changes radically. The state that will
be prepared, transmitted, and measured (namely, the information) will be a vector
in a Hilbert space with 2ⁿ dimensions. For instance, in a three-particle state the
quantum description requires a vector in an eight-dimensional space: the state of a
three-particle system is described by a vector of the type |α, β, γ⟩, where α, β, γ can
each have a binary value. We see that we can construct a basis of 8 mutually orthogonal
unit vectors which generate an eight-dimensional Hilbert space, while in the classical
description the dimensionality is 6.
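The contrast between the two counts (2n real dimensions classically, 2ⁿ Hilbert-space dimensions quantum mechanically) can be stated in two lines; a trivial but telling sketch (names ours):

```python
def classical_dimension(n):
    """Classical description: each of the n two-state particles is a vector
    in a 2-dimensional space, so the description lives in 2n dimensions."""
    return 2 * n

def quantum_dimension(n):
    """Quantum description: the joint Hilbert space of n two-level systems
    has 2**n dimensions."""
    return 2 ** n
```

Already for n = 3 the two counts are 6 and 8; the gap then grows exponentially with n.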
Another fundamental difference with respect to the classical case concerns the stage
of transmission of the information through a quantum channel, that is, a channel that
interacts with the “environment” according to interactions governed by quantum
physics. For instance, if the original message is encoded in “pure states” (mutually
orthogonal and analogous, in some sense, to the classical bits) these pure states can
be transformed into non-pure states: this limits the amount of information that can be
transmitted with one quantum state (Holevo bound), but we shall not enter into this
topic here.
A thorough treatment of basic quantum information theory would require going
through the foundations of Quantum Mechanics, and this goes beyond the purpose
of this book; nevertheless, it is worth pointing out the connection between Thermodynamics and some of the most important developments of fundamental Physics
in these years.
17.6.2 New Perspectives: The Physical Entropy According to Zurek
The solution of the Maxwell–Szilard paradox has called into question the theory of information or, better, in essence, the concept of information. This concept and the theory
that has been developed around it give a new and fundamental representation of
the role that our knowledge and its transmission among observers, have in the study
of natural phenomena.
This path opens the way to further expansion of already established concepts and,
then, to the formulation of new paradigms. In this subsection we want to quote, as an
example, the new paradigm proposed by W.H. Zurek [38]. This new point of view
proposes, as a starting point, a new definition of entropy and, consequently, of other
thermodynamic potentials. This new paradigm has important consequences in the
way of opening up new horizons for statistical mechanics and suggests a contribution
to the solution to the secular problem of the “arrow of time”. As we have very briefly
seen, at a higher level of complexity (microscopic vision), Statistical Mechanics
defines Entropy according to the number of microscopic states in the two basic
cases:

Hst = ln w
for the Gibbs–Boltzmann case and:
Hst = −Tr(ρ̃ ln ρ̃)
for the quantum case where ρ̃ is the density matrix and Tr is the trace operator
(see [39, 40]).
This probabilistic description of reality certainly captures a profound aspect and
thus represents a real increase in the level of complexity but poses an apparently irresolvable problem. If a system evolves over time according to Hamiltonian dynamics,
the number of microstates (phase space volume) remains constant over time, and so
the so-called entropy increase law is not justified. In other words, we can say that
the hope of reconciling the reversibility of the microscopic laws of motion under
inversion of the temporal axis with the irreversible “thermodynamic arrow of time” fails.
Already von Neumann had seen, in his discussion about the measurement in
quantum mechanics, the relevance of entropy and information theory. The analogy
first between the thermodynamic entropy and that of Gibbs–Boltzmann and later,
between these two and the theory of communication developed by Shannon and
Weaver [35], laid the foundation for further generalizations and a possible solution
to the problem.
Maxwell, first, and Boltzmann, later, suggested that entropy was linked to a kind of
measure of our ignorance: the concepts of “order” and “disorder” are linked, starting
from the use we make of these terms in common language, to our ability to describe
our observations more or less fully. Important contributions come from Jaynes [41,
42] and Brillouin [43].
The increase in Entropy measures the increase in the amount of information
contained in the system; this allows us to say that “the disorder has increased” but
does not measure the amount of order. This is related to the ability of the observer to
describe, with the desired accuracy, the observation. More precisely, a messy system
is a system that requires a very long description to be reproduced with the desired
accuracy. The length depends on the minimum number of bits a message must contain
so that the receiver can understand the description. The message will be considered
as understood when the receiver will be able to reproduce the configuration at the
required level of detail.
For example, the description of a microstate will be understood by the receiver if
he is able to print a two-dimensional plot or a graph with the desired accuracy, with
the aid of a normal computer that uses the data contained in the message.
The minimum length of the message required by a universal computer (the
receiver) to reproduce the observed state is called algorithmic information content,
algorithmic randomness, algorithmic entropy, or, sometimes, algorithmic complexity [30, 44–48].
This quantity, named algorithmic randomness or algorithmic entropy, measures
the level of randomness and the evolution of a state toward configurations that we
call more disordered might be described by the increase in this term. This opens the
possibility, therefore, that the thermodynamic arrow of time, which cannot
be explained by the probabilistic entropy for Hamiltonian systems, might then be
attributed to the increase of algorithmic entropy.
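Algorithmic information content is uncomputable in general, but a lossless compressor gives a computable upper bound on the description length and conveys the idea. A rough sketch using zlib (our illustration, not part of the original text):

```python
import random
import zlib

def description_length(data: bytes) -> int:
    """Length in bytes of a zlib-compressed description of the data: an
    upper bound for (not the true value of) its algorithmic information."""
    return len(zlib.compress(data, 9))

# An "ordered" configuration admits a very short description...
ordered = b"01" * 5000
# ...while a "disordered" (pseudo-random) one is essentially incompressible.
rng = random.Random(0)
disordered = bytes(rng.getrandbits(8) for _ in range(10000))
```

The ordered 10 000-byte string compresses to a tiny fraction of its length, while the pseudo-random one hardly compresses at all, mirroring the statement that a messy configuration requires a long message to be reproduced.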
In typical (traditional) macroscopic treatment, the relevant properties are determined by a few macroscopic parameters such as pressure, volume, temperature.
From these data relating to the above parameters, we can calculate the entropy and
from this, the maximum amount of extractable work. More detailed knowledge of
microscopic states is absolutely useless because the machine was designed to ignore
microstate information (we say that the observer has chosen this level of complexity
of observation). We may think that a different design could benefit from a variable
strategy based on the information obtained from the observations. The question then
becomes: what physical quantity should an intelligent entity use to maneuver the
machine so that it can also benefit from opportunities that may arise (such as, for
instance, fluctuations)?
Zurek proposes a quantity that he calls “physical entropy” which is defined as the
sum of:
1. Statistical Entropy, Hst. This term, which in the macroscopic description coincides
with thermodynamic entropy, measures our ignorance of the system (what we do
not know);
2. Algorithmic Entropy, K, for the representation of the known data. This term is the
measure of the minimum length of the string describing the data (the measure of
what we know).
S = Hst + K .

This formulation of the entropy concept provides a proposal for formulating the
Principles of Thermodynamics from the point of view of an “information gathering
and using system” (IGUS), that is, an entity-observer that can make measurements,
process the acquired information, and use the results to take the right actions to
increase the performance of the machine that he controls [38].
In this way, the degree of knowledge on the system by the observer helps to define
the thermodynamic potentials of the system. This might seem to be a weird artifact
because we are fond of the idea that the properties of an object are, in fact, objective.
How can the same “thing” be described by different properties depending on whether
the observer knows more or less? Are these “different things”?
They are, indeed, different things: in fact, we can obtain from them different amounts
of work without violating the Second Principle.
In the paradigmatic case (Maxwell–Szilard engine and Bennett’s solution) discussed above, the situation seems well described: the violation of the Second Principle
and its restoration depend on the observer–observed interaction and the observer’s
participation in the overall accounting.
In Zurek’s proposal, the role of the observer in determining the properties of
the observed system becomes explicit. Algorithmic complexity helps to determine
the potentials of the observed system, and these become fully defined according to
the observer’s presence, the theoretical context achieved and the level of complexity
adopted for his observations. However, it must be emphasized that the presence of
the observer manifests itself from the “established results” and nothing is said in this
context about the observation process.
Appendix A
Math Tools
We recall here some useful mathematical relations widely used in the discussions
contained in this volume. They are simple relations of general validity between the
partial derivatives of functions of several variables.
Relation 1
Consider a real variable z expressed as a function of two other real variables x and
y in the following form:
z = z(x, y) .
Let us write the differential of z:

dz = (∂z/∂x)_y dx + (∂z/∂y)_x dy . (A.2)
Let us suppose that the function can be inverted in the form y = y(x, z) or x =
x(y, z). The differential dy is written as
dy = (∂y/∂x)_z dx + (∂y/∂z)_x dz . (A.3)
By substituting Eq. (A.3) in Eq. (A.2), we get

dz = (∂z/∂x)_y dx + (∂z/∂y)_x [ (∂y/∂x)_z dx + (∂y/∂z)_x dz ] , (A.4)

© Springer Nature Switzerland AG 2019
A. Saggion et al., Thermodynamics, UNITEXT for Physics,
and putting together the terms in dx and dz in Eq. (A.4) we obtain
dz = [ (∂z/∂x)_y + (∂z/∂y)_x (∂y/∂x)_z ] dx + (∂z/∂y)_x (∂y/∂z)_x dz . (A.5)
Since Eq. (A.5) is an identity, the two following equations must hold:
(∂z/∂x)_y + (∂z/∂y)_x (∂y/∂x)_z = 0 , (A.6)

(∂z/∂y)_x (∂y/∂z)_x = 1 . (A.7)
Equation (A.6) leads to the following very useful identity involving the three partial
derivatives:

(∂z/∂x)_y (∂x/∂y)_z (∂y/∂z)_x = −1 . (A.8)
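The triple-product identity above can be verified numerically on a concrete equation of state; a sketch using the ideal gas pV = RT for one mole (the numerical values and helper names are ours):

```python
R = 8.314  # gas constant, J/(mol K)

def p_of(V, T):  # p(V, T) = RT/V
    return R * T / V

def V_of(T, p):  # V(T, p) = RT/p
    return R * T / p

def T_of(p, V):  # T(p, V) = pV/R
    return p * V / R

def partial(f, i, point):
    """Central-difference partial derivative of f with respect to argument i,
    with a step scaled to the size of the variable."""
    h = 1e-6 * max(1.0, abs(point[i]))
    up = list(point); up[i] += h
    dn = list(point); dn[i] -= h
    return (f(*up) - f(*dn)) / (2 * h)

V0, T0 = 0.024, 300.0
p0 = p_of(V0, T0)
# (dp/dV)_T * (dV/dT)_p * (dT/dp)_V should equal -1
triple = (partial(p_of, 0, [V0, T0])
          * partial(V_of, 0, [T0, p0])
          * partial(T_of, 0, [p0, V0]))
```

Analytically the product is (−RT/V²)(R/p)(V/R) = −RT/(pV) = −1, and the numerical value agrees to high accuracy.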
Relation 2
Let us consider a variable w which is a function of two other variables x, y:
w = w(x, y) .
Let us suppose as well that x and y may be expressed in the form x = x(y, z) or
y = y(x, z). The differential of w is
dw = (∂w/∂x)_y dx + (∂w/∂y)_x dy . (A.10)
If we put the differential dy of the function y = y(x, z) into Eq. (A.10), we obtain

dw = (∂w/∂x)_y dx + (∂w/∂y)_x [ (∂y/∂x)_z dx + (∂y/∂z)_x dz ] . (A.11)
Putting together the terms in dx and dz in Eq. (A.11) we get
dw = [ (∂w/∂x)_y + (∂w/∂y)_x (∂y/∂x)_z ] dx + (∂w/∂y)_x (∂y/∂z)_x dz . (A.12)
Considering that w = w(x, y) = w(x, y(x, z)) = w(x, z) and differentiating we get
dw = (∂w/∂x)_z dx + (∂w/∂z)_x dz . (A.13)
Comparing Eqs. (A.12) and (A.13), we obtain the two identities:
(∂w/∂z)_x = (∂w/∂y)_x (∂y/∂z)_x , (A.14)

(∂w/∂x)_z = (∂w/∂x)_y + (∂w/∂y)_x (∂y/∂x)_z . (A.15)
The expression given in Eq. (A.14) is an obvious expression of the theorem on the
derivatives of a composite function, while Eq. (A.15) is very useful since it establishes
the relation between the partial derivatives of the variable w with respect to the same
variable x, keeping constant either z or y.
Euler Theorem for Homogeneous Functions
Let us consider, for simplicity, a function of two variables in the following form:
z = f (x, y) .
This function is said to be a homogeneous function of degree n if the following
property holds:
f(tx, ty) = tⁿ f(x, y) , (A.17)
where t is an arbitrary parameter. At every point (x, y), we differentiate (A.17) with
respect to t. If we pose x̄ = tx and ȳ = ty we may write

df = [ (∂f/∂x̄)(∂x̄/∂t) + (∂f/∂ȳ)(∂ȳ/∂t) ] dt = n tⁿ⁻¹ f(x, y) dt , (A.18)
and this leads to the identity
(∂f/∂x̄)(∂x̄/∂t) + (∂f/∂ȳ)(∂ȳ/∂t) = n tⁿ⁻¹ f(x, y) . (A.19)
Taking into account that ∂x̄/∂t = x and that ∂ȳ/∂t = y, Eq. (A.19) becomes

x (∂f/∂x̄) + y (∂f/∂ȳ) = n tⁿ⁻¹ f(x, y) , (A.20)
and for the value t = 1 of the parameter we finally obtain
x (∂f/∂x) + y (∂f/∂y) = n f(x, y) . (A.21)
Obviously, in the case of homogeneous functions of an arbitrary number of variables
xi with 1 ≤ i ≤ k, Eq. (A.21) is immediately generalized to
Σ_i x_i (∂f/∂x_i) = n f(x_i) . (A.22)
In particular for homogeneous functions of the first degree, we obtain the following
fundamental relation:
f(x_i) = Σ_i x_i (∂f/∂x_i) . (A.23)
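Euler's relation (A.21) and the first-degree case are easy to check on concrete homogeneous functions; a sketch with f(x, y) = x² + 3xy (degree 2) and g(x, y) = x + 2y (degree 1), both examples of our choosing:

```python
def f(x, y):
    """Homogeneous of degree 2: f(tx, ty) = t**2 * f(x, y)."""
    return x**2 + 3 * x * y

def grad_f(x, y):
    """Analytic gradient (df/dx, df/dy) = (2x + 3y, 3x)."""
    return 2 * x + 3 * y, 3 * x

def euler_sum(x, y):
    """x df/dx + y df/dy, which Eq. (A.21) predicts equals 2 f(x, y)."""
    fx, fy = grad_f(x, y)
    return x * fx + y * fy

def g(x, y):
    """Homogeneous of first degree: g(tx, ty) = t * g(x, y)."""
    return x + 2 * y
```

For g, whose gradient is (1, 2), the sum x·1 + y·2 reproduces g itself, which is the content of the fundamental first-degree relation.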
Schwarz’s Theorem
Schwarz’s Theorem establishes a condition of symmetry between the partial derivatives of functions of n variables.
It can be stated as follows: we are given a real function f of n real variables
(x1 , x2 , . . . , xn ), differentiable and suppose that all partial derivatives are themselves
differentiable. Under this condition it can be proved that

∂/∂x_j (∂f/∂x_i) = ∂/∂x_i (∂f/∂x_j) .     (A.24)

This result shows that if we take the cross derivative of the function f with respect to any two variables x_j and x_i, the order in which the two partial derivatives are performed is irrelevant. In the literature, the cross differentiation is also denoted by the symbol ∂²/∂x_j ∂x_i and then the Schwarz Theorem may be expressed by

∂²f/∂x_j ∂x_i = ∂²f/∂x_i ∂x_j .     (A.25)

It is rather common to find the symbol ∂_i for denoting the partial derivative of the function f with respect to the variable x_i. With this notation the Schwarz Theorem can be written in the following form:

∂_ji f = ∂_ij f .     (A.26)
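As a quick sanity check, the symmetry of the mixed derivatives can be verified numerically for any smooth function; the test function below is an arbitrary choice.

```python
# Numerical check of Schwarz's theorem, Eq. (A.25): the two mixed second
# derivatives of a smooth function coincide. The test function is an
# arbitrary choice.
import math

def f(x1, x2):
    return math.sin(x1 * x2) + x1**3 * x2

def mixed(f, x1, x2, order, h=1e-4):
    """Second mixed derivative by nested central differences;
    order='12' differentiates first in x1, then in x2."""
    if order == "12":
        g = lambda t: (f(x1 + h, t) - f(x1 - h, t)) / (2 * h)
        return (g(x2 + h) - g(x2 - h)) / (2 * h)
    g = lambda t: (f(t, x2 + h) - f(t, x2 - h)) / (2 * h)
    return (g(x1 + h) - g(x1 - h)) / (2 * h)

a, b = 0.8, 1.1
d12 = mixed(f, a, b, "12")
d21 = mixed(f, a, b, "21")
print(d12, d21)
assert abs(d12 - d21) < 1e-5
```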
Differentials, Infinitesimals, Finite Differences
Throughout the treatise, we use different symbols to denote finite or infinitesimal
variations of physical quantities. Let us introduce the symbols we used and their meaning.
Finite Differences
In general, when we want to denote the variation of a property, i.e., a function of the state, either between two equilibrium states or between two systems, we use the symbol Δ. To be more precise, if we consider a change of state from an initial state A to a final state B, we denote by the symbol Δ the difference between the value of the property in the final state B minus its value in the initial state A. Likewise, if we are considering two systems which can be designated with the symbols I and II (arbitrarily chosen), the symbol Δ will designate the value of a quantity in system II minus its value in system I. For instance ΔU, ΔS, ΔT denote the variations of energy, entropy and temperature respectively, either in a process between two equilibrium states or the difference of the property between two different systems, according to the convention:

ΔU = U(B) − U(A) ,     (A.27)
ΔS = S(B) − S(A) ,     (A.28)
ΔT = T_B − T_A .     (A.29)
Finite Differences Small Compared to Characteristic Values

The use of the symbol δ denotes small quantities, and this situation occurs in two cases:
• Small variations of state functions;
• Small amounts of physical quantities, for instance work or heat, in equations
describing small transformations.
By small transformations, we mean finite transformations in which the variations
of the state functions are approximated by series expansions up to the linear terms.
In the equations which describe small transformations, quantities like the amount of
work done or the amount of heat exchanged which are not variations of a property,
are also considered small (at the same order of magnitude) and are denoted by the
same symbol δ. The same applies to denote small variations of properties between
two different systems.
For instance, if ΔT = (T_B − T_A) ≪ T_A, T_B in Eq. (A.29), then the temperature difference will be replaced by the expression δT. Similarly, for any other (small yet finite) variation like δp, δU, δS, etc.
Differentials and Infinitesimals
In small transformations, in which small quantities are involved, we may imagine to
consider smaller and smaller quantities and pass to the limit for variations tending
to zero. One says that the quantities we are considering are infinitesimal. There are
two kinds of infinitesimal quantities:
• The ones which are variations of some state functions f and, therefore, are exact
differentials of a state function and are denoted with d f .
• The ones which are just infinitesimal but are not the differential of a state function,
and are denoted as d̂.
Mutual Exchanges Between Two Systems
To complete the list of the symbols which particularly deserve to be brought to
the reader’s attention let us refer to the case where two systems (e.g., labeled as I
and II) are considered. In writing the equations for energy and entropy variations
either in small or infinitesimal processes, we have to distinguish between quantities
referring to system I from quantities referring to system II. The notation we adopt is
the following:
1. If we deal with small or infinitesimal variations of properties (like for instance
energy, entropy, volume, mole numbers, and so on) the two alternatives are highlighted by the suffix placed on the state variable considered. For example dU^I, dS^I, or dV^I, etc.
2. If we deal with small or infinitesimal quantities which are not variations of state
variables the distinction between quantities referring to one of the two systems is
made by putting the symbol I or II as a suffix on the symbol δ or d̂. For example, d̂^I Q, δ^I Q, etc.
Appendix B
Pressure Exerted by a Particle Gas
Mechanical Interpretation of the Pressure Exerted by a Particle Gas
Consider a closed system consisting of a gas formed by a large number of elementary
constituents (atoms, molecules, photons, etc.). The mechanical effect of the interaction of the elementary constituents with the walls of the vessel containing the gas, is,
in general, a rather complicated problem but, in favorable conditions (for instance,
when viscosity does not produce any effect) is recorded by a macroscopic observer
under the name of “pressure”. The pressure, which is defined in full generality as a function of the thermodynamic potentials in Eqs. (4.21) and (4.25), in
this particular context is measured as the force per unit of area, exerted by the gas,
perpendicularly, on the walls of the container. Its value depends on the momentum
distribution of the elementary constituents that form the gas, as well as on their
modality of interaction with the walls. It is interesting to highlight to what extent the
hypothesis of homogeneity and of isotropy of the gas contribute in determining the
expression of the pressure no matter what the energy distribution of the particles is.
In this modeling, the pressure has a purely mechanical interpretation and must
be traced back to the amount of total momentum exchanged by the elementary
constituents per unit time and per unit area of the walls. Let us denote with dΣ the area of a wall surface element that delimits the volume V within which the gas is confined.
We will make the following assumptions:
1. The particles are distributed uniformly, for each range of energy, and do not
interact with each other;
2. The particles are distributed isotropically;
3. The particles have no internal degrees of freedom, that is their energy is given
by the kinetic energy associated with the motion of the center of mass. This
hypothesis can immediately be abandoned if in the expression of the energy of
a single particle, the term describing the motion of the c.m. can be additively
© Springer Nature Switzerland AG 2019
A. Saggion et al., Thermodynamics, UNITEXT for Physics,
separated from the others. In this case, the contribution to the total energy of the
particle due to the c.m. motion will be indicated by ε.
With this specification, we denote with N(ε) the spectral energy number density. This means that the number of particles per unit volume with energy in the interval (ε, ε + dε) is given by N(ε) dε. Isotropy means that the number of particles with energy in the interval (ε, ε + dε) and moving within the solid angle dΩ is given by

dN = N(ε) dε dΩ/4π .     (B.1)

Let's assume the z-axis normal to the elementary surface dΣ, oriented outside the volume. Consider the particles with energy in the interval (ε, ε + dε) and let v be the module of their velocity, with v = v(ε). The number of particles in this energy interval, moving within the solid angle dΩ and impinging on the surface dΣ in the time interval dt, will be given by

dN(ε, v, ϑ, ϕ) = dt dΣ v cos ϑ dN ,     (B.2)

that is,

dN(ε, v, ϑ, ϕ) = (1/4π) dt dΣ v cos ϑ d(cos ϑ) dϕ N(ε) dε .     (B.3)
Further let’s denote with P the momentum of one particle. Given the hypothesis of
homogeneity and isotropy, the module of the momentum will depend on the particle
energy only. We call dispersion relation the following:
ε = ε (P) .
For instance, for nonrelativistic particles with mass m, we have

ε = P²/2m .     (B.5)

For relativistic particles, the dispersion relation reads

ε = √(P²c² + m²c⁴) ,     (B.6)

and for photons (or, approximately, for ultrarelativistic particles, that is with energy ε ≫ mc²):

ε = Pc .     (B.7)
We then make two extreme hypotheses: that the individual particles are completely
absorbed by the surface or that they are elastically reflected.
Particles Completely Absorbed by the Wall
In this case, relevant only for the absorption of radiation, each (non-massive) particle impinging on dΣ in the time interval dt will transfer its momentum to the wall element.
Let us assume as z-axis of a local reference frame, the axis normal to the wall in dΣ and oriented outside. For symmetry reasons, we are interested only in the z component of the transferred momentum, while the other components will balance. If we multiply Eq. (B.3) by P_z = P cos ϑ, and we integrate over all directions with ϑ in the interval (0, π/2), we find, for the z-component of the momentum transferred to dΣ in the time interval dt by the particles with energy in the interval (ε, ε + dε), the expression:

dP_z(ε) = (1/4π) ∫₀^{2π} dϕ ∫₀^{1} cos²ϑ d(cos ϑ) dt dΣ (Pv) N(ε) dε ,     (B.8)

that is,

dP_z(ε) = (1/6) dt dΣ (Pv) N(ε) dε .     (B.9)

The total amount of momentum transferred to the wall per unit area and per unit time will be given by

p = (1/6) ∫₀^∞ (Pv) N(ε) dε .     (B.10)

In the mechanical-statistical model of gases, this is the expression of the force per unit area, i.e., the pressure, if the particles are completely absorbed by the surface of the wall.
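The factor 1/6 comes entirely from the angular integration under the isotropy hypothesis; it can be checked numerically:

```python
# Check of the angular factor in Eqs. (B.8)-(B.9): under isotropy,
#   (1/4π) ∫_0^{2π} dφ ∫_0^1 cos²θ d(cosθ) = 1/6,
# the purely geometrical origin of the 1/6 in the absorbed-particle case.
import math

n = 20000
h = 1.0 / n
integral = 0.0
for k in range(n):                 # midpoint rule in c = cosθ on (0, 1)
    c = (k + 0.5) * h
    integral += c * c * h          # ∫ cos²θ d(cosθ) = 1/3
factor = (2 * math.pi) * integral / (4 * math.pi)
print(factor)                      # ≈ 0.16667 = 1/6
assert abs(factor - 1 / 6) < 1e-6
```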
Particles Elastically Reflected by the Wall
In this case, the calculation proceeds as in the previous case with the only difference that the value of the z-component of the momentum transferred by each particle in the impact with the wall will be P_z = 2P cos ϑ, that is twice the value transferred in the case in which the particle is completely absorbed. Then we have

p = (1/3) ∫₀^∞ (Pv) N(ε) dε .     (B.11)
Both in the case of completely absorbed and perfectly reflected particles, no
hypothesis is made concerning the distribution function but that of isotropy and
homogeneity. In particular, no hypothesis concerning thermodynamical equilibrium
is necessary.
In both cases, the crucial point is the term (Pv) in the integrand and then the
dependence of the momentum on the energy of the elementary constituent must be
known (in Eq. (B.11) the dependence of the velocity on energy is also given). We
consider the two extreme cases of the Newtonian and that of the ultra-relativistic
elementary constituent.
Nonrelativistic Case
Let us consider, as a first example, the case of classical non relativistic particles for
which the dispersion relation is ε = P 2 /2m. In this case, we have
Pv = mv2 = 2ε,
and hence Eq. (B.11) will give, for the exerted pressure, the value

p = (1/3) ∫₀^∞ 2ε N(ε) dε = (2/3) u ,     (B.13)

where

u = ∫₀^∞ ε N(ε) dε

is the energy density of the particle gas. In the case we are considering, if we denote with N₀ the total number of particles contained in the volume V:

U = u V = N₀ ⟨ε⟩ ,

where ⟨ε⟩ is the mean value of the particle's energy (defined as U/N₀). If the gas is in a state of thermodynamical equilibrium we have

⟨ε⟩ = (3/2) k_B T ,

and then Eq. (B.13) will give the well known

pV = (N₀/N_A) RT ,

where N_A = 6.022 × 10²³ mol⁻¹ is Avogadro's number. It is useful to highlight that even far from thermodynamical equilibrium the pressure will obey Eq. (B.13),
provided the conditions of homogeneity and isotropy postulated at the beginning,
are preserved.
The Case of Radiation
In this case, the “particles” are photons for which the dispersion relation is given by
Eq. (B.7) with v = c for all particles. In this case Eq. (B.11) becomes

p = (1/3) ∫₀^∞ ε N(ε) dε = u/3 .     (B.18)

It is necessary to highlight that Eq. (B.18), between the pressure of the radiation and its energy density, does not depend on choosing a particle model for radiation as we did in the preceding subsection. If we adopt the point of view of classical macroscopic electrodynamics, the only necessary condition for the validity of Eq. (B.18) is that the relationship between the momentum density (let's denote it with g) and the density of energy flux (let's denote it with S) of the electromagnetic field is

g = S/c² .     (B.19)
This is predicted by Maxwell’s theory and for a clear discussion on the argument
see [49]. The adopted particle model allows us to simplify the calculations and make
the analogy between different cases clear.
Solutions to the Problems
Solutions to the Problems of Chap. 3
3.1 The Coefficient of Performance of the refrigerating cycle is defined by the ratio
ξ = Q 1 /W1 of the heat extracted from the cell (the required effect) per hour to the
work performed by the compressor on the refrigerating fluid in the same time interval.
In our example Q₁ = 4.2 × 10⁷ J and hence

W₁ = Q₁/ξ ≈ 1.4 × 10⁷ J .

This useful work done on the fluid corresponds to the average performed power P₁ ≈ 3.9 × 10³ J s⁻¹. Taking into account the losses of the compressor, the power consumed from electricity will be P ≈ 4.3 × 10³ W. After an integer number of cycles, the variation of energy of the fluid will be zero and the quantity of heat transferred to the environment will be equal to the quantity of work performed on the fluid plus the heat dissipated by the compressor plus the amount of heat extracted from the cell. In one hour, the latter will amount to Q₁ while the contribution of the compressor will be about 1.55 × 10⁷ J. The total quantity of heat transferred to the environment by the refrigerator, per hour, will be

Q_tot ≈ 5.75 × 10⁷ J .

The entropy variations per hour of the cell and the environment will be

ΔS_cell = 0 ,     ΔS_env ≈ 5.2 × 10⁴ J K⁻¹ .
The first result is due to the fact that the cell gives off to the engine the same amount
of heat it gains from the environment and consequently, the environment gains, every
hour, all the energy consumed by the engine. If the refrigerator worked as a reversible
engine the COP would be
COPrev = 5 ,
hence, the work required from the compressor, per hour, would be W_rev = Q₁/ξ_rev ≈ 0.84 × 10⁷ J. The power of the compressor would be P ≈ 2.6 × 10³ W.
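The figures quoted in this solution can be reproduced with a few lines of arithmetic (per hour of operation; the 1.55 × 10⁷ J drawn from the mains is the value given above):

```python
# Recomputation of the figures quoted in the solution of Problem 3.1,
# per hour of operation.
Q1 = 4.2e7      # J, heat extracted from the cold cell per hour
W1 = 1.4e7      # J, work done on the refrigerating fluid per hour
W_el = 1.55e7   # J, energy drawn from the mains (compressor losses included)
hour = 3600.0   # s

cop = Q1 / W1
print(cop)                 # COP of the actual cycle: 3.0
print(W1 / hour)           # average mechanical power, ~3.9e3 W
print(W_el / hour)         # electric power, ~4.3e3 W
print(Q1 + W_el)           # total heat to the environment: 5.75e7 J

W_rev = Q1 / 5             # reversible case, COP_rev = 5
print(W_rev)               # 8.4e6 J per hour
```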
3.2 In our case the so called “Universe” is formed by the engine and the two heat
sources. The total entropy variation in every cycle is given by (ΔS_eng = 0)

ΔS_univ = − Q₁/T₁ + Q₂/T₂ ≥ 0 ,     (C.7)

where Q₁ is the quantity of heat given to the engine by the source at T₁. Moreover, the First Principle for the engine gives (ΔU = 0)

W = Q₁ − Q₂ ,     (C.8)

where W is the quantity of work produced by the engine per cycle. From Eqs. (C.7) and (C.8) we obtain Q₁ ≈ 247.5 cal and

W ≈ 47.5 cal ≈ 198.5 J ,

and the efficiency of the engine is η = W/Q₁ ≈ 0.19.
3.3 (a) The efficiency in the reversible initial conditions is
η_rev = (Q₁ − Q₂)/Q₁ = 0.25 .

(b) From the Second Principle, we know that the entropy of state C is equal to the entropy of the state B, while the entropy variation in the irreversible adiabatic transformation BC′ will be positive and hence S_C′ > S_C = S_B. This implies that in the transformation from C′ to C along the isotherm the entropy must decrease, i.e., a certain quantity of heat, say Q_C′C, must be released by the engine to the thermostat T₂. As a consequence, the total quantity of heat given off to the source T₂ will be

Q′₂ = Q_C′C + Q₂ .

The value for the efficiency is

η′ = (Q₁ − Q₂ − Q_C′C)/Q₁ = η_rev − Q_C′C/Q₁ .

In particular,

η′ = 0.25 − Q_C′C/Q₁ = 0.15 .
3.4 Let’s denote with Q 1 , Q 2 , and Q 3 the absolute values of the heat quantities
exchanged with the three sources, respectively. In every cycle, the entropy variation
of the overall system (engine plus the three heat sources) must satisfy the inequality ΔS_overall ≥ 0, which implies

Q₃ ≥ Q₁ (T₃/T₁) + Q₂ (T₃/T₂) .

For the amount of work produced we have

W = Q₁ + Q₂ − Q₃ ≤ Q₁ (1 − T₃/T₁) + Q₂ (1 − T₃/T₂) .

The efficiency depends (for instance) on the ratio ξ = Q₂/Q₁ and must recover the Carnot values in the limiting cases ξ = 0 and ξ → ∞ and for reversible cycles. We may write

η = W/(Q₁ + Q₂) ≤ [ (1 − T₃/T₁) + ξ (1 − T₃/T₂) ] / (1 + ξ) .
3.5 For the composite system, the entropy variation per cycle is

ΔS = Q₂/T₂ − Q₁/T₁ − Q₃/T₃ ≥ 0 ,

that is,

Q₂ ≥ (T₂/T₁) Q₁ + (T₂/T₃) Q₃ .

Taking into account that W = Q₁ − Q₂ + Q₃, we obtain

W ≤ Q₁ (1 − T₂/T₁) + Q₃ (1 − T₂/T₃) .

For a reversible engine

W = Q₁ (1 − T₂/T₁) + Q₃ (1 − T₂/T₃) ,

and if we define the parameter ξ = Q₃/Q₁ then the work produced is zero for the particular value ξ₀ such that

ξ₀ = (T₁ − T₂) T₃ / [ (T₂ − T₃) T₁ ] ,

while for 0 ≤ ξ ≤ ξ₀ the engine produces positive work and for ξ ≥ ξ₀ the engine must absorb work from the outside.
3.6 From the Second Principle applied to the composite system,

ΔS = Q₁/T₁ − Q₂/T₂ − Q₃/T₃ ≥ 0 ,

that is,

Q₁ ≥ (T₁/T₂) Q₂ + (T₁/T₃) Q₃ ,

and for the work

W = Q₂ + Q₃ − Q₁ ≤ Q₂ (1 − T₁/T₂) + Q₃ (1 − T₁/T₃) ≤ 0

for any Q₂ and Q₃. If we use the engine as a heat pump it is convenient to consider the absolute value of W, which represents the amount of energy consumed by the heat pump per cycle, and write the expression for the COP:

COP = (Q₂ + Q₃)/|W| ≤ (Q₂ + Q₃) / [ Q₂ (T₁/T₂ − 1) + Q₃ (T₁/T₃ − 1) ] .

In the limit in which either Q₂ or Q₃ tends to zero the COP reduces to Eq. (3.70).
3.7 Clearly, we can extract some work only if the two bodies have different temperatures. If we denote by T f the final common temperature we may write the amount
of work from the First Principle:
W = [C1 T1 + C2 T2 − (C1 + C2 ) T f ] .
It is clear that the maximum amount of work will be obtained with the procedure
that leads to the minimum final temperature. The entropy variation of the universe is
ΔS_univ = C₁ ln(T_f/T₁) + C₂ ln(T_f/T₂) = ln [ T_f^(C₁+C₂) / (T₁^C₁ T₂^C₂) ] ,

and here we see that the minimum value for T_f will be obtained for ΔS = 0, that is,

T_f = ( T₁^C₁ T₂^C₂ )^(1/(C₁+C₂)) .

If the two bodies have the same heat capacity, we have

T_f = √(T₁ T₂)

and, in this case, the amount of work is

W = C ( T₁ + T₂ − 2 √(T₁T₂) ) = C ( √T₁ − √T₂ )² .
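A numerical sketch of the equal-capacity case; the values of C, T₁, T₂ below are an illustrative choice, not taken from the problem text.

```python
# Equal-heat-capacity case of Problem 3.7: Tf = sqrt(T1*T2) and
# W = C*(sqrt(T1) - sqrt(T2))**2. C, T1, T2 are an illustrative choice.
import math

C = 1.0e4            # J/K (assumed)
T1, T2 = 400.0, 100.0

Tf = math.sqrt(T1 * T2)
W = C * (T1 + T2 - 2 * Tf)
print(Tf, W)         # 200.0 K, 1.0e6 J

assert abs(W - C * (math.sqrt(T1) - math.sqrt(T2)) ** 2) < 1e-6

# the reversible (maximum-work) process leaves the total entropy unchanged
dS = C * (math.log(Tf / T1) + math.log(Tf / T2))
assert abs(dS) < 1e-8
```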
3.8 1. From the First Principle it results that Q₂ = 400 kJ, while from the Second Principle ΔS_tot = 0. Given that ΔS_eng = 0 (cyclic transformation), we find ΔS_sources = 0, from which

T₂ = T₁ (Q₂/Q₁) = 500 × (400/1000) = 200 K .

2. Answer: T₂ = 100 K.
3.9 By using the First Principle
Q 2 = 400 kJ .
As for the second question, we must impose two conditions:

− Q₁/T₁ + C ln(T_f/T_i) = 0 ,     4 × 10⁵ = 10⁴ (T_f − T_i) ,

where the first relation is due to reversibility (ΔS_tot = 0) and the second one expresses the temperature variation of the body. The solution is T_i ≈ 182 K and T_f ≈ 222 K.
3.10 The availability coincides with the energy variation since we neglect the volume variations. The change in energy in the transformation from the initial state to the dead state will be

ΔU = −C × 60 ,

where C is the heat capacity of the water. Its variation of entropy in the same transformation will be

ΔS = ∫ C dT/T = C ln(T₀/T_i) = 4.18 × 10⁴ × ln(0.83) ≈ −0.778 × 10⁴ J K⁻¹ .

The availability coincides with the available energy because the volume is assumed to be constant, and it will amount to

W_max = −ΔU + T₀ ΔS = 25.08 × 10⁵ − 22.82 × 10⁵ = 2.26 × 10⁵ J .

This result could be found also by imagining a series of infinitesimal Carnot engines working between the generic temperature T of the water and the room temperature T₀. The infinitesimal amount of work produced by one Carnot engine will be

δW = η(T, T₀) δQ = − (1 − T₀/T) C dT ,

since in our case δQ = −C dT. Hence the maximum amount of work we will be able to obtain will be

W = C × 60 + C T₀ ln(T₀/T_i) = 4.18 × 10⁴ × (60 − 54.58) ≈ 22.65 × 10⁴ J .
3.11 ΔS_univ = 2C ln [ (T₁ + T₂) / (2 √(T₁T₂)) ] > 0 .
3.12 We have to determine the quantity of work absorbed by the machine and this
will be given by the following relation:
C (T0 − Tcold ) + W = C (Thot − T0 ) ,
where C ≈ 41.8 kJ K⁻¹ is the heat capacity of each vessel, T₀ ≈ 293 K is their initial temperature and T_cold ≈ 281 K is the temperature of the cold body after 10 min. For
the temperature of the hot vessel, let’s apply Eq. (C.34) in the reverse operation:
T_cold × T_hot = T₀² ,

which gives T_hot ≈ 305.5 K. From the previous relation, we have

W = C (T_cold + T_hot − 2T₀) ,

which can be written as

W ≈ C [ (T_hot − T₀)² + (T_cold − T₀)² ] / (2T₀) ≈ 21.4 kJ .

The power consumed by the refrigerator is roughly

P ≈ 21.4 × 10³ J / 600 s ≈ 35.6 W .
3.13 Let’s call Q 2 and Q 3 the quantities of heat absorbed respectively from sources
at T2 and T3 . From the balance of energy we have
Q 1 + Q 2 + Q 3 = 600 J
and from reversibility the total variation of entropy, in every cycle, must be zero. The
entropy variation of the engine is zero (cyclic transformation) and we are left with
the total entropy variation of the three reservoirs. This can be written as

Q₁/T₁ + Q₂/T₂ + Q₃/T₃ = 0 .
We have two equations in the two unknowns Q 2 and Q 3 and we find Q 2 = −333.3 J
and Q 3 = −66.6 J. This means that the engine gives back heat both to reservoir 2
and to reservoir 3.
3.14 In the second transformation, we have Q II = 0 and W II = −100 J therefore
U_A − U_B = −100 J. For the first transformation we find ΔU = 100 J, W^I = 212 J, and hence Q^I = −112 J. The change in entropy is
SB − SA = −0.265 J K−1 .
The second transformation cannot be quasi-static because we have an increase of the entropy in an adiabatic process.
3.15 The amount of heat given by the engine in order to melt the ice and to raise
the temperature of the water by 20 ◦ C is, respectively,
Q_melting = m_ice λ = 30 × 334 ≈ 10⁴ J ,
Q_w,tot = (m_w + m_ice) × 20 × 4.18 ≈ 2 × 10⁴ J .

The amount of heat withdrawn from the hot source, Q_hot, can be obtained from the condition that the engine is reversible. The latter condition is guaranteed by the equation ΔS_tot = 0:

Q_hot/T_hot = m_ice λ/273 + (m_w + m_ice) × 4.18 × ln(293/273) .
The work obtained in this transformation will be given by

W = Q_hot − (Q_melting + Q_w,tot) .
Solutions to the Problems of Chap. 5
5.1 ΔH ≈ 1.6 × 10⁴ J ,     ΔS ≈ 22.7 J K⁻¹ .
5.2 Let us refer to Eqs. (5.44) and to (5.45). In the first instance, we can consider
the coefficient of compressibility constant in this range of pressures and express the
volume as a linear function of the pressure variation.
V ( p) = V ( pi ) [1 − χT ( p − pi )] .
At the lowest order in the change of volume we get

Q = − α T V(p_i) (p_f − p_i) .

This is equivalent to keeping the volume constant in Eq. (5.44); the successive correction, in the integral, would be of order α T V(p_i) χ_T (Δp)². The ratio with the first term is of order χ_T Δp which, in our example, is ∼10⁻³.
Similarly, for the calculation of the amount of work, we refer to Eq. (5.45) and we
calculate the integral to the lowest order, getting

∫ p V dp ≈ V(p_i) (p_f² − p_i²)/2 .

For the amount of work, we shall get the expression

W = χ_T V(p_i) (p_f² − p_i²)/2 .
Let’s replace the numerical values. Regarding the volume of water at 40 atm we refer
to Eq. (C.67) and evaluate the orders of magnitude. We obtain
Q = −α T V (p_f − p_i) = 0.2 × 10⁻³ × 293 × 10⁻⁴ × 39 × 10⁵ ≈ 22.8 J .
Similarly, for the calculation of the amount of work we have
W = 0.48 × 10⁻⁹ × 10⁻⁴ × (1 − 1600)/2 × 10¹⁰ ≈ −0.384 J .

For the variation of energy, we simply have

ΔU = Q + W = 22.8 − 0.384 ≈ 22.4 J .

In other words, we have to supply to the water 22.8 J in the form of heat transfer, and we get from the water 0.384 J in the form of mechanical work, which is due to its expansion. We used the conversion factor 1 atm = 10⁵ N m⁻² for simplicity.
5.3 The amount of heat delivered to the water in this isothermal compression (using
the same formulas contained in the above example), is about
Q ≈ 7.3 J ,
and, as we can see, its value is positive as in the case of the isothermal expansion
at 20 °C. This is due to the anomalous behavior of water: its coefficient of thermal expansion is α ≈ 0 at the temperature ϑ ≈ 4 °C and becomes negative for temperatures between 0 and 4 °C.
This implies that if we could make a Carnot cycle between the temperatures ϑ ≈ 20 °C (as in Problem 5.2) and ϑ ≈ 0.2 °C we could obtain perpetual motion, because in both isotherms we transfer positive quantities of heat to the system. This raises the question whether it is possible to connect the two isotherms with two adiabatic processes. In order to look for an answer, consider Sect. 5.5.4 and remember that α changes sign when traversing the temperature ϑ ≈ 4 °C.
5.4 The amount of work is

W = − ∫ p dV = −ma ∫₀^{p_f} p dp = −ma (p_f)²/2 ≈ 52 J ,

and the variation of energy is

ΔU = m ΔU* = −mb T (p_f − p₀) ≈ −89.3 J .

Then

Q = ΔU − W ≈ −141.3 J ,

which means that the heat must be released to the environment.
5.5 (a) In this case

W = ΔU = mc (T_f − T₀) − mb (p_f T_f − p₀ T₀) ≈ 1734 J .

(b) For the heat capacity at constant volume,

C_V = (∂U*/∂T)_V = c − bp − bT (∂p/∂T)_V ,

and from the equation of state we get

C_V = c − bp + b²T/a .

For the heat capacity at constant pressure,

C_p = (∂H*/∂T)_p = (∂U*/∂T)_p + p (∂V*/∂T)_p = c − bp + bp = c .
(c) We must choose one quasi static transformation connecting the initial state
( p0 , T0 ) to the final state ( pf , Tf ). For instance, we may go from the initial state
to the intermediate state ( p0 , Tf ) with an isobaric reversible transformation and
later from the intermediate state to the final state with an isothermal reversible
process. We have
ΔS_isob = mC_p ln(T_f/T₀) = mc ln(T_f/T₀) ≈ 11 J K⁻¹ ,

ΔS_isoth = Q_isoth/T_f ≈ −1.05 J K⁻¹ ,

where Q_isoth is the amount of heat given to the solid in the isothermal transformation at T = 300 K. Following the solution of the preceding exercise we find

Q_isoth = (ΔU)_isoth − W_isoth = −mb T_f (p_f − p₀) − ma (p_f)²/2 ≈ −316.1 J .

Finally, ΔS ≈ 9.9 J K⁻¹.
9.9 J K−1 .
5.6 The amount of work is given by
W = − ∫ p dV ,
and the volume variation in an isothermal transformation is
dV = −χT V d p .
Then we have
W = ∫ χ_T V p dp ,

where the volume and the coefficient χ_T can be assumed to remain constant. Hence, the integral becomes

W = χ_T V (p_f² − p_i²)/2 ≈ 8.6 × 10⁻¹² × 3 × 1.14 × 10⁻⁴ × 0.5 × 10¹⁶ = 14.7 J .
In order to find the amount of heat given to the copper it is necessary to determine
first, the entropy variation. This can be accomplished by using the Maxwell relation
in Eq. (5.15) and the consequent Eq. (5.44):
Q = −α T V Δp ≈ −5.0 × 10⁻⁵ × 300 × 3 × 1.14 × 10⁻⁴ × 10⁸ ≈ −513 J .

The free energy variation in an isothermal transformation is given by the amount of work done on the system:

ΔF ≈ 14.7 J ,

and for the variation of energy we may write ΔU = ΔF + TΔS = ΔF + Q:

ΔU ≈ 14.7 − 513 = −498.3 J .
Solutions to the Problems of Chap. 7
7.1 The energy balance is satisfied if we calculate the amount of work that is done
on the system from the outside in the infinitesimal transformation in which one mole
of saturated vapor, treated as a perfect gas, is brought to the temperature T + dT
keeping in saturation condition. The energy increase will be dU = C V dT while for
the amount of work, we have
d̂W = − p dV_m = −R dT + V_m dp = −R dT + (Δh_vap/T) dT ,

where we have made use of the identity

p dV_m + V_m dp = R dT

and of the Clausius–Clapeyron equation, V_m (dp/dT) ≈ Δh_vap/T. From the First Principle d̂Q = dU − d̂W, and therefore:

d̂Q = (C_V + R) dT − (Δh_vap/T) dT ,     C_sat = C_p − Δh_vap/T .
7.2 The triple point is determined by the intersection of the two equilibrium lines.
For the triple point, we have
19.49 − 3063/T = 23.03 − 3754/T ,

which gives T_tr ≈ 195.3 K and p_tr ≈ 45.1 torr = 5993.8 Pa.
For the latent heat of vaporization and sublimation, let us refer to Eq. (7.11). From
the data, we have
Δh_vap = R × 3063 ≈ 25.4 kJ mol⁻¹ ,     Δh_sub = R × 3754 ≈ 31.2 kJ mol⁻¹ .
Imagine three close transformations, near the triple point, from solid to vapor, from
vapor to liquid and from liquid to solid. With obvious meaning of the notations we may write

ΔH_s→v + ΔH_v→l + ΔH_l→s = 0 ,

that is,

ΔH_fus = ΔH_sub − ΔH_vap ≈ 31.2 − 25.4 = 5.8 kJ mol⁻¹ .
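The triple-point computation can be reproduced numerically, assuming the two fitted lines are ln p = 19.49 − 3063/T and ln p = 23.03 − 3754/T with p in torr (the base-e form is consistent with the latent heats R × 3063 and R × 3754 quoted above):

```python
# Problem 7.2: the triple point is the intersection of the two fitted lines
#   ln p = 19.49 - 3063/T   (liquid-vapor)
#   ln p = 23.03 - 3754/T   (solid-vapor),   p in torr.
import math

R = 8.314  # J/(mol K)

T_tr = (3754 - 3063) / (23.03 - 19.49)
p_tr = math.exp(19.49 - 3063 / T_tr)     # torr

print(T_tr)   # ≈ 195.2 K
print(p_tr)   # ≈ 45 torr

dH_vap = R * 3063 / 1000   # kJ/mol, from the slope of the first line
dH_sub = R * 3754 / 1000
print(dH_vap, dH_sub, dH_sub - dH_vap)   # ≈ 25.5, 31.2, 5.7 kJ/mol
```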
7.3 From Eq. (7.13), given one pair of temperatures Ta and Tb , the mean molar latent
heat of evaporation in the interval of the two temperatures is given by
ΔH_m = R ln(p_a/p_b) (T_a T_b)/(T_a − T_b) .     (C.89)

For the interval (ϑ₁, ϑ₂), Eq. (C.89) gives
(ΔH_m)_(1,2) = R ln(p₁/p₂) × (373.16 × 363.16)/10 = 8.31 × 0.365 × 13551.7 ≈ 41.1 kJ mol⁻¹ .

For the interval (ϑ₂, ϑ₃) we obtain

(ΔH_m)_(2,3) = R ln(p₂/p₃) × (363.16 × 353.16)/10 = 8.31 × 0.391 × 12825.3 ≈ 41.6 kJ mol⁻¹ .

Let us calculate the same quantity in the temperature interval (ϑ₁, ϑ₃):

(ΔH_m)_(1,3) = R ln(p₁/p₃) × (373.16 × 353.16)/20 = 8.31 × 0.756 × 6589.3 ≈ 41.4 kJ mol⁻¹ .

These results show that the latent heat is fairly constant in the interval (80 °C, 100 °C).
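A sketch of the same computation for the (ϑ₁, ϑ₂) interval; the saturation pressures 760 and 526 torr are standard handbook values for water at 100 °C and 90 °C, assumed here rather than taken from the problem text:

```python
# Problem 7.3, Eq. (C.89): mean molar latent heat from two points of the
# vapor-pressure curve. pa, pb are handbook saturation pressures of water
# at 100 C and 90 C (assumed values, not from the problem text).
import math

R = 8.314
Ta, Tb = 373.16, 363.16
pa, pb = 760.0, 526.0      # torr

dH = R * math.log(pa / pb) * Ta * Tb / (Ta - Tb)
print(dH / 1000)           # ≈ 41 kJ/mol
assert 39e3 < dH < 43e3
```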
Solutions to the Problems of Chap. 8
8.1 In the free and adiabatic expansion the energy remains constant. If we treat the small expansion as an infinitesimal transformation, we may refer to Eq. (8.31) and pose dU = 0. Then

n C_V dT = − (a n²/V²) dV .

The above relation allows, for a small expansion, to write

|ΔT| = (1/(n C_V)) (a n²/V²) ΔV = (363.96 × 10⁻³ × 10² × 2 × 10⁻³)/(10 × 28.85 × 4 × 10⁻⁴) ≈ 0.63 K .
8.2 Let us adopt the simplified expression Eq. (8.39) for the Joule–Thomson coefficient:

C_H = (∂T/∂p)_H ≈ (1/C_p) (2a/(RT) − b) .

We have

C_H = (0.324 × 10⁻³ − 0.0427 × 10⁻³)/C_p ≈ 7 × 10⁻⁶ K Pa⁻¹ .

The temperature drop is

ΔT ≈ 7 × 10⁻⁶ × 19 × 10⁵ = 13.3 K .
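The figures can be reproduced assuming the CO₂ van der Waals constants (consistent with the 0.324 × 10⁻³ and 0.0427 × 10⁻³ values above) and C_p ≈ 40 J mol⁻¹ K⁻¹; both constants are assumptions, not given in the text:

```python
# Problem 8.2: simplified Joule-Thomson coefficient of a van der Waals gas,
#   C_H = (1/Cp) * (2a/(R*T) - b).
# a, b are the CO2 van der Waals constants and Cp = 40 J/(mol K); these are
# assumptions consistent with the numbers quoted in the solution.
R = 8.314
a = 0.364        # Pa m^6 mol^-2
b = 4.27e-5      # m^3 mol^-1
T = 273.0        # K
Cp = 40.0        # J/(mol K), assumed

mu = (2 * a / (R * T) - b) / Cp
print(mu)               # ≈ 7e-6 K/Pa
print(mu * 19e5)        # drop over 19e5 Pa: ≈ 13 K
assert 6e-6 < mu < 8e-6
```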
8.3 If we denote by V1 and V2 the volumes in the initial and final states, respectively,
from definition Eq. (8.58) we get
V₁ = n Z₁ RT/p₁ ,     V₂ = n Z₂ RT/p₂ .
The condition p₁V₁ = p₂V₂ simply means that Z₂ ≈ Z₁, but this does not imply that Z₂ ≈ Z₁ ≈ 1. Indeed, we have to refer to the compressibility chart and calculate the reduced pressures and temperature for the two states. We have, respectively,

p̃₁ ≈ 2 ,     p̃₂ ≈ 4.25 ,

and the reduced temperature is t̃ ≈ 164/126.2 ≈ 1.3. From the compressibility chart, we find

Z₁(p̃₁, t̃) ≈ Z₂(p̃₂, t̃) ≈ 0.7 ,
0.7 ,
and this shows that the gas cannot be treated as an ideal gas. The two volumes are
respectively V₁ = n Z₁ RT/p₁ ≈ 15 l and V₂ = n Z₂ RT/p₂ ≈ 7.08 l, the number of moles being n = 3000/28 ≈ 107.1.
8.4 Let us write the volume of the fluid as a function of the compressibility factor
V = nRT Z/p and differentiate at constant temperature. We arrive at the following relation:

dV = −V ( 1/p − (1/Z)(∂Z/∂p)_T ) dp .

From the general stability condition (4.116), we must have

− (1/V) (∂V/∂p)_T > 0 ,

and hence

(1/Z) (∂Z/∂p)_T < 1/p ,

that is, dZ/Z < dp/p, where the total derivative is calculated along an isothermal transformation. Integrating the above relation leads to the required result.
8.5 To calculate the coefficient of thermal expansion, let us differentiate the volume
along an isobaric process:
dV = V ( dT/T + (1/Z)(∂Z/∂T)_p dT ) ,

and we obtain the result

α − 1/T = (∂ ln Z/∂T)_p .
8.6 The experimental curve Eq. (8.89) can be written as

log p = 8.75 − 197.0/T .

The critical pressure for Neon may be expressed in torr:

p_cr = 27 × 760 = 2.05 × 10⁴ torr ,

and hence Eq. (8.89) may be written, in reduced variables, as follows:

log p̃ = 4.438 (1 − 1/t̃) .

The temperature T = 135 K corresponds, for Argon, to a reduced temperature t̃ ≈ 0.89, and then the vapor pressure for Argon may be calculated from the relation log p̃ ≈ −0.52, from which we obtain p̃ ≈ 0.3 and p_Ar ≈ 14.4 atm.
8.7 Before making use of the equation of state of ideal gases we verify, by means
of the law of corresponding states, whether the compressibility factor is near to unity.
The reduced variables, in the state we are considering, are
t̃ = T/T_cr ≈ 1.3 ,     p̃ = p/p_cr ≈ 0.65 .

If we refer to the compressibility chart in Fig. 8.6 we see that, with the above coordinates, the compressibility factor results in

Z(0.65, 1.3) ≈ 0.69 .

Now we may calculate the molar volume making use of Eq. (8.58):

V_m = Z RT/p = 0.69 × (8.31 × 395.46)/(4.8 × 10⁶) ≈ 472.4 × 10⁻⁶ m³ ,

and finally the requested volume is

V ≈ 10³ × 472.4 × 10⁻⁶ m³ = 472.4 l .
8.8 We refer to the expression of the energy as a function of temperature and volume, Eq. (8.27). Since in this process the energy remains constant, we have

n C_V (T − T₀) = a n² (1/V_f − 1/V_i) ,

that is,

T − T₀ = (a n/C_V) (1/V_f − 1/V_i) = (363.96 × 10⁻³ × 10/28.85) × ( 1/(1.2 × 10⁻²) − 1/(0.47 × 10⁻²) ) ≈ −16.4 K ,

and T ≈ 583.6 K.
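The arithmetic can be checked directly (volumes in m³, values as read from the solution):

```python
# Problem 8.8: Joule (free) expansion of a van der Waals gas at constant
# energy, dT = (a*n/CV) * (1/Vf - 1/Vi), with the values read from the
# solution (volumes in m^3).
a = 363.96e-3    # Pa m^6 mol^-2
n = 10
CV = 28.85       # J/(mol K)
Vi, Vf = 0.47e-2, 1.2e-2

dT = (a * n / CV) * (1 / Vf - 1 / Vi)
print(dT)        # ≈ -16.3 K, i.e. a cooling of about 16 K
assert -17 < dT < -16
```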
Solutions to the Problems of Chap. 12
12.1 The energy content of the cavity is doubled. The initial value is
U0 = a T 4 V0 = 1.5 × 10−2 J .
As regards the radiation pressure, the value (constant during the expansion) is

p = (1/3) a T⁴ ≈ 2.52 × 10⁻⁴ Pa .

The energy variation is

ΔU = U₀ ≈ 1.5 × 10⁻² J ,

and the work done on the radiation field amounts to

W = − p ΔV ≈ −0.5 × 10⁻² J .

The quantity of heat supplied to the cavity is

Q = ΔU − W ≈ 2 × 10⁻² J .

The same result can be obtained calculating first the variation of entropy of the cavity:

ΔS = s ΔV = (4/3) a T³ ΔV ≈ 2.0 × 10⁻⁵ J K⁻¹ ,

and the amount of heat can be obtained from the relation Q = T ΔS ≈ 2.0 × 10⁻² J.
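The chain of results in 12.1 follows from the two given numbers alone, as the sketch below shows:

```python
# Problem 12.1: the numbers U0 = a*T**4*V0 = 1.5e-2 J and p = a*T**4/3 =
# 2.52e-4 Pa fix T and V0; the rest of the solution follows.
a = 7.566e-16    # J m^-3 K^-4, radiation constant

T = (3 * 2.52e-4 / a) ** 0.25
V0 = 1.5e-2 / (a * T**4)

dU = a * T**4 * V0             # volume doubles at constant T: dU = U0
W_on = -(a * T**4 / 3) * V0    # work done on the radiation, -p*dV
Q = dU - W_on

print(T)        # ≈ 1000 K
print(V0)       # ≈ 19.8 m^3
print(Q)        # ≈ 2.0e-2 J
assert abs(Q - 2.0e-2) < 1e-4
```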
12.2 Denote with T1 and T2 < T1 the temperatures of the two isotherms and with V1
and V2 > V1 the volumes which define the expansion at isotherm T1 . As regards the
volumes V3 and V4 , we find
Solutions to the Problems
V3 = V2
V4 = V1
and then the amount of work done by the gas in the adiabatic processes will be,
= 3 p2 V2 1 −
= 3 p4 V4 1 −
We number the four transformations as shown in Fig. 12.5 with indexes from 1 to 4.
Let us denote with W1 , W2 , W3 , W4 the amount of work done by the radiation to the
outside world in the four transformations and with Q 1 , Q 2 , Q 3 , Q 4 the analogues
amounts of heat supplied from the outside world to the radiation. We will have,
aT (V2 − V1 ) ,
W2 = a (T1 − T2 ) T13 V2 ,
W3 = − aT2 T13 (V2 − V1 ) ,
W4 = −a (T1 − T2 ) T13 V1 ,
W1 =
and for the quantities of heat:
Q1 = T1 (ΔS)1 = (4/3) a T1^4 (V2 − V1) ,
Q2 = 0 ,
Q3 = T2 (ΔS)3 = (4/3) a T2^4 (V4 − V3) = −(4/3) a T2 T1^3 (V2 − V1) ,
Q4 = 0 .
Adding up, for the total value of work done by the radiation we get
Wtot = (4/3) a T1^3 (T1 − T2) (V2 − V1) .
For the efficiency of the engine, η = Wtot /Q 1 , we find the value
η = 1 − T2/T1 .
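The cycle can be spot-checked numerically with arbitrary sample values (a is the radiation constant; W1, W3 are the isothermal works p ΔV of the photon gas and W2, W4 the adiabatic works −ΔU): the total work comes out as (4/3) a T1^3 (T1 − T2)(V2 − V1), and the efficiency reduces to the Carnot value.

```python
a, T1, T2, V1, V2 = 7.566e-16, 1200.0, 800.0, 1.0, 3.0   # sample values

W1 = (a / 3) * T1**4 * (V2 - V1)          # isothermal expansion at T1
W2 = a * (T1 - T2) * T1**3 * V2           # adiabatic expansion
W3 = -(a / 3) * T2 * T1**3 * (V2 - V1)    # isothermal compression at T2
W4 = -a * (T1 - T2) * T1**3 * V1          # adiabatic compression
Q1 = (4 * a / 3) * T1**4 * (V2 - V1)      # heat absorbed on the hot isotherm

Wtot = W1 + W2 + W3 + W4
eta = Wtot / Q1

expected = (4 * a / 3) * T1**3 * (T1 - T2) * (V2 - V1)
print(abs(Wtot - expected) < 1e-12 * abs(expected))   # True
print(abs(eta - (1 - T2 / T1)) < 1e-12)               # True: Carnot efficiency
```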
© Springer Nature Switzerland AG 2019
A. Saggion et al., Thermodynamics, UNITEXT for Physics,
Absorptivity, 220
spectral absorptivity, 227, 228
Additivity, 31
Adiabatic compressibility
stability of equilibrium states, 78
Adiabatic demagnetization, 216
Adiabatic systems, 73
Adiabatic transformations, 14, 88, 93
Adiabatic walls, 14
chemical reactions, 268
Availability, 50, 52
Available energy, 51
Available work, 51
Black Body, 222, 224
low frequencies, 244
Planck’s interpolation formula, 247
Planck’s spectral energy density, 247
spectral emissivity, 224, 225, 227, 228
spectral energy density, 226, 235
Wien’s interpolation for high frequencies, 244
Boyle-Mariotte law, 97
engine, 44
Carnot cycles, 41
Causality, 9
Chemical affinity, 285
Chemical potential, 61, 62, 66
dilute gases, 111, 112
electrochemical potential, 276
ideal gases, 204
ideal systems, 285
phase equilibrium, 68
photon gas, 239
supersaturated vapor, 180
Chemical reactions, 265
affinity, 267–269
continuous systems, 344
coupled reactions, 270
degree of advancement, 266, 267
entropy production, 268, 270
equilibrium constant, 286–288
generalized flux, 267, 269
generalized force, 269, 270
interference, 269, 270
kinetic constants, 287, 288
law of mass action, 286
linearity, 286
linear relation, 288
rate, 266, 267
reaction rate, 286, 287
reaction rate and affinity, 288
relaxation time, 290
saturation, 288
stoichiometric numbers, 266
velocity, 267
Clapeyron, 121
equation, 121
Closed systems, 14
Coalescence, 173
Coefficient of diffusion, 350
Coefficient of mobility, 350
Coefficient Of Performance (COP), 47
Coefficient of thermal expansion, 83
Coldness, 6
adiabatic, 77, 87
isothermal, 77, 83, 87
Compressibility chart, 156, 168
Compressibility factor, 156, 168
electrostatic, 18
Conserved quantities, 262
Constitutive relations, 210
Contact angle, 189
Continuity equation, 335
mass, 336
Continuity of states, 133
Continuous phase transitions, 217
Continuous systems, 333
center of mass velocity, 336
chemical reactions, 344
continuity equation, 335
continuity equation for mass, 336
continuous state variables, 334
convective energy flux, 341
diffusion fluxes, 337, 346
Einstein relation, 349
energy flux, 339
entropy flux, 346
entropy production, 343, 344, 346
equation of energy, 339
equation of entropy, 342
equation of motion, 338, 339
extensive variables, 334
first principle, 340
fundamental equation, 343
generalized forces, 344
heat flux density, 341
Lagrangian derivative, 337
mechanical equilibrium, 347, 348
reaction rate per unit volume, 336
substantial derivative, 337
Corresponding states, 155
generalized compressibility chart, 156
inversion curve, 162, 164
latent heat of vaporization, 160
second virial coefficient, 156
surface tension, 189
triple point, 162
van der Waals equation, 164
vapor pressure, 160
Critical exponents, 167
Critical point
compressibility, 165
Curie symmetry principle, 284
Degrees of freedom
classical, 116, 117
harmonic, 117
internal, 73
macroscopic, 55, 56, 58, 59
microscopic, 113, 117
oscillations, 114, 115
rotational, 114
Diamagnetic materials, 213
Dielectric constant, 196
Differentials, 397
Diffusion, 350
binary systems, 366
linear relations, 366
Diffusion coefficient, 367
Diffusion fluxes, 337
continuous systems, 346
Discontinuous systems, 33, 262, 271
Dispersion relation, 400, 402, 403
nonrelativistic particles, 400
relativistic particles, 400
ultra relativistic particles, 400
Distribution function
fluctuations, 326
Drift velocity, 8
Dufour effect, 363, 369
Efficiency, 27, 38
carnot cycles, 41
endoreversible engine, 47
maximum, 41
Ehrenfest, 135
equations, 135
Einstein relation, 349
Elastic constant, 7
Elasticity, 7
Electric displacement, 208
Electric field, 208
polarization field, 212
Electric permittivity, 195
ideal gases, 200
linear material, 196
vacuum, 195
Electrochemical affinity, 277
Electrochemical equilibrium, 277
Electrochemical potential, 277
Electrochemical reactions, 275
current intensity, 276
degree of advancement, 275
electrochemical affinity, 276
electrochemical potential, 276
entropy production, 276
Electrokinetic effects, 305
cross effects, 308
electric current, 306
electrochemical affinity, 305
electro-osmotic coefficient, 307
electro-osmotic pressure, 308
entropy production, 305
flux of volume, 306
generalized force, 306
linear relations, 307
Saxen’s relations, 307
streaming current, 307
streaming potential, 307
Electromagnetic fields
density of energy flow, 403
momentum density of electromagnetic
fields, 403
Electrostatic field, 193
dielectric constant, 196
electric displacement, 195
electric permittivity, 195
electric polarization, 194, 195
electric susceptibility, 195
electrostatic work, 194
electrostriction, 197
energy, 198
energy density, 199
entropy, 199
entropy density, 199
free energy, 197
linear dielectrics, 197
response of matter, 194
thermodynamic potentials, 196, 197
Electrostatic polarization, 194
electric susceptibility, 195
polarization charges, 194
Electrostriction, 205
Emissivity, 220, 228, 229
spectral emissivity, 224, 225, 227, 228
Emittance, 228
adiabatic systems, 16
condenser, 18
continuous systems, 339
definition, 15
electrostatic field, 19
energy density, 19, 402
energy flow, 264
energy flux, 263, 273, 300, 302
energy of transfer, 302–304
equipartition, 116, 302
point charges at rest, 20
point mass, 16, 17
potential energy, 20
Energy equipartition, 117
Energy flux, 273
generalized force, 273
Energy flux density
convective, 341
Energy of transfer, 305
Engine, 41
Carnot, 44
efficiency, 38
endoreversible, 44, 45
Enthalpy, 63, 68
Entropy, 29, 30
continuous systems, 342, 343
contribution by external interactions, 32
contribution due to external interactions,
contribution due to internal processes,
30, 32
entropy density, 233
entropy flux in continuous systems, 346
entropy production, 263, 274, 344
fluctuations, 324
measurement, 32, 82
pressure dependence, 84
quasi-static processes, 30
Shannon’s entropy, 386
spectral entropy emissivity, 233
temperature dependence, 84
volume dependence, 84
Zurek’s physical entropy, 390
Entropy and information, 375
Entropy production, 274, 281
chemical reactions, 267–270, 277
continuous systems, 346
electrochemical reactions, 276
electrokinetic effects, 305
engine, 43
open systems, 271, 273, 274, 279, 294
quadratic form, 283
stationary states, 309, 311
time derivative of entropy production,
Equation of state, 83
reduced, 155
Equilibrium, 6, 8, 342
dynamic, 7
electrochemical equilibrium, 277
liquid vapor, 123
solid–liquid, 124
solid–vapor, 125
Equilibrium constant
chemical reactions, 287
Equilibrium hydrostatic, 66
energy, 116
Equivalent systems, 279
Ettingshausen effects, 363
Euler theorem, 395
Extensive quantities, 30, 31, 57, 262
role, 60
Ferromagnetism, 217
Finite differences, 397
First Principle, 13, 23
closed systems, 22
continuous systems, 340
rephrased, 262
First virial coefficient, 98, 102
calorimetric measurement, 101
Fluctuations, 320
accurate macroscopic observer, 321
average entropy decrease, 328
correlations, 326, 327, 329, 330
definitions, 323
distribution function, 326
entropy variation, 324, 326
fluctuation decay and Onsager relations,
330, 331
fluctuation decay in isolated systems,
fluctuation matrix, 324
generalized forces, 324, 325
low-accuracy macroscopic observer, 321
mean values, 323, 326
mean values in fluctuating systems, 327
microscopic reversibility, 328
normal sequence of fluctuations, 329
second moments, 328
determination of fluxes, 277
different choice of fluxes, 278, 280
flux of matter, 297
generalized, 277
generalized flux, 264
linear relations, 282
Flux of matter, 274
entropy production, 274
generalized force, 274
chemical reactions, 269
continuous systems, 344
determination of forces, 277
different choice of forces, 278, 280, 281
generalized, 277
generalized force, 264, 273, 311
Lorentz force, 208
Free charge, 208
Free current, 208
Free current density, 208
Free energy, 63
isothermal processes, 65
pressure, 65
Free expansion, 90
Function of state, 30
Fundamental equation
continuous systems, 343
energy representation, 59
entropy representation, 59
small changes, 58, 60
Fundamental Relation, 55, 59, 82
integral form, 60
Generalized flux, 311
chemical reactions, 269
Generalized force, 311
Gibbs–Duhem relation, 365
Gibb’s potential, 63, 69
isothermal and isobaric processes, 69
conversion of heat into work, 26
definition, 21
heat flow, 263, 264
quantity of heat, 21, 22
sources, 39
Heat capacity, 84
constant field, 256
constant pressure, 69, 85, 86
constant volume, 85, 86, 113
linear molecules, 119
nonlinear molecules, 119
stability of equilibrium states, 78
Heat flux density, 341
Heat of transfer, 294, 295, 304
Heat pump, 48
coefficient of performance (COP), 48
Homogeneous functions, 60, 395
first degree, 396
Hotness, 6
Infinitesimal, 397
Infinitesimal transformations, 22
Information, 379, 382
algorithmic complexity, 391
algorithmic entropy, 391, 392
algorithmic information content, 391
algorithmic randomness, 391
amount of information, 384, 385
bit, 384
Bloch’s sphere, 389
entropy of information, 386, 387
information per symbol, 386, 387
noisy channels, 383
physical entropy, 392
Qbit, 388, 389
Shannon’s entropy, 387
statistical entropy, 392
statistical mechanics, 387
Zurek’s physical entropy, 392
Information theory, 375
Intelligent being, 376
Intensive quantities, 31, 57
Interaction, 9
external world, 29
Interfacial tension, 171
Interference, 282
Internal energy, 341
Inversion curve, 107, 109
corresponding states, 162
Irreversible processes, 261
Isobaric processes, 68
enthalpy, 68
Isothermal compressibility
stability of equilibrium states, 77
Isothermal processes, 65, 90
Isotherms of a gas, 97
Isotropy, 400
coefficient, 105
coefficient ideal gases, 106
experiment, 98
experiment ideal gases, 106, 107
Kelvin’s equation, 181
curvature effect, 178
Kirchhoff’s law, 220, 224, 225
extended form, 227, 228
Knudsen gas, 295, 297
backward matter flux, 298
energy flux, 300, 302
energy of transfer, 302, 304
flux of matter, 297
forward matter flux, 298
heat of transfer, 304
thermomolecular pressure difference,
299, 304
Lagrangian derivative, 337
Landauer Principle, 381
Langevin function, 202
pressure, 178
Laplace equation
surface tension, 178
Larmor precession, 213
Larmor rotation, 211
Latent heat, 122
fusion, 124
sublimation, 125
vaporization, 123
Latent heat of vaporization
law of corresponding states, 160
Latent heats
temperature dependence, 132
Law of corresponding states
equation of state, 155
latent heat of vaporization, 160
Law of mass action, 286
Le Chatelier–Braun principle, 315
Le Chatelier principle, 314
Linear phenomenological coefficients, 282
Linear phenomenological matrix, 282
Linear phenomenological relations, 282, 283
chemical reactions, 285
Curie symmetry principle, 285
phenomenological matrix, 283
Linear relations, 282, 288
Liquefaction of gases, 108
attainability of low temperatures, 108
Local Thermodynamical Equilibrium (LTE),
Lorentz force, 208
Luminosity, 228
Macroscopic degrees of freedom, 55
Macroscopic system, 3
Magnetic coefficient, 212
Magnetic constant, 212
Magnetic field, 207, 208
contribution by free currents, 211
contribution by magnetization, 211
energy density, 216
entropy density, 216
free energy, 215
free energy density, 216
linear medium, 216
magnetic coefficient, 212
magnetic constant, 212
magnetic moment per unit volume, 210
magnetic susceptibility, 212
magnetization field, 212
thermodynamic potentials, 215
uniform medium, 211
uniform solenoid, 210
Magnetic moment, 210
Magnetic permeability, 212
Magnetic work, 214, 215
Magnetization vector, 211
Magnetizing field, 208, 211
uniform solenoid, 210
Maximum work, 50, 51
Maxwell equations
macroscopic, 208
Maxwell relations, 81
Maxwell’s demon, 376
Maxwell’s paradox, 376
Maxwell’s velocity distribution, 295
Maxwell–Szilard’s paradox, 27, 376, 379,
solution, 381
Mean field
van der Waals, 143
Mechanical equilibrium, 347, 348
Metastability, 183
Mobility, 350
Molar heat
at equilibrium, 130
constant volume, 113
saturation, 131
temperature dependence, 117
Molecular flow, 297
Nernst effects, 363
Nucleation, 183
Observer, 27, 379
macroscopic, 3, 4
microscopic, 4
observer–observed separation, 382
Ohm’s law, 7
Onsager relations, 281–283
conditions of linearity, 285
fluctuations, 330
Open systems, 271
different choice of fluxes, 280
different choice of forces, 281
entropy production, 271, 273, 279, 294
thermomolecular pressure difference,
Maxwell–Szilard’s paradox, 380
Paramagnetic materials, 213
Peltier coefficient, 353, 354
Peltier effect
explanation, 360
Peltier coefficient, 353, 360
refrigerator, 360
Perpetual motion, 26, 363
Perpetuum mobile, 25
Phase diagrams, 126
Phase equilibrium, 68
Phase transitions, 121, 136
continuous, 133
discontinuous, 121
first-order, 121
second-order, 133
Pressure, 5, 61, 65, 399
completely absorbed particles, 401
elastically reflected particles, 401
mechanical interpretation, 399
non relativistic case, 402
particles absorbed by the wall, 401
photons, 403
radiation, 403
Problems, 405
internal, 29
natural, 27, 28
quasi-static, 28, 29
reversible, 29
unnatural, 27, 28
Radiation, 219
adiabatic processes, 240
emissivity, 230
emittance, 231
energy density, 229, 230, 239
entropy density, 239
free expansion, 242
isochoric processes, 241
isothermal processes, 240
pressure, 403
spectral emissivity, 230
spectral energy density, 229, 230
spectral entropy emissivity, 233
thermodynamical processes, 239
thermodynamic potentials, 237
chemical reactions, 266, 267
Reduced variables, 155
Refrigerator, 48
coefficient of performance (COP), 48
Relation between C p and C V , 86
Relaxation time, 342
chemical reaction, 290
Resistivity, 8
Saturation pressure
curved surface, 181
Scale invariance, 225, 227
Schwarz theorem, 396
Second principle, 25, 29, 224
Clausius, 26
Lord Kelvin, 26
Second virial coefficient
temperature dependence, 109
Seebeck effect, 352, 353
Simple systems, 58
Solar constant, 236, 237
Solutions, 405
Some properties of materials, 82
Soret coefficient, 369
Soret effect, 363, 368
Spreading parameter, 189
Stability in a thermodynamical system, 71
Stability of equilibrium states, 71
adiabatic compressibility, 78
adiabatic constant pressure, 73
adiabatic constant volume, 73
constant entropy and pressure, 76
constant entropy and volume, 75
constant temperature and pressure, 74
constant temperature and volume, 74
heat capacities, 78
isothermal compressibility, 77
equilibrium, 6, 8
State functions, 30
State parameters, 6
extensive quantities, 60
reduced, 155
State postulate, 56, 58
Stationary states, 308
determination of the stationary state, 313
entropy production, 311
Le Chatelier, 313, 314
Le Chatelier-Braun, 313
minimum entropy production, 310, 312,
open systems, 309
Prigogine-Wiame model, 317
stability, 313, 317
Sublimation, 125
Substantial derivative, 337
Superheated liquid
stability, 183
Supersaturated vapor
stability, 183
Surface layer, 172
energy, 172, 173
energy density, 172, 173
entropy, 172, 173
entropy density, 172
free energy, 173
heat capacities, 173
specific heat, 173
stability, 175, 176
thermodynamic potentials, 172
Surface systems, 169
Surface tension, 170, 171
critical radius in supersaturated vapor,
effect on pressure, 178
Laplace equation, 178
law of corresponding states, 189
phase equilibria, 176, 177
supersaturated vapor, 180, 183
temperature dependence, 187
water, 187
Szilard L., 376
Temperature, 5, 33, 35, 44, 61, 102
absolute, 30, 33
determination of the scale, 44, 102–105
empirical, 6, 8
temperature of the Sun, 236
Thermal conductance, 45
Thermal conductivity, 367
Thermal diffusion coefficient, 368
Thermal effects
charging a condenser, 205
Thermal engines
efficiency, 38
Thermocouple, 352
Thermodiffusion, 363
Thermodynamic state, 6
Thermoelectric power, 352
absolute thermoelectric power, 352
thermocouple, 353
Thermomechanical effects, 294
flux of matter, 294
heat of transfer, 295
thermal flux, 294
thermomolecular pressure difference,
Thermometer, 10
Thermomolecular pressure difference, 294,
295, 299, 304
Third law, 251
coefficient of thermal expansion, 253
helium melting curve, 254
Nernst–Planck, 252
specific heats, 253
statistical mechanics, 257
tension coefficient, 253
unattainability of absolute zero, 255
Thomson effect
explanation, 361
Thomson coefficient, 354
Throttling experiment, 98
Triple point, 125
Van der Waals, 139
Boyle temperature, 145
coefficient of thermal expansion, 149
coexisting phases, 153
critical point, 146, 147
energy dependence on volume, 148
equation of state, 143
free expansion, 168
inversion curve, 149–151
inversion temperature, 151
isotherms, 144
Joule-Thomson coefficient, 150, 167
mean field approximation, 143
molar heats, 149
pressure correction, 142
proper volume of molecules, 141
successes and failures, 144
vapor pressure, 151
Van der Waals equation
corresponding states, 164
inversion curve, 164
Vapor pressure
curvature effect, 178
Virial coefficients, 98
Wetting, 189
Wien’s law, 231, 232, 234
Wien’s diplacement law, 231, 234
electric work, 208, 209
electrostatic work, 194
general form, 57
magnetic work, 208, 210, 214
total work on electric charges, 209
Work parameters, 57, 58
equation, 189
Zeroth principle, 9
Find the points of extrema of the function f(x) = 2 sec x + 3 cosec x
Question asked by Filo student
Find the points of extrema of the function f(x)=2 sec x+3 cosec x
To find the points of extrema of the function f(x) = 2 sec x + 3 cosec x, follow these steps:

Step 1. Express f in terms of sin and cos: f(x) = 2/cos x + 3/sin x.

Step 2. Differentiate f with respect to x: f'(x) = 2 sec x tan x − 3 cosec x cot x = 2 sin x/cos²x − 3 cos x/sin²x.

Step 3. Find the critical points of f: setting f'(x) = 0 gives 2 sin³x = 3 cos³x, that is, tan³x = 3/2.

Step 4. Solve for the critical points: tan x = (3/2)^(1/3), so the critical points are x_n = nπ + tan⁻¹(3/2)^(1/3), with n an integer. (Note that f itself is undefined wherever sin x = 0 or cos x = 0.)

Step 5. Classify the critical points: for n even, x_n lies in the first quadrant (mod 2π), where sec x and cosec x are both positive, so f''(x_n) > 0 and x_n is a point of local minimum; for n odd, x_n lies in the third quadrant, f''(x_n) < 0, and x_n is a point of local maximum. Thus x_n is a point of minima if n is even and a point of maxima if n is odd.
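Setting f'(x) = 2 sec x tan x − 3 cosec x cot x = 0 leads to tan³x = 3/2; the resulting first-quadrant critical point can be checked numerically in plain Python:

```python
import math

def f(x):
    # f(x) = 2 sec x + 3 cosec x
    return 2.0 / math.cos(x) + 3.0 / math.sin(x)

def f_prime(x):
    # f'(x) = 2 sec x tan x - 3 cosec x cot x
    return (2.0 * math.sin(x) / math.cos(x) ** 2
            - 3.0 * math.cos(x) / math.sin(x) ** 2)

x_star = math.atan((3.0 / 2.0) ** (1.0 / 3.0))   # tan x = (3/2)^(1/3)

h = 1e-3
print(abs(f_prime(x_star)) < 1e-10)                              # True: f' vanishes
print(f(x_star) < f(x_star - h) and f(x_star) < f(x_star + h))   # True: local minimum
```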
Updated On Nov 7, 2023
Topic Functions
Subject Mathematics
Class Class 12
Answer Type Text solution:1
Gear Scale
The gear scale parameter is used for converting between physical positions (in mm or degrees) and motor steps (or encoder ticks). This article explains how to calculate the value for the different axis types.
The gear scales of robot axes are defined in the Robot Configuration File; the gear scales of external axes are defined in the Project Configuration File (configured via the CPRog/iRC PC application). If you know a robot type that uses the same axis as yours, you can copy the value from there.
Calculating from hardware specification
The following are the general equations for rotational and linear axes. Note the factor 4 which is necessary for stepper motors but may not be needed for other motor types like BLDC (e.g. ReBeL).
Rotational axis: Gear Ratio x Encoder Ticks x 4 : 360 = Gear Scale
Linear axis: Gear Ratio x Encoder Ticks x 4 : Linear Transmission Ratio = Gear Scale
Explanation: Encoder Ticks x 4 gives the steps per full rotation of the motor. Encoder Ticks should be 500 for all igus stepper motors. Multiplied with the gear ratio you get the steps for a full
rotation at the gear output. Divide that by 360° to get steps per degree at the output.
If you are not using gears set the gear ratio to 1 (this is the case for most linear axes). If you are calculating a linear axis use the linear transmission ratio (in mm per revolution) instead of
360 to get steps per mm.
An example for a rotational axis:
Gear Ratio: 48
Encoder Ticks: 500
48 x 500 x 4 : 360 = 266.667
Another example for a ZLW-1660 linear axis. Find the transmission ratio in the specs: 120 mm/rev
Gear Ratio: 1
Encoder Ticks: 500
1 x 500 x 4 : 120 = 16.667
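Both formulas are easy to wrap in small helper functions (the function names here are illustrative, not part of any CPR software); they reproduce the two worked examples:

```python
def gear_scale_rotational(gear_ratio, encoder_ticks, quadrature=4):
    # steps per degree at the gear output
    return gear_ratio * encoder_ticks * quadrature / 360.0

def gear_scale_linear(gear_ratio, encoder_ticks, mm_per_rev, quadrature=4):
    # steps per mm of travel along the axis
    return gear_ratio * encoder_ticks * quadrature / mm_per_rev

print(round(gear_scale_rotational(48, 500), 3))   # 266.667
print(round(gear_scale_linear(1, 500, 120), 3))   # 16.667
```

For motor types that do not need the quadrature factor (e.g. BLDC/ReBeL, as noted above), pass `quadrature=1`.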
Calculating from relative error
If you do not know the hardware specification you can try to measure the error of a gear scale value, then calculate the correct one from it using cross-multiplication.
1. Use the gear scale of a similar axis (warning, if the value is too far off the axis might move faster than expected!)
2. Jog the joint manually by a certain distance. Measure the actual distance (in degree or mm) and note the displayed distance.
3. Calculate the new gear scale:
new Gear Scale = old Gear Scale x displayed distance / actual distance
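The cross-multiplication can be sketched as a small helper (illustrative name): the controller executed old gear scale × displayed distance steps, and those steps correspond to the physically measured actual distance, so dividing steps by the actual distance gives the corrected scale.

```python
def corrected_gear_scale(old_scale, displayed, actual):
    # steps actually executed = old_scale * displayed;
    # true scale = steps / actual distance
    return old_scale * displayed / actual

# Example: the screen showed a 10 mm jog but only 5 mm of real travel was
# measured, so the configured scale was half the true one:
print(corrected_gear_scale(100.0, 10.0, 5.0))   # 200.0
```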
What is the role of mixed effects models in Pearson MyLab Statistics for longitudinal data analysis?
In this paper, a mixed effects approach for longitudinal data analysis in Pearson MyLab Statistics is presented. The model concerns the choice of the outcome measure for the study, such as a pain rating, and it supports the concept of hierarchical multiple effects: parameters can be defined for more than one study (e.g. scale and type), and the influence of the different model specifications on the data obtained can be examined. A definition of stressors is also given, together with their influence on the level of stress. To analyse a time series, the relationship between four variables (variable name, pain rating, sum of the mean, and standard error of the mean) is identified at the first stage of the analysis; factor variables are then considered through classification, unidimensional scaling, Cox regression, and binary logistic regression. Furthermore, combinations of factors (up to four) may be significant even when the individual factors, investigated on their own, are not, and including such combinations can improve both the statistical power and the concordance of positive associations between the variables studied.
Most present-day research methods collect data into a simple descriptive form. In a data warehouse, for example, the arrays holding the project data are de-normalized; one way to summarize such arrays is to count the number of views of a project and then aggregate the averages. This was the standard methodology in the 2000-2003 editions of Pearson; afterwards the data-transformation methodology changed to tool formats such as MATLAB. For CELDB there are similar tools: the transform takes a list of non-overlapping views from a project and aggregates the remaining data, and the model then predicts the value of the sum of the views. This method does not deal well with non-sensical orderings, so alternatives treat the items in the data themselves as input for the transformation, and the data model adapts the aggregation formula accordingly. One of the main experiments introduces such a model-based method for integrating data: the data for three experiments were split into new components (A2 and A1, and A4 and A5).
role of mixed effects models in Pearson MyLab Statistics for longitudinal data analysis? =================================================================================== Hereafter “Pearson” is
assigned, for the first time, a characterisation of the effects from which this paper and subsequent papers may be derived. It is characterized by simple statistics; the statistical, computational, and analytical aspects of the entire methodology and its associated modeling and simulation steps are discussed. It is then shown that mathematical induction controls, in certain methodological mechanisms, the dependence of measurements on the underlying assumption, which is a characteristic property of the model. For the methodological mechanisms we see that the concept of
regression is derived from the “regression:theta” rule which is the rationale of Pearson measurements. Data are modeled in terms of regression models (namely the Poisson model) and the data is
estimated with the help of machine-learning-based models built on some specific approaches used in the measurement sciences. These models are then trained or estimated with some model-generating measure, as in a well-known method. This is followed by a statistical evaluation (e.g. means, standard deviations). Hereby a model is discussed to show the effects in the physical world. The classifier is integrated into the process of mathematical induction in determining how the model effects are observed, as this example of regression models is too complex and interdependent.
In fact, the contribution of regression models to my laboratory community is the subject of many published articles which attempt to use regression to give a statistical evaluation (e.g. means) of the different features of an example (e.g. correlation estimates) of the effect in the physical world. The research is on the physical world, and some methodological mechanisms (namely experimental
variability) are present in this theory. Secondly, this paper and the present papers are used in some statistical analyses. It is also important to emphasise to a certain extent that the biological
laws are defined as empirical, i.e. the statistical properties of experiments are only appropriate and therefore are studied in this context. The statistical tests are used to
ISIS Application Documentation
svfilter
Apply a variance or standard deviation filter to a cube
This program applies either a variance or standard deviation filter to a cube. The standard deviation filter is simply the square root of the variance and is selected by setting the FILTER parameter
to STDDEV.
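The mechanics can be sketched in a few lines of Python. This is an illustrative stand-in, not the ISIS source: the parameter names (`filter_type`, `low`, `high`, `minimum`) mirror the parameters documented below, boundary pixels simply use a truncated boxcar, and population (N-divisor) variance is assumed.

```python
# Illustrative sketch of a variance / standard-deviation boxcar filter,
# not the actual ISIS implementation. None marks too few valid pixels.
import math

def svfilter(image, samples=3, lines=3, filter_type="VARIANCE",
             low=float("-inf"), high=float("inf"), minimum=1):
    nl, ns = len(image), len(image[0])
    hs, hl = samples // 2, lines // 2
    out = [[None] * ns for _ in range(nl)]
    for i in range(nl):
        for j in range(ns):
            # gather valid pixels inside the (truncated) boxcar
            vals = [image[ii][jj]
                    for ii in range(max(0, i - hl), min(nl, i + hl + 1))
                    for jj in range(max(0, j - hs), min(ns, j + hs + 1))
                    if low <= image[ii][jj] <= high]
            if len(vals) < minimum:
                continue                      # leave as None (NULL)
            mean = sum(vals) / len(vals)
            var = sum((v - mean) ** 2 for v in vals) / len(vals)
            out[i][j] = math.sqrt(var) if filter_type == "STDDEV" else var
    return out
```

For example, with a 3x3 boxcar over `[[1, 2], [3, 4]]`, every output value is the variance of all four pixels, 1.25.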
• Filters
Related Applications to Previous Versions of ISIS
This program replaces the following application existing in previous versions of ISIS:
• boxfilter
Eric Eliason 1988-05-20 Original version
Tracie Sucharski 2002-08-13 Ported to Isis 3.0
Jeff Anderson 2002-11-26 Fixed bug which disallowed the standard deviation filter
K Teal Thompson 2002-12-03 Add examples.
K Teal Thompson 2003-03-28 Make images smaller.
Stuart Sides 2003-04-04 Fixed problem with isiscvs not checking in the thumb and image directories.
Kim Sides 2003-05-13 Added application test
Stuart Sides 2003-05-16 Modified schema location from astogeology... to isis.astrogeology...
Stuart Sides 2003-07-29 Modified filename parameters to be cube parameters where necessary
Parameter Groups
Name Description
FROM Input file
TO Output svfilter cube
Filter Type
Name Description
FILTER Filter type
Boxcar Size
Name Description
SAMPLES Number of samples in boxcar
LINES Number of lines in boxcar
Boxcar Restrictions
Name Description
LOW Valid minimum pixel
HIGH Valid maximum pixel
MINIMUM Minimum boxcar pixel count
Special Pixels
Name Description
PROPAGATE Propagate special pixels
Files: FROM
Input cube to filter
Type cube
File Mode input
Filter *.cub
Files: TO
The resultant filtered cube
Type cube
File Mode output
Pixel Type real
Filter Type: FILTER
The output of the filter operation is either the variance value or the standard deviation value.
Type string
Default VARIANCE
Option List: Option Brief Description
VARIANCE Perform variance filter This will output the variance at the center of the boxcar.
STDDEV Perform standard deviation filter This will output the standard deviation at the center of the boxcar.
Boxcar Size: SAMPLES
This is the total number of samples in the boxcar. It must be odd and cannot exceed twice the number of samples in the cube. In general, the size of the boxcar does not cause the program to operate significantly slower.
Type integer
Minimum 1 (inclusive)
Odd This value must be an odd number
Boxcar Size: LINES
This is the total number of lines in the boxcar. It must be odd and cannot exceed twice the number of lines in the cube. In general, the size of the boxcar does not cause the program to operate significantly slower.
Type integer
Minimum 1 (inclusive)
Odd This value must be an odd number
Boxcar Restrictions: LOW
Valid minimum pixel value that will be used in boxcar computation. If a pixel value is less than LOW then it will not be used when computing boxcar statistics.
Type double
Internal Default Use all pixels
Less Than HIGH
Boxcar Restrictions: HIGH
Valid maximum pixel value that will be used in boxcar computation. If a pixel value is greater than HIGH then it will not be used when computing boxcar statistics.
Type double
Internal Default Use all pixels
Greater Than LOW
Boxcar Restrictions: MINIMUM
This is the minimum number of valid pixels which must occur inside the NxM boxcar for filtering to occur. For example, a 3x5 boxcar has 15 pixels inside. If MINIMUM=10 then the filter will be applied if there are 10 or more valid pixels. A valid pixel is one that is not special (NULL, LIS, etc.) and is in the range defined by LOW to HIGH.
Type integer
Default 1
Minimum 1 (inclusive)
Special Pixels: PROPAGATE
This option is used to define how special pixels are handled. If the center pixel of the boxcar is a special pixel it will be propagated or set to NULL depending on the value of this parameter.
Type boolean
Default TRUE
Example 1
run svfilter
This example shows an svfilter operation using standard deviation.
Command Line
svfilter fr=../IN/peaks.cub:4 t=OUT/svfilter.sd fi=s s=5 li=5
svfilter a Terra image. Use STDDEV filter (fi=s)
GUI Screenshot
Example Gui
Screenshot of GUI with parameters filled in to perform an svfilter operation on the input image.
Input Image
Input image for svfilter
Parameter Name: FROM
This is the input image for the svfilter examples.
Output Image
Output image for svfilter
Parameter Name: TO
This is the output image that results from the STDDEV filter.
Example 2
run svfilter
This example shows an svfilter operation using variance.
Command Line
svfilter fr=../IN/peaks.cub:4 t=OUT/svfilter.var fi=v s=5 li=5
svfilter a Terra image. Use VARIANCE filter. (fi=v)
GUI Screenshot
Example Gui
Screenshot of GUI with parameters filled in to perform an svfilter operation on the input image.
Input Image
Input image for svfilter
Parameter Name: FROM
This is the input image for the svfilter examples.
Output Image
Output image for svfilter
Parameter Name: TO
This is the output image that results from the VARIANCE filter.
<h2>Miracles of 2LoT</h2>
I've been arguing over at Tallbloke's. It's one of those posts where a sceptic does an elementary analysis and makes elementary errors which contradict "consensus" science. A new scientific discovery is announced. Being Galileos, they don't have to check their work.
The sceptic here is Hans Jelbring. He looks at a simple problem, two concentric spheres without heat sources, and checks their radiation balance to find what the temperature difference should be.
Consensus science says, of course, that there should be none, but he found one, and then spent time working out the resulting perpetual motion machine. I'm not sure what was the point of that, but
Trenberth was mentioned.
I have sometimes done these analyses myself, being intrigued when what looks like a problem determined by geometry turns out to have a solution constrained by the Second Law of Thermodynamics (2LoT).
Given the complexity, that can look like a miracle.
The concentric problem.
When you have concentric convex shapes, the radiation from the inner one ends up on the outer one. A really rookie mistake you can make is to expect the converse. Since the outer surface is larger,
the inner body ends up being very hot. Hans managed to make that error
(case II, pipes). But he avoided it in the main post, which concerned spheres. There he noted, correctly, that he needed to calculate how much of the radiation from any one outer point impinged on
the central body, and how much missed.
So he drew a plot which you can see there, but which I'll modify to the one at right. There is a small surface element at dA on S2, the outer sphere. Some of its emission, in a cone angle α impinges
on S1. The rest misses and ends up back on S2.
I've shown dA at the bottom, but all locations are equivalent, and the total incident on S1 is got by summing the various dA's. I'll assume the spheres are black. Some trig relations:
R1 = R2 sin α, r = R2 cos α
The total emitted by dA is given by the Stefan-Boltzmann relation $$F = dA\ \sigma\ T_2^4$$ To get the fraction within the impinging cone, we want the part of that which would impinge on the surface S (if S1 wasn't there).
Hans made here a common error of assuming that the radiation from dA is as for a sphere, uniform in all directions. Then you can just divide the area of S by the area of the hemisphere of which it
is part. But dA is flat, and does not radiate uniformly. In fact, in its own plane it doesn't radiate at all (think of seeing a disc side on).
There is a Law and theory applicable here. It's called Lambert's cosine law, and says that the intensity of radiation is proportional to cos θ, the angle from the normal.
So that tells how the radiation incident on S should be summed. An integral is needed. You can imagine a ring element formed by an increment in θ. Its surface area will be \(dS = 2\pi\, r^2 \sin\theta\, d\theta\). And if the impinging radiance is \(I(\theta) = I_0 \cos\theta\) W/m2, then the total on S will be $$ I_0\int_0^\alpha \cos\theta\, dS = 2\pi\, r^2\, I_0 \int_0^\alpha \cos\theta\, \sin\theta\, d\theta = \pi\, r^2\, I_0\, (1 - \cos^2 \alpha) = \pi\, r^2\, I_0\, \sin^2 \alpha$$ We can relate \(I_0\) to F by using this formula for the hemisphere, which catches all the radiation. The upper integration limit is then not α, but π/2. So $$ F = \pi\, r^2\, I_0$$ So the power dP transferred from dA to S1 is $$dP = F\, \sin^2 \alpha = dA\, \sigma\, T_2^4\, \sin^2 \alpha$$ Integration over dA is just summation, so the total power P2 from S2 to S1 is $$ P_2 = 4\pi R_2^2\, \sigma\, T_2^4\, \sin^2 \alpha = 4\pi R_1^2\, \sigma\, T_2^4 $$ which exactly matches the Stefan-Boltzmann emission from S1 if \(T_2=T_1\).
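Not part of the original post, but the key step above, that a Lambertian emitter sends the fraction sin²α of its output into a cone of half-angle α, is easy to check numerically:

```python
# Numerical check that a Lambertian element radiates the fraction
# sin^2(alpha) of its total output into a cone of half-angle alpha.
import math

def cone_fraction(alpha, n=100000):
    # midpoint-rule integral of 2*cos(theta)*sin(theta) from 0 to alpha,
    # normalised so the full hemisphere (alpha = pi/2) gives 1
    h = alpha / n
    return sum(2.0 * math.cos((k + 0.5) * h) * math.sin((k + 0.5) * h) * h
               for k in range(n))

alpha = 0.7
print(cone_fraction(alpha), math.sin(alpha) ** 2)   # nearly identical
```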
Heat sources
The discussion on the Tallbloke thread was quite interesting, though Hans seems to have dropped out. DocMartyn posed the problem - what if S2 was a conducting shell in space and S1 had a heat source
(Pu) generating 300W. He simplified with S2 as 2 sq m and S1 as 1 sq m. He asked what the temperature of S1 would be. It's actually enough to work out the fluxes from each body.
The above reasoning is useful here. We can say that 300W has to be radiated out to space, and the 150 W/m2 sets the temperature of S2. It means also 150W/m2 is radiated inward. An amount P of this is
absorbed by S1, which then radiates in total 300+P W.
To get P, imagine that the 300 W was now generated within S2 rather than S1. Surprisingly perhaps, this does not change the temperature of S2. It still radiates 300W outward and inward, of which P
arrives at S1.
But now we know that S1, with no source, is at the same temperature as S2. So it radiates 150 W outward (its area is 1 sq m). And P is what comes in, so P=150.
So the answer to the original problem is that the flux from S1 (with source) is 300+150 = 450 W/m2, and so the temperature is T1 = 298.48 K. If T2 is the temperature of the shell, and T0 the temperature corresponding to 300 W emission only (i.e. S1 without shell), then \(T_1^4 = T_0^4 + T_2^4\).
An interesting aspect of this reasoning is that nothing was said about spheres. If you allow the bodies to be conductive enough to keep their temperature uniform, they could have been any reasonable
shape, though you have to account carefully if S1 isn't convex.
I think DocMartyn chose his numbers with a common shell model of the greenhouse effect in mind.
Update
There's a fairly simple generalization which doesn't require the bodies to be spherical or concentric (though they need to be convex). Each has to be at a uniform temperature, which will generally
require perfect conductivity.
Suppose that S1 and S2 have respective areas A1 and A2, and there is a power source P watts. Start by assuming that is on S2. Then it must radiate P outwards, creating an emittance P/A2 W/m2. That
forces (S-B) a temperature T0. All this is steady-state, black-body.
Then S1 must also be at T0, and so also emits P/A2 W/m2. For power balance, this is also what it receives.
Now suppose P shifts to body S1. S2 still has to emit P W, so is still at the same temperature. So the environment of S1 hasn't changed, and it still receives P/A2 W/m2, or P*A1/A2 W. But with the
extra source, it must now emit P*(1+A1/A2) W, or emittance P*(1/A1+1/A2)W/m2. That determines its temperature.
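As a sanity check on these emittance formulas (my sketch, not from the post): with DocMartyn's numbers, P = 300 W, A1 = 1 m², A2 = 2 m², the inner body's emittance is P(1/A1 + 1/A2) = 450 W/m², and the Stefan-Boltzmann law then gives the temperatures.

```python
# Temperatures for a P-watt source inside a black shell, from the
# emittance formulas above: outer M2 = P/A2, inner M1 = P*(1/A1 + 1/A2).
SIGMA = 5.670374419e-8   # Stefan-Boltzmann constant, W m^-2 K^-4

def temperatures(P, A1, A2):
    M2 = P / A2                  # shell emittance, W/m^2
    M1 = P * (1 / A1 + 1 / A2)   # inner body emittance, W/m^2
    return (M1 / SIGMA) ** 0.25, (M2 / SIGMA) ** 0.25

T1, T2 = temperatures(300.0, 1.0, 2.0)
print(round(T1, 1), round(T2, 1))   # about 298.5 K and 226.8 K
```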
22 comments:
1. Oh dear. Hans doesn't do a very good job of understanding you. What a waste of everyone's time.
Though it *could* be useful. The people who comment there could read your words, realise that you're right, and realise that therefore Hans and Tallbloke are clueless.
1. If you are going to engage in Lambert's law, you also have to make some assumptions about the surface of the inner sphere and how the absorption and reflection vary with angle.
2. Eli,
yes, the assumption (by HJ and I) was that both spheres are black, ie black bodies.
2. If you have a black body in front of another black body I cannot see (pun intended) how, at thermal equilibrium, you would be able to detect its presence.
3. I haven't read Tallbloke's guest post but I used to chase these guys away all the time from tAV. They just don't get that you ain't gonna prove AGW wrong with some basic theory about radiation or
whatever. I've posted one once just to see how people reacted - it didn't go well for them. SOD left a comment asking why I would put a thing like that up if I didn't endorse it.
More often than not, they are not technical people in general. I had one guy insist that in converting units, there was a division by 1 and that was where all of radiation theory went wrong. I
told him over and over that he was in error.
At least you set the record straight. Have you dropped a link to this over there?
4. Anon,
Indeed so - if it's isothermal then it's featureless. That's another way of seeing why you don't really have to analyze the geometry to know the answer.
5. Jeff,
Yes, it can be frustrating. There was another post by Stephen Wilde there on no backradiation etc. It got me looking for the earliest measurements I could find, and this 1837 paper by Pouillet
turned up.
I hadn't posted a link as the discussion there seemed to have gone quiet, but your post showed up there.
6. Nick,
Is there an easy way to write equations on blogs? I've worked with LaTeX a little but it is such a pain in the butt that i usually resort to looking around the web for a screen grab.
7. Jeff,
I've installed the MathJax version of Latex. It works well in posts and comments. There's some overhead - it takes a little time to render the Latex, and adds a bit to the size of every page.
There are some WYSIWYG Latex editors; I haven't tried them.
But I find that you can go a long way with the symbols in html, with <sub> and <sup> as well. It's trappy, though, as I found at TB's. Blog hosts often restrict their use in comments.
8. If one body is cooler than the other you can ignore its radiation to the warmer one because all that happens is that a standing wave is set up corresponding to the amount of radiation the cooler
one could emit. You then deduct this amount which the cooler one could emit from that which the warmer one could emit. The difference represents that portion of the warmer one's radiation which
is absorbed and converted to thermal energy in the cooler one.
That's what actually happens physically. I know you can get the same result by assuming radiation from the cooler one is absorbed in the warmer one, but it isn't and the warmer one's absorptivity
for radiation from the cooler one is zero.
This is why radiation from a cooler atmosphere cannot transfer thermal energy to a warmer surface. It can only slow the rate of radiative cooling by forming standing waves which do not transfer
any energy between the atmosphere and the surface. Other cooling processes like evaporation and diffusion followed by convection will increase to compensate. Carbon dioxide molecules can play no
greater role than water vapour molecules. Hence their effect in regard to any greenhouse conjecture would be less than 4% of the effect of water vapour. Work out the consequences for yourself.
9. Doug,
You've been saying this everywhere, but it's totally wrong. One thing is that you never in the real world get a standing wave in 3D over millions of wavelengths. Secondly, while an exact standing
wave won't transmit power, it takes only a slight realignment of phase to do so.
1. oh hell, the whole point of thermal radiation is that it is incoherent.
10. Very nice. If you add a third sphere?
11. Dallas,
Yes, interesting thought. The "Update" formula extends. The outer shell S3 emits P at emittance P/A3; The second receives irradiance P/A3 from S3 (again it gets the same as it would get at
isothermal), total power P+A2*P/A3. And the innermost, S1, receives that emittance P*(1/A2+1/A3) that S2 emits inward (same as outward), which is power P*A1*(1/A2+1/A3). So balancing, S1 emits
that plus the source P, with an emittance of P*(1/A1+1/A2+1/A3).
For N spheres, P*(1/A1+....+1/AN)
12. Nick,
Thanks for the good work. The Jelbring stuff is the new skydragon. Argue!
14. Anon writes "If you have a black body in front of another black body I cannot see (pun intended) how, at thermal equilibrium, you would be able to detect its presence."
Admiral Ackbar would know.
14. Nick, Once you have n spheres, then you can adjust for imperfection by making the spheres progressively more oblique. One thing I have noticed, is that the non-uniform distribution of water
produces forcing response more like oblique spheres than concentric spheres. Your math is much less rusty than mine, http://paoc.mit.edu/labweb/notes/chap5.pdf so you might have more fun with the
potential temperature profile of moist air :)
15. Nick, I still cannot understand how the heat fluxes for your solution can possibly balance.
Initially the sphere has an output of 300 W and a flux of 300 W/m2.
When enveloped by the shell, the shell must radiate 150W/m2.
In your final steady state you state that the inner sphere emits 450 W/m2.
We know that the sphere cannot radiate to itself, so the total 450 W must be directed to the shell, so the inner surface of the shell gets a minimum of 225 W/m2. The shell will also radiate some
fraction to itself.
So what are the heat fluxes?
Outer shell efflux is known to be 150W/m2.
Inner shell efflux is known to be 150W/m2.
What is the influx from the sphere to the inner shell?
What is the influx from the inner shell to the inner shell?
16. DocM,
I think it is:
Inner sphere S1 emits 450 W, which all lands on the shell S2.
It receives 150 W from the shell, and generates 300.
S2 emits 300W out, and 300 W in. Of the 300 in, 150 W lands on S1 and the remaining 150W lands on S2.
So the balance in W for S2 is
emits 300+300
receives 450 (from S1) plus 150 (from S2)
And for S1
emits 300W+150W
receives 150W
I think it is easier to think of balance in W, then divide by respective areas for W/m2.
Maybe the simplified numbers create confusion, since 150W has 2 different roles. Suppose S2 is bigger - surface 2.5 m2. Then it still has to emit 300 W out and in. But a smaller fraction of the
300 in goes to S1; only 120W. So 180W lands on S2. Then
S1 receives 120W, emits 300+120W=420W
S2 emits 600W, receives 420+180.
17. Indeed,
"Inner sphere S1 emits 450 W"
The sphere, 1 m2, has a steady state temperature of 298.4 K. It radiates 450 W so the inner shell receives 225 W/m2 from the sphere
"the remaining 150W lands on S2"
it also gets 75 W/m2 from the shell; a total of 300 W/m2. Thus the inner shell at steady state has an influx of 300 W/m2 and the inner surface must radiate at 300 W/m2. So that's twice what the outgoing flux should be.
The influx and the efflux of the inner surface and sphere surface MUST balance.
"S2 emits 300+300"
The material in the shell does not know it has two surfaces, the inner surface is getting 300W/m2 and must heat up until it radiates at 300 W/m2; thus it should be at 269.5 K.
However, your solution can work if the rate at which the shell material conducts heat is far greater than the rate at which it radiates heat. In that case, heat absorbed at the inner surface can either reradiate (slow) or migrate into the body (fast) and will have an identical likelihood of eventually being emitted on either face.
If the rate of heat transfer in the shell is slow, then there will be a steady state temperature gradient in the shell. The Earth itself is an example of this.
If you have ever visited a hot steel rolling mill you would see this in action. The white hot steel cools quickly, by radiation and not by heat transfer into the air. You can observe the cooling and note that the inner core is much hotter than the outer surface.
Here is a video of a hot steel forge. Note how the yellow-hot surface of the pigs is transformed into the white-hot center on compression.
Using a guillotine forge cutter on ingots, you often observe a white/red center in the two halves even though the exterior surface is dull.
18. Doc,
"However, your solution can work if the rate at which the shell material conducts heat is far greater than the rate at which it radiates heat."
Your original spec at Tallblokes said the shell was 1000 atoms thick (could be hard to make). That's about 0.2 micron. I said there that 300W flowing through would produce a 0.00003C temp drop.
That was working on 1μ, at 0.2 it's less than .00001C. That compares with the 228K or whatever needed to radiate 300W.
In this problem, as stated, spherical symmetry means you don't have to worry about lateral conduction. Nor do you have to worry about the heat distribution within the sphere. That depends on the
unstated conductivity anyway.
19. "There was another post by Stephen Wilde there on no backradiation etc. It got me looking for the earliest measurements I could find, and this 1837 paper by Pouillet turned up."
Kevin McKinney has written an essay on The History of Climate Science - William Charles Wells over at SkepticalScience concerning Wells' Essay On Dew that dates to 1814 and may be of interest.
04 - Daily Remix - Meteor shower effect
004 • Meteor shower effect
We can create the meteor shower effect in two ways, but we'll get to them a little later. Let's start by making the effect look natural and smooth. At first it may seem a bit tricky to create this
effect, but rest assured, we will use simple maths and it will be ok.
<You can skip this step and go with the numbers I've tested>.
We need to choose the right angle for our meteors to move at based on the dimensions we know (width and height).
Now, for some quick math: we need to calculate the angle α between the adjacent side along the x-axis (length x) and the hypotenuse; the opposite side has length y. We know the lengths of both sides, x and y, so we can calculate the angle α.
tan(α) = y / x -> α = arctan(y / x)
We substitute x and y and use a calculator that has an inverse tangent function (arctan) to find the value of the angle α. However, the result will be in radians, and we want the result in degrees, so we need to convert radians to degrees. A full rotation is 2π radians, which is equivalent to 360 degrees, so:
α radians * (360° / 2π) = α degrees.
We now have all the data we need, which we can use in the next step.
<Here to start if you skipped the calculation step>.
For me, an angle of 22.6 degrees, height: y and width: x = y * 2.4 works best.
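As a quick check of the maths above (my sketch, not part of the tutorial), the angle follows directly from the frame's proportions:

```typescript
// Travel angle for a meteor crossing a frame of the given proportions:
// arctan(height / width), converted from radians to degrees.
function meteorAngleDegrees(width: number, height: number): number {
  return Math.atan(height / width) * (180 / Math.PI);
}

// Width 2.4x the height, as used in the tutorial:
console.log(meteorAngleDegrees(2.4, 1).toFixed(1)); // "22.6"
```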
1. The no-code solution
To achieve this effect, we can utilize the "loop" effect. However, before we begin, we should create our meteor and convert it into a component for easier future editing. Once we have our meteor
component prepared, follow these steps:
1. Set the component's position to "absolute" and position it outside the window. Additionally, apply a rotation of 22.6° (or your preferred angle).
2. Now, let's proceed with creating the animation using the "loop" effect. Configure the effect's properties as follows:
Type: Loop
Delay: adjust to your requirements
Opacity: 0
Scale: adjust to your requirements
Rotate: 0
Offset: x, y (in my case 780, 325)
-> Transition:
Ease: adjust to your requirements - In my case Ease Out
Time: adjust to your requirements, less time = faster moving meteor
Delay: adjust to your requirements
Duplicate the meteor component and arrange the duplicates as desired. Modify the loop effect settings for each individual component to match your specifications. In my case, the meteors are arranged
in the following pattern
To get the best effect, I recommend experimenting a little with the position of the component and properties of the loop effect.
2. The low-code solution
We create the meteor component in a similar manner as the no-code solution. Setting its position to 'absolute,' we initially position it outside the window. We then duplicate and reposition it as
needed. After selecting all the meteors, we can optimize our workflow by applying a code override. This helps expedite the animation configuration using randomized values.
Copy the code and paste it as a code override. The effect should be ready; run the preview and savour the result.🫶
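The override itself isn't reproduced here, but the idea is easy to sketch. The snippet below is purely illustrative, not the tutorial's actual code or Framer's API: it is a plain function producing the kind of randomized timing values such an override would feed into each meteor's loop animation, and the property names are my assumptions.

```typescript
// Hypothetical helper: randomized loop-animation settings per meteor.
// Property names here are illustrative assumptions, not Framer's API.
interface MeteorLoop {
  delay: number;                    // seconds before the loop starts
  duration: number;                 // seconds per pass; shorter = faster
  offset: { x: number; y: number }; // how far each pass travels
}

function randomMeteorLoop(width: number, height: number): MeteorLoop {
  return {
    delay: Math.random() * 3,            // stagger starts by 0 to 3 s
    duration: 0.8 + Math.random() * 1.2, // 0.8 to 2.0 s per pass
    offset: { x: width, y: height },     // keeps the 22.6 deg direction
  };
}
```

Keeping the offset in the same x:y ratio as the frame (e.g. 780:325) preserves the travel angle for every meteor.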
Where Did I Go Wrong in Deriving Gravitational Potential Energy?
• Thread starter Dead Boss
In summary, the original poster attempted to derive gravitational potential energy from the vector form of the law, but there was an error in the substitution, which hikaru pointed out.
Homework Statement
Derive the gravitational potential energy from Newton's law.
Homework Equations
$$\mathbf{F} = -G \frac{m_1 m_2}{\left|\mathbf{r}\right|^3} \mathbf{r}$$
$$W = \int_A^B \mathbf{F} \cdot d\mathbf{r}$$
$$\Delta U = W$$
The Attempt at a Solution
I can use the scalar version of the law, but I want to do it with the vector form.
So, first some easy substitution:
$$W = \int_A^B \mathbf{F} \cdot d\mathbf{r}
= -Gm_1m_2 \int_A^B \frac{\mathbf{r}\cdot d\mathbf{r}}{\left|\mathbf{r}\right|^3}
= -Gm_1m_2 \int_A^B \frac{r_x\,dr_x + r_y\,dr_y + r_z\,dr_z}{\left|\mathbf{r}\right|^3}
= -Gm_1m_2 \left\{
\int_A^B \frac{r_x}{\left|\mathbf{r}\right|^3}\, dr_x +
\int_A^B \frac{r_y}{\left|\mathbf{r}\right|^3}\, dr_y +
\int_A^B \frac{r_z}{\left|\mathbf{r}\right|^3}\, dr_z
\right\}$$
Now take one integrand, for example the x part,
$$\int_A^B \frac{r_x}{\left( r_x^2 + r_y^2 + r_z^2 \right)^{3/2}}\, dr_x =
\int_A^B r_x \left( r_x^2 + r_y^2 + r_z^2 \right)^{-\frac{3}{2}} dr_x$$
and perform substitution:
$$t = r_x^2 + r_y^2 + r_z^2, \qquad dt = 2r_x\,dr_x, \qquad \frac{dt}{2} = r_x\,dr_x$$
That yields:
$$\frac{1}{2} \int_A^B t^{-\frac{3}{2}}\, dt =
\left[ -t^{-\frac{1}{2}} \right]_A^B =
\left[ -\frac{1}{\sqrt{r_x^2 + r_y^2 + r_z^2}} \right]_A^B =
\left[ -\frac{1}{\left|\mathbf{r}\right|} \right]_A^B =
-\left( \frac{1}{\left|\mathbf{B}\right|} - \frac{1}{\left|\mathbf{A}\right|} \right)$$
Doing the same for the other two integrands and plugging them back:
$$W = -Gm_1m_2 \left\{
-\left( \frac{1}{\left|\mathbf{B}\right|} - \frac{1}{\left|\mathbf{A}\right|} \right)
-\left( \frac{1}{\left|\mathbf{B}\right|} - \frac{1}{\left|\mathbf{A}\right|} \right)
-\left( \frac{1}{\left|\mathbf{B}\right|} - \frac{1}{\left|\mathbf{A}\right|} \right)
\right\}$$
And finally
$$W = 3Gm_1m_2 \left(
\frac{1}{\left|\mathbf{B}\right|} - \frac{1}{\left|\mathbf{A}\right|}
\right)$$
Now if we take B at infinity, the potential energy will be
$$U = -3G\frac{m_1m_2}{\left|\mathbf{r}\right|}$$
which is three times as large as it should be. Don't tell me, let me guess. There's an error somewhere up there. *sigh*
Any help would be highly appreciated.
This is where you were wrong:
$$dt = 2r_x\,dr_x$$
t is a multi-variable function, therefore: $$dt = 2r_x\,dr_x + 2r_y\,dr_y + 2r_z\,dr_z$$
A suggestion: $$2\vec{r}\cdot d\vec{r} = d(\vec{r}\cdot\vec{r}) = d(r^2)$$
Thanks, hikaru!
I knew I should have read that book about multivariable calculus.
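A quick numerical check (mine, not from the thread) confirms the corrected result: integrating F·dr along any path from A to B gives Gm₁m₂(1/|B| − 1/|A|) once, not three times.

```python
# Numerical line integral of F . dr for gravity along a straight path
# from A to B; gravity is conservative, so the path does not matter.
import math

G, m1, m2 = 6.674e-11, 5.0, 3.0

def work_along_path(a, b, steps=100000):
    dr = [(bi - ai) / steps for ai, bi in zip(a, b)]     # constant step
    total = 0.0
    for k in range(steps):
        s = (k + 0.5) / steps                            # midpoint rule
        r = [ai + s * (bi - ai) for ai, bi in zip(a, b)]
        rmag = math.sqrt(sum(c * c for c in r))
        coef = -G * m1 * m2 / rmag ** 3                  # F = coef * r
        total += coef * sum(rc * dc for rc, dc in zip(r, dr))
    return total

A, B = [1.0, 2.0, 2.0], [2.0, 3.0, 6.0]                  # |A| = 3, |B| = 7
W = work_along_path(A, B)
print(W, G * m1 * m2 * (1 / 7 - 1 / 3))   # agree: one factor of G*m1*m2, not 3
```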
FAQ: Where Did I Go Wrong in Deriving Gravitational Potential Energy?
1. What is gravitational potential energy?
Gravitational potential energy is the energy an object possesses due to its position in a gravitational field. It is the potential for an object to do work as a result of its position in relation to
other objects affected by gravity.
2. How is gravitational potential energy calculated?
Gravitational potential energy is calculated using the equation U = mgh, where U is the potential energy, m is the mass of the object, g is the acceleration due to gravity, and h is the height of the
object relative to a chosen reference point.
3. Does gravitational potential energy depend on the mass of the object?
Yes, gravitational potential energy is directly proportional to the mass of the object. This means that as the mass of the object increases, so does its potential energy.
4. How does the height of an object affect its gravitational potential energy?
The higher an object is positioned in a gravitational field, the greater its potential energy will be. This is because the higher the object is, the more work it can potentially do when it falls
towards the ground due to gravity.
5. Can gravitational potential energy be negative?
Yes, gravitational potential energy can be negative. This occurs when an object is positioned below the reference point, such as when an object is underground or below the surface of a body of water.
In this case, the potential energy is considered to be "negative" because work would need to be done to move the object to a higher position where it would have more potential energy.
Balanced Self Matching Feed Line
Specifications for Feed Line at bottom of diagrams
The diagram below is of my new and improved 9 to 1 linear matching transformer that is used on the radio end of the feedline.
You can substitute this for the two 300-Ohm Matching Stubs (Transformers) and the 1-to-1 current balun that you see on the diagram above.
Balanced Self Matching Feed Line System
I designed this Balanced Self Matching Feed Line System for people who want the very best feed line they can have on a monoband antenna. My new and improved 9 to 1 linear matching transformer on the radio end of the feedline has such an incredibly low insertion loss that you probably could not measure it accurately; I estimate it to be in the hundredths of a decibel (about 0.01 dB). Here is a list of the advantages of using Balanced Feed Line:
Number 1
The line loss per 100 ft. is so low on 450-Ohm Ladder Line that you can run 500 ft. and have less power loss than with 100 ft. of RG-8U on all frequencies up to 148.00 MHz. Another way to say it: 100 feet of ladder line at 148.00 MHz has the same loss as 100 feet of RG-8U at 3.50 MHz. Even 7/8" Hard-Line Coax has a loss of 0.70 dB per 100 ft. at 148.00 MHz, compared to 0.35 dB for Ladder Line.
Number 2
Feed line radiation is eliminated; this corrects the asymmetrical pattern of the antenna.
Number 3
Ground wave and surface wave propagation of man-made noise, which induces itself onto the braided shield of coaxial cables, is eliminated, thereby improving the signal-to-noise ratio of your receiver.
Precisely tuning the feedline
When a feedline has an excessive amount of reactance there are always two ways to adjust it: you can either increase its length or decrease it.
Whatever reading you have at the end of the feedline, be it capacitive reactance, inductive reactance or a perfect match:
If you add 1/2 wavelength to the feedline the reading will not change; it will be identical to what it was before you added the extra half wavelength.
On the other hand, if you add a quarter wavelength it will add inductive reactance, and if you shorten the feedline by 1/4 wavelength it will add capacitive reactance.
If you add three-quarters of a wavelength to the line it will add inductive reactance; if you subtract three-quarters of a wavelength it will add capacitive reactance.
In many cases the length of the feedline may only need to be increased or decreased by no more than 1/8 of a wavelength.
Do whichever of these is most convenient for you to obtain a perfect match on the feedline.
This is the best way to precisely tune the feedline.
One Wavelength Loops & Quads
To match the 100-Ohm input impedance of a One Wavelength Loop or a Quad, you take a 1/4 wavelength of 450-Ohm Ladder Line and connect it in parallel with the antenna input and the feed line.
If you look at my diagram you will see how to put the feedline together. The spacing between the 450-Ohm ladder line transformer and the feedline is actually 0.0 inches; I separated it in the diagram just for clarity. The two pieces of 300-Ohm line are also sandwiched together; I use nylon tie wraps to hold them tightly together.
This makes a 225-Ohm Matching Stub (Transformer), which transforms the 100-Ohm input impedance of the Loop antenna up to the 450-Ohm feed line. On the radio end of the feedline you make one quarter-wavelength-long Matching Stub (Transformer) out of 300-Ohm low-loss Foam Dielectric twin-lead.
This makes a 200-Ohm Matching Stub (Transformer), which transforms the 450-Ohm feed line down to a 200-Ohm balanced output. You connect one end to the 450-Ohm feed line and the other end to the 4-to-1 Coaxial Balun; the output of the Balun goes to the radio (50 ohms, unbalanced).
Or you can use my old design: on the radio end of the feed line you make two 1/4-wavelength-long Matching Stubs out of 300-Ohm low-loss Foam Dielectric twin-lead and put them in parallel.
This makes a 150-Ohm Matching Stub (Transformer), which transforms the 450-Ohm feed line down to a 50-Ohm balanced output. You connect one end to the feed line and the other end to a 1-to-1 current balun; the output of the balun goes to the radio. To match One Wavelength Loops on 40m, 80m and 160m, use a 1/4 wavelength or a 3/4 wavelength of 75-Ohm Transmitting twin-lead from the antenna input back to the 1-to-1 current balun at the radio. S.W.R. will be 1 to 1.
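The stub impedances quoted above all follow the standard quarter-wave transformer relation Z0 = sqrt(Zin * Zout), equivalently Zin = Z0^2 / Zload. A quick Python sketch (mine, not from the article) confirms the numbers:

```python
import math

def quarter_wave_z0(z_in, z_out):
    """Characteristic impedance needed to match z_in to z_out with a 1/4-wave section."""
    return math.sqrt(z_in * z_out)

def transformed_z(z0, z_load):
    """Impedance seen looking into a quarter-wave line of impedance z0 terminated in z_load."""
    return z0 ** 2 / z_load

# Two 300-ohm lines in parallel form a 150-ohm section, which brings
# the 450-ohm ladder line down to 50 ohms:
print(transformed_z(150, 450))   # 50.0
# A single 300-ohm section gives the 200-ohm balanced output:
print(transformed_z(300, 450))   # 200.0
# Matching a 450-ohm line to 50 ohms calls for a 150-ohm section:
print(quarter_wave_z0(450, 50))  # 150.0
```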
To match the 50-Ohm input impedance of an Inverted-V, you make two 1/4-wavelength Matching Stubs out of 300-Ohm low-loss Foam Dielectric twin-lead and put them in parallel. This makes a 150-Ohm Matching Stub (Transformer).
The spacing between the two pieces of 300-Ohm line is actually 0.0 inches; I separated it in the diagram just for clarity. I use nylon tie wraps to hold them tightly together.
Connect one end to the feed line and the other end to the antenna input. This transforms the 50-Ohm input impedance of the Inverted-V antenna up to the 450-Ohm feed line. On the radio end of the feedline you make one quarter-wavelength-long Matching Stub (Transformer) out of 300-Ohm low-loss Foam Dielectric twin-lead.
This makes a 200-Ohm Matching Stub (Transformer), which transforms the 450-Ohm feed line down to a 200-Ohm balanced output. You connect one end to the 450-Ohm feed line and the other end to the 4-to-1 Coaxial Balun; the output of the Balun goes to the radio (50 ohms, unbalanced).
Or you can use my old design: on the radio end of the feed line you make two 1/4-wavelength-long Matching Stubs out of 300-Ohm low-loss Foam Dielectric twin-lead and put them in parallel.
This makes a 150-Ohm Matching Stub (Transformer), which transforms the 450-Ohm feed line down to a 50-Ohm balanced output. You connect one end to the feed line and the other end to a 1-to-1 current balun; the output of the balun goes to the radio. To match Inverted-Vs on 40m, 80m and 160m, use a 1/4 wavelength or a 3/4 wavelength of 75-Ohm Transmitting twin-lead from the antenna input back to the 1-to-1 current balun at the radio. S.W.R. will be 1.5 to 1. With a 1.5-to-1 S.W.R., your power loss will be 4 watts out of 100 watts. Most antenna tuners have at least 27 watts of insertion loss with a 3-to-1 S.W.R. line mismatch.
Horizontal Dipoles
To match the 73-Ohm input impedance of a Horizontal Dipole, you take a 1/4 wavelength of 300-Ohm low-loss Foam Dielectric twin-lead and connect it in parallel with the antenna input and the feed line.
The spacing between the 300-Ohm line and the feed line is actually 0.0 inches; I separated it in the diagram just for clarity. I use nylon tie wraps to hold them tightly together.
This makes a 180-Ohm Matching Stub (Transformer), which transforms the 73-Ohm input impedance of the Horizontal Dipole antenna up to the 450-Ohm feed line. On the radio end of the feedline you make one quarter-wavelength-long Matching Stub (Transformer) out of 300-Ohm low-loss Foam Dielectric twin-lead.
This makes a 200-Ohm Matching Stub (Transformer), which transforms the 450-Ohm feed line down to a 200-Ohm balanced output. You connect one end to the 450-Ohm feed line and the other end to the 4-to-1 Coaxial Balun; the output of the Balun goes to the radio (50 ohms, unbalanced).
Or you can use my old design: on the radio end of the feed line you make two 1/4-wavelength-long Matching Stubs out of 300-Ohm low-loss Foam Dielectric twin-lead and put them in parallel.
This makes a 150-Ohm Matching Stub (Transformer), which transforms the 450-Ohm feed line down to a 50-Ohm balanced output. You connect one end to the feed line and the other end to a 1-to-1 current balun; the output of the balun goes to the radio. To match Horizontal Dipoles on 40m, 80m and 160m, use a 1/4 wavelength or a 3/4 wavelength of 75-Ohm Transmitting twin-lead from the antenna input back to the 1-to-1 current balun at the radio. S.W.R. will be 1.5 to 1. With a 1.5-to-1 S.W.R. your power loss will be 4 watts out of 100 watts. Most antenna tuners have at least 27 watts of insertion loss with a 3-to-1 S.W.R. line mismatch.
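The "4 watts out of 100" figure for a 1.5-to-1 S.W.R. comes from the reflection coefficient. A short sketch (illustrative, not from the article):

```python
def reflected_power_w(swr, forward_w=100):
    """Power reflected by a line mismatch: gamma = (SWR-1)/(SWR+1), Pr = Pf * gamma^2."""
    gamma = (swr - 1) / (swr + 1)
    return forward_w * gamma ** 2

print(round(reflected_power_w(1.5)))  # 4 watts of 100, as stated above
print(round(reflected_power_w(3.0)))  # 25 watts at a 3-to-1 mismatch
```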
450-Ohm Line = 1/4 Wavelength
2m = 1 ft 7-1/4"
6m = 4 ft 6"
10m = 8 ft 2-1/4"
12m = 9 ft 4-1/4"
17m = 12 ft 10-3/4"
20m = 16 ft 6"
300-Ohm Line = 1/4 Wavelength
2m = 1 ft 4-3/8"
6m = 3 ft 9-1/2"
10m = 6 ft 10-3/4"
12m = 7 ft 10-5/8"
15m = 9 ft 3-1/4"
17m = 10 ft 10-1/4"
20m = 13 ft 10-1/2"
1/4 Wavelength of Coax
With A Velocity Factor of 78%
Note length shown for center of Band
10 Meters 6 Ft. 8 3/4"--------- (205.105) CM
12 Meters 7 Ft. 8 1/4"--------- (234.315) CM
15 Meters 9 Ft. 0"-------------- (274.320) CM
17 Meters 10 Ft. 7"------------ (322.580) CM
20 Meters 13 Ft. 6"------------ (411.480) CM
30 Meters 19 Ft. 2"------------ (584.200) CM
40 Meters 26 Ft. 8 3/4"------- (814.050) CM
80 Meters 51 Ft. 1 3/4"------ (1558.925) CM
160 Meters 102 Ft. 2 1/2"--- (3115.310) CM
1/4 Wavelength of Coax
With A Velocity Factor of 66%
Note length shown for center of Band
10 Meters 5 Ft. 8"-------(172.720) CM
12 Meters 6 Ft. 6"-------(198.120) CM
15 Meters 7 Ft 7"--------(231.140) CM
17 Meters 8 Ft. 11"------(271.780) CM
20 Meters 11 Ft. 5"------(347.980) CM
30 Meters 17 Ft. 0"------(518.160) CM
40 Meters 22 Ft. 8"------(690.880) CM
80 Meters 45 Ft. 4"------(1381.760) CM
160 Meters 90 Ft. 8"-----(2763.520) CM
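The table entries can be reproduced (to within an inch or so, depending on the exact band-center frequency assumed) with the usual formula: length = (c / f) / 4 * velocity factor. A small sketch, mine rather than the author's:

```python
def quarter_wave_cm(freq_mhz, velocity_factor):
    """Physical length of an electrical quarter wavelength, in centimetres."""
    c_cm_per_microsecond = 29979.2458      # speed of light
    wavelength_cm = c_cm_per_microsecond / freq_mhz
    return wavelength_cm / 4 * velocity_factor

# 20 m band (about 14.2 MHz), 66% velocity-factor coax; compare the
# 347.98 cm (11 ft 5 in) entry in the table above:
print(round(quarter_wave_cm(14.2, 0.66), 1))
```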
Blank Multiplication Chart 1-20 2024 - Multiplication Chart Printable
Blank Multiplication Chart 1-20
Blank Multiplication Chart 1-20 – You can get a blank multiplication chart if you are looking for a fun way to teach your child the multiplication facts. A blank chart lets your child fill in the facts on their own. You will find blank multiplication charts for different product ranges, such as 1-9, 10-12, and 15 products. You can add a game to it if you want to make your chart more exciting. Here are some suggestions to get your little one started.
Multiplication Charts
You can use multiplication charts in your child's student binder to help them memorize math facts. While many kids can memorize their math facts naturally, it takes many others time to do so. Multiplication charts are an ideal way to reinforce their learning and boost their confidence. In addition to being educational, these charts can be laminated for added durability. Here are some helpful ways to use multiplication charts. You can also check out these websites for useful multiplication fact resources.
This lesson covers the fundamentals of the multiplication table. Along with learning the rules for multiplying, students will learn the concepts of factors and patterning. By understanding how the factors work, students will be able to recall basic facts like five times four. They will also be able to use the properties of zero and one to solve more advanced products. By the end of the lesson, students should be able to recognize patterns in the multiplication chart.
Different versions
Besides the common multiplication chart, students may need to create a chart with more factors or fewer factors. To create a multiplication chart with more factors, students need to create 12 tables, each with twelve rows and three columns. All 12 tables need to fit on one sheet of paper. Lines should be drawn with a ruler. Graph paper is ideal for this project. If graph paper is not an option, students can use spreadsheet programs to make their own tables.
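For readers who would rather generate a chart than draw one, here is a small illustrative Python script (not part of the original article) that prints a blank n-by-n chart with cells left to fill in:

```python
def blank_chart(n=12):
    """Return the rows of an n-by-n multiplication chart with blank product cells."""
    header = "    " + "".join(f"{c:>4}" for c in range(1, n + 1))
    rows = [header]
    for r in range(1, n + 1):
        rows.append(f"{r:>4}" + "  __" * n)   # "__" marks a cell to fill in
    return rows

for line in blank_chart(12):
    print(line)
```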
Activity suggestions
Whether you are teaching a beginner multiplication lesson or working on mastery of the multiplication table, you can put together fun and engaging game ideas for Multiplication Chart 1. Several fun ideas follow. One game requires the students to work in pairs on the same problem. Then they all hold up their cards and share the solution within a minute. If they get it right, they win!
When you're teaching kids about multiplication, one of the best tools you can give them is a printable multiplication chart. These printable sheets come in a range of designs and can be printed on one page or several. Children can learn their multiplication facts by copying them from the chart and memorizing them. A multiplication chart can be helpful for many reasons, from helping kids learn their math facts to teaching them how to use a calculator.
Gallery of Blank Multiplication Chart 1-20
Printable Blank Multiplication Chart 1 20 Free Memozor
12 Fun Blank Multiplication Charts For Kids Kitty Baby Love
Polynomial Functions
A polynomial function is a function that involves only non-negative integer powers of a variable, as in a quadratic equation, cubic equation, etc.
For example, 2x + 5 is a polynomial whose highest exponent is 1, so its degree is 1.
A polynomial function, in general, is also stated as a polynomial or polynomial expression, defined by its degree. The degree of any polynomial is the highest power present in it. In this article,
you will learn polynomial function along with its expression and graphical representation of zero degrees, one degree, two degrees and higher degree polynomials.
Polynomial Function Definition
A polynomial function is a function that can be expressed in the form of a polynomial. The definition can be derived from the definition of a polynomial equation. A polynomial is generally
represented as P(x). The highest power of the variable of P(x) is known as its degree. Degree of a polynomial function is very important as it tells us about the behaviour of the function P(x) when x
becomes very large. The domain of a polynomial function is entire real numbers (R).
If P(x) = a_n x^n + a_(n-1) x^(n-1) + … + a_2 x^2 + a_1 x + a_0, then for x ≫ 0 or x ≪ 0, P(x) ≈ a_n x^n. Thus, polynomial functions approach power functions for very large values of their variables.
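The leading-term approximation P(x) ≈ a_n x^n can be checked numerically. A small sketch in Python, using an example polynomial chosen here purely for illustration:

```python
# P(x) = 2x^3 - 5x^2 + x - 7 has leading term 2x^3
def p(x):
    return 2 * x**3 - 5 * x**2 + x - 7

# The ratio P(x) / (2x^3) approaches 1 as x grows:
for x in (10, 100, 1000):
    print(x, p(x) / (2 * x**3))
```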
Polynomial Function Examples
A polynomial function has only non-negative integer exponents. We can also perform different types of arithmetic operations on such functions, like addition, subtraction, multiplication and division.
Some of the examples of polynomial functions are here:
All three expressions above are polynomial since all of the variables have positive integer exponents. But expressions like;
• 5x^-1+1
• 4x^1/2+3x+1
• (9x +1) ÷ (x)
are not polynomials, since negative integer exponents, fractional exponents, and division by a variable are not allowed.
Types of Polynomial Functions
There are various types of polynomial functions based on the degree of the polynomial. The most common types are:
• Zero Polynomial Function: P(x) = a = ax^0
• Linear Polynomial Function: P(x) = ax + b
• Quadratic Polynomial Function: P(x) = ax^2+bx+c
• Cubic Polynomial Function: P(x) = ax^3+bx^2+cx+d
• Quartic Polynomial Function: P(x) = ax^4+bx^3+cx^2+dx+e
The details of these polynomial functions along with their graphs are explained below.
Graphs of Polynomial Functions
The graph of P(x) depends upon its degree. For a polynomial in one variable, the highest exponent of the variable is called the degree of the polynomial.
Let us look at P(x) with different degrees.
Zero Polynomial Function
Degree 0 (Constant Functions)
• Standard form: P(x) = a = a.x^0, where a is a constant.
• Graph: A horizontal line indicates that the output of the function is constant. It doesn’t depend on the input.
E.g. y = 4, (see Figure 1)
Figure 1: y = 4
Linear Polynomial Functions
Degree 1, Linear Functions
• Standard form: P(x) = ax + b, where a and b are constants. It forms a straight line.
• Graph: Linear functions have one dependent variable and one independent which are x and y, respectively.
In the standard formula for degree 1, a represents the slope of a line, the constant b represents the y-intercept of a line.
E.g., y = 2x+3(see Figure 2)
here a = 2 and b = 3
Figure 2: y = 2x + 3
Note: All constant functions are linear functions.
Quadratic Polynomial Functions
Degree 2, Quadratic Functions
• Standard form: P(x) = ax^2+bx+c , where a, b and c are constant.
• Graph: A parabola is a curve with one extreme point called the vertex. A parabola is a mirror-symmetric curve in which every point is equidistant from a fixed point (the focus) and a fixed line (the directrix).
In the standard form, the constant 'a' controls the wideness of the parabola: as |a| decreases, the wideness of the parabola increases. This can be visualized by considering the boundary case when a = 0, where the parabola degenerates into a straight line. The constant c represents the y-intercept of the parabola. The vertex of the parabola is given by
(h,k) = (-b/2a, -D/4a)
where D is the discriminant and is equal to (b^2-4ac).
Note: Whether the parabola is facing upwards or downwards, depends on the nature of a.
• If a > 0, the parabola faces upward.
• If a < 0, the parabola faces downwards.
E.g. y = x^2+2x-3 (shown in black color)
y = -x^2-2x+3 (shown in blue color)
(See Figure 3)
Figure 3: y = x^2+2x-3 (black) and y = -x^2-2x+3 (blue)
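The vertex formula (h, k) = (-b/2a, -D/4a) can be verified for the curves above. A minimal sketch (illustrative only):

```python
def vertex(a, b, c):
    """Vertex (h, k) of y = ax^2 + bx + c, via h = -b/(2a), k = -D/(4a)."""
    d = b * b - 4 * a * c          # discriminant D = b^2 - 4ac
    return (-b / (2 * a), -d / (4 * a))

print(vertex(1, 2, -3))    # (-1.0, -4.0): minimum of y = x^2 + 2x - 3
print(vertex(-1, -2, 3))   # (-1.0, 4.0): maximum of y = -x^2 - 2x + 3
```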
Graphs of Higher Degree Polynomial Functions
• Standard form: P(x) = a_n x^n + a_(n-1) x^(n-1) + … + a_0, where a_0, a_1, …, a_n are all constants.
• Graph: Depends on the degree. If P(x) has degree n, then any straight line can intersect it at a maximum of n points. The constant term in the polynomial expression, i.e. a_0 here, represents the y-intercept.
• E.g. y = x^4-2x^2+x-2; any straight line can intersect it at a maximum of 4 points (see Fig. 4)
Polynomial Function Questions
Q.1: What is a Polynomial?
A polynomial is defined as an expression formed by a sum of powers of one or more variables multiplied by coefficients. In its standard form, it is represented as:
a_n x^n + a_(n-1) x^(n-1) + … + a_2 x^2 + a_1 x + a_0
where all the powers are non-negative integers,
and a_0, a_1, …, a_n ∈ R.
A polynomial is called a univariate or multivariate if the number of variables is one or more, respectively. So, the variables of a polynomial can have only positive powers.
Q.2: What is the Degree of Polynomial?
The degree of any polynomial expression is the highest power of the variable present in its expression. Constant (non-zero) polynomials, linear polynomials, quadratics, cubics and quartics are polynomials of degree 0, 1, 2, 3 and 4, respectively. The function f(x) = 0 is also a polynomial, but we say that its degree is 'undefined'.
Js bit operator - Moment For Technology
Relevant concepts
Concept: bitwise operations work on the binary representation of a number (its 32 bits). Because they are low-level operations, they are often the fastest, but they are not intuitive.
Bitwise operations only work on integers. If an operand is not an integer, it is automatically converted to an integer before the operation runs.
Inside JavaScript, values are stored as 64-bit floating-point numbers, but bitwise operations are performed as 32-bit signed integers and return values as 32-bit signed integers.
This conversion makes it possible to treat special NaN and Infinity values as zero when bitwise operations are applied to them. For non-numeric bitwise operations, the **Number()** method is used
to convert the value to a numeric value before applying the bitwise operation.
Binary: ECMAScript integers come in two types, signed integers (positive and negative numbers allowed) and unsigned integers (only positive numbers allowed). In ECMAScript, all integer literals are signed integers by default.
A signed integer uses the first 31 bits to represent the value of the integer and the 32nd bit to represent its sign: 0 for a positive number, 1 for a negative number. The value range is -2147483648 to 2147483647. The 32nd bit is called the sign bit, and its value determines how the other bits are interpreted. Positive numbers are stored in pure binary format: each of the 31 bits represents a power of 2, with the first bit (called bit 0) representing 2^0, the second (bit 1) representing 2^1, and so on; unused bits are padded with zeros and can be ignored.
Example analysis
For example, the representation of the number 10
The binary of 10 is '1010', and it uses only the first four bits, which are the significant bits. The remaining bits can be ignored.
console.log((10).toString(2)); // "1010"
Negative numbers
Negative numbers are also stored as binary bits, but in two's-complement form, which is computed as follows:
1. Determine the binary representation of the absolute value of the number
2. Flip the bits: replace each 0 with 1 and each 1 with 0 (the one's complement)
3. Add 1 to the inverted bits
For example, determine the two's complement of -10:
Step 1: the binary representation of 10 is 0000 0000 0000 0000 0000 0000 0000 1010
Step 2: swap 1 with 0: 1111 1111 1111 1111 1111 1111 1111 0101
Step 3: add 1: 1111 1111 1111 1111 1111 1111 1111 0110
So the binary representation of -10 is 1111 1111 1111 1111 1111 1111 1111 0110. One thing to note when dealing with signed integers is that bit 31 is not directly accessible. Also, after converting a negative integer to a binary string, ECMAScript does not display it as a two's complement; instead it prints the standard binary of the number's absolute value preceded by a negative sign, for convenience and to avoid exposing bit 31.
For example:
console.log((-10).toString(2)); // "-1010"
Bitwise operators provide seven operations: bitwise NOT, bitwise AND, bitwise OR, bitwise XOR, left shift, signed right shift, and unsigned right shift.
A concrete analysis
1, bitwise NOT (NOT)
The bitwise NOT operator, represented by a tilde (~), returns the one's complement of the value; in effect, ~x equals -x - 1.
let n = 9;
let i = 9.9;
console.log(~n);  // -10
console.log(~~n); // 9 (double NOT leaves an integer unchanged)
console.log(~~i); // 9 (double NOT truncates a decimal to an integer)
2, AND (AND)
The bitwise AND operator, represented by an ampersand (&), takes two operands and operates directly on the binary form of the numbers. It aligns the bits of each operand and applies the following rule to the two bits in each position: the result bit is 1 only if the corresponding bits of both values are 1; if either bit is 0, the result is 0.
let n = 10;
let m = 15;
console.log(n & m); //10
console.log(n.toString(2), m.toString(2)); //1010 1111
Analysis is as follows:
3, by position OR (OR)
The bitwise OR operator is represented by a vertical line (|) and takes two operands. Bitwise OR follows this rule: the result bit is 1 if either bit is 1, and 0 only if both bits are 0. OR-ing an integer with 0 returns the integer itself, and OR-ing a decimal with 0 truncates it to an integer.
let n = 10;
let m = 15;
console.log(n | m); // 15
console.log(n.toString(2), m.toString(2)); // 1010 1111
console.log(3.9 | 0); // 3
console.log(4.1 | 0); // 4
4. Bitwise XOR
It is represented by a caret (^) and also takes two operands. The rule: the result bit is 0 if the two bits are the same, and 1 if they differ. XOR-ing an integer with 0 returns the integer itself, and XOR-ing a decimal with 0 truncates it to an integer.
let n = 10;
let m = 15;
console.log(n ^ m); //5
console.log(n.toString(2), m.toString(2)); //1010 1111
^ has a special use: three consecutive XOR operations on two numbers a and b (a ^= b; b ^= a; a ^= b) swap their values. This means ^ lets you swap two variables without introducing a temporary variable.
let a = 10, b = 11; // 1010, 1011
a ^= b; // 1010 ^ 1011 = 0001
b ^= a; // 1011 ^ 0001 = 1010
a ^= b; // 0001 ^ 1010 = 1011
console.log(a, b); // 11 10
5, shift to the left
The left-shift operator, represented by two less-than signs (<<), moves all bits of a numeric value to the left by the specified number of positions.
For example, if you move the number 3 (binary 11) five places to the left, you get 96.
let n=3;
let m=n<<5;
console.log(n.toString(2)); //11
console.log(m); //96
6. There is a sign shift to the right
Represented by two greater-than signs (>>), this moves the value's bits to the right while preserving the sign bit. A signed right shift is the inverse of a left shift; for example, 96 shifted five bits to the right becomes 3.
7. Unsigned right shift
It is represented by three greater-than signs (>>>). This operator moves all 32 bits of the value to the right. For positive numbers, an unsigned right shift gives the same result as a signed right shift. For negative numbers, however, the empty high bits are filled with 0. Because the unsigned right shift treats the negative number's two's-complement bits as if they were a positive binary number, the result can be very large.
For example, -96 shifted right 5 bits unsigned:
let n = -96;
console.log(n >>> 5); // 134217725
Three, common operations
Use << to implement multiplication by powers of two; use >> to implement integer division by powers of two; swap values using ^; truncate decimals to integers.
console.log(4 << 1); // 8    4 * Math.pow(2, 1)
console.log(4 << 2); // 16   4 * Math.pow(2, 2)
console.log(4 << 3); // 32   4 * Math.pow(2, 3)
console.log(4 << 4); // 64   4 * Math.pow(2, 4)
console.log(4 << 5); // 128  4 * Math.pow(2, 5)
console.log(4 << 6); // 256  4 * Math.pow(2, 6)
console.log(5 >> 3);   // 0  parseInt(5 / Math.pow(2, 3))
console.log(14 >> 2);  // 3  parseInt(14 / Math.pow(2, 2))
console.log(225 >> 5); // 7  parseInt(225 / Math.pow(2, 5))
let a = 20, b = 4;
a ^= b, b ^= a, a ^= b;
console.log(a, b); // 4 20
console.log(~~9.9);    // 9
console.log(9.8 | 0);  // 9
console.log(9.7 ^ 0);  // 9
console.log(9.6 << 0); // 9
console.log(9.3 >> 0); // 9
Hacking glmmTMB
modifying family.R
We might be able to get away with specifying family= as a list, but it's better to implement it as a new function.
#' @rdname nbinom2
#' @export
zo_truncated_poisson <- function(link="log") {
    r <- list(family="zo_truncated_poisson",
              variance=function(lambda) {
                  stop("haven't implemented variance function")
                  ## should figure this out ...
                  ## (lambda+lambda^2)/(1-exp(-lambda)) - lambda^2/((1-exp(-lambda))^2)
              })
    ## ... (remainder of the constructor, including link handling, omitted in this excerpt)
}
As you can see, I haven't yet worked out the variance of a zero-one-truncated Poisson. This will only cause problems if/when a user wants to estimate Pearson residuals.
Ideally a $dev.resids() component should also be added, to return the deviance residuals (i.e., \(2 (\log L(y_i) - \log L_{\textrm{sat}}(y_i))\), where \(L_{\textrm{sat}}\) is the log-likelihood of \
(y_i\) under the saturated model; see the $dev.resids components of families built into base R for examples.
For some families, the variance and deviance-residuals function require extra information such as a dispersion parameter. For the nbinom1 and nbinom2 families, glmmTMB does some additional stuff to
store the value of the dispersion parameter in the environment of the variance/deviance residual functions (which share an environment), and to retrieve the dispersion parameter from the environment
(search for ".Theta" in the R code for the package).
You should also document your new family, probably in the ?glmmTMB::family_glmmTMB page. This material is located in R/family.R, above the nbinom2 family function.
modifying glmmTMB.R
There may not be any other R code that needs to be updated, depending on the details of the family you are adding. Again, it's best to try to work by analogy with the closest family to the one you're
adding. In this case, the only occurrence of truncated_poisson in glmmTMB.R is in the definition of which families have no dispersion parameter:
.noDispersionFamilies <- c("binomial", "poisson", "truncated_poisson",
updating enum.R
The R file that keeps the C++ and R code in sync with respect to which families are available and which numeric code corresponds to which family is enum.R. Do not edit this file by hand: instead, run
make enum-update
Social Choice and Beyond
Arrow's R Notation
In the arcane world of social choice, a man by the name of Kenneth Arrow looms large. In 1951 he published a book, "Social Choice and Individual Values," in which he supposedly proved that social
choice is impossible. But what is social choice? Let us say we have a society composed of N individuals numbered 1,2,3, ... . Those individuals have to order a set of M alternatives with their most
preferred alternative being their first choice, and so on. Let's indicate the alternatives as a, b, c, ... . Then a social welfare function accepts the individual orderings as inputs and produces as output the social ordering, which is an ordering of the alternatives that applies to the whole society.
If individual 1 prefers a to b, we write aP1b. If society prefers a to b, we write aPb. So far so good. But we also want to provide for the case in which an individual is indifferent between a and b.
We write this aI1b and aIb, respectively. Arrow's analysis then combines these two relationships into a relationship he denotes as R which means "prefers or is indifferent to" so aR1b means
individual 1 prefers a to b or is indifferent between a and b. Arrow's rationale for this is the following: "Instead of working with two relations, it will be slightly more convenient to use a single
relation, 'preferred or indifferent.'" (p. 12) (emphasis added)
Arrow then goes on to postulate two axioms. Axiom 1 states that either xRy or yRx and he notes that this does not exclude the possibility that both xRy AND yRx. Axiom 2 has to do with transitivity
which will not concern us here. Again Arrow states (p. 13): "Axioms 1 and 2 do not exclude the possibility that for some distinct x and y, both xRy and yRx. A strong ordering on the other hand, [one
with only preferences and without indifferences] is a ranking in which no ties are possible." This is blatant nonsense. One could have half the population with xPy and half with yPx [strong
orderings] and that certainly would represent a tie so a tie is possible. What Arrow is implying without coming out and saying it directly is that in his world a tie between two alternatives is to be
represented as a social indifference. This is completely arbitrary and limits his entire analysis.
One must assume that in Arrow's world each individual will submit his input in terms of R. That is individual 1 would submit aR1b, aR1c etc. until all pairwise comparisons have been made. For now we
will go along with Arrow's demand that only pairwise comparisons need to be submitted. It can be assumed that individuals are not permitted to submit a comparison using the indifference relation
since then what would be the purpose of introducing R to make the analysis "slightly more convenient." The whole idea of "slightly more convenient" is to reduce the number of relations from 2 (P and
I) to 1 (R). However, Arrow proposes (without saying so) to use the I relation in the social choice to cover the case of a tie. Therefore, the social choice could be aRb, bRa or aIb.
Now the idea of the social welfare function (or of any function for that matter) is to connect each element of the domain (consisting of all possible combinations of individual choices) to an element
of the range (consisting of all possible social choices). There are a great number of possible functions. Each function will hook up elements of the domain with elements of the range differently. The
important thing is that each possible element of the domain is hooked up to one and only one element of the range. Arrow implies that any element of the domain that represents a tie (such as half the
population having aRb and half having bRa) should be hooked up with the range element aIb. Respectfully, I disagree with this approach for the following reason: the half of the population that has
aRb could actually prefer a to b (no one is indifferent), and the half of the population that has bRa could actually prefer b to a. That represents a tie to be sure, but society is hardly indifferent
between the two alternatives. Arrow has confused a tie with an indifference! By so doing he has guaranteed that his analysis will yield the result that no social choice is possible.
Secondly, I would like to point out that individual information is lost when an individual submits his input as aR1b or "I prefer a to b or I'm indifferent between a and b." The system does not know
which, and this introduces ambiguity at the outset. Not only that, but say an individual is indifferent between a and b. He has two ways to express it! He can submit either aR1b or bR1a. The
resulting analysis becomes meaningless as the system knows not how many of the individual aRb's represent indifferences and how many of them represent preferences. Ditto for the individual bRa's!
There can be no meaningful social welfare function given these kinds of inputs.
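The information loss described above can be made concrete with a short sketch. This is purely illustrative (it is not from Arrow's book), and the encoding of ballots as ordered pairs is my own assumption:

```python
# Illustration of the information loss when voters submit only R
# ("prefers or is indifferent to") comparisons over a pair (a, b).
# The encoding below is hypothetical, chosen only to make the
# ambiguity concrete.

def submit_R(true_state):
    """Return the set of possible R-submissions for a voter's true
    state over the pair (a, b)."""
    if true_state == "a>b":      # strictly prefers a to b
        return {("a", "b")}      # must submit aRb
    if true_state == "b>a":      # strictly prefers b to a
        return {("b", "a")}      # must submit bRa
    if true_state == "a~b":      # indifferent between a and b
        return {("a", "b"), ("b", "a")}   # may submit either!
    raise ValueError(true_state)

# A strict preference and an indifference can produce the same input:
assert ("a", "b") in submit_R("a>b")
assert ("a", "b") in submit_R("a~b")
# So, on seeing aRb, the system cannot tell which state produced it.
```

The overlap is exactly the ambiguity the post complains about: the submission aRb is consistent with two different underlying states, so a tally of R-submissions cannot recover how many voters were actually indifferent.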
Therefore, I suggest that Arrow's approach is not acceptable and that his conclusion that social choice is impossible is invalid. A more rigorous approach is necessary, involving the possibility of
ties between orderings as elements of the range. One possibility for dealing with these ties is to choose randomly among them, which I think my friend Ben, at Oxford, is considering as a doctoral thesis topic.
For more on this subject, please see my blog
Will Blog for Food.
Posted by jclawrence at 1:44 PM PDT
Updated: Tuesday, April 18, 2006 1:50 PM PDT
Locate, Compare & Order
6th Grade
Texas Essential Knowledge and Skills (TEKS): 6.2.C
locate, compare, and order integers and rational numbers using a number line;
Texas Essential Knowledge and Skills (TEKS): 6.2.D
order a set of rational numbers arising from mathematical and real-world contexts; and
Florida - Benchmarks for Excellent Student Thinking: MA.6.NSO.1.1
Extend previous understanding of numbers to define rational numbers. Plot, order and compare rational numbers.
ZPEnergy.com - New experiment corroborates photon model of Quantum Ring Theory
New experiment corroborates photon model of Quantum Ring Theory
Posted on Tuesday, June 07, 2011 @ 21:13:53 UTC by vlad
WGUGLINSKI writes: The experiment: Observing the Average Trajectories of Single Photons in a Two-Slit Interferometer
http://www.sciencemag.org/content/332/6034/1170.abstract
The photon model proposed in Quantum Ring Theory is composed of two corpuscles - a particle and its antiparticle -
which move with helical trajectory.
Quantum Ring Theory proposes that all the elementary particles move with helical trajectory, which is responsible
for the wave feature of matter.
Throughout the 20th century it was considered that matter has a wave-particle duality, supposedly confirmed in some
experiments with the electron, such as the double-slit experiment.
When the electron crosses a single slit, it behaves as a particle.
When the electron crosses two slits, it behaves as a wave.
According to Quantum Ring Theory, in the double-slit experiment the helical trajectory is responsible for the
electron's wave behavior: when the electron crosses the two slits, it has interference with its own helical trajectory.
Now the physicist Aephraim Steinberg, from the University of Toronto, Canada, performed the double-slit experiment with
photons, and the results show that Quantum Mechanics is wrong, while Quantum Ring Theory is correct, because:
1- According to Quantum Mechanics, a quantum particle can behave either as a particle or as a wave, but it cannot
behave as a wave and as a particle at the same time.
2- In contrast, since Quantum Ring Theory considers that the wave-particle duality is a consequence of the helical
trajectory, the particle can have interference with its own helical trajectory when it crosses a slit.
So, according to QRT, the quantum particle can behave as a wave and as a particle at the same time.
In the Steinberg experiment, a photon crossed a single slit, and it had interference with itself (a wave feature),
while from Quantum Mechanics we would expect a particle feature only, since the photon crossed only one slit.
So, the experiment corroborates the photon model of Quantum Ring Theory, while it contradicts a fundamental
principle of Quantum Mechanics, according to which a quantum particle cannot behave at the same time as a wave and
as a particle.
The collapse of Bohr's Principle of Complementarity
by vlad on Friday, June 17, 2011 @ 21:48:44 UTC
1) There is a principle in Quantum Mechanics:
- a quantum particle can be either a particle or a wave, but it cannot be a particle and a wave at the same time.
This principle, known as Complementarity, was proposed by Bohr. It was considered untouchable throughout the 20th century.
Complementarity is a consequence of de Broglie's interpretation of the wave-particle duality, because according to his interpretation the duality is a property of matter.
Well, it makes no sense for matter to be a particle and a wave at the same time. That's why, according to de Broglie's interpretation (and Bohr's complementarity), a quantum particle cannot be a wave and a
particle at the same time.
2) In 2006 Guglinski published his book "Quantum Ring Theory-Foundations for Cold Fusion", in which he shows that de Broglie's interpretation of duality violates a fundamental principle of Physics: a
law must be valid in any frame of reference.
Therefore, according to Guglinski, de Broglie's interpretation cannot be correct. In other words: the duality cannot be a property of matter.
3) Now in June 2011 Aephraim Steinberg published a new experiment showing that the untouchable Bohr complementarity is wrong:
Observing the Average Trajectories of Single Photons in a Two-Slit Interferometer
http://www.sciencemag.org/content/332/6034/1170.abstract
The Steinberg experiment shows that a quantum particle can be, at the same time, a wave and a particle.
Since complementarity was based on the interpretation that duality is a property of matter, the Steinberg experiment obviously implies that de Broglie's interpretation is wrong.
In other words: Steinberg's experiment implies that duality cannot be a property of matter.
So, the Steinberg experiment reinforces Guglinski's argument that de Broglie's interpretation is wrong.
4) David Bohm proposed a new interpretation of Quantum Mechanics. His theory suggests that Quantum Mechanics is just the tip of an enormous iceberg submerged under the water surface.
The rest of the iceberg, suggested by David Bohm, is unveiled by Guglinski's Quantum Ring Theory.
The 2011 Steinberg experiment is showing that David Bohm was right: Quantum Mechanics is only the tip of an enormous iceberg...
5) Now he needs only to wait for more new experiments. They will show if Quantum Ring Theory is actually the rest of the iceberg suggested by David Bohm.
WhoMadeWhat – Learn Something New Every Day and Stay Smart
How long is 21 days in a month?
21 days in months. 21 days is equal to 0.69 months.
Then, How many is 21 days?
This conversion of 21 days to weeks has been calculated by multiplying 21 days by 0.1428 and the result is 3 weeks.
How many months have 28 days? All 12 months have at least 28 days
February is the only month with exactly 28 days (except for leap years when February has 29 days).
Keeping this in consideration, how many minutes are in 21 days?
This conversion of 21 days to minutes has been calculated by multiplying 21 days by 1,440 and the result is 30,240 minutes.
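These conversions all come down to fixed multipliers; the sketch below is illustrative and assumes 7 days per week, 1,440 minutes per day, and the Gregorian average month length of about 30.44 days:

```python
DAYS_PER_WEEK = 7
MINUTES_PER_DAY = 24 * 60            # 1,440
AVG_DAYS_PER_MONTH = 365.2425 / 12   # ~30.44 (Gregorian average)

days = 21
weeks = days / DAYS_PER_WEEK          # 3.0
minutes = days * MINUTES_PER_DAY      # 30,240
months = days / AVG_DAYS_PER_MONTH    # ~0.69

print(weeks, minutes, round(months, 2))   # 3.0 30240 0.69
```

Note that the page's factor of 0.1428 for days-to-weeks is simply 1/7 rounded to four decimal places.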
What months don’t have 30 days?
Rhyme to remember the number of days in each month:
Thirty days hath September, April, June, and November; all the rest have thirty-one.
How many months are in a year?
Answer: There are 12 months in a year. There are 4 months in a year that have 30 days.
What is broken before you use it?
An egg has to be broken before you can use it.
That’s right, the answer to this tricky teaser is the humble egg.
How much seconds are in 3 days?
In 3 d there are 259200 s, which is the same as saying that 3 days is 259200 seconds.
How much minutes are in 2 and a half hours?
So, two and a half hours = 150 minutes.
How many days are there in a month without weekends?
All months in the Gregorian calendar have 4 weeks, as every month on the calendar has at least 28 days. (7 days in a week divided into 28 days equals 4 weeks.) Some months have a few extra days, but
none of them has enough extra days to count as an extra week.
How can I remember the months?
Learning the Order of the 12 Months of the Year
1. A month is a collection of approximately 30 days. …
2. To remember the months of the year, use the rhyme ’30 Days Hath September’ or use the knuckle trick. …
3. 30 days hath September, April, June and November.
How many weeks does the year have?
A calendar year consists of 52 weeks, 365 days in total.
How do the months go?
The 4 months of April, June, September and November all have 30 days. The 7 months of January, March, May, July, August, October and December all have 31 days. February is the only month that has 28
days. It has 29 days on a leap year (every 4 years).
What is white when it’s dirty?
The answer to this interesting riddle, “What becomes white when it is dirty?”, is a blackboard.
What gets broken without being held?
Gotham Quotes. I can be broken without being held. Given and then taken away. Some people use me to deceive, but when delivered, I am the greatest gift of all.
What has many keys but can’t open a single lock?
What has many keys but can’t open a single lock? The answer is: Piano.
How Much Is billion seconds?
Answer and Explanation:
One billion seconds is equivalent to 31.70979198376 years. There are 60 seconds in 1 minute and 60 minutes in one hour.
How many seconds is 3 minutes?
Which is the same as saying that 3 minutes is 180 seconds.
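The seconds figures quoted in the answers above can be checked by multiplying out 60 seconds per minute, 60 minutes per hour, and 24 hours per day; a quick illustrative sketch (the billion-seconds figure assumes a 365-day year, as the quoted 31.7098 value does):

```python
SECONDS_PER_DAY = 60 * 60 * 24          # 86,400
assert 3 * SECONDS_PER_DAY == 259_200   # 3 days in seconds
assert 3 * 60 == 180                    # 3 minutes in seconds

# One billion seconds, using a 365-day year:
years = 1_000_000_000 / (SECONDS_PER_DAY * 365)
print(round(years, 2))   # 31.71
```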
What is 2.5 of an hour?
2.5 hours equals 150 minutes. 1 hour is 60 minutes, therefore there are 150 minutes in 2.5 hours.
How many hours is 7 days?
How Many Hours in 7 Days? – There are 168 hours in 7 days.
What is the average working days in a month?
As you can see, the number of working days per month ranges between 19 and 22. That’s a considerable variation. The fewest working days are found in February and November.
How many Saturdays are there in a month?
Each month has a minimum of four Saturdays. Every month can also have a fifth Saturday depending on the day of the week on which the month starts. February can only have a fifth Saturday on a leap
year when the first of the month starts on Saturday.
What is the saying to remember how many days in each month?
“Thirty Days Hath September”, or “Thirty Days Has September”, is a traditional verse mnemonic used to remember the number of days in the months of the Julian and Gregorian calendars.
How many days are in a month on average?
The average month is 365/12 = 30.42 days in a regular year and 366/12 = 30.50 days in a leap year. The Gregorian (western) solar calendar has 365.2425/12 = 30.44 days on the average, varying between
28 and 31 days. | {"url":"https://whomadewhat.org/how-long-is-21-days-in-a-month/","timestamp":"2024-11-05T06:33:11Z","content_type":"text/html","content_length":"50303","record_id":"<urn:uuid:8480ccca-d2b6-4de8-869a-4e45cf8db60b>","cc-path":"CC-MAIN-2024-46/segments/1730477027871.46/warc/CC-MAIN-20241105052136-20241105082136-00390.warc.gz"} |
a. Create a scatter plot for the data in each table. b. Use the shape of the scatter plot to determine if the data are best modeled by a linear function, an exponential function, a logarithmic function, or a quadratic function.
$$ \begin{array}{|c|r|} \hline \boldsymbol{x} & \boldsymbol{y} \\ \hline 0 & 5 \\ \hline 1 & 3 \\ \hline 2 & 1 \\ \hline 3 & -1 \\ \hline 4 & -3 \\ \hline \end{array} $$
Short Answer
Expert verified
The scatter plot of the given dataset forms a straight line with a negative slope. Therefore, the data is best modeled by a linear function.
Step by step solution
Create the scatter plot
Plot the given data points on a scatter plot. The values from the first column (x) will be on the horizontal axis and the values from the second column (y) will be on the vertical axis.
Analyze the scatter plot
Observe the shape of the plot. If the points on the graph form a straight line, then the data can be modeled by a linear function. If the points form a curve that rises or falls rapidly at first and
then more slowly, the data can be modeled by an exponential function. If the points form an upward or downward curve that becomes less steep as you move away from the y-axis, the data can be modeled
by a logarithmic function. If the points form a parabolic shape (opening up or down), then the data can be modeled by a quadratic function.
Determine the function type
Based on the shape of the scatter plot, make a determination about which type of function best models the data.
Key Concepts
These are the key concepts you need to understand to accurately answer the question.
Data Modeling
Data modeling is a process used to represent and understand complex sets of data. By developing a visual representation, such as a scatter plot, we can make interpretive or predictive analyses more effective.
• Scatter Plot: A scatter plot is a type of diagram that represents individual data points on a coordinate system. This helps in identifying trends or patterns in the data.
• Purpose: The goal is to determine what kind of mathematical function can best describe the dataset. This is crucial for making predictions or understanding relationships between variables.
After plotting the data, the scatter plot provides a visual cue that helps with identifying the pattern or trend. These patterns guide us in mathematical modeling by helping to select the function
that best fits the data.
Function Types
In the context of scatter plots, several function types could potentially model data. Each function type offers a distinct way of representing data relationships.
• Linear Functions: Ideal when data points form a straight line. This implies a consistent rate of change.
• Exponential Functions: Best suited for data with rapid growth or decay, often observed in curves that rise or fall sharply.
• Logarithmic Functions: Useful when data increases or decreases quickly at first and then levels off, illustrated in curves that flatten out.
• Quadratic Functions: Suitable for parabolic patterns, where the data forms a symmetrical curve, either concave up or down.
Determining which function type best fits the data is an important analytical step. It lays the foundation for deeper analysis and modeling.
Linear Functions
Linear functions are a fundamental component of mathematical data modeling. They make up the simple function type where data is described with a straight line.
• Equation Format: The general form is \(y = mx + b\), where \(m\) is the slope and \(b\) is the y-intercept.
• Characteristics: These functions show a constant rate of change, meaning as \(x\) increases by 1, \(y\) increases by the same amount each time.
• Applications: Often used in financial models, time series analysis, and anywhere a consistent relationship between variables is observed.
When examining a scatter plot, if the data seem to form a line, it’s reasonable to consider a linear function. This directly extends into predicting future values and understanding the linear relationship.
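For the table in this exercise (x from 0 to 4, y = 5, 3, 1, -1, -3), a least-squares fit makes the linear conclusion concrete. This sketch is illustrative and not part of the published solution:

```python
# Least-squares line through the exercise's data points.
xs = [0, 1, 2, 3, 4]
ys = [5, 3, 1, -1, -3]

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n

# slope m = S_xy / S_xx, intercept b = mean_y - m * mean_x
sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
sxx = sum((x - mean_x) ** 2 for x in xs)
m = sxy / sxx
b = mean_y - m * mean_x

print(m, b)   # -2.0 5.0  -> the data lie exactly on y = -2x + 5

# Every residual is zero, so a linear model fits perfectly:
assert all(y == m * x + b for x, y in zip(xs, ys))
```

Because the points fall exactly on y = -2x + 5, the constant rate of change (-2 per unit of x) confirms the straight-line shape seen in the scatter plot.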
Quadratic Functions
Quadratic functions represent data with a parabolic curve, offering insights into data sets that have symmetrical increase and decrease patterns.
• Equation Format: The general form is \(y = ax^2 + bx + c\), where \(a\), \(b\), and \(c\) determine the shape and position of the parabola.
• Characteristics: These functions create a "U" shaped curve called a parabola. The direction of opening (upward or downward) depends on the sign of \(a\).
• Applications: Used in physics for projectile motion, in economics for understanding profit maximization, and in engineering for structural designs.
When data points form a clear curve on a scatter plot, displaying heights and depths, it can indicate that a quadratic function is appropriate. Recognizing the parabolic nature is crucial for this analysis.
fear on "removing task & all files"
i have never removed a torrent or a downloaded file in BitComet
this is because i am afraid.
i am afraid because i do not know how to delete it.
however, recently i downloaded a file that i dont want anymore.
the file is not completed and i want to remove it, but i dont want to risk deleting the other files.
the only option i can see is right-clicking on the file, then there is one which says "remove task only" or "remove task & all files"
what i basically want to know is how can i remove that downloaded file/torrent and everything to do with it?
(ps: i just want to save some disk space. if i can remove the file but still be able to seed, then i'll gladly want that so i can help others. i just want to kno what happens if i choose 'remove task
only' and what happens if i choose the other option)
(and yes i know it sounds like stupid topic, but i'm just a starter and i'm paranoid)
Hey real it's not a biggie lol I had the same issue a while back.
Remove task only just clears that task from your bitcomet client and keeps the downloaded files wherever you have them.
Remove task & all files removes the task from your bitcomet client and the downloaded files as well. When it says all it doesn't mean the other tasks as well, just that particular one you have selected.
If you want to be safe and don't trust my word lol just do a remove task only and then you can manually go and delete the downloaded files.
*Note* when you do select remove task and all files it automatically deletes the downloaded files COMPLETELY, it does not put them in the recycle bin so be careful.
Hope that helps ;)
thank you very much.
i'll take your word for it
LoL, you can't seed when you have deleted the files. That's obvious. :blink:
Lesson 3
Compose Three-digit Numbers
Warm-up: Number Talk: Add Tens and Ones (10 minutes)
The purpose of this Number Talk is to elicit strategies and understandings students have for adding by place and composing a ten mentally. These understandings help students develop fluency and will
be helpful later in this lesson when students describe base-ten representations by place and use the fewest number of base-ten blocks to represent a number.
• Display one expression.
• “Give me a signal when you have an answer and can explain how you got it.”
• 1 minute: quiet think time
• Record answers and strategies.
• Keep expressions and work displayed.
• Repeat with each expression.
Student Facing
Find the value of each expression mentally.
• \(42+42\)
• \(21+63\)
• \(50+34\)
• \(48+36\)
Activity Synthesis
• “What did you notice about the sums?” (The expressions were all different, but all had a value of 84.)
• “How could you explain why the third and fourth expressions have the same value?” (You could take 2 ones from 36 and add them to 48 to make \(50+34\).)
Activity 1: Sort Blocks by Value (20 minutes)
In this activity, students sort base-ten blocks and record the total number of blocks they have by the unit each block represents. Students work together to look for ways to compose larger units from
smaller units in order to represent the same value with the fewest number of blocks (MP7). They represent composing a hundred by exchanging 10 ten blocks for a hundred block and represent composing a
ten by exchanging 10 one blocks for a ten block. In the synthesis, students compare different ways that groups represent the total value which may include representing the value as a three-digit
This activity uses MLR7 Compare and Connect. Advances: representing, conversing
Required Preparation
• Each group of 3–4 students will need a container with 2 hundreds, 28 tens, and 15 ones.
• Each group of 3–4 students will need access to additional base-ten blocks (hundred blocks and ten blocks).
• Groups of 3–4
• Give each group a container of blocks, access to base-ten blocks, and supplies for making a group display.
• “Your group has a container of base-ten blocks.”
• “Sort the blocks by the unit they represent and record the number of each type of block on your paper.”
• “Work together to figure out how to represent the same total value using the fewest number of blocks possible.”
• 6 minutes: small-group work time
MLR7 Compare and Connect
• “Create a visual display to show the total value of the blocks. Include details such as diagrams, labels, and numbers to help others understand your thinking.”
• 2–5 minutes: group work time
• “As you look at other groups’ representations, look for different ways groups show the value. Which ways are the same as your group’s representation? Which ways are different? How do you know
they represent the same value?”
• 5 minutes: gallery walk
• “Discuss any revisions you would like to make to your representations with your group.”
• 1–2 minutes: small-group work time
• Monitor for students who:
□ create a base-ten diagram with the fewest amount of blocks represented
□ write 4 hundreds, 9 tens, 5 ones
□ write 495
□ use an expression such as \(400 + 95\) or \(400 + 90 + 5\)
Student Facing
1. Sort the blocks.
□ We have _________ hundreds.
□ We have _________ tens.
□ We have _________ ones.
2. Represent the same value with the fewest number of blocks possible.
□ We have _________ hundreds.
□ We have _________ tens.
□ We have _________ ones.
3. Represent the value of your blocks using base-ten diagrams, words, or numbers.
Advancing Student Thinking
If students represent their number with 10 or more of any unit, consider asking:
• “How do you know that you have used the fewest number of blocks possible?”
• “How can you combine the tens or ones so you don’t use as many of the base-ten blocks?”
Activity Synthesis
• Display previously identified students’ representations.
• “What is the same and what is different between the ways groups represented the total value of the blocks?” (They each show 4 hundreds, 9 tens, and 5 ones. Some just use diagrams, some use only
digits, some use diagrams, numbers, and expressions.)
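The trading students do with blocks (10 ones for a ten, 10 tens for a hundred) can be sketched as a small function. This is an illustration for reference, not part of the published lesson, and the function name is my own:

```python
def fewest_blocks(hundreds, tens, ones):
    """Regroup a base-ten representation into the fewest blocks:
    after trading up, at most 9 tens and 9 ones remain."""
    total = 100 * hundreds + 10 * tens + ones
    return total // 100, (total % 100) // 10, total % 10

# The container from Activity 1: 2 hundreds, 28 tens, 15 ones.
print(fewest_blocks(2, 28, 15))   # (4, 9, 5) -> the number 495
```

The result (4, 9, 5) matches the representation groups reached with blocks: 4 hundreds, 9 tens, and 5 ones, or 495.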
Activity 2: The Same But Different (15 minutes)
In this activity, students build on their work with base-ten blocks in previous activities to use base-ten diagrams to represent a value using the fewest number of each unit possible. They first
interpret images of students’ representations of a number using base-ten blocks. When representing the same value, students may choose to draw the original representation and show composing units by
circling groups of 10 tens or 10 ones. Others may choose other methods, such as circling or labeling the images to show ways to compose a larger unit or by using what they have learned about patterns
in units from previous lessons. In the synthesis, students connect the 3 digits in a three-digit numeral to their representations (MP7).
Engagement: Develop Effort and Persistence. Chunk this task into more manageable parts. Check in with students to provide feedback and encouragement after each chunk. Consider asking specifically
about the ones first to decide whether a group of ten can be made. Then move into the tens and make connections to the work done with the ones.
Supports accessibility for: Organization
• Groups of 2
• Give students access to base-ten blocks.
• “Mai and Diego each used base-ten blocks to represent numbers.”
• “Record the number of hundreds, tens, and ones each student used.”
• “Find a way to represent the same value with the fewest number of each unit possible and represent it using a base-ten diagram.”
• “Use blocks if it helps.”
• “Together with your partner, figure out the total value of the blocks.”
• 8 minutes: partner work time
• Monitor for students who write the total as a three-digit number.
Student Facing
Mai’s Blocks
1. Mai has ______ hundreds _____ tens _____ ones.
2. Draw a base-ten diagram to represent the same total value with the fewest number of each unit.
3. What is the value of Mai’s blocks?
Diego’s Blocks
4. Diego has ______ hundreds _____ tens _____ ones.
5. Draw a base-ten diagram to represent the same total value with the fewest number of each unit.
6. What is the value of Diego’s blocks?
Activity Synthesis
• “How many did Diego have in all? Explain how you knew.”
• Select previously identified students to share.
• Write 283 while saying “2 hundreds, 8 tens, and 3 ones.”
• “This is a three-digit number. The digits represent amounts of hundreds, tens, and ones.”
• As needed, demonstrate reading the number left to right and gesture to emphasize the value of each digit.
Lesson Synthesis
“Today you represented numbers that were greater than 100 using base-ten blocks, base-ten diagrams, numbers, and words.”
“You also saw how you can write a three-digit number to represent the amount of hundreds, tens, and ones.”
Display 324.
“How would you represent this number with base-ten blocks or a base-ten diagram? Explain how you know.” (I’d draw 3 hundreds, 2 tens, and 4 ones. The first digit shows how many hundreds, the second
digit shows how many tens, and the last digit shows how many ones.)
Cool-down: How Many Blocks? (5 minutes)
How to Use the Rule of 72 Formula - Finance Train
In finance, rule of 72 is an important approximation rule that is used to quickly estimate the number of years it will take for an investment double in value at a given interest rate.
According to this rule, the interest rate (in percent) multiplied by the number of years it will take for the investment to double is approximately equal to 72. The rule of 72 assumes compound (exponential) growth.
Let’s see how it can be used in different scenarios.
Estimate the time it takes to double the investment
Let’s say you have $10,000 to invest. If you invest this money in a financial asset that provides 10% interest, then how much time will it take to double this investment. If the interest rate is
represented by r and time by t, then:
r*t = 72
In our case r is 10%, so t = 72/10 = 7.2 years
According to the rule of 72, our investment of $10,000 will take 7.2 years to become $20,000 if invested at an interest rate of 10%.
To test this formula, let’s use the exponential growth formula to see how much time it will actually take to reach $20,000.
10,000*(1+0.10)^t = 20,000
If we use t = 7.2, the investment will grow to $19,862, which is quite a close estimate. The actual time it will take for our investment to become $20,000 is 7.27 years. Looking at these numbers, we can
say that the rule of 72 provides quite a satisfactory answer for a quick calculation.
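The comparison above is easy to reproduce. The sketch below is illustrative (the function names are my own); it contrasts the rule-of-72 estimate with the exact doubling time under annual compounding:

```python
import math

def doubling_time_rule72(rate_percent):
    """Rule-of-72 estimate of the years needed to double."""
    return 72 / rate_percent

def doubling_time_exact(rate_percent):
    """Exact doubling time under annual compounding."""
    return math.log(2) / math.log(1 + rate_percent / 100)

r = 10
est = doubling_time_rule72(r)     # 7.2
exact = doubling_time_exact(r)    # ~7.27
grown = 10_000 * (1 + r / 100) ** est
print(round(est, 2), round(exact, 2), round(grown))   # 7.2 7.27 19862
```

At 10%, the rule's 7.2 years leaves the $10,000 investment at about $19,862, very close to the $20,000 target reached at 7.27 years.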
Estimating interest rate for investment
An alternative use of this formula is when you have a certain timeframe in which you want your money to double and you want to find out the interest rate at which you should invest to reach your
goal. Say you want your money to double in 5 years. In this case the rate at which your money should grow will be:
r = 72/5 = 14.4%
Estimate the time to decay
We can also use the rule of 72 to estimate how long it will take for money to lose half its value, for example under inflation. Let’s say the inflation rate is 7%. How much time will it take for $1,000 to fall to $500 in value?
7*t = 72, so t = 72/7 ≈ 10.3 years.
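A quick Python check of this halving estimate against the exact value (a sketch; the variable names are mine):

```python
import math

inflation_pct = 7
# Rule-of-72 estimate of the halving time in years.
t_rule = 72 / inflation_pct
# Exact halving time: purchasing power halves when (1 + i)^t = 2.
t_exact = math.log(2) / math.log(1 + inflation_pct / 100)

print(round(t_rule, 2), round(t_exact, 2))  # 10.29 10.24
```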
Estimating doubling time for higher interest rates
If you want to determine the doubling time when the interest rate is high, then the number 72 needs to be adjusted by adding 1 for every 3 percentage points above 8%.
So, the formula will be:
t*r = 72+(r-8)/3
If the interest rate were 26%, the time to double would be:
t = (72 + (26 - 8)/3)/26 = 78/26 ≈ 3.0 years
This adjustment gives a better estimate of the doubling time; the exact value, ln 2 / ln 1.26, is also approximately 3.0 years.
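The adjusted rule is easy to encode and check against the exact doubling time (a sketch; the function name is mine):

```python
import math

def doubling_time_adjusted(rate_pct):
    # Rule of 72 for high rates: add 1 to 72 for every
    # 3 percentage points the rate is above 8%.
    adjusted = 72 + (rate_pct - 8) / 3
    return adjusted / rate_pct

print(doubling_time_adjusted(26))              # 3.0
print(round(math.log(2) / math.log(1.26), 2))  # 3.0
```

At 26% the adjusted rule lands essentially on the exact answer, whereas the plain 72/26 ≈ 2.77 would undershoot.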
Cryptography - Sebastien Varrette, PhD.
Quoting Wikipedia:
Cryptography or cryptology is the practice and study of techniques for secure communication in the presence of third parties called adversaries. More generally, cryptography is about constructing
and analyzing protocols that prevent third parties or the public from reading private messages; various aspects in information security such as data confidentiality, data integrity,
authentication, and non-repudiation are central to modern cryptography. Modern cryptography exists at the intersection of the disciplines of mathematics, computer science, and electrical
engineering. Applications of cryptography include ATM cards, computer passwords, and electronic commerce.
In 2006, I was given the opportunity (through an EGIDE-funded mission) to give a series of lectures (in French) in the context of the “Cours Sécurité DEA Informatique” of the Univ. of Yaounde I (Cameroon). I also participated in several practical sessions on this topic within the Master in Security, Cryptology and Coding of Information Systems, a joint program between Université Joseph Fourier (UJF) (now Université Grenoble-Alpes) and Grenoble INP, between 2004 and 2007.
This page offers the material I prepared for this lecture.
all items of this page are now relatively old.
Any comment to make it up-to-date is welcome -- see
Note: I have also some support slides on elliptic curves cryptography but haven’t collected the authorization to display them online. If you need them, mail me so I can put you in contact with the
primary authors of these slides.
Support Books
Several reference books can be used as support for this lecture: [1] [2] [3].
Otherwise, I strongly encourage you to check my own books related to this topic:
Exercises / Projects
Important: I feel obliged to insist: the material below (including appendix layout) is quite old now and would deserve a big refresh. It is proposed for archiving reasons, in the hope it might still be useful to students and lecturers.
You will find way more up-to-date exercises in my book.
Through the UJF/INPG Master SCCI, I prepared, in collaboration with Prof. Gérard Vinel, several MAPLE exercises, proposed below:
Past Exams
Below are (old) exams I prepared for lectures related to cryptography (and network security).
1. D. R. Stinson, Cryptography: Theory and Practice, 2nd ed. Chapman & Hall/CRC Press, 2002.
2. B. Schneier, "Cryptographie Appliquée", 2nd ed. NY: Vuibert, Wiley and International Thomson Publishing, 1997.
3. A. J. Menezes, S. A. Vanstone, and P. C. V. Oorschot, Handbook of Applied Cryptography, 1st ed. CRC Press, Inc., 1996.
Scheme duplicate variable names in 8.15
I have the following example which works fine in 8.14 and below:
Inductive even : nat -> Prop :=
| evenO : even O
| evenS : forall n, odd n -> even (S n)
with odd : nat -> Prop :=
| oddS : forall n, even n -> odd (S n).
Scheme Even_induction := Minimality for even Sort Prop
with Odd_induction := Minimality for odd Sort Prop.
Theorem even_plus_four : forall n:nat, even n -> even (4+n).
intros n H.
elim H using Even_induction with (P0 := fun n => odd (4+n));
simpl;repeat constructor;assumption.
However, in 8.15+rc1, the elim H using ... fails with the following strange message:
Error: No such bound variable P0 (possible names are: P, P and n).
If I do Check Even_induction, the P0 variable is shown just fine:
Even_induction
     : forall P P0 : nat -> Prop,
P 0 ->
(forall n : nat, odd n -> P0 n -> P (S n)) ->
(forall n : nat, even n -> P n -> P0 (S n)) ->
forall n : nat, even n -> P n
I can report this as an issue if you want, but I'd like to confirm it's worth doing...
I think I know: Check is lying, About Even_induction won't.
About Even_induction.
Even_induction :
forall P P0 : nat -> Prop,
P 0 ->
(forall n : nat, odd n -> P0 n -> P (S n)) ->
(forall n : nat, even n -> P n -> P0 (S n)) -> forall n : nat, even n -> P n
Even_induction is not universe polymorphic
Arguments Even_induction (P P)%function_scope f (f f)%function_scope
n%nat_scope e
Even_induction is transparent
Expands to: Constant min.Even_induction
Is this also with 8.15?
it's 8.15+rc1, v8.15, master
Oh, right, can you paste the full output?
When I ran into this, the problem was that all arguments are actually named P.
and induction (and elim apparently) stopped renaming them for you.
As a workaround, I ended up applying the induction principle explicitly to its argument.
The Arguments line is correct, while the type is also correct but unhelpful, since it must rename some P's to print the type.
"thanks for all the Ps"
(possible names are: P, P and n).
issue: https://github.com/coq/coq/issues/15420. @Gaëtan Gilbert already saw this and removed it from the 8.15.0 milestone.
how annoying do you think is the workaround?
thanks. I think this is a pretty serious breakage... workaround means one has to completely rewrite the above proof
(not that the UX is very good, of course)
hm, how bad would induction H using (Even_induction _ (fun n => odd (4 + n))) or apply (Even_induction _ (fun n => odd (4 + n))) be?
the first one doesn't work at all, neither does:
elim H using (@Even_induction _ (fun n => odd (4+n))).
if we have to rewrite all this stuff using apply, why do we even have induction tactics?
yeah, I see that induction is broken here.
I just never found this pattern useful, because the Combined Scheme is the more useful one for me, and those aren’t supported by induction…
the following seems to be the only thing that works, and means we have to completely drop induction:
Theorem even_plus_four : forall n:nat, even n -> even (4+n).
apply (Even_induction _ (fun n => odd (4 + n)));
simpl;repeat constructor;assumption.
yep, and you’re even lucky — I expected you’d need an explicit pattern
FWIW you might want to add some code like this in CI since mutual inductives get very little testing, this isn’t the first mutual-only regression that I’m the first to notice. Not sure why CompCert
doesn’t catch them, probably it’s too heavy for CI?
this code is from https://github.com/coq-community/coq-art - I guess we could add that project into Coq's CI if devs are OK with this...
well, this also breaks my trust in Check quite a bit, but About is so incredibly verbose
just to be sure, Check isn’t at fault, Scheme is IMHO
if Check’s output started with forall P P : nat -> Prop, how would the rest look like?
but as you said, Check does surface-level conversion of names, so from now on I won't trust it to give me actual names (I guess I have to use SerAPI or something to get those)
it could annotate the names with numbers in a syntactically disallowed way or whatever, instead of just inventing stuff out of thin air
e.g., forall P\0 P\1 : nat -> Prop
yes, but instead of inventing a notation, it’d be simpler if top-level terms were guaranteed to avoid shadowing
sure, that's also a solution, but are we going to get that invariant anytime soon?
ah, you mean that’s not just a bug in Scheme… I was assuming this shadowing at least cannot be produced in normal Gallina?
I mean, ideally you won't even allow plugins to generate terms which have shadowing?
but I will probably not be able to sell that to type theorists
I _thought_ it was just a matter of fixing Scheme
I think this only matters for the names of arguments of top-level symbols, does apply …. with support anything else?
yeah, I guess it's a top-level thing. Not sure about the support.
BTW there’s also Set Apply With Renaming as a compat option to get the old behavior, if it helps.
~~sigh, it seemingly isn't mentioned in the changelog for 8.15~~
OK, looks like it was mentioned in changelog after all. But the connection to induction/elim, etc., is far from obvious (I didn't know they were apply under the hood)
Patch welcome. And https://github.com/coq/coq/pull/12756 seems more gung-ho about repeated argument, so your proposed output sounds more compelling
well, I respectfully disagree strongly with Maxime's claim:
I think it's ok to tolerate duplicated names, like OCaml does for labels.
not least because it makes Check unreliable...
Paolo Giarrusso said:
yes, but instead of inventing a notation, it’d be simpler if top-level terms were guaranteed to avoid shadowing
That is impossible. It would mean forbidding any kind of reduction. For example, Coq would fail on the following script.
Definition f := (fun y (x : nat) => y) (fun x : nat => x).
Definition g := Eval compute in f. (* should this really fail? *)
Why would that have to fail, rather than freshen names when defining g?
My intuition is that terms are deBruijn and top-level terms add a (mutable) list of unique names, usable in apply....with.
But, then we are back to Coq having to invent names on the fly, which has been a source of pain for users.
As for the actual representation, bound variables are de Bruijn, while binders (whether toplevel or not) are potentially named.
using generated names is not a good idea, even the first P is generated and can produce strange things
eg pre 8.15
Section S.
Variable P : nat.
Inductive foo : P = P -> Prop :=
with bar := X (_:foo eq_refl).
Scheme find := Minimality for foo Sort Prop
with bind := Minimality for bar Sort Prop.
Lemma bli : bar -> False.
induction 1 using bind with (P0:=fun _ : P=P => False).
so is this a WONTFIX for the Scheme behavior? In this case, at least make sure Check doesn't rename duplicates...
sigh, the following code is hitting the bug in https://github.com/coq/coq/issues/15420 hard, does anyone know an easy way to convert elim t using ntree_ind2 ... to plain apply?
Require Import ZArith.
Inductive ntree (A:Type) : Type :=
nnode : A -> nforest A -> ntree A
with nforest (A:Type) : Type :=
nnil : nforest A | ncons : ntree A -> nforest A -> nforest A.
Fixpoint count (A:Type)(t:ntree A){struct t} : Z :=
  match t with
  | nnode _ a l => 1 + count_list A l
  end
with count_list (A:Type)(l:nforest A){struct l} : Z :=
  match l with
  | nnil _ => 0
  | ncons _ t tl => count A t + count_list A tl
  end.
Scheme ntree_ind2 :=
Induction for ntree Sort Prop
with nforest_ind2 :=
Induction for nforest Sort Prop.
Inductive occurs (A:Type)(a:A) : ntree A -> Prop :=
| occurs_root : forall l, occurs A a (nnode A a l)
| occurs_branches :
forall b l, occurs_forest A a l -> occurs A a (nnode A b l)
with occurs_forest (A:Type)(a:A) : nforest A -> Prop :=
occurs_head :
forall t tl, occurs A a t -> occurs_forest A a (ncons A t tl)
| occurs_tail :
forall t tl,
occurs_forest A a tl -> occurs_forest A a (ncons A t tl).
Fixpoint n_sum_values (t:ntree Z) : Z :=
  match t with
  | nnode _ z l => z + n_sum_values_l l
  end
with n_sum_values_l (l:nforest Z) : Z :=
  match l with
  | nnil _ => 0
  | ncons _ t tl => n_sum_values t + n_sum_values_l tl
  end.
#[local] Hint Resolve occurs_branches occurs_root Zplus_le_compat : core.
Open Scope Z_scope.
Theorem greater_values_sum :
forall t:ntree Z,
(forall x:Z, occurs Z x t -> 1 <= x)-> count Z t <= n_sum_values t.
intros t; elim t using ntree_ind2 with
(P0 := fun l:nforest Z =>
(forall x:Z, occurs_forest Z x l -> 1 <= x)->
count_list Z l <= n_sum_values_l l).
- intros z l Hl Hocc. lazy beta iota delta -[Zplus Z.le];
fold count_list n_sum_values_l; auto with *.
- auto with zarith.
- intros t1 Hrec1 tl Hrec2 Hocc; lazy beta iota delta -[Zplus Z.le];
fold count count_list n_sum_values n_sum_values_l.
apply Zplus_le_compat.
apply Hrec1; intros; apply Hocc; apply occurs_head; auto.
apply Hrec2; intros; apply Hocc; apply occurs_tail; auto.
it works fine in 8.14 and below... and to be clear, this is from a coq-community project, so not just a theoretical exercise.
simplest fix is probably to use Arguments : rename on your scheme
thanks, that seems to work reasonably well
Karl Palmskog said:
the first one doesn't work at all, neither does:
elim H using (@Even_induction _ (fun n => odd (4+n))).
elim H using (fun P => Even_induction P (fun n => odd (4+n))). should work if you don't want to use Arguments
elim t using (fun P => ntree_ind2 _ P
(fun l:nforest Z =>
(forall x:Z, occurs_forest Z x l -> 1 <= x)->
count_list Z l <= n_sum_values_l l)).
for the latest code block
I tried that one (elim t using (fun ...)), didn't seem to give the same goals as before, but maybe it actually does.
FTR, on gitlab Matthieu agreed Scheme should be fixed...
I still think the argument renaming approach is the way to go though, since it's both backward and forward compatible, and doesn't require changing anything else
AFAICT, @Gaëtan Gilbert 's objection would suggest shadowing the P from the section. I don't get why, but that seems acceptable...
Nobody should be writing that code anyway (EDIT: at least arguably)
when is the nice fix by Hugo going to be rolled out to the masses? It won't be until 8.16.0, right? https://github.com/coq/coq/pull/15537
we can do a 8.15.1
Note that this means a few Coq Platform projects will need to be updated to be marked as compatible with < 8.15.1 and new package releases will be needed for the 2022.03.0 Platform (cc @Michael
I must admit I don't fully understand the discussion here (hadn't time to dig into it) but from what @Théo Zimmermann said I guess that putting this into 8.15.1 would introduce a significant
incompatibility between 8.15.0 and 8.15.1. I would try to avoid this, since in the end it would mean that it would be impossible to do a Coq Platform patch release for 8.15.1.
Did I interpret @Théo Zimmermann 's comment right?
Answering in passing: I would say that the significant incompatibilities are more about 8.15.0. That is, 8.15.1 will be more compatible with 8.14 than 8.15.0 is.
OK, so one would declare this as bug introduced in 8.15.0 and fixed in 8.15.1?
Michael Soegtrop said:
OK, so one would declare this as bug introduced in 8.15.0 and fixed in 8.15.1?
this is my understanding, and what I have seen in the projects I help maintain (i.e., they broke on 8.15.0, but are fixed by Hugo's fix)
That's also my understanding and is in line with the fact that 8.15 was only in a platform preview pick so far.
The reason why I mentioned this is because there are three packages in the platform that had to get a (backward-compatible) fix and thus will need a new release to be compatible with 8.15.1.
if the fix was backward compatible isn't it forward compatible too?
it's not obvious to me there would need to be new package releases because of the fix.
for example, all workarounds discussed in this topic work fine even after Hugo's fix
But there were overlays in the PR, why were they needed then?
because something else broke x_x
dunno what to do about it
I guess we should wait for the other regression fixes https://github.com/coq/coq/pull/15594 and https://github.com/coq/coq/pull/15577 before doing 8.15.1
I'm not sure about the relevant etiquette, but I noticed you didn't mention https://github.com/coq/coq/issues/15567
that's an issue not a fix
@Gaëtan Gilbert do you expect the fix for 15567 to be very complicated? It's a blocker for us upgrading to 8.15 so it would be good to know if this is going to take a long time. We are still hoping a
fix could go into 8.15.1.
fix for 15567 is looking ok https://github.com/coq/coq/pull/15653
I don't want to backport the scheme name fix though due to the incompatibilities it has
maybe we can go for a stupider fix that only renames for P binders, that would avoid the incompatibility
cc @Hugo Herbelin
Sorry, I'm missing context. I don't get the relation between #15653 and the P names in schemes.
there is no relation except that they are both being considered for 8.15.1
Ah, ok I see, you'd like a variant of #15420 which does not fix the eq bug name.
Maybe removing 138bae9f in the backport would be enough? Can you try or do you want me to try to submit a specific 8.15 PR?
I'll try
Last updated: Oct 13 2024 at 01:02 UTC